The security slowdown: How to maximise cloud performance while keeping your data safe
Organisations are moving to public cloud infrastructures to do business faster with more customers. With the increase in traffic, though, comes a need for more inspection capacity that could potentially slow things down in the name of keeping data safe. Because performance is a baseline requirement for competing in the digital marketplace, security cannot be a bottleneck.
But how can you scale security performance to meet the growing demands of today’s cloud environments? You may think your choice of security solution doesn’t matter when every vendor runs on identical cloud-based infrastructure, but it does. Here’s what you need to keep in mind to maximise performance in the cloud while keeping your data safe.
There are two options for meeting elastic performance demands: scaling up and scaling out.
“Scaling out” refers to increasing performance by adjusting the number of separate instances of a solution. Scaling out allows a cloud user to automatically deploy more firewall capacity as traffic loads change. Traffic loads can change dramatically, and cloud infrastructure can adapt dynamically, which is why cloud environments are so well suited to this model. Some organisations with heavily seasonal or event-driven activity will even temporarily move their applications and services into a cloud environment to meet unusually high but temporary spikes in demand, then return them to their usual servers afterwards. Security needs to be able to scale seamlessly along with these changes.
Scale-out capacity is not only about performance, but also the price of that performance. It is critical to compare the performance of solutions before building them into your cloud infrastructure. Deploying a higher-performance solution means you don’t have to purchase additional firewall instances from the marketplace as often as you would with a slower solution.
This is critical for managing expenses while still meeting capacity requirements, especially when dealing with highly variable traffic.
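To make the cost trade-off concrete, here is a minimal sketch of the scale-out arithmetic. All throughput and price figures are invented assumptions for illustration, not vendor data:

```python
# Hypothetical scale-out cost comparison: a faster (pricier) firewall instance
# versus a slower (cheaper) one, sized against the same peak traffic load.
import math

def instances_needed(peak_gbps: float, per_instance_gbps: float) -> int:
    """Instances required to inspect the peak traffic load."""
    return math.ceil(peak_gbps / per_instance_gbps)

def hourly_cost(peak_gbps: float, per_instance_gbps: float,
                price_per_instance_hour: float) -> float:
    """Total marketplace spend per hour at peak."""
    return instances_needed(peak_gbps, per_instance_gbps) * price_per_instance_hour

peak = 40.0  # Gbps at a seasonal spike (assumed)

# Vendor A: 8 Gbps per instance at $1.50/hour; Vendor B: 2 Gbps at $0.90/hour.
cost_a = hourly_cost(peak, per_instance_gbps=8.0, price_per_instance_hour=1.50)
cost_b = hourly_cost(peak, per_instance_gbps=2.0, price_per_instance_hour=0.90)

print(f"Vendor A: {instances_needed(peak, 8.0)} instances, ${cost_a:.2f}/hour")
print(f"Vendor B: {instances_needed(peak, 2.0)} instances, ${cost_b:.2f}/hour")
```

With these assumed numbers, the nominally cheaper instance ends up more than twice as expensive at peak, because four times as many instances are needed.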
“Scaling up” refers to increasing performance by adjusting the size of a single instance of the virtual hardware. One of the most important considerations in determining how large a VM you need to run your security solution effectively is performance per core. If you require a large, multi-core CPU VM, how much throughput are you actually getting for each core you are paying for?
Looking carefully at scale-up capacity is a perfect example of how things that look the same on the surface (all vendors may run their solutions on the same VM, for instance) can be quite different in practice. The truth is, the architectures used by different vendors vary widely. For example, many vendors dedicate an entire core to management, which means that when you buy a two-core system to run their service, only half of those resources are available for processing traffic and data. This is an architectural problem. Another example is a vendor pinning a single session (IKE SA) to a single core rather than distributing IPsec traffic across multiple cores. This design also results in diminishing returns for every additional core beyond the first.
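A small sketch makes the per-core arithmetic explicit. The throughput figures are assumptions chosen only to illustrate why identical VM sizes can deliver very different value:

```python
# Hypothetical per-core throughput comparison, showing why "same VM size"
# does not mean "same performance". All figures are illustrative assumptions.

def throughput_per_paid_core(total_cores: int, mgmt_cores: int,
                             gbps_per_data_core: float) -> float:
    """Aggregate throughput divided by every core you pay for."""
    data_cores = total_cores - mgmt_cores
    return (data_cores * gbps_per_data_core) / total_cores

# A vendor that dedicates one full core to management on a 2-core VM
# delivers only half the throughput per paid core of a design that
# uses both cores for traffic:
print(throughput_per_paid_core(2, mgmt_cores=1, gbps_per_data_core=4.0))  # 2.0
print(throughput_per_paid_core(2, mgmt_cores=0, gbps_per_data_core=4.0))  # 4.0

def ipsec_session_gbps(total_cores: int, gbps_per_core: float,
                       distributes_across_cores: bool) -> float:
    """A single IKE SA pinned to one core is capped at one core's throughput,
    no matter how many cores the VM has."""
    return total_cores * gbps_per_core if distributes_across_cores else gbps_per_core
```

Under these assumptions, an 8-core VM running a core-pinned IPsec implementation still moves a single tunnel at one core's rate, while a design that distributes the flow scales with core count.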
Not all cloud security solutions are created equal
When selecting a security vendor for your cloud or multi-cloud environment, the tricky part is making sure you are comparing apples to apples. In addition to a solution being able to operate seamlessly across different cloud environments, you also need to be able to evaluate true performance. Fortunately, cloud service providers (CSPs) offer evaluation programs that let you run performance tests against any environment you can create in their public clouds. Having this foundation as guidance will save you time and money.
It’s easy to assume that there is no performance advantage between vendors when you move to the cloud, since everything is running on the same hardware. But performance is the result of a lot more than just the hardware a network runs on. Performance also depends on a number of effective engineering techniques focused on optimisation, parallelisation, and hardware offloading.
Optimisation: Full-stack optimisation, not just hardware optimisation. Some have argued that hardware vendors lose their performance advantage in a cloud environment. But the truth is, engineers have to dramatically optimise software for it to achieve the necessary performance on a given chip, and that sort of optimisation across the stack is something many software vendors never do. Optimised software can significantly differentiate one vendor from another in the cloud because it directly affects performance.
Parallelisation: Security operating systems loaded in a cloud environment need to be able to leverage a whole range of resources, including multi-core VMs, to achieve the necessary performance. To maximise potential performance, engineers use a technique known as parallelisation. Parallel architectures are typically built around powers of two (2, 4, 8, 16, 32) for maximum efficiency, and this is an important reason why one vendor is able to achieve greater performance than another in the same cloud environment.
However, because of software architectural limitations, many vendors have to dedicate one in four of all available cores to control-plane management, which means that in an 8-core solution your data is only being processed across six cores. This breaks the parallelisation model and can have a serious impact on efficiency and performance.
This sort of architectural limitation also simply means that fewer cores are available for inspection and processing. If you require (and pay for) an 8-core VM for your cloud firewall but only have six cores available for data inspection, you have seriously impacted your ability to efficiently distribute and process that data. This is why you should make sure your solution can parallelise performance across all available CPUs or resources.
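The effect described above can be sketched in a few lines. The per-core throughput figure is an assumption for illustration; the one-in-four reservation ratio comes from the text:

```python
# Sketch of the control-plane reservation described above: if a vendor
# reserves 1 in 4 cores for management, an 8-core VM inspects traffic
# on only 6 cores. Per-core throughput (5 Gbps) is an assumed figure.

def data_plane_cores(total_cores: int, reserved_per_four: int = 1) -> int:
    """Cores left for traffic inspection after control-plane reservation."""
    reserved = (total_cores // 4) * reserved_per_four
    return total_cores - reserved

def effective_throughput(total_cores: int, gbps_per_core: float,
                         reserved_per_four: int = 1) -> float:
    """Aggregate inspection throughput after the reservation."""
    return data_plane_cores(total_cores, reserved_per_four) * gbps_per_core

print(data_plane_cores(8))                                # 6 of 8 cores
print(effective_throughput(8, 5.0))                       # 30.0 Gbps
print(effective_throughput(8, 5.0, reserved_per_four=0))  # 40.0 Gbps
```

Under these assumptions the reservation costs a quarter of the throughput you paid for, which is exactly the gap to look for when two vendors quote the same VM size.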
Hardware offloading: Not all processing can be handled by the VMs it is assigned to. For high-bandwidth data flows, Single Root I/O Virtualisation (SR-IOV) provides accelerated networking. SR-IOV bypasses the hypervisor’s kernel packet handling and the legacy para-virtual interface, instead using the virtual function (VF) interface to map the guest VM’s vNIC directly to the physical NIC. While this option is available to all vendors, the ability to take advantage of it, especially when combined with optimisation and parallelisation efficiencies, can have a significant impact on the performance of a particular vendor’s solution.
Performance is a critical consideration when selecting any cloud-based security solution. Choosing scalable and high-performance security solutions enables organisations to meet the growing performance demands of today’s cloud environments. In addition, higher performing solutions are also more cost-effective. But not all cloud security solutions are built the same. Careful analysis will enable you to select the solution that best meets your organisation’s performance and budgetary requirements.
Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.