Guest blog by Johna Johnson, CEO and founder, Nemertes Research
Sometimes when you push a security solution too far outside its comfort zone, performance plummets. Or, worse, the system crashes. Worst of all, it can misbehave while still appearing to function as expected.
A case in point: traffic-shaping tools can be, and often are, used as ad-hoc security systems. Security professionals regularly configure them to drop traffic with specific characteristics to head off emerging threats. However, in overload situations, when they can’t properly process all the packets they receive, these systems default to letting packets through rather than dropping them or delaying them. In other words, at scale, packet-shaping systems can silently cease to act as expected—with potentially devastating results.
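To make that failure mode concrete, here is a toy sketch of a shaper that "fails open" under load. The per-burst inspection budget and the rule check are purely hypothetical stand-ins, not any real product's logic; the point is that packets arriving past the budget are forwarded without ever being inspected.

```python
class FailOpenShaper:
    """Toy model of a traffic shaper that fails open under overload.

    inspect_budget is an assumed limit on how many packets the device
    can fully inspect per burst; everything beyond it passes through.
    """

    def __init__(self, inspect_budget=100):
        self.inspect_budget = inspect_budget

    def handle_burst(self, packets):
        results = []
        for i, pkt in enumerate(packets):
            if i >= self.inspect_budget:
                # Overloaded: no budget left to inspect, so the packet
                # is forwarded unexamined -- the silent fail-open case.
                results.append("forwarded-uninspected")
            elif pkt.get("malicious"):
                results.append("dropped")
            else:
                results.append("forwarded")
        return results
```

Under normal load the malicious packet is dropped; in a burst that exceeds the budget, the same packet sails through while the device still appears to be working.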
To avoid this kind of mishap, consider scaling explicitly from the beginning of any security architecture. If you’re looking to perform internal network segmentation, for instance, think about the impact of pushing existing perimeter firewalls into that role. If the firewall appliances aren’t powerful enough, they may fail. Many require hardware acceleration to operate at speed, and that acceleration isn’t available in public cloud environments. To give virtual appliances sufficient power, you may wind up paying for lots of extra servers or instances. Or you’ll end up deploying throngs of your enterprise-standard systems to load-balance across, all too often without unified management, and without any increase in security staff to make managing them survivable.
There are five performance considerations every business should think about when developing a security strategy:
- Build in support for internal traffic streams from the start (even when “internal” means components running in an external cloud environment).
- Look for centralized, policy-driven management, which allows you to treat the mass of enforcement points as a holistic security system—like an immune system, distributed throughout the body it’s protecting.
- Focus protection on specific workloads. This approach addresses both operational scale and functional scale.
- Pay attention to the size (and number) of agents running on your workloads. Tool fatigue affects more than a security team’s limited staff; multiple heavy agents deployed on every instance or server can bring application performance to its knees. That drag on resources raises costs, too, because more compute power must be provisioned to compensate.
- Don’t go scan crazy. Scans = data. Scanning your infrastructure too broadly, too often, can overwhelm the data-collection capabilities of your security tools. Regular scans are necessary, of course, but there’s a point of diminishing returns past which additional scans degrade performance without making you any safer.
Operationally, workload focus makes managing specific enforcement-point policies easier because the policies can be simpler. A monolithic or clustered approach generally results in a single enormous rule set addressing all applications and use cases, with thousands or tens of thousands of rules. In contrast, a workload-focused solution can have more comprehensible sets of rules. It makes designing new policies and testing them simpler and faster.
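A minimal sketch of the contrast, with invented workload names and rules: in the workload-focused model, each enforcement point consults only the small rule set for its own workload, rather than one giant table spanning every application.

```python
# Hypothetical workload-scoped policies; names, ports, and actions
# are illustrative only. A monolithic design would merge all of these
# (plus thousands more) into a single shared rule table.
WORKLOAD_RULES = {
    "web": [("allow", 443), ("allow", 80)],
    "db":  [("allow", 5432), ("deny", 80)],
}

def evaluate(workload, port):
    """Check a connection against only that workload's rules."""
    for action, rule_port in WORKLOAD_RULES.get(workload, []):
        if rule_port == port:
            return action
    # Assumed default-deny posture for anything unmatched.
    return "deny"
```

Because each set stays small, a rule change for the database tier can be designed and tested without reasoning about every web-tier rule it might shadow or conflict with.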
Functionally, breaking up the enforcement burden and localizing it means every enforcement point needs to process only a portion of the traffic. Latency is reduced because the rule set is small. When a workload’s traffic increases, more enforcement points can be spun up and assigned to it; when it decreases, the pool can shrink again.
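That elasticity can be sketched as a simple pool-sizing rule. The per-point throughput figure below is an assumption for illustration, not a benchmark of any real enforcement point.

```python
import math

def points_needed(traffic_mbps, capacity_per_point_mbps=500):
    """Size the enforcement-point pool to current traffic.

    capacity_per_point_mbps is an assumed per-instance throughput,
    purely illustrative.
    """
    return max(1, math.ceil(traffic_mbps / capacity_per_point_mbps))

def rescale(current_pool_size, traffic_mbps):
    """Decide whether the pool should grow, shrink, or hold steady."""
    needed = points_needed(traffic_mbps)
    if needed > current_pool_size:
        return needed, "scale out"
    if needed < current_pool_size:
        return needed, "scale in"
    return current_pool_size, "steady"
```

When a workload’s traffic spikes, the pool grows by just enough points to cover it; when the spike passes, the extra points are released rather than sitting idle.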
The bottom line? Don’t push hardware or software into scenarios they were never designed for. To make security scalable and elastic, think in terms of solutions architected from the ground up to scale horizontally via at-need deployment of lightweight, distributed policy enforcement points. Stretching old solutions to fit will not work.