In a perfect world, as security analysts, we’d be capturing data on every process, every action, every application, and every system in our organizations’ networks and infrastructures. We’d be monitoring all of it at all times of day and night. Ever vigilant, tireless, and relentlessly rational, we’d be able to formulate accurate conclusions based upon all this data that would lead us to the right decisions about the likelihood that particular events might be malicious. We’d remember everything that happened in our environments in past weeks and months, and we’d be able to bring all of this information to bear on the SOC analysis we were conducting.
But we don’t live in a perfect world. No IT security team has all the information it would need to determine with absolute certainty whether malicious activity is taking place anywhere in the environment, yet nearly all security teams already have more information than they can monitor, consider, or interpret. There are always gaps between what could be monitored and what actually is.
These gaps—the areas in your environment that remain unmonitored, or where sensor data is collected but never reviewed—constitute the cracks through which a malicious intruder could gain a foothold. In reality, in far too many enterprise IT environments, they’re not small cracks. Instead, they’re the size of the Grand Canyon.
What’s causing the cracks in cybersecurity analysis?
The biggest reason these cracks exist is that there’s a fundamental mismatch between the amount of data our network security sensors can collect and the amount of data our human analyst teams can monitor. Simply put, there’s far too much data for human eyes to watch or human minds to grasp.
To address this issue, security teams routinely tune down overly “chatty” intrusion detection systems or write rules for their security information and event management (SIEM) systems that exclude large numbers of alerts. Teams may also reduce the amount of log data they're gathering to save on storage costs. But this filtering is usually excessive, to the detriment of organizations' security.
On average, a human security analyst will interact with only one out of every one million events collected in a SIEM. The unfortunate and dangerous consequence of this alert volume reduction is that, because it’s binary or rule-based, it isn’t an intelligent selection process. Valuable information—the true signals of an attack—can be ignored, overlooked, or missed.
Fundamentally, information security is a “finding the needle in a haystack” type of problem. Tune down the sensors, and your analysts are excluding some of the “hay” from consideration in their search for the “needle.” With luck, and a well-constructed tuning process—though creating this is labor-intensive, time-consuming, and ongoing—you won’t discard the “hay” where the “needle” is to be found. But you can’t be certain, since that elusive needle could be hidden among any of the alerts that your security sensors are generating.
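The difference between rule-based exclusion and score-based ranking can be sketched in a few lines of Python. Everything here is hypothetical—the alert fields, severity threshold, and scoring weights are invented for illustration—but it shows how a binary tuning rule silently discards the “needle” that a ranking approach would surface:

```python
# Hypothetical sketch: static rule-based filtering vs. score-based ranking.
# Alert fields and thresholds are invented for illustration.

alerts = [
    {"id": 1, "signature": "port-scan",   "severity": 2, "internal_src": True},
    {"id": 2, "signature": "malware-c2",  "severity": 3, "internal_src": True},  # the "needle"
    {"id": 3, "signature": "brute-force", "severity": 5, "internal_src": False},
]

# Binary tuning rule: drop everything below severity 4.
# The rule runs before any analyst sees the data, so alert 2 vanishes.
filtered = [a for a in alerts if a["severity"] >= 4]

# A scoring approach keeps every alert and ranks it instead; nothing is
# silently excluded, and low scores simply sort to the bottom of the queue.
def score(alert):
    s = alert["severity"]
    if alert["internal_src"]:          # internal host reaching out is riskier
        s += 2
    if "c2" in alert["signature"]:     # command-and-control indicator
        s += 3
    return s

ranked = sorted(alerts, key=score, reverse=True)
print([a["id"] for a in ranked])  # the "needle" (alert 2) now rises to the top
```

The filter discards alert 2 entirely, while the ranking puts it first—the point being that exclusion is irreversible, whereas prioritization keeps the full haystack searchable.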
Many IT leaders are looking to SIEM or Security Orchestration, Automation, and Response (SOAR) solutions for help filling in these cracks. But these too have their limitations. SIEM systems are expensive and laborious to implement. Their tuning demands a significant investment of time and attention, and by nature excludes potentially useful and valuable information. SOAR platforms can help to streamline incident response workflows, but still require interventions and responses from human analysts before initiating playbooks.
What’s falling through the cracks in SOC analysis?
Data leakage is probably the most significant problem. Unattended alerts are inevitable when data volumes so greatly exceed the limits of human analyst teams’ attention spans and monitoring abilities. Absent or missing data points cannot be avoided when sensors are tuned down, since excluding potentially valuable data is the natural consequence of tuning. Often teams have the budget to add more sensors (a capital expenditure), but can’t afford to monitor them (an operational cost).

Security analysts are suffering as well. When humans are overworked, they lose morale. It’s hard to stay motivated when the task at hand feels futile, and cybersecurity analysts are well aware that the present situation (too many alerts, lack of monitoring capacity) contributes to an increased risk profile for their organizations.
Contextual insights from across various data sources are often lacking. When security analyst teams are left to rely on memory to correlate events that may be separated in time by weeks or months, and that might have been generated by very different telemetries, it can be extremely difficult—if not downright impossible—to discern the relationships between them.
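One common way to recover that context is to correlate events by the entity they concern rather than by the sensor that produced them. The sketch below is hypothetical—the event records and field names are invented—but it illustrates grouping telemetry from different sources, weeks apart, under a single host:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical events from different telemetries, weeks apart.
# Field names and values are invented for illustration.
events = [
    {"host": "ws-042", "source": "ids",  "time": datetime(2021, 3, 1),  "note": "exploit attempt"},
    {"host": "ws-042", "source": "edr",  "time": datetime(2021, 3, 29), "note": "new persistence key"},
    {"host": "db-007", "source": "auth", "time": datetime(2021, 3, 15), "note": "failed logins"},
]

# Group by entity so related events surface together, regardless of
# which sensor produced them or how far apart in time they occurred.
by_host = defaultdict(list)
for e in events:
    by_host[e["host"]].append(e)

for host, evts in by_host.items():
    evts.sort(key=lambda e: e["time"])
    span = (evts[-1]["time"] - evts[0]["time"]).days
    print(host, [e["source"] for e in evts], f"{span} days apart")
```

Software can hold this kind of entity-centric timeline indefinitely; a human analyst relying on memory rarely can.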
How can we patch the cracks in cybersecurity analysis?
SOC analysis teams need a diversity of sensors, gathering information from all parts of the environment, in order to collect enough telemetry data for adequate coverage. At a minimum, you need network-based, host-based, and agent-based sensors; supplementing these with web browser and authentication controls is an improvement, and incorporating logs from operating systems and your cloud vendors is better still.
What’s also necessary is the ability to seamlessly monitor all of this data, and to achieve this, it’s essential to turn to automated security operations software. The time from the instant a sensor first generates an alert to the moment full remediation is achieved must be as short as possible. The bottleneck that prevents most SOC teams from achieving speed here is the manual step in which human analysts review alerts and decide which to escalate. If you can integrate automated tools throughout the security incident workflow, removing the human that sits in the middle, you’ll remove the bottleneck from the process, dramatically increasing both the amount of sensor data you’re able to monitor and the speed with which you can monitor it.
To fill the gaps in the security incident workflow, diverse solutions and tools must be seamlessly integrated. Most cybersecurity stacks are composed of hardware and software from multiple vendors, but these solutions need to be interconnected in a seamless chain. To be truly effective at patching the cracks, security analysis software must be able to draw on, or feed results to, multiple sources, including network sensors, log aggregation, case management, and SOAR tools. Security operations software that relies on machine learning, integrated probabilistic reasoning, and dynamic scoping (to continuously increase the amount of information it is evaluating) is readily able to perform the monitoring tasks that humans find monotonous and overwhelming.
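To make “probabilistic reasoning” concrete, here is a minimal sketch of one textbook approach: a naive-Bayes update that combines independent pieces of evidence into a posterior probability that activity is malicious. This is not a description of any vendor’s implementation; the prior and the likelihood ratios are invented for illustration:

```python
import math

# Hypothetical sketch of probabilistic evidence combination.
# The base rate and likelihood ratios below are invented numbers.

PRIOR_MALICIOUS = 0.001  # assumed base rate of malicious activity

# likelihood ratio = P(evidence | malicious) / P(evidence | benign)
EVIDENCE_LR = {
    "ids_signature_hit": 20.0,
    "rare_external_destination": 5.0,
    "off_hours_activity": 2.0,
}

def posterior(observed):
    """Naive-Bayes update in log-odds space: each independent piece of
    evidence shifts the odds by the log of its likelihood ratio."""
    log_odds = math.log(PRIOR_MALICIOUS / (1 - PRIOR_MALICIOUS))
    for ev in observed:
        log_odds += math.log(EVIDENCE_LR[ev])
    odds = math.exp(log_odds)
    return odds / (1 + odds)

p = posterior(["ids_signature_hit", "rare_external_destination", "off_hours_activity"])
print(round(p, 3))  # weak signals compound into a probability worth escalating
```

The design point is that no single weak signal crosses a threshold on its own; it’s the accumulation of evidence across sources that drives the score up, which is exactly the correlation work that overwhelms human analysts at scale.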
It’s ready to bridge even the Grand Canyon of unattended alerts.
To learn more about how the Respond Analyst considers and correlates new security alerts as they become available, finding relationships that help it make better decisions, check out our video on Dynamic Scoping and Prioritization.
Or listen to our CEO, Mike Armistead, discuss the future of security automation technologies with Patrick Grey from Risky Business.
Or download our recent report on automating your security operations workflow today.
Chris has over 30 years of experience in defensive information security: 14 years in the defense and intelligence community and 17 years in commercial industry. He has designed, built, and managed global security operations centers and incident response teams for eight of the global Fortune 50. As he often says, if you have complaints about today’s security operations model, you can partially blame him. It’s from his first-hand experience learning the limitations of the man-vs.-data SecOps model that Chris leads product design and strategy for Respond Software.