What’s Old is New: How Old Math Underpins the Future of A.I. in Security Operations

Most of us engineers know the truth: A.I. is just old math theory wrapped in a pretty package. The neural networks behind today’s deep learning? Yep, you guessed it, their foundational mathematical model was published back in 1943!

For those of us in Security Operations, the underlying mathematical theory of probability will lead us into the future. Probability theory will automate human analysis, making real-time decisions on streaming data.

Probabilistic modeling will fill the gap our SecOps teams face today: too much data and not enough time. We humans have a very difficult time monitoring a live streaming console of security events. We just can’t thread it all together, given our limited knowledge, our biases, and the small amount of time we have with each new event.

Making instant decisions as data streams in real time is near impossible because we have:

    • too much information and data to process,
    • not enough meaning: we don’t understand what the data is telling us,
    • poor memories: we can’t remember what happened two hours ago, let alone days, weeks, or months before.
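
To make that concrete, here is a minimal sketch (in Python, with invented numbers) of the kind of reasoning probability theory automates: Bayes’ rule updating the chance that a single event is malicious once one piece of evidence is observed.

```python
# Hypothetical illustration: Bayes' rule applied to one security event.
# The prior and likelihoods below are invented numbers for demonstration only.

def posterior_malicious(prior, p_ev_given_malicious, p_ev_given_benign):
    """Posterior probability an event is malicious after observing evidence."""
    p_evidence = (p_ev_given_malicious * prior
                  + p_ev_given_benign * (1 - prior))
    return p_ev_given_malicious * prior / p_evidence

# Say 1 in 1,000 events is malicious, and a suspicious signature fires on
# 90% of malicious events but also on 5% of benign ones.
p = posterior_malicious(0.001, 0.90, 0.05)
print(round(p, 4))  # one noisy signal lifts suspicion from 0.1% to roughly 1.8%
```

A machine can run this update on every event in a stream, every time, without fatigue or bias; a human analyst cannot.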

Enter Probability Theory

Watch my short video to learn how Probability Theory will fundamentally change the future of Security Operations by expanding our ability to analyze more data across our environments than ever before.

Click here to watch now.

Fight Fire with Fire:
How Security Automation Can Close the Vulnerability Gap Facing Industrial Operations

“Be stirring as the time; be fire with fire; threaten the threatener and outface the brow of bragging horror.”
William Shakespeare 1592

…or as Metallica once sang in 1984, Fight Fire with Fire!

There is a fire alight in our cyber world.  Threats are pervasive, the tech landscape is constantly changing, and now industrial companies are increasingly vulnerable with the advent of automation within their operations.  Last week a ransomware attack halted operations at Norsk Hydro ASA in both the U.S. and Europe, and just days later two U.S. chemical companies were also affected by a network security incident.

As manufacturing processes become increasingly complex and spread out around the world,
more companies will have to navigate the risk of disruption from cyber attacks. 

Bloomberg Cybersecurity

Industrial control systems (ICS), in particular, were not designed with cybersecurity in mind. Historically, they weren’t even connected to the internet or the IT network, but this is no longer the case. Automation and connectivity are essential for today’s industrial companies to thrive but this has also made them more vulnerable to attacks.

The more automation you introduce into your systems, the more you need to protect them. Along with other industries, you may potentially start to see a much stronger emphasis on cybersecurity.
Bloomberg Cybersecurity

Adding to the problem, a shortage of trained security staff to monitor the large volumes of data generated across the network inevitably leaves a plant’s operations even more vulnerable.

Fight the vulnerabilities that ICS automation causes with security automation

To close the vulnerability gap, industrial companies can fight fire with fire by embracing security automation. Extending automation tools beyond the industrial operations and into a plant’s security operations center can reduce the risk of a cyber attack. Security automation arms security teams with information to quickly identify threats so human analysts can act before a potential threat causes undue harm.

At Respond Software, we’re helping companies realize the power of automation with a new category of software called Robotic Decision Automation (RDA) for security operations. By adding a ‘virtual analyst’, called the Respond Analyst, security teams can quickly automate frontline security operations (monitoring and triage). Only the incidents with the highest probability of being malicious and actionable are escalated to human analysts for further investigation and response.

We believe that by combining human expertise with decision automation, industrial organizations can reduce their vulnerability risk profile. The Respond Analyst does the heavy lifting to cover the deluge of data generated each day, freeing human analysts to focus on the creative work of remediating and containing threats faster.

There’s no question that industrial companies will continue to be targeted by bad actors. But now, with front-line security automation, these organizations can proactively safeguard operations against threats.

Be fire with fire.
W.S.

Read more:
3 Trends That Make Automation a Must for Securing Industrial Control Systems

Why Did Gartner Name Us a 2018 Cool Vendor for Security Ops & Vulnerability Management?

Our guess? It’s because we’ve redefined security operations by providing organizations, both large and small, with software that emulates the decision-making of an expert security analyst, effectively adding a super-human security team member to any security ops organization. See the news release here.

And if that’s not enough, our Respond Analyst is based on our patent-pending Probabilistic Graphical Optimization (PGO)™ technology and specializes in high-volume, low-signal security detection, while it also learns, adapts, and maintains a security team’s collective knowledge 24×7, 365 days a year. In other words, this is the future of Security Operations.

Why did we do it? We know security data has simply grown beyond the ability of humans to analyze, no matter how skilled an analyst team may be. We also know that most security incidents stem from operational inefficiencies and outright failures that lead to organizations being compromised.

Thanks Gartner for the recognition! You can read the Gartner Cool Vendor Report for Security Operations and Vulnerability Management here now. 

The Eight Fragments of SIEM

Security Information and Event Management (SIEM) was declared dead more than a decade ago, and yet it is still widely deployed in many security programs. We are used to thinking about SIEM as a monolithic platform, but it isn’t one anymore. It can be broken down into eight discrete capabilities, each of which is evolving and innovating at a different rate. This will drive fragmentation of traditional SIEM; however, that is probably a good thing for the effectiveness of our security detection and response programs.

To simplify their vendor management and technical complexity, many security teams will only buy a new security tool if it eliminates other tools from their portfolio. This is the primary counter-argument against fragmentation of SIEM capabilities. Here’s the question we have to answer: “Is it more important for information security to increase our ability to detect and respond, or to simplify our tool portfolio?”

Traditional SIEM vendors will always struggle to balance innovation and investment across so many different capabilities. These vendors attempt to differentiate by bringing a greater capability in a few areas, or by using a peanut butter approach to achieve a lower but uniform capability across the board.

This presents a difficult set of business decisions, typically driven by the available budget to invest. Large companies are in the business of earning profitable revenue from the sale of these products, while small startups are in the business of innovating and delivering new capabilities leveraging focused growth investment.

Here are the eight fragments of SIEM:

1. Data Collection and Normalization
The first thing a SIEM does is collect data from originating data sources and parse it into a common format. Structured data is far easier for automated logic to work with than unstructured data. Data volume, velocity, and variety are bread and butter for big-data (plumbing) providers like Hadoop or Elasticsearch, and yet parsing security data into a workable format is still the province of SIEM technologies. New concepts emerge every day in big-data management and dedicated big-data platforms, and open-source projects keep up with these improvements more effectively than SIEMs, so it feels as if SIEMs will eventually get out of the plumbing business.
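
As a rough sketch of what this normalization step looks like, here are a few lines of Python that parse a hypothetical firewall log line (the format and field names are made up for illustration) into a common structured record:

```python
import re

# The log format below is invented for illustration; real parsers handle
# hundreds of vendor-specific formats, which is exactly the "plumbing" problem.
LINE = re.compile(
    r"(?P<ts>\S+ \S+) (?P<action>ALLOW|DENY) "
    r"(?P<src>\d+\.\d+\.\d+\.\d+):(?P<sport>\d+) -> "
    r"(?P<dst>\d+\.\d+\.\d+\.\d+):(?P<dport>\d+)"
)

def normalize(raw: str) -> dict:
    m = LINE.match(raw)
    if not m:
        return {"unparsed": raw}   # keep unparsable lines for later review
    rec = m.groupdict()
    rec["sport"], rec["dport"] = int(rec["sport"]), int(rec["dport"])
    return rec

print(normalize("2019-03-25 10:02:11 DENY 10.1.2.3:51515 -> 8.8.8.8:53"))
```

Everything downstream (context, detection logic, search) gets dramatically easier once records share this kind of common shape.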

2. Context
“Context is King.” Once data has been collected into a SIEM platform, understanding that data in its full context is critical to effective detection and response. This includes understanding internal assets, external threat intelligence, internal IT operations, and event patterns. An IP address or hostname is only minimally informative, but if it can be described as a critical asset running a point-of-sale application, then we understand how important it is to our business. There is fragmentation even within this category, along the user, network, and asset divides. UEBA, NAC, IAM anyone? Each is a challenging problem in itself.
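
A toy sketch of what adding context looks like in practice; the asset inventory and threat-intel list here are invented stand-ins for real context sources:

```python
# Hypothetical context sources: an asset inventory and a threat-intel feed.
ASSETS = {"10.1.2.3": {"role": "point-of-sale", "criticality": "high"}}
THREAT_INTEL = {"203.0.113.9"}   # known-bad external addresses (example data)

def enrich(event: dict) -> dict:
    """Attach asset and threat-intel context to a bare event."""
    event["asset"] = ASSETS.get(event.get("src"),
                                {"role": "unknown", "criticality": "unknown"})
    event["dst_known_bad"] = event.get("dst") in THREAT_INTEL
    return event

e = enrich({"src": "10.1.2.3", "dst": "203.0.113.9"})
print(e["asset"]["criticality"], e["dst_known_bad"])  # high True
```

The same bare IP pair means very little before enrichment and a great deal after it, which is the whole point of this fragment.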

3. Detection Logic
Once data has been collected and context added, we can apply analytical or detection science to identify the events that require further response. This area of the market is innovating and evolving at a blistering rate. For many years, Boolean logic was all that was available to identify events of interest. It was used simply to “funnel” the event volume down to a manageable amount by alarming on specifically described security scenarios. These scenarios were sometimes called correlated events, but more often were a narrowly defined common security situation, like multiple failed logins or only high-severity IDS alerts.
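
The “multiple failed logins” scenario mentioned above can be sketched as a classic Boolean funnel rule; the window and threshold values are arbitrary examples:

```python
from collections import defaultdict

WINDOW, THRESHOLD = 300, 5   # seconds, attempts -- arbitrary example values

def failed_login_alarms(events):
    """events: iterable of (timestamp_seconds, user, success), sorted by time."""
    recent = defaultdict(list)   # user -> timestamps of recent failures
    alarms = []
    for ts, user, success in events:
        if success:
            continue
        # keep only failures still inside the sliding window, then add this one
        recent[user] = [t for t in recent[user] if ts - t < WINDOW] + [ts]
        if len(recent[user]) >= THRESHOLD:
            alarms.append((ts, user))
    return alarms

events = [(t * 10, "alice", False) for t in range(5)] + [(60, "bob", True)]
print(failed_login_alarms(events))
```

Note what the rule cannot do: it alarms on the described scenario and nothing else, which is exactly why Boolean funnels miss so much and why richer analytics matter.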

With the advent of advanced analytical techniques, machine learning and artificial intelligence, the quality of logic that can be applied to every security event has exploded. Given that time to detection is often measured in months or years, anything that can reduce this is of critical importance to the success of our security programs.

4. Console
Once detection logic fires, an event or correlated event is displayed on a console for an analyst to evaluate. This key bottleneck reduces what can actually be looked at to 0.0001% or less of the total security events generated. In addition, the human factors of monitoring alarms at scale result in many missed detections. The idea that one more alert is going to help an analyst make a decision is a false one, and so the console truly is dead. One recent attempt to replace the console centered on using dashboards to summarize security information, but this was even less effective: instead of being buried in correlated events, you were buried in summary dashboards.

5. Workflow
Once some form of logic has enabled an analyst to determine that an ongoing incident might require action, the SIEM provides basic workflow management so that events of interest can be moved through an analytical process and assembled into a case for an incident responder. Many incident response teams then transition to a dedicated IR case management solution and away from the SIEM’s workflow. Measuring these steps and understanding the long poles in time-to-resolution is very important to our ability to improve our operations. This is an area that needs more innovation and glue to connect all the actions taken on the way to resolution.
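
The measurement this calls for can be sketched in a few lines: given per-case timestamps for each workflow stage (the stage names and numbers here are illustrative), find the “long pole”, the stage consuming the most wall-clock time:

```python
# Hypothetical per-case stage timestamps, in seconds since detection.
case = {
    "detected":  0,
    "triaged":   420,
    "escalated": 600,
    "contained": 7800,
    "resolved":  9000,
}

# Duration of each consecutive stage transition.
stages = list(case)
durations = {f"{a}->{b}": case[b] - case[a] for a, b in zip(stages, stages[1:])}
long_pole = max(durations, key=durations.get)
print(long_pole, durations[long_pole])  # escalated->contained 7200
```

Aggregating this across cases shows exactly where workflow innovation and glue would buy the most time.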

6. Case Management
A case is simply a container for all the events, context, and analyst descriptions of a single incident. A case allows an incident responder to rapidly decide whether to continue an investigation and what its priority should be. It also provides a formal record of the investigation as it is conducted and a forensically sound process for incident handling. While many SIEM systems contain a small case management function, vendors such as ServiceNow and IBM Resilient have innovated radically beyond standard SIEM case management.
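
As a sketch of the idea that a case is “simply a container”, here is a minimal Python structure; the fields are illustrative, not any vendor’s schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Case:
    """Illustrative container for one incident's events, context, and notes."""
    case_id: str
    priority: str = "medium"
    events: list = field(default_factory=list)
    context: dict = field(default_factory=dict)
    notes: list = field(default_factory=list)   # the formal record of work

    def add_note(self, analyst: str, text: str) -> None:
        # Timestamped entries make the record forensically useful.
        self.notes.append((datetime.now(timezone.utc).isoformat(), analyst, text))

c = Case("IR-1042", priority="high")
c.add_note("analyst1", "Confirmed beaconing to known-bad host.")
```

Everything the dedicated case management vendors add (audit trails, approvals, integrations) builds on this basic container.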

7. Automated response
All basic SIEM tools provide external integrations (think right-click tools). These can gather additional investigative information, sometimes called decorating the alert, or act to halt attackers by imposing firewall or IPS blocks in near real time. Many companies have run experiments in designing rapid blocking techniques directly from within the SIEM, inevitably resulting in “self-denial of service.” There is now a new category of vendors, security orchestration, automation, and response (SOAR), innovating in this space; they position themselves as downstream of the SIEM for response automation, but for the moment operate mostly upstream, decorating alerts.

8. Forensic Data Lake and Search
Another critical function of most SIEMs is the maintenance, preservation, and availability of security logs for forensic analysis (meaning post-detection investigation) and for hunting for novel incidents. This capability relies heavily on the speed of data retrieval and is generally dominated by columnar or parallel data stores (think Apache Spark or Vertica). Since the mean time to detection is roughly three-quarters of a year, we need faster access to much more data than ever before if we are going to hunt where the attackers are located in time. No SIEM can provide this without extraordinary cost, and this is another avenue for big-data solutions to outperform traditional SIEM.
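
At its core, hunting over a forensic data lake is a time-bounded, filtered search; this toy sketch stands a plain Python list in for the columnar store (the records are invented):

```python
# A two-record "data lake" of normalized events; real stores hold billions.
lake = [
    {"ts": 100,   "src": "10.1.2.3", "dst": "203.0.113.9"},
    {"ts": 90000, "src": "10.1.2.4", "dst": "198.51.100.7"},
]

def hunt(records, start, end, **criteria):
    """Return records inside [start, end) matching every field criterion."""
    return [r for r in records
            if start <= r["ts"] < end
            and all(r.get(k) == v for k, v in criteria.items())]

print(hunt(lake, 0, 1000, src="10.1.2.3"))
```

The hard part is not the filter logic but executing it quickly over months of data, which is why retrieval speed dominates this fragment.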

Conclusion

There is so much innovation and speed in the security market in each of these categories that it is difficult to ignore this fragmentation and blindly continue with a single monolithic platform. Ultimately, we are paid to protect our companies and customers, to defend them on an increasingly hostile Internet where the consequences of failure continue to grow exponentially. This means we cannot afford to ignore the innovation around the eight fragments of security information and event management.

Join our growing community! Subscribe to our newsletter, the "First Responder Notebook," delivered straight to your inbox.