What’s Old is New: How Old Math Underpins the Future of A.I. in Security Operations

Most of us engineers know the truth—A.I. is just old math theory wrapped in a pretty package.  The neural network models that underpin deep learning? Yep, you guessed it, the foundational math was published in 1943!

For those of us in Security Operations, the underpinning mathematical theories of probability will lead us into the future. Probability theory will automate human analysis, making real-time decisions on streaming data.

Probabilistic modeling will fill the gaps that our SecOps teams deal with today:  Too much data and not enough time. We humans have a very difficult time monitoring a live streaming console of security events.  We just can’t thread it all together with our limited knowledge, biases, and the small amount of time we have to interact with each new event.

Making instant decisions as data streams in real time is nearly impossible because of:

    • too much info and data to process,
    • not enough meaning—we don’t understand what the data is telling us,
    • poor memories—we can’t remember events from two hours ago, let alone days, weeks, or months before.

Enter Probability Theory

Watch my short video to learn how Probability Theory will fundamentally change the future of Security Operations by expanding our ability to analyze more data across our environments than ever before.

Click here to watch now.
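To make that concrete, here is a toy sketch of the kind of probabilistic reasoning involved: a naive Bayesian update that folds each new piece of evidence into a running estimate that activity is malicious. The event names, likelihoods, and prior below are invented for illustration; they are not drawn from any real detector.

```python
# Toy Bayesian update: combine independent pieces of evidence into a
# posterior probability that activity is malicious. All numbers here
# are illustrative assumptions, not tuned values.

def bayes_update(prior: float, p_given_malicious: float, p_given_benign: float) -> float:
    """Return P(malicious | evidence) via Bayes' rule."""
    numerator = p_given_malicious * prior
    denominator = numerator + p_given_benign * (1.0 - prior)
    return numerator / denominator

# Hypothetical likelihoods: P(evidence | malicious), P(evidence | benign)
evidence_stream = [
    ("IDS signature fired",     0.70, 0.10),
    ("outbound to rare domain", 0.50, 0.05),
    ("off-hours admin login",   0.40, 0.08),
]

p = 0.001  # prior: roughly 1 in 1,000 event groups is malicious
for name, p_mal, p_ben in evidence_stream:
    p = bayes_update(p, p_mal, p_ben)
    print(f"after '{name}': P(malicious) = {p:.4f}")
```

Notice how three individually weak signals push a 0.1% prior up to roughly a 26% posterior. That compounding of evidence across many dimensions, at streaming speed, is exactly what humans cannot do by watching a console.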

Jumping to a New Curve

In the business classic “The Innovator’s Dilemma,” author Clayton Christensen shows how jumping to a new productivity curve is difficult for incumbent leaders but valuable for new innovators.  I think a lot about this concept for cybersecurity. The world has changed dramatically these last 5-10 years, and the curve most enterprises are on results in lots of siloed detectors, rudimentary processing, people-centric processes, and high costs to maintain platforms. The solutions for these problems had great promise in the beginning but still can’t provide the level of productivity necessary to keep up with advances by the adversary. Workflow automation helps, but not enough to address the “orders of magnitude” problem that exists. The scale is definitely tipped in favor of the attackers.  So how do we think out of the box to help companies jump to that new productivity curve?

Helping Customers Jump to a New Curve of Productivity

Three years ago, we started on a mission to help security operations teams right the balance between attackers and defenders. We are on the front-lines to change the status quo and to bring in a new way of thinking to defend the enterprise.

At Respond Software, we strive to unlock the true potential of Man + Machine—without bankrupting security teams. We aim to elevate the human analysts/incident responders to do what they do best (be curious, think outside the box, proactively take action) and let the machines do what machines do best (consistently analyze huge amounts of data thoroughly and accurately based on hundreds of dimensions). In short, security teams can use modern processing and computing techniques to help jump to a new curve and better defend their enterprise.

Today, our product, the Respond Analyst, is fulfilling that mission for customers around the globe. In fact, over the last 30 days, our Robotic Decision Automation product actively monitored billions of live events, vetted those into tens of thousands of cases, and escalated (only!) hundreds of incidents to our customers’ incident responders. What’s more, our security operations software customers were able to give the Respond Analyst feedback on what they liked, what they didn’t like, and how to improve the results.  They now have an analyst on their team that can plow through the alerts and invoke expert judgment to group and prioritize them into incidents. This eliminates a huge amount of time wasted chasing false positives while freeing analysts to focus on threat hunting, deeper investigations, and proactive security measures.  What a change for those teams!

New $20 Million Investment = More Status Quo Busting

To continue these efforts and to expand to meet increasing demand, we are pleased to announce our $20M Series B round of financing.  The round was led by new investor ClearSky Security, with additional investment from our existing investors, CRV and Foundation Capital.

We are extremely pleased to add ClearSky Security to our team. ClearSky’s depth of cybersecurity knowledge and experience—both personally amongst the partners and from backing successful companies such as Demisto and Cylance—will be extremely helpful as we look to establish our innovative robotic decision automation software in more security operations teams. On top of it, we get Jay Leek, current ClearSky Managing Director and former CISO at Blackstone, to be on our Board.  See our press release (and the accompanying video) for more details and his perspective.

I’d also like to thank the hard work and dedication of the entire group of Responders that got us to where we are today. As I recently told the security operations software team, I’m certainly psyched to get the endorsement and funding from three world-class investors. Even more so, I look forward to using the funds to work with ClearSky to further innovate, provide service to customers, and expand our reach to help more security operations teams take the fight to the adversaries…and save money while they do it.  It’s time for security operations to bust through the status quo and jump to a new curve of productivity, capability and job satisfaction.

It’s time for the next phase of Respond Software.

Watch and Read More:


Video:  Jay Leek shares his reasons for investing in Respond Software (on the way to the airport in an Uber)!

Press Release:  Respond Software Raises $20 Million to Meet Growing Demand for Robotic Decision Automation in Security Operations

 

Managing Security Events: Not as Difficult as Finding Magic Stones

These days, finding a qualified and available Security Analyst seems more difficult than locating an Infinity Stone in the Marvel Universe.  Like Thanos, I’m sure many CISOs wish they could snap their fingers, but instead of destroying half the population, they’d conjure an army of security professionals to manage the complex threat landscape.

Due to the massive gap in available security skill sets and qualified people, many organizations are outsourcing at least a portion of their operations to Managed Security Service Providers (MSSPs).  This seems to be a reasonable alternative, but just like in-house security operations, MSSPs have their share of challenges. In this blog, we will discuss those challenges to help you determine if an MSSP is the right security operations model for your organization.  Then, if you decide to keep security operations in-house, we’ll share a better alternative that doesn’t involve voyaging through the galaxy hunting for magical stones.

source: helpnetsecurity.com


6 considerations when working with or hiring an MSSP  

 

  1. Get ready for a long ramp: According to Gartner, onboarding time for an MSSP is 1 to 4 months.*  This long ramp-up means organizations thinking about hiring an MSSP must be patient.  Just remember: bad actors are not so tolerant, and will not wait for you to finish onboarding with your MSSP before they attack.

  2. Typical outsourcing issues:  MSSPs have many customers, therefore they lack intimate knowledge of a single customer’s network or infrastructure. This makes it extremely difficult to perform effective analysis of that customer’s unique security configuration and requirements.

  3. Take a number:  Like any organization, MSSPs have resource constraints. When the largest incidents hit or volumes peak, MSSPs will typically devote their resources to the larger customers, who tend to pay the most.

  4. We’ve got you covered—not so much:  Due to the high volume of alerts they are trying to manage, MSSPs will usually tune down sensors.  That means the MSSP’s ability to identify an attack will degrade.

  5. Law of diminishing returns:  Just like any organization, MSSPs face high analyst turnover and resource shortages.  When an analyst leaves the MSSP, customers suffer, as they are paying the same price for lower quality results.  Additionally, the MSSP must refocus its attention on hiring new talent from an already dwindling pool of candidates, adversely impacting the current level of service that the customer receives.  This problem often becomes worse over time.
  6. Cookie cutter solutions: MSSPs have an uncustomizable delivery model.  In other words, the MSSP model is optimized for their business, not for the requirements of the customer.   

 

These challenges are merely a sampling of a much larger set of difficulties that service providers face, demonstrating that the MSSP alternative may not be the best for every organization.  When moving to an MSSP or using one, carefully think through all of the challenges listed above, as these will impact the amount of time you need to investigate false positives and may cause you to miss important attacks or threats.  Of course, you might decide to keep your security operations in-house, but you will likely face many of the same challenges as the MSSP.

And finally, remember there is a third alternative that doesn’t require you to search the galaxy for that elusive security expert.  Robotic Decision Automation software for security operations will automate event analysis, management, and triage.  The Respond Analyst delivers these capabilities, performing just like an expert analyst, but at machine speed and with 100% consistency.

If addressing the skills gap shortage with software seems like an alternative for you, please visit the following pages for more information:

*Gartner, “How to Work with an MSSP to Improve Security,” January 30, 2018

Fight Fire with Fire:
How Security Automation Can Close the Vulnerability Gap Facing Industrial Operations

“Be stirring as the time; be fire with fire; threaten the threatener and outface the brow of bragging horror.”
William Shakespeare 1592

…or as Metallica once sang in 1984, Fight Fire with Fire!

There is a fire alight in our cyber world.  Threats are pervasive, the tech landscape is constantly changing, and now industrial companies are increasingly vulnerable with the advent of automation within their operations.  Last week a ransomware attack halted operations at Norsk Hydro ASA in both the U.S. and Europe, and just days later two U.S. chemical companies were also affected by a network security incident.

 

As manufacturing processes become increasingly complex and spread out around the world,
more companies will have to navigate the risk of disruption from cyber attacks. 

Bloomberg Cybersecurity

 

Industrial control systems (ICS), in particular, were not designed with cybersecurity in mind. Historically, they weren’t even connected to the internet or the IT network, but this is no longer the case. Automation and connectivity are essential for today’s industrial companies to thrive but this has also made them more vulnerable to attacks.

 

The more automation you introduce into your systems, the more you need to protect them. Along with other industries, you may potentially start to see a much stronger emphasis on cybersecurity.
Bloomberg Cybersecurity

 

Adding to the problem, a shortage of trained security staff to monitor the large volumes of data generated across the network inevitably makes a plant’s operations even more vulnerable.

Fight the vulnerabilities that ICS automation causes with security automation

To close the vulnerability gap, industrial companies can fight fire with fire by embracing security automation. Extending automation tools beyond the industrial operations and into a plant’s security operations center can reduce the risk of a cyber attack. Security automation arms security teams with information to quickly identify threats so human analysts can act before a potential threat causes undue harm.

At Respond Software, we’re helping companies realize the power of automation with a new category of software called Robotic Decision Automation (RDA) for security operations. By augmenting teams with a ‘virtual analyst’, called the Respond Analyst, security teams can quickly automate frontline security operations (monitoring and triage).  Only the incidents with the highest probability of being malicious and actionable are escalated to human analysts for further investigation and response.

We believe that by combining human expertise with decision automation, industrial organizations can reduce their vulnerability risk profile.  The Respond Analyst can do the heavy lifting to cover the deluge of data generated each day, and human analysts can focus on the creative work of remediating and containing threats faster.

There’s no question that industrial companies will continue to be targeted by bad actors. But now, with front-line security automation, these organizations can also proactively safeguard operations against threats.

Be fire with fire.
W.S.

Read more:
3 Trends That Make Automation a Must for Securing Industrial Control Systems

New Paradigm for SecOps
Atones for the Sins of my Past

I’m an advocate for SIEMs, and have been a staunch believer in correlation rules for the past 15 years. So why did I decide to take the leap and join the Respond Software team?

The simplest explanation is that I joined to atone for the sins of my past. In the words of the great philosopher, Inigo Montoya, “Let me explain…No, there is too much. Let me sum up.”

Coming to terms with the reality of SIEMs

For 15 years I’ve been shouting from the rooftops, “SIEMs will solve all your Security Operations challenges!”  But all my proclamations came into question as soon as I learned about the capabilities of the Respond Analyst.

I’ve held a few different roles during this time, including Sales Engineer, Solutions Architect, and Security Operations Specialist. All of these were pre-sales roles, all bound together by one thing—SIEM expertise. I’ve worked with SIEM since it began and I’ve seen it evolve over the years, even working as part of a team that built a Risk Correlation Engine at OpenService/LogMatrix. Make no mistake about it, I’m still a big fan of SIEM and what it can do for an organization. It doesn’t matter whether you are using a commercial or open source solution, or even built your own, SIEMs still provide a lot of value. For years I helped customers gain visibility into their logs and events, worked with them to meet compliance requirements, and pass their audits with ease. I developed use cases, wrote correlation rules, and firmly believed that every time a correlation rule fired, it would be a true incident worthy of escalation and remediation.

Funny thing about that correlation part, it never really worked out. It became a vicious cycle of tuning and tweaking, filtering, and excluding to reduce the number of firings. It didn’t matter the approach or the technique, the cycle never ended and still goes on today. Organizations used to have one or two people that were responsible for the SIEM, but it wasn’t usually their full-time job. Now we have analysts, administrators, incident responders, and content teams, and SIEM is just one of the tools these folks use within the SOC. In order to solve the challenges of SIEM, we have added bodies and layered other solutions on top of it, an approach that is truly unsustainable for all but the largest of enterprises.

In the back of my mind, I knew there had to be a better way to find the needle in a pile of needles. Eventually, I learned about this company called Respond Software, founded by people like me, who have seen the same challenges, committed the same sins, and who eventually found a better way. I hit their website, read every blog, watched numerous videos, and clicked every available link, learning as much as I could about the company and their solution.

The daily grind of a security analyst: Consoles, false positives, data collection—repeat

I think one of the most interesting things I read on our website was the 2017 Cyentia Institute’s Voice of the Analyst Survey. I can’t say I was surprised, but it turns out that analysts spend most of their time monitoring, staring at a console and waiting for something to happen. It’s no surprise that they ranked it as one of their least favorite activities. It reminded me of one of my customers, who had a small SOC with a single analyst on each shift. The analyst assigned to the morning shift found it mind-numbing to stare at a console for most of the day. To make it a little more bearable, he would start each day by clearing every alert, without fail. When I asked why, he said the alerts were always deemed false positives by the IR team, and no matter how much tuning was done, they remained false positives. At least they were actually using their SIEM for monitoring. I’ve seen multiple companies use their SIEM as an expensive (financially and operationally) log collector, using it only to search logs when an incident was detected through other channels.

My Atonement: Filling the SIEM gaps and helping overworked security analysts

Everything I’ve seen over the years, combined with what I learned about our mission here, made the decision to join Respond Software an easy one. Imagine a world where you don’t have to write rules or stare at consoles all day long. No more guessing what is actionable or ruling out hundreds of false positives. Respond Software has broken that cycle with software that takes the best of human judgment at scale and consistent analysis, building upon facts to make sound decisions. The Respond Analyst works 24×7, never takes a coffee break, never goes on vacation, and allows your security team to do what they do best—respond to incidents instead of chasing false positives.

I’ve seen firsthand the limitations of the traditional methods of detecting incidents, and the impact it has on security operations and the business as a whole. I’ve also seen how the Respond Analyst brings real value to overwhelmed teams, ending the constant struggle of trying to find the one true incident in a sea of alerts.

If you would like to talk to our team of experts and learn more about how you can integrate Robotic Decision Automation into your security infrastructure, contact us: tellmemore@respond-software.com

The Science of Detection Part 3: A Closer Look at the “Detectors” You Rely on When Hunting for Evidence

This is the third blog in my science of detection series. In the previous parts, we examined the key elements of a data source and considered integrated reasoning. Today, I’ll be taking a closer look at the signal quality we get from the various “detectors” that we use to find malicious activities in our environment.

Be sure to check back in the coming weeks to see the next blogs in this series. In part four, I’ll be talking about architectural approaches to detection, and looking at how we collect and aggregate information so that it’s useful to our security programs. I’ll be making some predictions about the progress we’ll see in this area in the future, because I think the old way of doing things has reached a dead end.

Security analysts have many different information sources—“detectors”—to consider when making decisions about whether or not they see malicious activity taking place in their environment. Each detector has a purpose, and each contributes some degree of differential value to the ultimate decision, but only a few of them were specifically designed for security applications. That complicates things.

What’s interesting about these information sources is that each must be interpreted and analyzed in a different way in order to assemble enough information to get a truly comprehensive picture of what’s taking place in the environment. They also operate at different levels of abstraction (for example, signatures are much more abstract than raw data), which means that a key task in analyzing any attack is assembling a corroborative summary using as many diverse information sources as possible.

Assembling such a summary involves multidimensional analysis. It’s tremendously important that we bring the latest advances in analytical reasoning and mathematical and scientific research to bear on our security programs and how we leverage information within them.

With this in mind, let’s talk about the information sources we use, explain their most common applications, and put them into context.

Raw Data

Network packets are all the communications that transit your network. Very often they’re encrypted. The highest-end security programs might include complete packet capture, but that gets very expensive quickly. Packet capture offers the highest fidelity but most diluted set of information for incident detection. A short-term packet capture solution (one that holds data for 30-60 days) often ends up being of little use forensically, because incidents are most often detected later in their lifecycle. The next-best thing to complete packet capture is probably a combination of NetFlow and network security sensors.

Logs, at their most basic, are just records of system or user activity. Some of them are relevant for security detection purposes, but most are not. Historically speaking, logs were usually written to manage application and system problems, and they tend to be highly inconsistent in their content, their format, and their usefulness for security.

When a specific security control is violated, or an attempt to violate it is made, a log event is generated. There’s always some chance that the activity is malicious in nature. How big is this chance? Well, it’s different for every single log message and log source. This makes the aggregation and timeline of logs more important than any single log event when it comes to inferring or understanding malicious activity.

This is why we use rules. Rules help us interpret and contextualize logs, and thus slightly improve their utility for detection purposes.

The problem is: how many failed logins does it take before you know you have a hijacked account instead of a forgetful user? How different is the number of failed logins it would take to raise our suspicion on a Monday morning from what it’d take on a Wednesday afternoon? Sometimes we do see security avoidance behaviors in logs (for instance, clearing them), but user mistakes can and do explain these things most often, and it’s hard to know when to dig in.
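One hedged way to answer that question is to score failed-login counts against a learned per-time-bucket baseline rather than a fixed threshold. The baselines and cutoff below are invented for illustration, not calibrated values.

```python
import math

# Sketch: score failed-login counts against a per-(day, hour) baseline.
# Baseline values below are invented (mean failed logins per hour).

BASELINE = {
    ("Mon", 9):  4.0,   # Monday morning: forgetful users after the weekend
    ("Wed", 15): 0.5,   # Wednesday afternoon: failures are rare
}

def poisson_tail(k: int, lam: float) -> float:
    """P(X >= k) for X ~ Poisson(lam): chance of seeing k or more
    failed logins purely under the baseline."""
    p_less = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - p_less

def suspicious(day: str, hour: int, failures: int, alpha: float = 0.001) -> bool:
    lam = BASELINE.get((day, hour), 1.0)
    return poisson_tail(failures, lam) < alpha

# Ten failures reads very differently depending on the baseline:
print(suspicious("Mon", 9, 10))   # plausible noise on Monday morning
print(suspicious("Wed", 15, 10))  # wildly unlikely on Wednesday afternoon
```

The same count of ten failures is unremarkable against a Monday-morning baseline of four per hour, but far outside chance against a Wednesday-afternoon baseline of one every two hours, which is the Monday-versus-Wednesday distinction the paragraph above describes.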

Meta-Data

Network flow data show the connection details and the amount of data transferred between hosts on your network (and out to the Internet). They’re like the network equivalent of monitoring who’s calling whose cell phone within a criminal syndicate. Network graph analysis and visualization are useful approaches to understanding NetFlow data.
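As a minimal sketch of that “who’s calling whom” analysis, each flow record can be treated as an edge in a directed graph, which makes fan-out trivial to compute. The flow records below are invented for illustration.

```python
from collections import defaultdict

# Sketch: NetFlow records as (src, dst, bytes) edges in a directed graph.
flows = [
    ("10.0.0.5", "10.0.0.9", 1200),
    ("10.0.0.5", "10.0.0.9", 900),
    ("10.0.0.7", "203.0.113.1", 150),
    ("10.0.0.7", "203.0.113.2", 150),
    ("10.0.0.7", "203.0.113.3", 150),
]

graph = defaultdict(set)    # src -> set of distinct destinations
volume = defaultdict(int)   # (src, dst) -> total bytes transferred

for src, dst, nbytes in flows:
    graph[src].add(dst)
    volume[(src, dst)] += nbytes

# High fan-out with small, uniform transfers can suggest scanning or
# beaconing; one chatty internal pair is usually business as usual.
for src, dsts in sorted(graph.items()):
    print(f"{src} -> {len(dsts)} distinct destinations")
```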

Indicators (of malicious or suspicious activity)

Signatures of known attacks and other indicators of malicious code may be detected through sensors when monitoring network communications. These are short, hexadecimal character sequences known to be contained within attack payloads. To ensure a match when an attack occurs, signatures are written loosely; even those crafted around a highly specific sequence of bytes rarely account for non-malicious occurrences of the same sequence in a data stream, and so they produce a large number of false alerts. There are currently over 57,000 IDS signatures in existence: only a tiny subset of these are relevant at any given moment in time. This produces a high volume of false or nuanced alerts, further obscuring valuable detection signals. Signatures benefit from being analyzed by machines rather than humans because of the depth of analysis needed to separate out the relevant information. It’s also very important to consider where and how you place sensors because their value is directly related to their visibility.

Threat intelligence is another indicator. It also suffers from a volume problem, one almost as bad as that of network security sensors. Threat intelligence lists err on the side of never omitting a potentially malicious indicator and thus produce a high volume of alerts, which are hard for humans to analyze. Threat intelligence includes lists of IP addresses, domains, and known bad file hashes. I consider known good file hashes to be valuable intelligence, too. Once again, combinations of threat indicators offer much higher fidelity as evidence of real threat activity.

Heuristics are behavioral indicators. For example, an alert might be generated when a piece of software takes an action that’s not normal for that software, such as spawning an additional process outside of user-approved space. Heuristics are a library of past incident observations, and as such, are completely historically focused. Although it’s valuable not to fall for the same thing twice, these tend to have a short lifespan when it comes to high accuracy.

First Order Processing

Rules follow a predictable structure (Activity — Threshold — Context — Action) to identify known suspicious activity. Known suspicious activities are described using Boolean logic or nested searches, a threshold is set, and if this is reached or crossed, a notification is sent to a monitoring channel for human evaluation.

At the most atomic level, there are fewer than 130 rules in regular use. In fact, in most organizations fewer than 45 are implemented. Rules are most valuable when they’re used to enforce logic that’s specific to your company’s unique business challenges, such as possible fraud scenarios.
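The Activity / Threshold / Context / Action structure described above can be sketched in a few lines. The rule, events, and field names here are hypothetical illustrations, not any real SIEM’s rule syntax.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Sketch of the Activity / Threshold / Context / Action rule shape.
@dataclass
class Rule:
    activity: Callable[[dict], bool]   # does this event match the activity?
    threshold: int                     # how many matches before we act
    context: Callable[[dict], bool]    # is the entity one we care about?
    action: Callable[[int], str]       # notification to emit when crossed
    count: int = 0

    def feed(self, event: dict) -> Optional[str]:
        if self.activity(event) and self.context(event):
            self.count += 1
            if self.count >= self.threshold:
                return self.action(self.count)
        return None

# Hypothetical rule: 3+ failed logins against any server-class host.
rule = Rule(
    activity=lambda e: e["type"] == "failed_login",
    threshold=3,
    context=lambda e: e["host"].startswith("srv-"),
    action=lambda n: f"notify: {n} failed logins on a server",
)

events = [{"type": "failed_login", "host": "srv-db1"}] * 3
alerts = []
for event in events:
    result = rule.feed(event)
    if result:
        alerts.append(result)
print(alerts)
```

The third matching event crosses the threshold and sends a single notification to the monitoring channel, which is exactly the fire-and-notify pattern that then demands human evaluation.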

Context—additional information about the entities being investigated and the relationship between them—can help you answer questions about the potential impact of attacks in progress and your vulnerability to them. It’s a key component in initial processing.

Statistics and metrics are important in guiding your operations: self-reflection and dispassionate measurement are critical to the effective application of detection science. You can measure attributes like coverage and performance, or calculate cost- or time-per-detection by data source and use this information to guide you in deploying your sensor architecture. Statistical analysis can be a powerful tool for uncovering attackers’ latest stealth techniques. Any activity that’s too close to the center of a normal bell curve might be hiding something in the noise—says the ever-suspicious security investigator.

Second Order Processing

Behaviors, patterns, and baselines are very commonly used to measure and score users’ stealthy or suspicious behaviors. The goal is to identify the users who either pose an insider threat or whose machines have been compromised by malicious code. Maintaining a library of first-order information that you’ve collected over time and conducting periodic calculations against it can help you pinpoint things that might be suspicious. “Repeat offender” is a catchphrase for a reason.
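A minimal sketch of that “repeat offender” idea, with invented timestamps and weights: give each suspicious observation a score that decays with a half-life, so persistence over weeks stands out over a single bad day.

```python
import math

# Sketch: a "repeat offender" score built from first-order observations.
# Each observation carries a weight; old observations decay with a
# half-life. Timestamps and weights below are invented.

HALF_LIFE_DAYS = 7.0

def score(observations, now):
    """observations: list of (day_index, weight) pairs; now: current day index."""
    decay = math.log(2) / HALF_LIFE_DAYS
    return sum(w * math.exp(-decay * (now - t)) for t, w in observations)

one_bad_day = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]         # 3 hits, 30 days ago
repeat_offender = [(10.0, 1.0), (20.0, 1.0), (29.0, 1.0)]  # spread over a month

print(f"one bad day:     {score(one_bad_day, now=30.0):.3f}")
print(f"repeat offender: {score(repeat_offender, now=30.0):.3f}")
```

With a seven-day half-life, three hits on a single day a month ago have mostly faded, while three hits spread across the month keep the score high: periodic recalculation against the stored library is what surfaces the pattern.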

Nth Order Processing

Anomalies, clusters, affinity groups, and network graphs can reveal some very nuanced attacks. Running advanced algorithms across large amounts of data can yield interesting results.

A common fallacy is that anomalies are more likely to be malicious. That’s simply not true. The way our networks are interconnected today makes for all sorts of anomalies in all layers of the technology stack. These provide investigators the same sort of analytical puzzle as network security signatures do.

Some of these algorithms have well-understood security applications. One example is clustering: when you cluster IDS data, what you find most often are false positives, because they occur in highly predictable ways. When a particular signature generates alerts for what’s actually regular business traffic, the same alert will be triggered every time that business process takes place. It thus produces a very obvious cluster that you can exclude when looking for malicious activity.
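A minimal sketch of that exclusion idea follows; the alert tuples are invented, and a real deployment would use a proper clustering algorithm rather than exact grouping.

```python
from collections import Counter

# Sketch: recurring alerts with the same (signature, source, destination)
# form obvious clusters, and the biggest clusters are usually regular
# business traffic misread by a loose signature.

alerts = (
    [("SQL_INJECTION_ATTEMPT", "10.1.1.4", "10.2.0.10")] * 500   # nightly batch job
    + [("SQL_INJECTION_ATTEMPT", "198.51.100.7", "10.2.0.10")] * 2
    + [("SMB_LATERAL_MOVEMENT", "10.1.1.9", "10.1.1.22")] * 1
)

clusters = Counter(alerts)

# Exclude the huge, highly regular cluster; look hard at the rest.
NOISE_THRESHOLD = 100
interesting = {k: n for k, n in clusters.items() if n < NOISE_THRESHOLD}
for (sig, src, dst), n in interesting.items():
    print(f"{n:>3}x {sig} {src} -> {dst}")
```

Excluding the 500-alert cluster leaves three alerts worth a human’s attention, which is the noise-removal effect the next paragraph describes.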

The more information known to be unimportant that we can remove, the more clearly we can see what else is going on. This is where analytical detection comes into its own. Very often, we run algorithms on security data simply to see if a subject matter expert can interpret the outcome. Possessing both domain expertise and knowledge of data science is critical if you want to understand what advanced algorithms are telling you.

Visualization and hunting are nth order processing tasks. Using tools that allow you to pivot and display related datasets is the ultimate form of security threat hunting, and it’s also the most fun. You can derive some detection value from considering any layer of detectors through the lens of a visual tool.

Do you think I’m about to tell you there’s another layer called “artificial intelligence”? If so, you’re wrong.

The next layer is simply making a decision: has something malicious occurred? The more information we have to feed into the decision-making process, the more effective and deeper the decision will be. All of the information sources listed above have something of value to contribute.

But you have to ask yourself: how many of these factors can analysts consider in real time as they watch events streaming across a console?

If you’d like to make it possible for your security operations team to incorporate input from a greater variety of detectors and information sources into their decision-making processes and workflows, consider adding the Respond Analyst to your team. Built to integrate with a broad array of today’s most popular sensors, platforms and solutions, the Respond Analyst brings specialized threat intelligence and detailed local contextual information to bear on every decision it makes about which events to escalate. Quite simply, it’ll give your ability to interpret and analyze detection data a boost—and allow your analysts to consider a far wider variety of sources.

To learn more about how the Respond Analyst can help your business become more thorough and derive greater insight from the detectors in your environment, contact us to schedule a demo today.

Controlling Cybersecurity Costs with Decision Automation Software

This is the third and final blog in my series on the 3C’s (Coverage, Context, and Cost) required for creating and maintaining a sustainable security infrastructure. In the first part, I reviewed the steps you need to take in order to ensure adequate visibility into your IT environment—determining how much basic sensor coverage is necessary, and how many additional data sources you’ll want to monitor to maximize your chances of detecting attackers. In part 2, I took a deep dive into alert context, discussing the types of additional information analysts must consider when making decisions about whether or not to act upon any particular alert.

Today’s topic is cost. How can resource-constrained security teams see to it that their analysts’ limited time and attention are spent in the ways that are most likely to generate value and results?

No enterprise-scale security program can operate without human monitoring. Simply put, your organization’s security team members—along with the highly specialized knowledge and experience they have—are your most valuable resource. But hiring and retaining top talent isn’t cheap. Nor is building the physical infrastructure to house them.

How can you limit your business’s exposure to IT security risks cost-effectively? Would it make sense to establish a dedicated Security Operations Center (SOC) in-house? Can these capabilities be outsourced successfully? Is there an automated solution teams can use without adding headcount? Let’s take a look at the options.

 

The Internal SOC Model

 

Large organizations with highly complex infrastructures requiring continuous, centralized monitoring—especially of highly customized technologies—often feel they have no other option. They must build and run their own dedicated SOC. The costs of creating and maintaining such a facility vary wildly, depending on the coverage, detection and triage capabilities, software and hardware, and physical or virtual facilities you choose.

At a bare minimum, you’d need a team of 12 analysts to maintain 24/7 coverage. On average, a full-time information security analyst earns an annual salary of $95,510, according to the US Bureau of Labor Statistics. And many SOC analysts don’t stay in their jobs for long. The average retention period for a junior analyst is a mere 12 to 18 months. This means that you’ll need to budget for recurring recruitment and training costs—no small expense in a field known for a growing skills gap and near-zero unemployment.

To learn more about how to retain SOC analysts and keep them motivated to perform at their best, read our Voice of the Analyst Study.

Naturally, the costs of ownership for a dedicated in-house SOC extend beyond salaries, benefits and other personnel expenses. They also extend beyond the initial infrastructure build and purchase costs. Monitoring software is delivered as a service or must be regularly upgraded, threat intelligence feeds are subscription-based, and a SIM/SIEM requires maintenance and tuning.

To maintain an in-house SOC, the total recurring costs are estimated to be anywhere from $1.42 million to $6.25 million.
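
As a rough back-of-the-envelope sketch of how those recurring costs add up: the headcount and salary figures come from the estimates above, while the overhead multiplier, turnover rate, replacement cost, and tooling budget are illustrative assumptions, not quoted figures.

```python
# Back-of-the-envelope recurring cost model for an in-house SOC.
# Headcount and salary are from the figures cited above; the other
# constants are illustrative assumptions.

ANALYSTS_FOR_24X7 = 12          # bare-minimum team for around-the-clock coverage
AVG_ANALYST_SALARY = 95_510     # US BLS average for an information security analyst
BENEFITS_OVERHEAD = 0.30        # assumed benefits/payroll overhead multiplier
TURNOVER_RATE = 0.75            # assumed: ~12-18 month retention for junior analysts
HIRING_TRAINING_COST = 20_000   # assumed cost to recruit and train one replacement
TOOLING_AND_FEEDS = 500_000     # assumed: SIEM tuning, threat intel feeds, software

def annual_soc_cost():
    payroll = ANALYSTS_FOR_24X7 * AVG_ANALYST_SALARY * (1 + BENEFITS_OVERHEAD)
    churn = ANALYSTS_FOR_24X7 * TURNOVER_RATE * HIRING_TRAINING_COST
    return payroll + churn + TOOLING_AND_FEEDS

print(f"${annual_soc_cost():,.0f} per year")
```

Even with these conservative assumptions, the model lands around $2.2 million a year—squarely inside the $1.42 million to $6.25 million range, before accounting for facilities or hardware refresh.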

Hybrid Models

In an attempt to obtain some of the benefits of a dedicated SOC without incurring costs that are within reach only for the largest enterprises—or those with the deepest pockets—numerous businesses today are turning to hybrid or shared SOC operational models. With this setup, your SOC is monitored by dedicated or semi-dedicated employees on a part-time basis. It’s common for internal staff to oversee SOC operations 8 hours a day, 5 days per week, with responsibilities offloaded to an external provider at other times.

Of course, cybercrime doesn’t sleep, and attackers aren’t bound by time zones, nations or continents. If anything, they’re more likely to attempt brute-force style attacks outside of business hours when any alerts generated are less likely to attract notice. Handoff time—when monitoring responsibility shifts from internal staff to the outside provider—is one of your network’s most vulnerable moments.

Another option is to have the external service provider assume a subset of responsibilities at all times, while others are handled in-house. When monitoring that’s not specific to your organization is in place, however, the false positive rate is likely to be higher. Triaging alerts generated by multiple sources with different decision-making criteria can become complicated and confusing, too.

Although the costs associated with the hybrid operational model are lower than those of a dedicated in-house SOC, they remain considerable. Cost estimates vary, but long-term investments in hardware, infrastructure, and talent are still significant.

You can find more information about the costs and benefits associated with different SOC operational models in the recent Gartner report, “Selecting the Right SOC for Your Organization.” Read it here.

The MSSP Outsourced Model

Increasingly, Managed Security Service Providers (MSSPs) are offering fully outsourced security monitoring as an alternative. In this resource-sharing approach, small to mid-sized companies are promised access to enterprise-grade SOC capabilities, but human analysts and costs (for both infrastructure and maintenance) are spread across the MSSP’s customer base.

This service model has inherent limitations. Many MSSPs struggle with the same challenges faced by large enterprises that have decided to build an internal SOC. Highly skilled analysts are hard to find, salaries can be prohibitively high, and once on the job, they must monitor more tickets and events than they have time for. This problem is particularly acute for MSSPs, whose employees must split their attention between multiple client companies’ systems.

MSSPs cannot serve every type of business. If yours needs a significant amount of customization or control over your security monitoring processes, perhaps due to regulatory compliance requirements, an MSSP may not be able to accommodate that need.

MSSPs usually follow standardized workflows and deploy the same platforms and solutions for multiple customers. Because the MSSP decides which software and hardware they support, it’s possible that their selections will be incompatible with your current systems, requiring you to take a “rip and replace” approach to tools for which you’ve already paid.

Giving an outside provider access to your most sensitive data and systems requires a great deal of trust. No matter how strong your relationship with your MSSP, their employees will never understand your business culture or industry as deeply as your own staff. And they won’t be able to amass as much contextual information about each alert generated by your system as an in-house expert could, either.

Ultimately, MSSPs struggle to live up to their promise of providing robust security services at an affordable price. Costs are traditionally tied to the number of devices in a specific service offering (endpoints for anti-malware or network IDS sensors for network security monitoring). The costs of full security monitoring capabilities can quickly add up. Clients often end up feeling “nickel and dimed” when they need additional incremental services or customization.

Now, there’s another alternative.

Enter Decision Automation Software

Gaining in popularity among security teams is “decision automation” software—software that automates the monitoring and triage process with astonishing accuracy. Decision automation software can monitor 100% of your sensor data for about a third of the cost of outsourcing to an MSSP.

With human analysts in short supply, and their salaries remaining the number one driver of IT security costs, you can rely on a more cost-effective automated solution to perform continuous network monitoring and alert analysis tasks. Decision automation software is able to attend to all your sensor data—for 100% coverage—without needing to take breaks and without bias.

Decision automation can readily cover a broad array of network and endpoint sensors, along with augmented data sources. It can collect rich and relevant contextual information for each alert generated by the system, ensuring that analysis is thorough as well as consistent. And it can do so for a small fraction of the cost of even a junior-level SOC analyst. By utilizing decision automation software, you free up your most valuable resource—humans—to use their intelligence and insights creatively.

Adding decision automation to your security operations team will enable human analysts to play the role of full-fledged detectives, rather than small-time “mall cops.” It lets them keep their focus where it belongs—on the bigger picture—and train their attention on high-value tasks, instead of monitoring a screen all day.

If you’d like to see for yourself how low-cost, high-value decision automation software can help protect your organization against cyber attacks, request a demo of the Respond Analyst today.

2019 Security Predictions: Real or Bogus? Respond Experts Weigh In

Where is cybersecurity heading and what should security teams focus on in 2019?

We thought it would be helpful to examine the most common cybersecurity predictions for 2019. Business press, trade publications and of course vendors had a lot to say about emerging technology trends, but what predictions or trends will matter most? More importantly, how can security teams and CISOs turn these predictions into advantages for their business?

I sat down with Chris Calvert, an industry expert who often says: “If you have a problem with how today’s SOCs work, it’s partially my fault and I’m working to solve that issue!” With over 30 years of experience in information security, Chris has worked for the NSA, the DOD Joint Staff and held leadership positions in both large and small companies, including IBM and Hewlett Packard Enterprise. He has designed, built and managed security operations centers and incident response teams for eight of the global Fortune 50.

During our conversation, we discuss questions like:

  • Will we see an increase in crime, espionage and sabotage by rogue nation-states?
  • How will malware sophistication change how we protect our networks?
  • Will utilities become a primary target for ransomware attacks?
  • A new type of fileless malware will emerge, but what is it? (think worms)
  • And finally, will cybersecurity vendors deliver on the true promise of A.I.?

You can listen to his expert thoughts and opinions on the podcast here!

Want to be better prepared for 2019?

The Respond Analyst is trained as an expert cybersecurity analyst, combining human reasoning with machine power to make complex decisions with 100% consistency. As an automated cybersecurity analyst, it processes millions of alerts as they stream, allowing your team to focus on higher-priority tasks like threat hunting and incident response.

Here’s some other useful information:

The Science of Detection Part 2: The Role of Integrated Reasoning in Security Analysis Software

Today’s blog is part two in my science of detection series, and we’ll look at how integrated reasoning in security analysis software leads to better decisions. Be sure to check back in the coming weeks to see the next blogs in our series. In part three, I’ll be taking an in-depth look at the signal quality of detectors, such as signatures, anomalies, behaviors, and logs.

If you’ve been reading our blogs lately, you’ve seen the term “integrated reasoning” used before, so it’s time to give you a deeper explanation of what it means. Integrated reasoning combines multiple sensors and sensor types for analysis and better detection. Before making a security decision, you must take into account a large number of different factors simultaneously.

What Is Integrated Reasoning?

Interestingly, when we started using the term, Julie from our marketing team Googled it and pointed out that it was the name of a new test section introduced in the Graduate Management Admission Test (GMAT) in 2012. What the GMAT section is designed to test in potential MBA candidates is exactly what we mean when we refer to integrated reasoning. It consists of the following skills:

  • Two-part analysis: The ability to identify multiple answers as most correct.
  • Multi-source reasoning: The ability to reason from multiple sources and types of information.
  • Graphic interpretation: The ability to interpret statistical distributions and other graphical information.
  • Table analysis: The ability to interpret tabular information such as patterns or historical data and to understand how useful distinct information is to a given decision.

All of these skills provide a combination of perspectives that allow you to reason and reach a well-thought-out and accurate conclusion. The same reasons we evaluate potential MBA candidates against this standard are why we would design security analysis software, or if you will, a “virtual” security analyst, to that same standard.

What is an MBA graduate but a decision maker? Fortunately, we are training our future business leaders on integrated reasoning skills, but when the number of factors to be considered increases, humans get worse at making decisions — especially when they need to be made rapidly. Whether from lack of attention, lack of time, bias or a myriad of other reasons, people don’t make rational decisions most of the time.

However, when you’re reasoning and using all of the available information in a systematic manner, you have a much greater chance of identifying the best answer. To put this within a security analysis frame of reference, let’s consider some of the information available to us to make effective security decisions.

What Information Should We Consider?

The most effective security analysis software uses anything observable within the environment to reduce uncertainty about whether any one event should be investigated.

To achieve integrated reasoning, the software should utilize a combination of detectors, including:

  • Signature-based alerts
  • Detection analytics
  • Behaviors
  • Patterns
  • History
  • Threat intelligence
  • Additional contextual information

In order to make the right decisions, security analysis software should take into account three important factors: sensors, perspective and context. When you combine different forms of security telemetry, like network security sensors and host-based sensors, you have a greater chance of detecting maliciousness. Then, if you deliberately overlap that diverse suite of sensors, you now have a form of logical triangulation. Then add context, and you can understand the importance of each alert. Boom, a good decision!
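The fusion step above can be sketched in a few lines. This is a minimal illustration, not the Respond Analyst’s actual algorithm: the sensor names, probability estimates, weighting, and the noisy-OR combination rule are all assumptions chosen to show how overlapping weak signals triangulate into one stronger decision.

```python
# Minimal sketch of integrated reasoning: fuse independent pieces of
# sensor evidence about one event, then weight the result by asset context.
# Sensor names, scores, and weights are illustrative assumptions.

def combine_evidence(probs):
    """Fuse independent probability estimates that an event is malicious,
    using the noisy-OR rule: P = 1 - prod(1 - p_i)."""
    p_benign = 1.0
    for p in probs:
        p_benign *= (1.0 - p)
    return 1.0 - p_benign

# Overlapping sensors each give only a weak signal on the same internal host:
sensor_scores = {
    "network_ids_signature": 0.30,   # signature-based alert
    "endpoint_behavior": 0.40,       # behavioral detection analytic
    "threat_intel_match": 0.25,      # destination IP on a watchlist
}

p_malicious = combine_evidence(sensor_scores.values())

# Context scales priority without changing the evidence itself:
asset_criticality = 1.5  # e.g., a domain controller vs. a kiosk PC
priority = p_malicious * asset_criticality

print(f"fused probability: {p_malicious:.3f}, priority: {priority:.3f}")
```

Note the triangulation effect: any one sensor alone, at 0.25–0.40, might be dismissed as noise, but three overlapping perspectives on the same event fuse to roughly 0.69, which is plausibly above an escalation threshold.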

Like our theoretical MBA candidate, security analysts have to hold hundreds of relevant factors in their minds simultaneously and are charged with making a number of critical decisions every hour. A tall order for a mere mortal, indeed.

Imagine this: A user receives a phishing email, clicks on the link a week later and is infected by malware. The system’s anti-virus reports “cleaned” but found only 1 of 4 pieces of malware installed. The remaining malware communicates with a command-and-control server and is used as an internal waypoint for lateral exploration, very low and slow. This generates thousands of events over a period of weeks or months, all with varying levels of fidelity. More likely, this is the backstory that an incident responder would eventually assemble, potentially months or years after the fact, to explain a breach.
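Machines, unlike humans, have no trouble remembering weak signals from weeks ago. The sketch below shows one way software could thread that low-and-slow backstory together as it happens; the event stream, field names, scores, and threshold are all illustrative assumptions.

```python
# Sketch: threading weeks of low-fidelity events back to a single host.
# The events, fidelity scores, and escalation threshold are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    # (timestamp, internal host, fidelity of the individual alert, 0..1)
    (datetime(2019, 1, 2), "ws-042", 0.10),   # AV reports "cleaned" (partial)
    (datetime(2019, 1, 9), "ws-042", 0.15),   # odd outbound beacon
    (datetime(2019, 1, 23), "ws-042", 0.20),  # internal SMB scanning
    (datetime(2019, 1, 30), "ws-042", 0.15),  # beacon again
    (datetime(2019, 1, 5), "ws-007", 0.10),   # one-off false positive
]

WINDOW = timedelta(days=60)   # far longer than a human analyst's memory
now = datetime(2019, 2, 1)

cumulative = defaultdict(float)
for ts, host, fidelity in events:
    if now - ts <= WINDOW:
        cumulative[host] += fidelity   # weak signals accumulate per host

# Hosts whose accumulated evidence crosses a threshold get escalated.
suspects = [host for host, score in cumulative.items() if score >= 0.5]
print(suspects)  # ws-042 accumulates 0.60 and is escalated; ws-007 stays at 0.10
```

Each individual event here is too weak to escalate on its own, which is exactly why a human watching a console misses the pattern; accumulated over a long window and keyed by host, the same events tell one coherent story.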

Integrated reasoning is a must for making sound decisions about which security alerts to escalate for further examination. But with the amount of incoming data increasing by the minute, security teams are having a hard time keeping up. Your best bet is to choose security analysis software, like the Respond Analyst, that has built-in integrated reasoning capabilities to help with decision-making, so teams can focus on highly likely security incidents.

Curious to see how the Respond Analyst’s integrated reasoning capabilities can help your security team make better decisions? Request a demo today.

Join our growing community! Subscribe to our newsletter, the "First Responder Notebook," delivered straight to your inbox.