New Paradigm for SecOps Atones for the Sins of my Past

I’m an advocate for SIEMs, and have been a staunch believer in correlation rules for the past 15 years. So why did I decide to take the leap and join the Respond Software team?

The simplest explanation is that I joined to atone for the sins of my past. In the words of the great philosopher, Inigo Montoya, “Let me explain…No, there is too much. Let me sum up.”

Coming to terms with the reality of SIEMs

For 15 years I’ve been shouting from the rooftops, “SIEMs will solve all your Security Operations challenges!”  But all my proclamations came into question as soon as I learned about the capabilities of the Respond Analyst.

I’ve held a few different roles during this time, including Sales Engineer, Solutions Architect, and Security Operations Specialist. All of these were pre-sales roles, all bound together by one thing—SIEM expertise. I’ve worked with SIEM since it began and I’ve seen it evolve over the years, even working as part of a team that built a Risk Correlation Engine at OpenService/LogMatrix. Make no mistake about it, I’m still a big fan of SIEM and what it can do for an organization. Whether you use a commercial solution, an open-source one, or something you built yourself, a SIEM still provides a lot of value. For years I helped customers gain visibility into their logs and events, meet compliance requirements, and pass their audits with ease. I developed use cases, wrote correlation rules, and firmly believed that every time a correlation rule fired, it would be a true incident worthy of escalation and remediation.

Funny thing about that correlation part: it never really worked out. It became a vicious cycle of tuning and tweaking, filtering and excluding to reduce the number of firings. No matter the approach or technique, the cycle never ended, and it still goes on today. Organizations used to have one or two people responsible for the SIEM, but it wasn’t usually their full-time job. Now we have analysts, administrators, incident responders, and content teams, and SIEM is just one of the tools these folks use within the SOC. To solve the challenges of SIEM, we have added bodies and layered other solutions on top of it, an approach that is truly unsustainable for all but the largest enterprises.

In the back of my mind, I knew there had to be a better way to find the needle in a pile of needles. Eventually, I learned about this company called Respond Software, founded by people like me, who have seen the same challenges, committed the same sins, and who eventually found a better way. I hit their website, read every blog, watched numerous videos, and clicked every available link, learning as much as I could about the company and their solution.

The daily grind of a security analyst: Consoles, false positives, data collection—repeat

I think one of the most interesting things I read on our website was the 2017 Cyentia Institute’s Voice of the Analyst Survey. I can’t say I was surprised, but it turns out that analysts spend most of their time monitoring, staring at a console and waiting for something to happen. It’s no surprise that they ranked it as one of their least favorite activities. It reminded me of one of my customers, who had a small SOC with a single analyst for each shift. The analyst assigned to the morning shift found it mind-numbing to stare at a console for most of the day. To make it a little more exciting, he would start every day by clearing every alert, without fail. When I asked why, he said the IR team always deemed the alerts false positives, no matter how much tuning was done. At least they were actually using their SIEM for monitoring. I’ve seen multiple companies use their SIEM as an expensive (financially and operationally) log collector, using it only to search logs when an incident was detected through other channels.

My Atonement: Filling the SIEM gaps and helping overworked security analysts

Everything I’ve seen over the years, combined with what I learned about our mission here, made the decision to join Respond Software an easy one. Imagine a world where you don’t have to write rules or stare at consoles all day long. No more guessing what is actionable or ruling out hundreds of false positives. Respond Software has broken that cycle with software that applies the best of human judgment, at scale and with consistent analysis, building on facts to make sound decisions. The Respond Analyst works 24×7, never takes a coffee break, never goes on vacation, and lets your security team do what they do best—respond to incidents rather than chase false positives.

I’ve seen firsthand the limitations of the traditional methods of detecting incidents, and the impact they have on security operations and the business as a whole. I’ve also seen how the Respond Analyst brings real value to overwhelmed teams, ending the constant struggle of trying to find the one true incident in a sea of alerts.

If you would like to talk to our team of experts and learn more about how you can integrate Robotic Decision Automation into your security infrastructure, contact us: tellmemore@respond-software.com

The Science of Detection Part 3: A Closer Look at the “Detectors” You Rely on When Hunting for Evidence

This is the third blog in my science of detection series. In the previous parts, we examined the key elements of a data source and considered integrated reasoning. Today, I’ll be taking a closer look at the signal quality we get from the various “detectors” that we use to find malicious activities in our environment.

Be sure to check back in the coming weeks to see the next blogs in this series. In part four, I’ll be talking about architectural approaches to detection, and looking at how we collect and aggregate information so that it’s useful to our security programs. I’ll be making some predictions about the progress we’ll see in this area in the future, because I think the old way of doing things has reached a dead end.

Security analysts have many different information sources—“detectors”—to consider when making decisions about whether or not they see malicious activity taking place in their environment. Each detector has a purpose, and each contributes some degree of differential value to the ultimate decision, but only a few of them were specifically designed for security applications. That complicates things.

What’s interesting about these information sources is that each must be interpreted and analyzed in a different way in order to assemble enough information to get a truly comprehensive picture of what’s taking place in the environment. They also operate at different levels of abstraction (for example, signatures are much more abstract than raw data), which means that a key task in analyzing any attack is assembling a corroborative summary using as many diverse information sources as possible.

Assembling such a summary involves multidimensional analysis. It’s tremendously important that we bring the latest advances in analytical reasoning and mathematical and scientific research to bear on our security programs and how we leverage information within them.

With this in mind, let’s talk about the information sources we use, explain their most common applications, and put them into context.

Raw Data

Network packets are all the communications that transit your network. Very often they’re encrypted. The highest-end security programs might include complete packet capture, but that gets very expensive quickly. Packet capture offers the highest-fidelity but most diluted set of information for incident detection. A short-term packet capture solution (one that holds data for 30-60 days) often ends up being of little forensic use, because incidents are most often detected later in their lifecycle. The next best thing to complete packet capture is probably a combination of NetFlow and network security sensors.

Logs, at their most basic, are just records of system or user activity. Some of them are relevant for security detection purposes, but most are not. Historically speaking, logs were usually written to manage application and system problems, and they tend to be highly inconsistent in their content, their format, and their usefulness for security.

When a specific security control is violated, or an attempt to violate it is made, a log event is generated. There’s always some chance that the activity is malicious in nature. How big is this chance? Well, it’s different for every single log message and log source. This makes the aggregation and timeline of logs more important than any single log event when it comes to inferring or understanding malicious activity.

This is why we use rules. Rules help us interpret and contextualize logs, and thus slightly improve their utility for detection purposes.

The problem is: how many failed logins does it take before you know you have a hijacked account instead of a forgetful user? How different is the number of failed logins it would take to raise our suspicion on a Monday morning from what it’d take on a Wednesday afternoon? Sometimes we do see security avoidance behaviors in logs (for instance, clearing them), but user mistakes can and do explain these things most often, and it’s hard to know when to dig in.
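
There’s no universal answer, but here’s a minimal sketch of how such a rule might look in practice. The sliding window, field names, and context-dependent thresholds are all invented for illustration:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
recent_failures = defaultdict(list)  # username -> timestamps of recent failures

def failure_threshold(ts: datetime) -> int:
    # Hypothetical: tolerate more failures on Monday mornings, when
    # forgetful users returning from the weekend are the usual explanation.
    return 20 if ts.weekday() == 0 and ts.hour < 12 else 8

def on_failed_login(username: str, ts: datetime) -> None:
    """Record a failed login; alert if the count in the sliding window
    crosses the context-dependent threshold."""
    recent_failures[username] = [
        t for t in recent_failures[username] if ts - t <= WINDOW
    ] + [ts]
    if len(recent_failures[username]) >= failure_threshold(ts):
        print(f"ALERT: {username} exceeded failed-login threshold at {ts}")
```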

Meta-Data

Network flow data show the connection details and the amount of data transferred between hosts on your network (and out to the Internet). They’re like the network equivalent of monitoring who’s calling whose cell phone within a criminal syndicate. Network graph analysis and visualization are useful approaches to understanding NetFlow data.
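
As a toy illustration of that idea (the flow records below are invented), you can load flows into a directed graph and ask which hosts talk to the most peers or move the most bytes:

```python
import networkx as nx

# Hypothetical flow records: (source_ip, destination_ip, bytes_transferred).
flows = [
    ("10.0.0.5", "10.0.0.9", 1_200),
    ("10.0.0.5", "203.0.113.7", 48_000_000),  # large outbound transfer
    ("10.0.0.9", "10.0.0.12", 900),
]

g = nx.DiGraph()
for src, dst, nbytes in flows:
    if g.has_edge(src, dst):
        g[src][dst]["bytes"] += nbytes  # repeated flows accumulate
    else:
        g.add_edge(src, dst, bytes=nbytes)

# "Who calls whom": hosts with unusually many peers, or edges moving
# unusually many bytes, are natural starting points for investigation.
chattiest = sorted(g.out_degree(), key=lambda kv: kv[1], reverse=True)
print("Chattiest hosts:", chattiest[:3])
```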

Indicators (of malicious or suspicious activity)

Signatures of known attacks and other indicators of malicious code may be detected by sensors monitoring network communications. These are short, hexadecimal character sequences known to be contained within attack payloads. Even a signature written with a highly specific sequence of bytes in mind can’t account for every non-malicious occurrence of that same sequence in a data stream; and to ensure a match whenever an attack does occur, signatures tend to be written loosely, so they produce a large number of false alerts. There are currently over 57,000 IDS signatures in existence, and only a tiny subset of these is relevant at any given moment in time. This produces a high volume of false or nuanced alerts, further obscuring valuable detection signals. Signatures benefit from being analyzed by machines rather than humans because of the depth of analysis needed to separate out the relevant information. It’s also very important to consider where and how you place sensors, because their value is directly related to their visibility.

Threat intelligence is another indicator. It, too, suffers from a volume problem, one almost as bad as that of network security sensors. Threat intelligence lists try not to omit any potentially malicious activity, and thus produce a high volume of alerts that is hard for humans to analyze. Threat intelligence includes lists of IP addresses, domains, and known-bad file hashes. I consider known-good file hashes to be valuable intelligence, too. Once again, combinations of threat indicators offer much higher fidelity as evidence of real threat activity.
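
A minimal sketch of that last point, with invented indicator sets: a hit on one list is weak evidence, but agreement across independent indicator types is much stronger:

```python
# Hypothetical threat-intelligence sets; real feeds hold many thousands.
BAD_IPS = {"198.51.100.23"}
BAD_HASHES = {"9e107d9d372bb6826bd81d3542a419d6"}
GOOD_HASHES = {"2d06a89fa1a47b8b5c3e4d9f0a1b2c3d"}  # known-good is intel, too

def intel_hits(event: dict) -> list:
    """Return which indicator types this event corroborates."""
    if event.get("file_hash") in GOOD_HASHES:
        return []  # known good: suppress outright
    hits = []
    if event.get("dst_ip") in BAD_IPS:
        hits.append("ip")
    if event.get("file_hash") in BAD_HASHES:
        hits.append("hash")
    return hits

event = {"dst_ip": "198.51.100.23",
         "file_hash": "9e107d9d372bb6826bd81d3542a419d6"}
hits = intel_hits(event)
# Two independent indicator types agreeing is far higher-fidelity evidence
# than a hit on any single list.
print("escalate" if len(hits) >= 2 else "low confidence", hits)
```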

Heuristics are behavioral indicators. For example, an alert might be generated when a piece of software takes an action that’s not normal for that software, such as spawning an additional process outside of user-approved space. Heuristics are a library of past incident observations, and as such, are completely historically focused. Although it’s valuable not to fall for the same thing twice, these tend to have a short lifespan when it comes to high accuracy.

First Order Processing

Rules follow a predictable structure (Activity — Threshold — Context — Action) to identify known suspicious activity. Known suspicious activities are described using Boolean logic or nested searches, a threshold is set, and if this is reached or crossed, a notification is sent to a monitoring channel for human evaluation.
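
Here is one way that four-part structure might be written down, as a sketch rather than any particular product’s rule syntax:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """Activity -- Threshold -- Context -- Action, written down as data."""
    name: str
    activity: Callable[[dict], bool]  # Boolean logic over an event
    threshold: int                    # firings required before notifying
    context: Callable[[dict], bool]   # e.g., restrict to corporate accounts
    action: str                       # monitoring channel to notify

# Hypothetical business-specific rule (a fraud-style scenario).
wire_rule = Rule(
    name="after-hours wire transfer",
    activity=lambda e: e["type"] == "wire_transfer" and not 8 <= e["hour"] < 18,
    threshold=3,
    context=lambda e: e.get("account_tier") == "corporate",
    action="fraud-review-queue",
)

counts: dict = {}

def evaluate(rule: Rule, event: dict) -> None:
    """Count matching events; notify the channel once the threshold is hit."""
    if rule.activity(event) and rule.context(event):
        counts[rule.name] = counts.get(rule.name, 0) + 1
        if counts[rule.name] >= rule.threshold:
            print(f"notify {rule.action}: {rule.name}")
```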

At the most atomic level, there are fewer than 130 rules in regular use. In fact, in most organizations fewer than 45 are implemented. Rules are most valuable when they’re used to enforce logic that’s specific to your company’s unique business challenges, such as possible fraud scenarios.

Context—additional information about the entities being investigated and the relationship between them—can help you answer questions about the potential impact of attacks in progress and your vulnerability to them. It’s a key component in initial processing.

Statistics and metrics are important in guiding your operations: self-reflection and dispassionate measurement are critical to the effective application of detection science. You can measure attributes like coverage and performance, or calculate cost- or time-per-detection by data source and use this information to guide you in deploying your sensor architecture. Statistical analysis can be a powerful tool for uncovering attackers’ latest stealth techniques. Any activity that’s too close to the center of a normal bell curve might be hiding something in the noise—says the ever-suspicious security investigator.
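
For example, a simple statistical baseline (with synthetic numbers) can flag both kinds of interesting values, the outliers and the suspiciously normal:

```python
import statistics

# Hypothetical baseline: daily outbound bytes for one host over a week.
baseline = [1.10e9, 0.90e9, 1.00e9, 1.20e9, 0.95e9, 1.05e9, 1.00e9]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def z_score(observation: float) -> float:
    return (observation - mean) / stdev

today = 3.4e9
z = z_score(today)
if abs(z) > 3:
    print(f"outlier (z = {z:.1f}): worth a look")
elif abs(z) < 0.05:
    # The ever-suspicious investigator's point: activity hugging the center
    # of the bell curve may be deliberately hiding in the noise.
    print("suspiciously normal: also worth a look")
```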

Second Order Processing

Behaviors, patterns, and baselines are very commonly used to measure and score users’ stealthy or suspicious behaviors. The goal is to identify the users who either pose an insider threat or whose machines have been compromised by malicious code. Maintaining a library of first-order information that you’ve collected over time and conducting periodic calculations against it can help you pinpoint things that might be suspicious. “Repeat offender” is a catchphrase for a reason.

Nth Order Processing

Anomalies, clusters, affinity groups, and network graphs can reveal some very nuanced attacks. Running advanced algorithms across large amounts of data can yield interesting results.

A common fallacy is that anomalies are more likely to be malicious. That’s simply not true. The way our networks are interconnected today makes for all sorts of anomalies in all layers of the technology stack. These provide investigators the same sort of analytical puzzle as network security signatures do.

Some of these algorithms have well-understood security applications. One example is clustering: when you cluster IDS data, what you find most often are false positives, because they occur in highly predictable ways. When a particular signature generates alerts for what’s actually regular business traffic, the same alert will be triggered every time that business process takes place. It thus produces a very obvious cluster that you can exclude when looking for malicious activity.
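
A crude version of that clustering, using invented alert tuples, is just grouping alerts by signature and host pair and setting aside the big, regular clusters:

```python
from collections import Counter

# Hypothetical IDS alerts as (signature_id, src_ip, dst_ip) tuples.
alerts = [
    ("SID-2100", "10.0.0.4", "10.0.0.80"),
    ("SID-2100", "10.0.0.4", "10.0.0.80"),
    ("SID-2100", "10.0.0.4", "10.0.0.80"),
    ("SID-9001", "10.0.0.7", "203.0.113.9"),
]

clusters = Counter(alerts)

# A signature firing repeatedly on the same host pair usually traces back
# to a recurring business process: an obvious cluster you can set aside so
# the rarer, more interesting alerts stand out.
for key, count in clusters.most_common():
    label = "likely recurring business traffic" if count >= 3 else "investigate"
    print(key, count, label)
```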

The more information known to be unimportant that we can remove, the more clearly we can see what else is going on. This is where analytical detection comes into its own. Very often, we run algorithms on security data simply to see if a subject matter expert can interpret the outcome. Possessing both domain expertise and knowledge of data science is critical if you want to understand what advanced algorithms are telling you.

Visualization and hunting are nth order processing tasks. Using tools that allow you to pivot and display related datasets is the ultimate form of security threat hunting, and it’s also the most fun. You can derive some detection value from considering any layer of detectors through the lens of a visual tool.

Do you think I’m about to tell you there’s another layer called “artificial intelligence”? If so, you’re wrong.

The next layer is simply making a decision: has something malicious occurred? The more information we can feed into the decision-making process, the deeper and more effective the decision will be. All of the information sources listed above have something of value to contribute.

But you have to ask yourself: how many of these factors can analysts consider in real time as they watch events streaming across a console?

If you’d like to make it possible for your security operations team to incorporate input from a greater variety of detectors and information sources into their decision-making processes and workflows, consider adding the Respond Analyst to your team. Built to integrate with a broad array of today’s most popular sensors, platforms and solutions, the Respond Analyst brings specialized threat intelligence and detailed local contextual information to bear on every decision it makes about which events to escalate. Quite simply, it’ll give your ability to interpret and analyze detection data a boost—and allow your analysts to consider a far wider variety of sources.

To learn more about how the Respond Analyst can help your business become more thorough and derive greater insight from the detectors in your environment, contact us to schedule a demo today.

Controlling Cybersecurity Costs with Decision Automation Software

This is the third and final blog in my series on the 3C’s (Coverage, Context, and Cost) required for creating and maintaining a sustainable security infrastructure. In the first part, I reviewed the steps you need to take in order to ensure adequate visibility into your IT environment—determining how much basic sensor coverage is necessary, and how many additional data sources you’ll want to monitor to maximize your chances of detecting attackers. In part 2, I took a deep dive into alert context, discussing the types of additional information analysts must consider when making decisions about whether or not to act upon any particular alert.

Today’s topic is cost. How can resource-constrained security teams see to it that their analysts’ limited time and attention are spent in the ways that are most likely to generate value and results?

No enterprise-scale security program can operate without human monitoring. Simply put, your organization’s security team members—along with the highly specialized knowledge and experience they have—are your most valuable resource. But hiring and retaining top talent isn’t cheap. Nor is building the physical infrastructure to house them.

How can you limit your business’s exposure to IT security risks cost-effectively? Would it make sense to establish a dedicated Security Operations Center (SOC) in-house? Can these capabilities be outsourced successfully? Is there an automated solution teams can use without adding headcount? Let’s take a look at the options.


The Internal SOC Model


Large organizations with highly complex infrastructures requiring continuous, centralized monitoring—especially of highly customized technologies—often feel they have no other option. They must build and run their own dedicated SOC. The costs of creating and maintaining such a facility vary wildly, depending on the coverage, detection and triage capabilities, software and hardware, and physical or virtual facilities you choose.

At a bare minimum, you’d need a team of 12 analysts to maintain 24/7 coverage. On average, a full-time information security analyst earns an annual salary of $95,510, according to the US Bureau of Labor Statistics. And many SOC analysts don’t stay in their jobs for long. The average retention period for a junior analyst is a mere 12 to 18 months. This means that you’ll need to budget for recurring recruitment and training costs—no small expense in a field known for a growing skills gap and near-zero unemployment.
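
Back-of-the-envelope, salary alone puts the floor for 24/7 staffing well above a million dollars a year, ignoring benefits, churn, and overhead:

```python
ANALYSTS_FOR_24x7 = 12   # the bare-minimum team size cited above
AVG_SALARY = 95_510      # US BLS average annual salary for an infosec analyst

salary_floor = ANALYSTS_FOR_24x7 * AVG_SALARY
print(f"Annual salary floor: ${salary_floor:,}")  # $1,146,120
# Recruiting and training recur, too: with 12-18 month retention, plan on
# replacing much of the team roughly every year and a half.
```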

To learn more about how to retain SOC analysts—and keep them motivated to perform at their best—read our Voice of the Analyst Study.

Naturally, the costs of ownership for a dedicated in-house SOC extend beyond salaries, benefits and other personnel expenses. They also extend beyond the initial infrastructure build and purchase costs. Monitoring software is delivered as a service or must be regularly upgraded, threat intelligence feeds are subscription-based, and a SIM/SIEM requires maintenance and tuning.

The total recurring costs of maintaining an in-house SOC are estimated to be anywhere from $1.42 million to $6.25 million.


Hybrid Models


In an attempt to obtain some of the benefits of a dedicated SOC without incurring costs that are within reach only for the largest enterprises—or those with the deepest pockets—numerous businesses today are turning to hybrid or shared SOC operational models. With this setup, your SOC is monitored by dedicated or semi-dedicated employees on a part-time basis. It’s common for internal staff to oversee SOC operations 8 hours a day, 5 days per week, with responsibilities offloaded to an external provider at other times.

Of course, cybercrime doesn’t sleep, and attackers aren’t bound by time zones, nations or continents. If anything, they’re more likely to attempt brute-force style attacks outside of business hours when any alerts generated are less likely to attract notice. Handoff time—when monitoring responsibility shifts from internal staff to the outside provider—is one of your network’s most vulnerable moments.

Another option is to have the external service provider assume a subset of responsibilities at all times, while others are handled in-house. When monitoring that’s not specific to your organization is in place, however, the false positive rate is likely to be higher. Triaging alerts generated by multiple sources with different decision-making criteria can become complicated and confusing, too.

Although the costs associated with the hybrid operational model are lower than those of a dedicated in-house SOC, they remain considerable. Cost estimates vary, but long-term investments in hardware, infrastructure, and talent are still significant.

You can find more information about the costs and benefits associated with different SOC operational models in the recent Gartner report, “Selecting the Right SOC for Your Organization.” Read it here.


The MSSP Outsourced Model


Increasingly, Managed Security Service Providers (MSSPs) are offering fully outsourced security monitoring as an alternative. In this resource-sharing approach, small to mid-sized companies are promised access to enterprise-grade SOC capabilities, but human analysts and costs (for both infrastructure and maintenance) are spread across the MSSP’s customer base.

This service model has inherent limitations. Many MSSPs struggle with the same challenges faced by large enterprises that have decided to build an internal SOC. Highly skilled analysts are hard to find, salaries can be prohibitively high, and once on the job, they must monitor more tickets and events than they have time for. This problem is particularly acute for MSSPs, whose employees must split their attention between multiple client companies’ systems.

MSSPs cannot serve every type of business. If yours needs a significant amount of customization or control over your security monitoring processes, perhaps due to regulatory compliance requirements, an MSSP may not be able to provide this.

MSSPs usually follow standardized workflows and deploy the same platforms and solutions for multiple customers. Because the MSSP decides which software and hardware they support, it’s possible that their selections will be incompatible with your current systems, requiring you to take a “rip and replace” approach to tools for which you’ve already paid.

Giving an outside provider access to your most sensitive data and systems requires a great deal of trust. No matter how strong your relationship with your MSSP, their employees will never understand your business culture or industry as deeply as your own staff. And they won’t be able to amass as much contextual information about each alert generated by your system as an in-house expert could, either.

Ultimately, MSSPs struggle to live up to their promise of providing robust security services at an affordable price. Costs are traditionally tied to the number of devices in a specific service offering (endpoints for anti-malware or network IDS sensors for network security monitoring). The costs of full security monitoring capabilities can quickly add up. Clients often end up feeling “nickel and dimed” when they need additional incremental services or customization.

Now, there’s another alternative.


Enter Decision Automation Software


Gaining in popularity among security teams is “decision automation” software–software that automates the monitoring and triage process with astonishing accuracy. Decision automation software can monitor 100% of your sensor data for about a third of the cost of outsourcing to an MSSP.

With human analysts in short supply, and their salaries remaining the number one driver of IT security costs, you can rely on a more cost-effective automated solution to perform continuous network monitoring and alert analysis tasks. Decision automation software is able to attend to all your sensor data—for 100% coverage—without needing to take breaks and without bias.

Decision automation can readily cover a broad array of network and endpoint sensors, along with augmented data sources. It can collect rich and relevant contextual information for each alert generated by the system, ensuring that analysis is thorough as well as consistent. And it can do so for a small fraction of the cost of even a junior-level SOC analyst. By utilizing decision automation software, you free up your most valuable resource—humans—to use their intelligence and insights creatively.

Adding decision automation to your security operations team will enable human analysts to play the role of full-fledged detectives, rather than small-time “mall cops.” It lets them keep their focus where it belongs—on the bigger picture—and train their attention on high-value tasks, instead of monitoring a screen all day.


If you’d like to see for yourself how low-cost, high-value decision automation software can help protect your organization against cyber attacks, request a demo of the Respond Analyst today.

2019 Security Predictions: Real or Bogus? Respond Experts Weigh In

Where is cybersecurity heading and what should security teams focus on in 2019?

We thought it would be helpful to examine the most common cybersecurity predictions for 2019. Business press, trade publications and of course vendors had a lot to say about emerging technology trends, but what predictions or trends will matter most? More importantly, how can security teams and CISOs turn these predictions into advantages for their business?

I sat down with Chris Calvert, an industry expert who often says, “If you have a problem with how today’s SOCs work, it’s partially my fault and I’m working to solve that issue!” With over 30 years of experience in information security, Chris has worked for the NSA and the DOD Joint Staff and has held leadership positions in both large and small companies, including IBM and Hewlett Packard Enterprise. He has designed, built and managed security operations centers and incident response teams for eight of the Global Fortune 50.

During our conversation, we discuss questions like:

  • Will we see an increase in crime, espionage and sabotage by rogue nation-states?
  • How will malware sophistication change how we protect our networks?
  • Will utilities become a primary target for ransomware attacks?
  • A new type of fileless malware will emerge, but what is it? (think worms)
  • And finally, will cybersecurity vendors deliver on the true promise of A.I.?

You can listen to his expert thoughts and opinions on the podcast here!

Want to be better prepared for 2019?
The Respond Analyst is trained as an expert cybersecurity analyst that combines human reasoning with machine power to make complex decisions, with 100% consistency. As an automated cybersecurity analyst, the Respond Analyst processes millions of alerts as they stream, allowing your team to focus on higher-priority tasks like threat hunting and incident response.

Here’s some other useful information:

The Science of Detection Part 2: The Role of Integrated Reasoning in Security Analysis Software

Today’s blog is part two in my science of detection series, and we’ll look at how integrated reasoning in security analysis software leads to better decisions. Be sure to check back in the coming weeks to see the next blogs in our series. In part three, I’ll be taking an in-depth look at the signal quality of detectors, such as signatures, anomalies, behaviors, and logs.

If you’ve been reading our blogs lately, you’ve seen the term “integrated reasoning” used before, so it’s time to give you a deeper explanation of what it means. Integrated reasoning combines multiple sensors and sensor types for analysis and better detection. Before making a security decision, you must take into account a large number of different factors simultaneously.

What Is Integrated Reasoning?

Interestingly, when we started using the term, Julie from our marketing team Googled it and pointed out that it was the name of a new test section introduced in the Graduate Management Admission Test (GMAT) in 2012. What the GMAT section is designed to test in potential MBA candidates is exactly what we mean when we refer to integrated reasoning. It consists of the following skills:

  • Two-part analysis: The ability to identify multiple answers as most correct.
  • Multi-source reasoning: The ability to reason from multiple sources and types of information.
  • Graphic interpretation: The ability to interpret statistical distributions and other graphical information.
  • Table analysis: The ability to interpret tabular information such as patterns or historical data and to understand how useful distinct information is to a given decision.

All of these skills combine perspectives that allow you to reason your way to a well-thought-out, accurate conclusion. The same standard against which we evaluate potential MBA candidates is the standard we should design to for security analysis software, or, if you will, a “virtual” security analyst.

What is an MBA graduate but a decision maker? Fortunately, we are training our future business leaders on integrated reasoning skills, but when the number of factors to be considered increases, humans get worse at making decisions — especially when they need to be made rapidly. Whether from lack of attention, lack of time, bias or a myriad of other reasons, people don’t make rational decisions most of the time.

However, when you’re reasoning and using all of the available information in a systematic manner, you have a much greater chance of identifying the best answer. To put this within a security analysis frame of reference, let’s consider some of the information available to us to make effective security decisions.

What Information Should We Consider?

The most effective security analysis software uses anything that is observable within the environment to reduce uncertainty about whether any one event should be investigated.

To achieve integrated reasoning, the software should utilize a combination of detectors, including:

  • Signature-based alerts
  • Detection analytics
  • Behaviors
  • Patterns
  • History
  • Threat intelligence
  • Additional contextual information

In order to make the right decisions, security analysis software should take into account three important factors: sensors, perspective and context. When you combine different forms of security telemetry, like network security sensors and host-based sensors, you have a greater chance of detecting maliciousness. Then, if you deliberately overlap that diverse suite of sensors, you now have a form of logical triangulation. Then add context, and you can understand the importance of each alert. Boom, a good decision!
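
To picture that triangulation, here is a hedged sketch of combining overlapping evidence into a single escalation probability. The weights, evidence names, and prior are illustrative only, not a description of how the Respond Analyst actually reasons:

```python
import math

# Illustrative log-odds weights for overlapping, corroborating evidence.
EVIDENCE_WEIGHTS = {
    "network_signature": 1.2,  # a network sensor fired
    "endpoint_alert": 1.5,     # a host-based sensor corroborates it
    "bad_reputation": 0.8,     # threat intel on the destination address
    "critical_asset": 1.0,     # context: the target matters to the business
}

def escalation_probability(evidence: set, prior_log_odds: float = -4.0) -> float:
    """Combine independent pieces of evidence into one probability."""
    log_odds = prior_log_odds + sum(EVIDENCE_WEIGHTS[e] for e in evidence)
    return 1 / (1 + math.exp(-log_odds))

# A single sensor stays well below an escalation bar...
print(f"{escalation_probability({'network_signature'}):.2f}")  # ~0.06
# ...but overlapping sensors plus context cross it.
print(f"{escalation_probability(set(EVIDENCE_WEIGHTS)):.2f}")  # ~0.62
```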

Like our theoretical MBA candidate, security analysts have to hold hundreds of relevant factors in their minds simultaneously and are charged with making a number of critical decisions every hour. A tall order for a mere mortal, indeed.

Imagine this: A user receives a phishing email, clicks on the link a week later, and is infected by malware. The system anti-virus reports “cleaned” but found only 1 of 4 pieces of malware installed. The remaining malware communicates with a command-and-control server and is used as an internal waypoint for low-and-slow lateral exploration. This generates thousands of events over a period of weeks or months, all with varying levels of fidelity. More likely, this is the backstory an incident responder would eventually assemble, potentially months or years after the fact, to explain a breach.

Integrated Reasoning is a must for making sound decisions when it comes to deciding which security alerts to escalate for further examination. But with the amount of incoming data increasing by the minute, security teams are having a hard time keeping up. Your best bet is to choose security analysis software, like the Respond Analyst, that has built-in integrated reasoning capabilities to help with decision-making, so teams can focus on highly likely security incidents.

Curious to see how the Respond Analyst’s integrated reasoning capabilities can help your security team make better decisions? Request a demo today.

3 Top Cybersecurity Trends for Channel Partners to Watch

We all know the next big IT shift towards AI and intelligent automation is on the horizon. Over the last few years, vendors and press have focused on the human-to-machine automation transformation. Many vendors promise solutions—but often those solutions are complex and not optimized for the channel.

The good news is that cybersecurity is primed and ready for automation now. But the question for Partners remains: How can VARs, Integrators, and MSSPs find the right solution that provides true human-to-machine technology to simplify life for their customers?

Here are 3 cybersecurity trends driving the industry towards automation and 1 simple recommendation that Channel Partners can leverage to get ahead of the game immediately:

Trend 1: Traditional console monitoring is ineffective

Security teams are spending too much time monitoring alerts that provide little value for their efforts. Sifting through endless alerts with a high percentage of false positives is ineffective at best. It burns out analysts and puts us in a continuous cycle of hiring and training new ones. The analysts interviewed for the Voice of the Analyst (VOA) Survey help inform us where analyst time is better spent and which activities we should automate first. Automating workflow to increase analyst efficiency is important, but automating level 1 alert monitoring itself? That’s downright disruptive.

Cyentia Institute: Voice of the Analyst Survey, October 2017

Figure 1: We asked analysts to score their daily activities on a number of dimensions. One key finding is that analysts spend the most time monitoring, but it provides low value in finding malicious and actionable security threats. (Download VOA Survey here)

Trend 2: People shortage

Most security teams don’t complain about a lack of tools. They complain about a lack of people. Whether the budget won’t allow it or skilled people are in too high demand to find (or retain), we’ve reached a point where demand has outstripped supply. What choice do we have? Leverage the power of machines to augment our security teams. This is finally possible with the advent of decision-automation tools that can off-load the task of console monitoring.

Bitdefender: CISOs’ Toughest Dilemma: Prevention Is Faulty, yet Investigation Is a Burden, April 2018

Figure 2. People shortage is a significant trend in our industry, forcing us to re-think how we’ll actively monitor our environments.

Trend 3: Too many tools

“Too many tools” is a regular complaint in organizations. Did you know most large organizations have 75+ security tools on average? Small organizations are not far behind. It’s all we can do to deploy these necessary security tools and maintain them, let alone review the endless alerts they generate. What’s even more challenging is the industry trend toward platform-based tools (e.g., SIEM or SOAR) that require engineering resources with the expertise to build and maintain platform content such as correlation rules and playbooks. Many organizations are overwhelmed by this task. In contrast, tools with expertise built in, intelligent applications if you will, are what’s needed, and they will change the way we think about platforms going forward.

Momentum Cyber February 2017 CYBERscape

Figure 3. Most organizations have dozens of tools to deploy and maintain.

An industry transformation is underway: Automation will disrupt the way cybersecurity is performed

We think 2019 will be the year of automation for cybersecurity. Customers will require automation to address the top 3 trends. They need to scale with the growing number of alerts and the increased complexity of monitoring today’s hybrid environments. Adding more people is not the answer. Finding ways to automate to off-load cumbersome tasks typically performed by humans is the answer.

This presents exciting new revenue opportunities for Channel Partners and also explains why we are experiencing increased momentum with VARs, Integrators, and even MSSPs. Respond Software is at the forefront of the industry transformation—applying machines to roles traditionally executed by humans.

One simple recommendation to gain a competitive advantage: the Respond Analyst

The Respond Analyst software is a scalable, plug-and-play “virtual analyst” that perfectly complements any security detection tool sale: Channel partners can increase revenue by providing both the tools and the Respond Analyst to monitor them.

This provides a unique selling opportunity for our Partners. Partnering with Respond Software gives customers—especially mid-size enterprises ($50M–$1B revenue)—simple solutions with fast results. Partners can also take advantage of recurring revenue, fast installations, and the potential to sell more sensors.

To all of our potential partners: Please reach out if you’re interested in learning more about our solution and our partner program by registering at our partner page. Here’s an opportunity to bring new value to your customers and join us on our journey to bring automated security monitoring to the world.

For more information, read the Global Channel Partner Program Press Release

Why “Context is King” for Cybersecurity in 2019



Welcome to part two of my three-part blog series on the 3C’s (Coverage, Context, and Cost) required for sustainable security monitoring infrastructure. In the last blog, I reviewed the importance of effective Coverage within a modern security operation. Today’s blog will focus on the second “C”—Context.

When it comes to triaging security alerts, CONTEXT IS KING! Context helps the analyst paint a picture of what is happening. Context makes a generic security alert relevant to your organization. Alerts that include internal, external, and historical context make the difference between a security alert that needs to be re-triaged and a security incident that is deemed malicious and actionable and can be acted on right away.

Step into the shoes of a security analyst. Let’s take as an example a single alert from a network intrusion detection system. The alert indicates that a permitted malware communication has occurred between two systems. Given the number of events in your queue, you probably can’t afford to spend more than a few minutes deciding whether this alert represents a true threat to your organization.

A thorough analysis with context for this alert would include:

  • Who is the attacker? Who is the target?
  • What type of attack is this? How sophisticated is it?
  • What is the attacker’s objective?
  • Was the attack successful? Is the target vulnerable? Was it remediated by another control?
  • What would be the impact to the business if it were successful?
  • Is this happening anywhere else?
  • How long has this been going on for?

Answers to the majority of these questions are not provided within an alert. Generally, an alert contains limited context, and often the only identifiable information in it is a set of IP addresses. To determine whether an alert indicates a true threat or is just another false positive, three contextual areas must be considered (a toy enrichment sketch follows the descriptions below):

Internal Context: Contextual information about internal systems, like a system’s business function, importance, location, and vulnerability, resides in adjacent repositories, and it takes time to retrieve that information and to evaluate its significance in the context of the attack. Context about internal systems helps an analyst understand whether the observed attack is even relevant to the target system, and it helps prioritize the incident — is this attack against a production server or a visitor on the guest network?

External Context: Given that only an IP address is included in the event, external context can help attribute who owns the IP address and its geolocation. Reputable threat intelligence is helpful in understanding more about the attacker, the attacker’s intent, and if other organizations have been targeted.

Behavioral Analysis: Historical patterns of the behavior and associations of systems and accounts help corroborate whether the observed activity is malicious or just normal behavior. Incidents unfold over time and involve multiple data sources, and adversaries attempt to ‘live off the land’, meaning they try to hide within authorized administrative tools.
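
Pulling the three areas together, here is a minimal enrichment sketch. The data stores and field names are hypothetical; in practice internal context lives in a CMDB or asset inventory, external context in GeoIP and threat-intelligence services, and history in your own alert archive:

```python
# Hypothetical context stores, keyed by IP address.
ASSET_DB = {"10.1.2.3": {"function": "production web server", "vulnerable": True}}
THREAT_INTEL = {"198.51.100.23": {"attribution": "known C2 infrastructure"}}
ALERT_HISTORY = {("198.51.100.23", "10.1.2.3"): 14}  # prior sightings of this pair

def enrich(alert: dict) -> dict:
    """Attach internal, external, and historical context to a raw alert."""
    src, dst = alert["src_ip"], alert["dst_ip"]
    return {
        **alert,
        "internal": ASSET_DB.get(dst, {"function": "unknown"}),
        "external": THREAT_INTEL.get(src, {}),
        "prior_sightings": ALERT_HISTORY.get((src, dst), 0),
    }

alert = {"src_ip": "198.51.100.23", "dst_ip": "10.1.2.3",
         "signature": "permitted malware communication"}
print(enrich(alert))
# A vulnerable production server, a known-bad source, and 14 prior sightings
# tell a very different story than the bare pair of IP addresses.
```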

In reality, security teams don’t have the capacity to collect and analyze the terabytes of data generated by security sensors or escalated by MSSPs (especially as organizations continue to increase their coverage and add new data sources). Effective decisions are made only when an event is considered with all the contextual elements combined; gathering sufficient context, however, takes time, something human analysts are short on. Machines, by contrast, are 100% consistent and able to operate on large data streams, emulating human analysis and decision making through artificial intelligence approaches.

Do existing security monitoring technologies provide the “context” needed to identify and triage attacks faster?

The short answer is no, at least not without significant effort. SIEM and SOAR (orchestration) platforms can provide this context, but it comes at a cost. Both require you to build and maintain content within their platforms (correlation rules and playbooks, respectively), enabling you to apply Boolean logic and if/then/else conditions. Additionally, these platforms were not designed to scale to modern data volumes: correlation rules and playbooks hit performance issues and can only operate on a pre-filtered set of alerts/inputs, which in turn has its own downside, a significant reduction in visibility and coverage.

However, let’s say you are able to overcome the hurdles listed above and your security monitoring technology is effectively decorating events with relevant context. The challenge is that a human analyst is still required to think deeply, judge the overall event in light of that context, and manually decide whether the event is malicious and actionable.

Read More:
Neither SIEM nor SOAR—Can Security Operations be Automated? Risky Business host, Patrick Gray and Mike Armistead discuss.

Bring Decision Automation into the security tech stack

Decision Automation software automates the collection of relevant context AND the interpretation of security alerts by emulating human reasoning and judgment. And the good news: you can (if you find the right tool) integrate this technology quickly. The most robust Decision Automation software is plug-and-play and immediately enhances the capability of existing SIEM and SOAR platforms. It presents only the most valid security threats, with the required contextual evidence included in the alert, so analysts can understand and respond quickly.

The importance of context when monitoring and triaging security data should not be underestimated. Context truly is King! The more context analysts have, the more confident and efficient they will be in resolving attacks. Armed with evidence to effectively respond to malicious attacks, morale rises and security teams become empowered. Contextual alerts save security teams valuable time, money, and resources.

This leads us to the 3rd “C” in our series—Cost. Stay tuned for next month’s final blog when I’ll examine the ROI that can be achieved with Decision Automation. Find out how understaffed security teams can identify more valid incidents and reprioritize resources to focus on higher priority projects—all while staying under budget!

If you would like to talk with one of our cybersecurity experts to discuss how to integrate Decision Automation into your security infrastructure, contact us: tellmemore@respondsoftware.com

More information:

The Science of Detection Part 1: 5 Essential Elements of a Data Source

I’m passionate about the science of detection. It used to be a black art, like long distance high-frequency radio communication, but with modern cybersecurity technology, we’re putting the science back in. With that in mind, I plan to write a series of blogs about the science of detection with an aim to enable more effective and rapid identification of “maliciousness” in our enterprise technology.

In today’s blog, we’ll look at the key elements of a data source to ensure effective detection. Be sure to check back in the coming weeks to see the next blogs in the series. In parts two and three, I’ll be taking an in-depth look at how integrated reasoning will fundamentally change detection technology and the signal quality of detectors, such as signatures, anomalies, behaviors and logs.

In operational security, we monitor various pieces of technology in our network for hackers and malware. We look at logs and specialized security sensors, and we use context and intelligence to try to identify the “bad guys” from the noise. Often, a lot of this work is “best effort” since we’re collecting data from other technology teams who are only using it to troubleshoot performance, capacity and availability issues — not security. It can be a challenge to configure these sources to make them specific to the needs of security, and this greatly complicates our success rate.

When we look at the data sources or telemetries that we monitor, there are five elements that are important for their effectiveness in detecting malicious activity.

1. Visibility

Visibility is one of the most important elements of a data source. What can you see? Is this a network sensor? Are you decrypting traffic so that you can see the patterns or signatures of an attack? Or is this a system log source where stealthy user or administrator behaviors can be captured? When you’re considering visibility, there are two things that are key: the location of the sensor and the tuning of the events, alerts or logs that it generates. For signature-based data sources, it’s tremendously important that you keep them up to date consistently and tuned for maximum signal.

2. Signal Quality

We look at signal quality to help determine the likelihood that any given signature or alarm will reliably indicate the presence of malicious activity. When you consider network intrusion detection and prevention sensors, things get really complicated. I have seen the same IDS signature alarm between different hosts in one day where one instance was a false-positive, and the other instance was malicious. How are we supposed to separate those two out? Not without deep analysis that considers many additional factors.

3. Architecture

With the advent of autonomous analysis and integrated reasoning, the architecture of your sensor grid can provide significant advantages. The most important is sensor overlap, which means different types of sensors should be implemented in the infrastructure so that attackers must get past more than one detection point.

A good example would be host-based endpoint protection agents in a user environment. By forcing users to then transit a network intrusion detection sensor and maybe even a URL filter in order to conduct business on the internet, you end up with three perspectives and three chances to recognize systems that are behaving maliciously. This means it’s important to deploy internal (East – West) network sensors to corroborate your other sensing platforms in order to reduce false positives and produce high fidelity incidents. You can fool me once, but fooling me twice or a third time gets much harder.

4. Data Management

All of our sensors should be easy to aggregate into a single data platform using common log transport methods. This can be a SIEM or a big data platform. It’s also tremendously important to capture all of the fields that can help us contextualize and understand the alerts we’ve observed. This data platform becomes the incident response and forensic repository for root cause analysis and is a good hunting ground for a hunt team.

5. Event Alignment

Given the complex nature of the modern enterprise, it’s possible for a user’s laptop to have 10 or 15 different IP addresses in any given day. We need to be able to reassemble that information to find the host that’s infected. A good example would be to collect hostname rather than just IP address, where it’s available. Proxies, firewalls and NAT devices can all effectively blind you when looking for malicious internal hosts. In fact, one Security Operations Center I built could not locate 50% of known compromised assets due to a combination of network design and geography.
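
Here is a minimal sketch of that reassembly, using invented DHCP-style lease records to resolve an alert’s IP address to whichever host actually held it at the time:

```python
from datetime import datetime
from typing import Optional

# Hypothetical DHCP lease history: (ip, hostname, lease_start, lease_end).
LEASES = [
    ("10.2.3.4", "LAPTOP-ALICE", datetime(2019, 1, 7, 8, 0), datetime(2019, 1, 7, 12, 0)),
    ("10.2.3.4", "LAPTOP-BOB", datetime(2019, 1, 7, 12, 5), datetime(2019, 1, 7, 18, 0)),
]

def host_at(ip: str, ts: datetime) -> Optional[str]:
    """Resolve an alert's IP to the host that held it at that moment."""
    for lease_ip, host, start, end in LEASES:
        if lease_ip == ip and start <= ts <= end:
            return host
    return None

# The same IP address, on the same day, can point at two different machines.
print(host_at("10.2.3.4", datetime(2019, 1, 7, 9, 30)))   # LAPTOP-ALICE
print(host_at("10.2.3.4", datetime(2019, 1, 7, 14, 0)))   # LAPTOP-BOB
```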

A combination of perspectives provides the most effective sensor grid. Leveraging multiple forms of visibility, improving the signal quality of your sources, architecting for sensor overlap and key detection chokepoints, and streaming all of this data into an effective big data management system where it can be analyzed and leveraged across the operational security lifecycle can provide a far more effective security operations capability.

How the Respond Analyst Can Help You

The Respond Analyst is able to understand these telemetries and contextual data sources and considers all factors in real-time. This frees you from monitoring a console of alerts, which allows you to focus on higher-value work. It also frees your detection programs from the volume limitations of human monitoring. Putting all of these elements together provides a massive improvement in your ability to detect intruders before they can do major damage to your enterprise technology. We’re putting machines in front of alerts so that humans can focus on situations.

3 Reasons Understaffed Security Teams Can Now Sleep at Night

If you feel overwhelmed with security operations, you’re not alone. Matter of fact, it’s a common theme we hear all the time: “We’re overloaded and need help!” We’ve been in the trenches, building security operations for mid to large enterprises, so we understand the unique pressure IT and security teams feel. It’s not easy balancing it all—especially for mid-sized enterprises with resource-constrained security teams.

Cybersecurity in mid-sized companies has unique challenges. With fewer resources and tighter budgets, IT teams are spread thin while wearing multiple hats. Unfortunately, sometimes security projects accumulate, leaving teams exposed and overwhelmed. But it doesn’t have to be this way—there is a viable solution.

Here are the three biggest challenges security teams face and why The Respond Analyst helps them sleep soundly at night.

Reason #1 – We don’t have enough time

Our customers need to free up time to work on priority projects and initiatives. We designed our product to provide expert intrusion analysis without all the fuss of deploying extensive technology stacks that require significant upfront and continued investment. We’re here to simplify the process, not add complexity. Security event console monitoring is a thing of the past: we free our customers from staring at security consoles and move them toward higher-value tasks and initiatives.

Within seven days, The Respond Analyst has learned its environment and is finding actionable incidents for our customers. The setup process is simple: 1) deploy a virtual appliance or install our software, 2) direct security feeds to our software and 3) add simple context. No significant time commitment or in-depth security operations expertise is required.

Reason #2 – We need additional security expertise

One of the biggest challenges our customers face is finding the right people and retaining them. This challenge is expected to grow with an ever more competitive job market, resulting in higher wages and more movement at a time when organizations are trying to implement steady security programs. To say it’s difficult is an understatement.

We don’t expect our customers to be experts in intrusion analysis and security operations—that is why they’ve partnered with us. The Respond Analyst is an expert system that automates the decision making of a front-line security analyst. This pre-packaged intelligence requires no security expertise to deploy. There is no use-case development, programming of rules, or tagging of event data. Well-vetted incidents, without all the fuss, are the result of a well-designed expert system.

Reason #3 – We don’t have the time, money or desire to build a legacy SOC

Many organizations understand that the old way of building a legacy SOC around a SIEM is not the future. Indeed, it’s not even keeping up with today’s threats. Not only is it less effective than solutions such as The Respond Analyst, it also costs significantly more and has a far lengthier return-on-investment timeframe.

The process of building a SIEM with 80+ data sources (when most organizations really only look at 5 or fewer), hiring, training and retaining experienced intrusion analysts, and implementing a sophisticated process to keep it all glued together is outdated. Of course, this was the best we could do given the technology and understanding we had at the time, but now we have a better way. Old models have since been replaced, and our customers avoid the frustration and high cost by using a pre-packaged expert system.

Times have changed. With the emergence of expert systems like The Respond Analyst, we can now apply technology where we traditionally needed large investments and lengthy, time-intensive projects. The result is that mid-sized enterprise customers can now operate at maturity levels beyond large traditional enterprise operations by leveraging expert systems. This new approach frees up time, provides needed expertise, and saves our customers the headache and cost of legacy solutions. Better yet, our customers gain relief from the stress of understaffed resources and can relax knowing we have their security operations covered.

Read more:

Join our growing community! Subscribe to our newsletter, the "First Responder Notebook," delivered straight to your inbox.