What’s Old is New: How Old Math Underpins the Future of A.I. in Security Operations

Most of us engineers know the truth—A.I. is just old math theory wrapped in a pretty package.  The artificial neuron model that underpins today’s deep learning networks? Yep, you guessed it, it was proposed back in 1943!

For those of us in Security Operations, the underpinning mathematics of probability will lead us into the future. Probability theory will automate human analysis, making real-time decisions on streaming data.

Probabilistic modeling will fill the gaps that our SecOps teams deal with today:  Too much data and not enough time. We humans have a very difficult time monitoring a live streaming console of security events.  We just can’t thread it all together with our limited knowledge, biases, and the small amount of time we have to interact with each new event.

Making instant decisions as data is streamed real-time is near impossible because there is:

    • too much info and data to process,
    • not enough meaning—we don’t understand what the data is telling us,
    • poor memories—we can’t remember events from two hours ago, let alone days, weeks, or months before.
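To make the probabilistic idea concrete, here is a toy Bayesian update. The likelihood ratios are made-up stand-ins for real evidence models, but the sketch shows how a machine can revise its belief that activity is malicious as each new event streams in—exactly the kind of running judgment humans struggle to maintain:

```python
# Illustrative Bayesian update: revise P(malicious) as evidence streams in.
# All numbers below are hypothetical, chosen only to show the mechanics.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update P(malicious) given one observation's likelihood ratio,
    LR = P(evidence | malicious) / P(evidence | benign)."""
    odds = prior / (1.0 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A stream of observations, each with an assumed likelihood ratio.
evidence = [
    ("IDS signature fired", 5.0),        # weak signal on its own
    ("connection to rare domain", 3.0),
    ("outbound transfer at 3 a.m.", 4.0),
]

p = 0.001  # prior: roughly 1 in 1,000 events is malicious
for name, lr in evidence:
    p = bayes_update(p, lr)
    print(f"after '{name}': P(malicious) = {p:.4f}")
```

No single observation is damning, but the combination steadily raises the posterior—the mathematical version of an analyst "threading it all together."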

Enter Probability Theory

Watch my short video to learn how Probability Theory will fundamentally change the future of Security Operations by expanding our ability to analyze more data across our environments than ever before.

Click here to watch now.

Managing Security Events: Not as Difficult as Finding Magic Stones

These days, finding a qualified and available security analyst seems more difficult than locating an Infinity Stone in the Marvel Universe.  I’m sure many CISOs wish they could snap their fingers like Thanos, but instead of destroying half the population, they’d conjure an army of security professionals to manage the complex threat landscape.

Due to the massive gap in available security skill sets and qualified people, many organizations are outsourcing at least a portion of their operations to Managed Security Service Providers (MSSPs).  This seems like a reasonable alternative, but just like in-house security operations, MSSPs have their share of challenges. In this blog, we will discuss those challenges to help you determine whether an MSSP is the right security operations model for your organization.  Then, if you decide to keep security operations in-house, we’ll share a better alternative that doesn’t involve voyaging through the galaxy hunting for magical stones.


6 considerations when working with or hiring an MSSP  


  1. Get ready for a long ramp: According to Gartner, onboarding time for an MSSP is 1 to 4 months.*  This long ramp-up means organizations considering an MSSP must be patient.  Just remember: bad actors are not so tolerant, and they will not wait for you to get set up with your MSSP before they attack.

  2. Typical outsourcing issues:  MSSPs serve many customers and therefore lack intimate knowledge of any single customer’s network or infrastructure. This makes it extremely difficult to perform effective analysis of that customer’s unique security configuration and requirements.

  3. Take a number:  Like any organization, MSSPs have resource constraints. When the largest incidents hit or volumes peak, MSSPs will typically devote resources to the larger customers who tend to pay the most.

  4. We’ve got you covered—not so much:  Due to the high volume of alerts they are trying to manage, MSSPs will usually tune down sensors.  That means the MSSP’s ability to identify an attack will degrade.

  5. Law of diminishing returns:  Just like any organization, MSSPs face high analyst turnover and resource shortages.  When an analyst leaves the MSSP, customers suffer, as they are paying the same price for lower quality results.  Additionally, the MSSP must refocus its attention on hiring new talent from an already dwindling pool of candidates, adversely impacting the level of service that current customers receive.  This problem often gets worse over time.
  6. Cookie cutter solutions: MSSPs have an uncustomizable delivery model.  In other words, the MSSP model is optimized for their business, not for the requirements of the customer.   


These challenges are merely a sampling of a much larger set of difficulties that service providers face, demonstrating that the MSSP alternative may not be the best fit for every organization.  Whether you are moving to an MSSP or already using one, carefully think through the challenges listed above, as they will affect the amount of time you need to investigate false positives and may cause you to miss important attacks or threats.  Of course, you might decide to keep your security operations in-house, but then you will likely face many of the same challenges as the MSSP.

And finally, remember there is a third alternative that doesn’t require you to search the galaxy for that elusive security expert.  Robotic Decision Automation software for security operations will automate event analysis, management, and triage.  The Respond Analyst delivers these capabilities, performing just like an expert analyst, but at machine speed and with 100% consistency.

If addressing the skills shortage with software seems like a fit for you, please visit the following pages for more information:

*Gartner, “How to Work with an MSSP to Improve Security,” January 30, 2018

Core Telemetries: Focus on the Right Data Sources to Achieve An Enterprise-Grade Security Monitoring Program

According to the most recent Verizon Data Breach Investigations Report, 73% of cyberattacks can be attributed to outsiders. This means that, generally speaking, the attacker will have to compromise an endpoint device and cross the enterprise threshold to accomplish their goals. Imagine a drive-by download that compromises a remote user’s laptop: the endpoint may run the malicious code, but the attackers still need to use the network to move laterally and access your data.

In this case, as in most attacks, the attack might have been detected on the endpoint as well as from any one of many other points within your environment. If your security monitoring system is able to collect sensor data from at least one of these points and if you’re able to monitor that data effectively, you’ll discover the attack and prevent a breach.

Each additional source of security data provides an extra layer of defense against cyberattacks. The deeper your defenses, and the more redundancy that’s built into them, the stronger your security monitoring program.

But even the most diligent SecOps teams are challenged by data overload.  Teams report that less than 10 percent of the data they collect is analyzed.

Many organizations with limited resources find it challenging to prioritize security projects. Which data sources are most important? What solutions should be deployed first?

Build the Foundation with Endpoint Protection and Network Security Monitoring

If you haven’t already implemented it, setting up a basic endpoint protection platform (EPP) is a critical first step towards securing your network. EPP solutions allow you to collect, monitor, and analyze data from endpoint devices, reporting on known threats, preventing malware from executing, and in some cases quarantining unknown files until they can be investigated. Endpoint protection is relatively simple to deploy, and provides a valuable first layer of defense.

To improve visibility into your environment, consider adding a Network Intrusion Detection and Prevention Solution (NIDPS). Most NIDPS solutions rely on signatures to detect a broad range of threats, and are able to provide comprehensive network threat detection for your network, as well as from connected mobile and remote devices and cloud-based resources. NIDPS modules can be enabled within many Unified Threat Management (UTM) solutions as well, so you might actually already have a solution in place that you simply need to start monitoring.

Go Deeper With Advanced Solutions

If you’d like to improve upon the basic coverage offered by EPP and NIDPS, you can add one of today’s more advanced solutions, such as web proxy filtering and monitoring, URL filtering, email filtering (or anti-phishing solutions) or endpoint detection and response (EDR). These solutions can provide additional coverage of commonly exploited attack vectors (such as web browsers or email), or a more detailed record of the actions taken by the operating system. This can add up to deeper and more comprehensive coverage, but only if you are able to effectively monitor the larger amounts of log and event data they supply.

Boost Your Security Data Monitoring Capabilities With Security Analysis Software

Adding telemetries to your security stack can mitigate your risks and improve your security posture, but only if you are able to monitor those additional data sources continuously and effectively. Incorporating advanced solutions that you don’t have the time or ability to monitor doesn’t help.

And effectiveness in security monitoring is defined not by the number of data sources you monitor, but rather by how continuous and thorough your analysis of that data is.

This is where automated solutions can add the most value. The Respond Analyst can monitor sensor data from both foundational and advanced solutions. It’s able to work 24/7/365, and is capable of handling more events per hour than 14 human security analysts. The Respond Analyst is quick to deploy, and seamlessly integrates with a broad range of third-party security solutions, enabling it to ingest and monitor their data feeds without significant onboarding time, data tagging, or “training.”

The Respond Analyst enables smaller teams to monitor telemetries across their infrastructure—something they could not hope to accomplish manually. It makes it possible for smaller organizations to collect, monitor, and analyze security alerts and relevant contextual information on a scale that was previously available only to the largest enterprises. Along the way, the Respond Analyst brings advanced security capabilities within reach for businesses large and small, in numerous industries and verticals.

To learn more about how the Respond Analyst can work together with your existing security solutions, or with those you’re currently considering, contact us to schedule a demo.

The Science of Detection Part 3: A Closer Look at the “Detectors” You Rely on When Hunting for Evidence

This is the third blog in my science of detection series. In the previous parts, we examined the key elements of a data source and considered integrated reasoning. Today, I’ll be taking a closer look at the signal quality we get from the various “detectors” that we use to find malicious activities in our environment.

Be sure to check back in the coming weeks to see the next blogs in this series. In part four, I’ll be talking about architectural approaches to detection, and looking at how we collect and aggregate information so that it’s useful to our security programs. I’ll be making some predictions about the progress we’ll see in this area in the future, because I think the old way of doing things has reached a dead end.

Security analysts have many different information sources—“detectors”—to consider when making decisions about whether or not they see malicious activity taking place in their environment. Each detector has a purpose, and each contributes some degree of differential value to the ultimate decision, but only a few of them were specifically designed for security applications. That complicates things.

What’s interesting about these information sources is that each must be interpreted and analyzed in a different way in order to assemble enough information to get a truly comprehensive picture of what’s taking place in the environment. They also operate at different levels of abstraction (for example, signatures are much more abstract than raw data), which means that a key task in analyzing any attack is assembling a corroborative summary using as many diverse information sources as possible.

Assembling such a summary involves multidimensional analysis. It’s tremendously important that we bring the latest advances in analytical reasoning and mathematical and scientific research to bear on our security programs and how we leverage information within them.

With this in mind, let’s talk about the information sources we use, explain their most common applications, and put them into context.

Raw Data

Network packets are all the communications that transit your network, and very often they’re encrypted. The highest-end security programs might include complete packet capture, but that gets expensive quickly. Packet capture offers the highest-fidelity but most diluted set of information for incident detection. A short-term packet capture solution (one that holds data for 30-60 days) often ends up being of little forensic use, because incidents are most often detected later in their lifecycle. The next best thing to complete packet capture is probably a combination of NetFlow and network security sensors.

Logs, at their most basic, are just records of system or user activity. Some of them are relevant for security detection purposes, but most are not. Historically speaking, logs were usually written to manage application and system problems, and they tend to be highly inconsistent in their content, their format, and their usefulness for security.

When a specific security control is violated, or an attempt to violate it is made, a log event is generated. There’s always some chance that the activity is malicious in nature. How big is this chance? Well, it’s different for every single log message and log source. This makes the aggregation and timeline of logs more important than any single log event when it comes to inferring or understanding malicious activity.

This is why we use rules. Rules help us interpret and contextualize logs, and thus slightly improve their utility for detection purposes.

The problem is: how many failed logins does it take before you know you have a hijacked account instead of a forgetful user? How different is the number of failed logins it would take to raise our suspicion on a Monday morning from what it’d take on a Wednesday afternoon? Sometimes we do see security avoidance behaviors in logs (for instance, clearing them), but user mistakes can and do explain these things most often, and it’s hard to know when to dig in.
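One way past a fixed threshold is to compare the current count against a baseline for that time slot. The sketch below is a toy illustration of the idea—the history, the slot, and the three-sigma cutoff are all assumptions, not a prescription:

```python
# Sketch: instead of a fixed failed-login threshold, compare the current
# count to a per-time-slot baseline (all numbers are hypothetical).
from statistics import mean, stdev

# Historical failed-login counts for the same slot, e.g. Monday 9-10 a.m.
history = [12, 15, 9, 14, 11, 13, 10, 16]

def is_suspicious(current: int, baseline: list, k: float = 3.0) -> bool:
    """Flag counts more than k standard deviations above the slot's mean."""
    return current > mean(baseline) + k * stdev(baseline)

print(is_suspicious(14, history))   # within Monday-morning norms
print(is_suspicious(60, history))   # far outside the baseline
```

The same count that is routine on Monday morning can be alarming on Wednesday afternoon, simply because the baseline for that slot differs.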


Network flow data show the connection details and the amount of data transferred between hosts on your network (and out to the Internet). They’re like the network equivalent of monitoring who’s calling whose cell phone within a criminal syndicate. Network graph analysis and visualization are useful approaches to understanding NetFlow data.
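A minimal version of that "who talks to whom" analysis can be sketched directly from flow records. The records and the fan-out threshold below are illustrative only; real baselines vary by host role:

```python
# Toy NetFlow-style analysis: build a "who talks to whom" graph from flow
# records and surface hosts with unusually high fan-out, a rough
# lateral-movement indicator.
from collections import defaultdict

flows = [
    ("10.0.5.44", "10.0.9.2", 1_200_000),   # (src, dst, bytes)
    ("10.0.5.44", "10.0.9.3", 800),
    ("10.0.5.44", "10.0.9.4", 950),
    ("10.0.5.44", "10.0.9.5", 870),
    ("10.0.7.10", "10.0.9.2", 4_000),
]

peers = defaultdict(set)
for src, dst, _ in flows:
    peers[src].add(dst)

FANOUT_THRESHOLD = 3  # hypothetical; tune per environment
for host, targets in peers.items():
    if len(targets) > FANOUT_THRESHOLD:
        print(f"{host} contacted {len(targets)} hosts: investigate fan-out")
```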

Indicators (of malicious or suspicious activity)

Signatures of known attacks and other indicators of malicious code may be detected by sensors monitoring network communications. These are short, hexadecimal character sequences known to be contained within attack payloads. Even when written with a highly specific sequence of bytes in mind, signatures rarely account for non-malicious occurrences of the same sequence in a data stream; to ensure a match when an attack occurs, they are written loosely, and thus produce a large number of false alerts. There are currently over 57,000 IDS signatures in existence, and only a tiny subset of these is relevant at any given moment. This produces a high volume of false or nuanced alerts, further obscuring valuable detection signals. Signatures benefit from being analyzed by machines rather than humans because of the depth of analysis needed to separate out the relevant information. It’s also very important to consider where and how you place sensors, because their value is directly related to their visibility.
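The loose-matching problem is easy to demonstrate. The byte pattern below is a stand-in, not a real IDS rule (real signatures also constrain ports, offsets, and protocol state), but it shows how a short pattern catches the attack and the benign traffic alike:

```python
# Toy illustration of why loosely written signatures generate false alerts:
# a short byte pattern can appear in benign traffic by coincidence.
# (Hypothetical pattern; not taken from any real signature set.)

SIGNATURE = bytes.fromhex("4d5a9000")  # example pattern only

def matches(payload: bytes) -> bool:
    """Naive content match: fire if the pattern appears anywhere."""
    return SIGNATURE in payload

attack_payload = b"\x00\x01" + bytes.fromhex("4d5a9000") + b"\xff"
benign_payload = b"harmless file transfer " + bytes.fromhex("4d5a9000")

print(matches(attack_payload))  # True: the attack is caught...
print(matches(benign_payload))  # True: ...but so is benign traffic
```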

Threat intelligence is another indicator. Yes, it also suffers from a volume problem, and its volume problem is almost as bad as that of network security sensors. Threat intelligence lists try not to omit potential malicious attacks and thus produce a high volume of alerts, which are hard for humans to analyze. Threat intelligence includes lists of IP addresses, domains and known bad file hashes. I consider known good file hashes to be valuable intelligence, too. Once again, combinations of threat indicators offer much higher fidelity as evidence of real threat activity.

Heuristics are behavioral indicators. For example, an alert might be generated when a piece of software takes an action that’s not normal for that software, such as spawning an additional process outside of user-approved space. Heuristics are a library of past incident observations, and as such, are completely historically focused. Although it’s valuable not to fall for the same thing twice, these tend to have a short lifespan when it comes to high accuracy.

First Order Processing

Rules follow a predictable structure (Activity — Threshold — Context — Action) to identify known suspicious activity. Known suspicious activities are described using Boolean logic or nested searches, a threshold is set, and if this is reached or crossed, a notification is sent to a monitoring channel for human evaluation.
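The Activity–Threshold–Context–Action structure can be sketched in a few lines. The field names, threshold, and allow-list below are illustrative inventions, not drawn from any specific SIEM product:

```python
# Minimal sketch of the Activity -> Threshold -> Context -> Action rule
# structure described above. All field names and values are hypothetical.

TRUSTED_SOURCES = {"10.1.2.3"}  # example allow-list

def evaluate_rule(events):
    # Activity: known suspicious pattern expressed as Boolean logic.
    hits = [e for e in events
            if e["type"] == "auth_failure" and e["src"] not in TRUSTED_SOURCES]
    # Threshold: fire only once the count is reached or crossed.
    if len(hits) < 3:
        return None
    # Context: raise priority for high-value targets.
    critical = any(e["target"] == "domain_controller" for e in hits)
    priority = "high" if critical else "medium"
    # Action: send a notification to a monitoring channel.
    return f"ALERT ({priority}): {len(hits)} suspicious auth failures"

events = [
    {"type": "auth_failure", "src": "203.0.113.7", "target": "domain_controller"},
    {"type": "auth_failure", "src": "203.0.113.7", "target": "domain_controller"},
    {"type": "auth_failure", "src": "203.0.113.7", "target": "file_server"},
    {"type": "auth_success", "src": "10.1.2.3", "target": "workstation"},
]
print(evaluate_rule(events))
```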

At the most atomic level, there are fewer than 130 rules in regular use. In fact, in most organizations fewer than 45 are implemented. Rules are most valuable when they’re used to enforce logic that’s specific to your company’s unique business challenges, such as possible fraud scenarios.

Context—additional information about the entities being investigated and the relationship between them—can help you answer questions about the potential impact of attacks in progress and your vulnerability to them. It’s a key component in initial processing.

Statistics and metrics are important in guiding your operations: self-reflection and dispassionate measurement are critical to the effective application of detection science. You can measure attributes like coverage and performance, or calculate cost- or time-per-detection by data source and use this information to guide you in deploying your sensor architecture. Statistical analysis can be a powerful tool for uncovering attackers’ latest stealth techniques. Any activity that’s too close to the center of a normal bell curve might be hiding something in the noise—says the ever-suspicious security investigator.

Second Order Processing

Behaviors, patterns, and baselines are very commonly used to measure and score users’ stealthy or suspicious behaviors. The goal is to identify the users who either pose an insider threat or whose machines have been compromised by malicious code. Maintaining a library of first-order information that you’ve collected over time and conducting periodic calculations against it can help you pinpoint things that might be suspicious. “Repeat offender” is a catchphrase for a reason.

Nth Order Processing

Anomalies, clusters, affinity groups, and network graphs can reveal some very nuanced attacks. Running advanced algorithms across large amounts of data can yield interesting results.

A common fallacy is that anomalies are more likely to be malicious. That’s simply not true. The way our networks are interconnected today makes for all sorts of anomalies in all layers of the technology stack. These provide investigators the same sort of analytical puzzle as network security signatures do.

Some of these algorithms have well-understood security applications. One example is clustering: when you cluster IDS data, what you find most often are false positives, because they occur in highly predictable ways. When a particular signature generates alerts for what’s actually regular business traffic, the same alert will be triggered every time that business process takes place. It thus produces a very obvious cluster that you can exclude when looking for malicious activity.
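A crude version of that clustering can be done by simply grouping alerts on their repeating attributes; large, highly regular clusters usually map to routine business traffic. The alert tuples and the size cutoff below are toy assumptions:

```python
# Sketch of the clustering idea above: group IDS alerts by (signature,
# source, destination); big repetitive clusters are candidates for
# exclusion as business-process false positives. (Toy data throughout.)
from collections import Counter

alerts = [
    ("SIG-2001 SQL probe", "10.0.5.20", "10.0.9.2"),
    ("SIG-2001 SQL probe", "10.0.5.20", "10.0.9.2"),
    ("SIG-2001 SQL probe", "10.0.5.20", "10.0.9.2"),
    ("SIG-2001 SQL probe", "10.0.5.20", "10.0.9.2"),
    ("SIG-7410 shell download", "203.0.113.9", "10.0.5.44"),
]

clusters = Counter(alerts)
for (sig, src, dst), count in clusters.most_common():
    label = "likely business process" if count >= 4 else "needs investigation"
    print(f"{sig} {src}->{dst}: {count} alerts ({label})")
```

Once the repetitive cluster is reviewed and excluded, the lone remaining alert stands out much more clearly.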

The more information known to be unimportant that we can remove, the more clearly we can see what else is going on. This is where analytical detection comes into its own. Very often, we run algorithms on security data simply to see if a subject matter expert can interpret the outcome. Possessing both domain expertise and knowledge of data science is critical if you want to understand what advanced algorithms are telling you.

Visualization and hunting are nth order processing tasks. Using tools that allow you to pivot and display related datasets is the ultimate form of security threat hunting, and it’s also the most fun. You can derive some detection value from considering any layer of detectors through the lens of a visual tool.

Do you think I’m about to tell you there’s another layer called “artificial intelligence”? If so, you’re wrong.

The next layer is simply making a decision: has something malicious occurred? The more information we have to feed into the decision-making process, the more effective and deeper the decision will be. All of the information sources listed above have something of value to contribute.

But you have to ask yourself: how many of these factors can analysts consider in real time as they watch events streaming across a console?

If you’d like to make it possible for your security operations team to incorporate input from a greater variety of detectors and information sources into their decision-making processes and workflows, consider adding the Respond Analyst to your team. Built to integrate with a broad array of today’s most popular sensors, platforms and solutions, the Respond Analyst brings specialized threat intelligence and detailed local contextual information to bear on every decision it makes about which events to escalate. Quite simply, it’ll give your ability to interpret and analyze detection data a boost—and allow your analysts to consider a far wider variety of sources.

To learn more about how the Respond Analyst can help your business become more thorough and derive greater insight from the detectors in your environment, contact us to schedule a demo today.

2019 Security Predictions: Real or Bogus? Respond Experts Weigh In

Where is cybersecurity heading and what should security teams focus on in 2019?

We thought it would be helpful to examine the most common cybersecurity predictions for 2019. Business press, trade publications and of course vendors had a lot to say about emerging technology trends, but what predictions or trends will matter most? More importantly, how can security teams and CISOs turn these predictions into advantages for their business?

I sat down with Chris Calvert, an industry expert who often says, “if you have a problem with how today’s SOCs work, it’s partially my fault and I’m working to solve that issue!” With over 30 years of experience in information security, Chris has worked for the NSA and the DOD Joint Staff and held leadership positions in both large and small companies, including IBM and Hewlett Packard Enterprise. He has designed, built, and managed security operations centers and incident response teams for eight of the Global Fortune 50.

During our conversation, we discuss questions like:

  • Will we see an increase in crime, espionage and sabotage by rogue nation-states?
  • How will malware sophistication change how we protect our networks?
  • Will utilities become a primary target for ransomware attacks?
  • A new type of fileless malware will emerge, but what is it? (think worms)
  • And finally, will cybersecurity vendors deliver on the true promise of A.I.?

You can listen to his expert thoughts and opinions on the podcast here!

Want to be better prepared for 2019?

The Respond Analyst is trained as an expert cybersecurity analyst, combining human reasoning with machine power to make complex decisions with 100% consistency. As an automated cybersecurity analyst, the Respond Analyst processes millions of alerts as they stream, allowing your team to focus on higher-priority tasks like threat hunting and incident response.

Here’s some other useful information:

Why “Context is King” for Cybersecurity in 2019

Remember 3 C’s Part 2


Welcome to part two of my three-part blog series on the 3 C’s (Coverage, Context, and Cost) required for a sustainable security monitoring infrastructure. In the last blog, I reviewed the importance of effective Coverage within a modern security operation. Today’s blog will focus on the second “C”—Context.

When it comes to triaging security alerts, CONTEXT IS KING! Context helps the analyst paint a picture of what is happening, and it makes a generic security alert relevant to your organization. Alerts that include internal, external, and historical context make the difference between a security alert that needs further triage and a security incident that is deemed malicious, actionable, and ready to be acted on right away.

Step into the shoes of a security analyst. Let’s take as an example a single alert from a network intrusion detection system. The alert indicates that a permitted malware communication has occurred between two systems. Given the number of events in your queue, you probably can’t afford to spend more than a few minutes deciding whether this alert represents a true threat to your organization.

A thorough analysis with context for this alert would include:

  • Who is the attacker? Who is the target?
  • What type of attack is this? How sophisticated is it?
  • What is the attacker’s objective?
  • Was the attack successful? Is the target vulnerable? Was it remediated by another control?
  • What would be the impact to the business if it were successful?
  • Is this happening anywhere else?
  • How long has this been going on?

Answers to the majority of these questions are not provided within an alert. Generally, an alert contains limited context and the only identifiable information in the alert is a set of IP addresses. To determine if an alert indicates a true threat or is just another false-positive, three contextual areas must be considered:

Internal Context: Contextual information about internal systems, such as the system’s business function, importance, location, and vulnerability, resides in adjacent repositories that take time to query, and the retrieved data must then be evaluated for its significance in the context of the attack. Context about internal systems helps an analyst understand whether the observed attack is even relevant to the target system, and helps prioritize the incident: is this attack against a production server or a visitor on the guest network?

External Context: Given that only an IP address is included in the event, external context can help attribute who owns the IP address and its geolocation. Reputable threat intelligence is helpful in understanding more about the attacker, the attacker’s intent, and if other organizations have been targeted.

Behavioral Analysis: Historical patterns of behavior and associations among systems and accounts help corroborate whether the observed activity is malicious or just normal behavior. Incidents unfold over time and involve multiple data sources, and adversaries attempt to ‘live off the land’, meaning they try to hide within authorized administrative tools.
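Pulling the three contextual areas together might look like the sketch below. The lookup tables are hypothetical stand-ins for a CMDB, a threat-intelligence feed, and a behavioral baseline; real enrichment pipelines are far richer:

```python
# Hedged sketch: enriching a bare IP-only alert with internal, external,
# and behavioral context. All tables and values are invented examples.

ASSET_DB = {"10.0.9.2": {"role": "production database", "vulnerable": True}}
THREAT_INTEL = {"203.0.113.9": {"reputation": "known C2 infrastructure"}}
BASELINE = {("203.0.113.9", "10.0.9.2"): "never communicated before"}

def enrich(alert: dict) -> dict:
    """Attach the three contextual areas to a raw IP-only alert."""
    src, dst = alert["src"], alert["dst"]
    return {
        **alert,
        "internal": ASSET_DB.get(dst, {"role": "unknown"}),
        "external": THREAT_INTEL.get(src, {"reputation": "unknown"}),
        "behavioral": BASELINE.get((src, dst), "within normal patterns"),
    }

alert = {"signature": "permitted malware communication",
         "src": "203.0.113.9", "dst": "10.0.9.2"}
print(enrich(alert))
```

With all three areas attached, the same bare alert now reads as a production asset receiving traffic from known-bad infrastructure it has never spoken to before—a far easier call to make.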

In reality, security teams don’t have the capacity to collect and analyze the terabytes of data generated by security sensors or escalated by MSSPs (especially as organizations continue to increase their coverage and add new data sources). Effective decisions are made only when the event is considered with all the contextual elements combined, however gathering sufficient context takes time – something human analysts are short on. Machines, however, are 100% consistent, able to operate on large data streams and emulate human analysis and decision making through artificial intelligence approaches.

Do existing security monitoring technologies provide the “context” needed to identify and triage attacks, faster?

The short answer is no, at least not without significant effort. SIEM and SOAR (orchestration) platforms can provide this context, but it comes at a cost. Both require you to build and maintain content within their platforms (correlation rules and playbooks, respectively), enabling you to apply Boolean logic and if/then/else conditions. Additionally, these platforms were not designed to scale to modern data volumes: correlation rules and playbooks hit performance issues and can only operate on a pre-filtered set of alerts/inputs, which in turn has its downsides (a significant reduction in visibility/coverage).

However, let’s say that you are able to overcome the hurdles listed above and your security monitoring technology is effectively decorating events with relevant context. The challenge here is that a human analyst is still required to think deeply, judge the overall event in light of the context and make a manual decision if the event is malicious and actionable.

Read More:
Neither SIEM nor SOAR—Can Security Operations be Automated? Risky Business host, Patrick Gray and Mike Armistead discuss.

Bring Decision Automation into the security tech stack

Decision Automation software automates the collection of relevant context AND the interpretation of security alerts by emulating human reasoning and judgment. And the good news: you can (if you find the right tool) integrate this technology quickly. The most robust Decision Automation software is plug-and-play and immediately enhances the capability of existing SIEM and SOAR platforms. Decision Automation only presents the most valid security threats with the contextual evidence required within the alert so analysts can understand and respond quickly.

The importance of context when monitoring and triaging security data should not be underestimated. Context truly is King! The more context analysts have, the more confident and efficient they will be in resolving attacks. Armed with evidence to effectively respond to malicious attacks, morale rises and security teams become empowered. Contextual alerts save security teams valuable time, money, and resources.

This leads us to the 3rd “C” in our series—Cost. Stay tuned for next month’s final blog when I’ll examine the ROI that can be achieved with Decision Automation. Find out how understaffed security teams can identify more valid incidents and reprioritize resources to focus on higher priority projects—all while staying under budget!

If you would like to talk with one of our cybersecurity experts to discuss how to integrate Decision Automation into your security infrastructure, contact us: tellmemore@respondsoftware.com

More information:

3 Reasons Understaffed Security Teams Can Now Sleep at Night

If you feel overwhelmed with security operations, you’re not alone. Matter of fact, it’s a common theme we hear all the time: “We’re overloaded and need help!” We’ve been in the trenches, building security operations for mid to large enterprises, so we understand the unique pressure IT and security teams feel. It’s not easy balancing it all—especially for mid-sized enterprises with resource-constrained security teams.

Cybersecurity in mid-sized companies has unique challenges. With fewer resources and tighter budgets, IT teams are spread thin while wearing multiple hats. Unfortunately, sometimes security projects accumulate, leaving teams exposed and overwhelmed. But it doesn’t have to be this way—there is a viable solution.

Here are the three biggest challenges security teams face and why The Respond Analyst helps them sleep soundly at night.

Reason #1 – We don’t have enough time

Our customers need to free up time to work on priority projects and initiatives. We designed our product to provide expert intrusion analysis without the fuss of deploying extensive technology stacks that require significant upfront and continued investment. We’re here to simplify the process, not add complexity. Monitoring a security event console is a thing of the past: we free our customers from staring at consoles and instead move them toward higher-value tasks and initiatives.

Within seven days, The Respond Analyst has learned its environment and is finding actionable incidents for our customers. The setup process is simple: 1) deploy a virtual appliance or install our software, 2) direct security feeds to our software, and 3) add simple context. There are no significant time commitments, and no in-depth expertise in security operations is required.

Reason #2 – We need additional security expertise

One of the biggest challenges our customers face is finding the right people and retaining them. This challenge is expected to grow in an ever more competitive job market, resulting in higher wages and more turnover at a time when organizations are trying to implement steady security programs. To say it’s difficult is an understatement.

We don’t expect our customers to be experts in intrusion analysis and security operations—that is why they’ve partnered with us. The Respond Analyst is an expert system that automates the decision-making of a front-line security analyst. This pre-packaged intelligence requires no security expertise to deploy. There is no use case development, programming of rules, or tagging of event data. Well-vetted incidents, without all the fuss, are the result of a well-designed expert system.

Reason #3 – We don’t have the time, money or desire to build a legacy SOC

Many organizations understand that the old way of building a legacy SOC around a SIEM is not the future. Indeed, it’s not even keeping up with today’s threats. Not only is it less effective than solutions such as The Respond Analyst, it also costs significantly more and results in a far lengthier return-on-investment timeframe.

The process of building a SIEM with 80+ data sources (where most teams really only look at 5 or fewer), hiring, training, and retaining experienced intrusion analysts, and implementing a sophisticated process to keep it all glued together is outdated. Of course, this was the best we could do given the technology and understanding we had at the time, but now we have a better way. Old models have since been replaced, and our customers avoid the frustration and high cost by using a pre-packaged expert system.

Times have changed. With the emergence of expert systems like The Respond Analyst, we have brought technology to areas that traditionally required large investments and lengthy, time-intensive projects. The result is that mid-sized enterprise customers can now operate at maturity levels beyond those of large traditional enterprise operations by leveraging expert systems. This new approach frees up time, provides needed expertise, and saves our customers the headache and cost of legacy solutions. Better yet, our customers gain relief from the stress of understaffed teams and can relax knowing we have their security operations covered.

Read more:

Is SOAR the Missing Link to the Success of the SIEM Market?

5 Key SOAR Considerations SIEM Vendors Need to Make

It’s time for SIEM vendors to up their game—but are they? And is SOAR the holy grail to solving the issues SIEMs have faced over the past few years?

A while back, we outlined the 8 fragments of SIEM that affect SecOps teams. SIEM was declared dead more than a decade ago, yet it is still widely deployed in most security organizations today, even though it introduces its own set of challenges. One reason it’s still used broadly is that, once deployed, SIEM has processes and procedures woven around it, making it burdensome to change; ‘uninstalling’ SIEM isn’t as simple as flipping an on/off switch.

In a recent article in Information Age, Andy Kays, CTO of managed security services provider Redscan, argues that the onus lies on SIEM vendors to improve threat detection and response for their customers by leveraging Security Orchestration, Automation and Response (SOAR).

While this may seem to be an obvious solution, integrating SOAR into SIEM is not so black and white and there are key considerations SIEM vendors and IT security teams need to make when it comes to leveraging SOAR.

1. SOAR: Buy vs build?

SIEM vendors will have to decide whether to build SOAR capabilities, buy them, or simply figure out how best to integrate with existing SOAR platforms.

The problem with either path? SIEM vendors will be forced to integrate yet another platform into their software, and a lot of setup, forethought, and man-hours are required for that integration, whether they build or buy.

2. SOAR adoption is slow and it requires skilled people.

While SOAR is a beneficial tool, the number of installations in the cybersecurity industry is still limited. In addition, SOAR is resource-intensive and requires trained professionals to deploy the tools and develop appropriate playbook automation for specific, relevant actions and tasks.

3. SOAR can help ease SOC task burdens for security teams.

In which areas? Specifically workflow, case creation and updates, and automated response actions. The more complex the task, however, the trickier it is to automate. Like SIEM, SOAR tools require you to tackle each use case individually, which demands security expertise and an engineering background. This is great for security teams that have skilled employees and the time to build it.

4. Platform-based approaches are still very reliant on people, whether SIEM or SOAR.

While efficiencies can be gained, platform-based approaches don’t address the core issues of skill, resource, and budget shortages. They ease the people shortage by automating certain aspects of the SOC workflow, but the reliance on people to build and maintain playbooks is still a disadvantage.

5. Both SOAR and Decision-automation have a deliberate fit in the flow of SIEM operations.

While SOAR can and should be leveraged, SIEM vendors need to consider the true missing link: the high-fidelity decision-making capabilities that neither SOAR nor SIEM can provide, but that can be achieved with decision-automation solutions. Simply put, both SIEMs and SOAR platforms struggle to detect malicious intrusions because they rely on rules and simple correlations—this is where decision-automation tools can help.

Decision-automation solutions that come pre-built with the ability to replicate human judgment are an alternative that automates security alert triage and analysis, covers a breadth of use cases and does not require a team of security experts.
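As a loose illustration of the idea (not Respond Software’s actual model), decision-automation can be pictured as weighing multiple pieces of evidence probabilistically, rather than firing on a single correlation rule. The likelihood ratios below are invented for the example:

```python
# Hypothetical sketch: score an alert by combining independent pieces of
# evidence via Bayesian odds updates, instead of a single if/then rule.

def update_odds(prior_odds, likelihood_ratio):
    """Multiply the running odds by one piece of evidence's likelihood ratio."""
    return prior_odds * likelihood_ratio

def malicious_probability(evidence_ratios, prior=0.01):
    """Combine evidence likelihood ratios into a probability of 'malicious'."""
    odds = prior / (1 - prior)          # convert prior probability to odds
    for lr in evidence_ratios:
        odds = update_odds(odds, lr)    # each observation shifts the odds
    return odds / (1 + odds)            # convert back to a probability

# Three observations, each more likely under "malicious" than "benign".
p = malicious_probability([8.0, 3.0, 5.0])
```

No single observation here is damning, but together they push a 1% prior well past the point where a rule-based system, looking at each event in isolation, would ever fire.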

That being said, SOAR and decision-automation are both needed; meaning there is an optimal way to plug SIEM, SOAR, and decision-automation in together. A happy “API-enabled” coexistence with all three would provide the best long-term outcome.

All in all, we agree that while SIEM vendors can leverage SOAR, the combination is still very reliant on people. But by integrating with decision-automation software like the Respond Analyst, SIEM vendors gain a plug-and-play solution to help security organizations tackle high-volume, time-consuming event analysis of fundamental data feeds. This is especially beneficial for mid-sized enterprises that may not have a front-line analyst, as the Respond Analyst is there to fill that role (24×7, I might add) for smaller security organizations.

Planning Your 2019 Security Budget? Consider A New Way!

With Cybersecurity Awareness Month upon us, many cybersecurity companies are developing information to educate and advise security operations teams on how to protect their networks more efficiently. A goal of mine this month was to learn as much as possible about the latest security technologies, cyber threats, and trends within the industry.

One of the most intriguing interviews I listened to this month was TAG Cyber’s 7-part series with Ed Amoroso, Founder of TAG Cyber, and Mike Armistead, CEO and Co-Founder of Respond Software. The discussion between Mike and Ed covers a wide variety of topics, including enterprise security, industry trends, and how Respond Software’s pragmatic approach can dramatically empower security teams – providing additional capacity and efficiency with managed detection and response to defend their network environments.

Our CEO, Mike, is a cybersecurity veteran who has launched multiple security companies and brought them to enterprise level and scale – his track record is quite impressive! I found it compelling to hear Mike discuss how he came up with the idea for the Respond Analyst and the problem he was ultimately trying to solve: security teams cannot process and analyze the high volume of data generated by their security applications, and it’s only getting more difficult to keep up as attackers become more sophisticated and the data keeps piling up.

Additionally, Mike discusses how the Respond Analyst has an immediate impact on your team. It analyzes and monitors all your security data from day one of installation – freeing your analysts to do what they’re good at: using their creativity for threat hunting and incident response.

If you want to learn how Respond Analyst can improve your team, I recommend listening to their interview!

Planning your 2019 IT budget?

If you are preparing your 2019 cybersecurity budget, TAG Cyber has also released its 2019 Enterprise Cyber Security Controls! This publication is a phenomenal resource for anyone involved in security buying decisions, providing information on emerging technologies that can enhance your security team’s performance.

We are excited to be a part of such an amazing group of cybersecurity companies that are changing the ways security teams defend their network environment!

A new tool for defenders – Real-time analysis of Web Proxy data

When I got back into the office after a short break to recharge my batteries, I was really excited to speak with my colleagues at Respond Software about the upcoming release of the web filtering model for the Respond Analyst. You see, over the last few months we’ve been working tirelessly to build a way to analyze web filtering event data in real-time. Now that I’m sitting down to write this blog, the fruit of all the hard work our team has put into making this a reality is really sinking in. We’ve done it! It’s now available as part of the Respond Analyst!

This was no small feat, as most of you in the security operations world would know.

You may ask why we chose to take on this challenge. The answer is quite simple: there is a ton of valuable information in web filtering data, and it’s extremely difficult for security teams to analyze these events in real-time due to the sheer volume of data generated by enterprises. What a perfect opportunity for us to show off the Respond Analyst’s intelligence and capability.

Up until now, security operations and IR teams have pivoted to web filtering data for investigations only after they’ve been alerted to an attack through threat hunting or some other form of detection. Processing all of an organization’s web filtering data in a SIEM or similar tool has simply been too expensive. In fact, most organizations can’t even afford to store this data for a “reasonable” amount of time for investigators to dig through.

Think about it for a second: each web page visited can generate a number of new web requests to pull back content from different sources. Then picture each employee using the internet for most of the day – navigating the web through day-to-day tasks and a few personal items between meetings – all of which amounts to hundreds of web page visits per person each day. If you have a few hundred employees, the volume of data generated by the web filtering solution quickly becomes unmanageable. Now we’re able to process all of these events in real-time.
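A quick back-of-envelope calculation makes the scale concrete. The numbers below are illustrative, not measurements:

```python
# Rough estimate of daily web proxy event volume for a mid-sized company.
employees = 300          # "a few hundred employees"
pages_per_day = 300      # "hundreds of web page visits each day" per person
requests_per_page = 50   # each page pulls content from many different sources

events_per_day = employees * pages_per_day * requests_per_page
# 300 * 300 * 50 = 4,500,000 proxy log events per day
```

Even with these conservative assumptions, that is millions of events per day from a single data source – far beyond what a human can triage on a console.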

Consider the questions you are able to ask of the data without even taking the assigned web filtering category into account…

  • Analyze each component of the HTTP header
  • Perform user agent analysis
  • Take a look at how suspicious the requested domain is
  • Perform URL string comparisons to all other requests over an extended period of time
  • Compare each attribute to information you’ve gathered in your threat intel database

But why stop there…

  • What about looking at whether the pattern of behavior across a set of requests is indicative of exploit kit delivery?
  • Maybe you suspect that these requests are related to command-and-control activity
  • What about the upload of documents to a filesharing service, is that data exfiltration or simply everyday user activity?
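A few of the simpler per-request checks above can be sketched in code. This is a minimal, hypothetical illustration – the field names, agent list, and TLD list are assumptions, not the Respond Analyst’s actual logic:

```python
# Hypothetical sketch of per-request heuristics on web proxy log fields.
SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}          # illustrative list
KNOWN_AGENT_PREFIXES = {"Mozilla/5.0", "Chrome/"}    # illustrative list

def score_request(req):
    """Return the reasons a single proxied web request looks suspicious."""
    reasons = []
    # User-agent analysis: rare or tool-like agents stand out.
    agent = req.get("user_agent", "")
    if not any(agent.startswith(p) for p in KNOWN_AGENT_PREFIXES):
        reasons.append("unusual user agent")
    # Domain suspicion: odd TLDs, or a raw IP instead of a hostname.
    host = req.get("host", "")
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        reasons.append("suspicious TLD")
    if host.replace(".", "").isdigit():
        reasons.append("direct-to-IP request")
    return reasons

flags = score_request({"host": "203.0.113.9", "user_agent": "curl/8.4"})
```

Each check is trivial on its own; the hard part – and the point of doing this with software – is running them on every one of the millions of requests per day, and reasoning about patterns across requests rather than one event at a time.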

Web filtering data can also feed the power of integrated reasoning. When web filtering data is combined with IDS/IPS sensors, anti-malware technology, and contextual sources like vulnerability data and critical asset lists, you can form an objective view of your enterprise’s threat landscape. Beyond analyzing each of these data sources, the Respond Analyst accurately scopes all events related to the same security incident together for a comprehensive incident overview. It then assigns an appropriate priority to that incident, documents all the details of the situation, and presents this information to you. This is, by far, the most efficient way to reduce attacker dwell time.
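To give a feel for why corroboration across sources matters, here is a minimal sketch assuming a simple noisy-OR fusion of per-sensor scores. The sensor names and scores are made up for illustration and are not the Respond Analyst’s actual method:

```python
# Illustrative "integrated reasoning": fuse independent sensor scores so
# that corroborating sources raise the combined incident priority.

def noisy_or(probabilities):
    """P(incident), assuming each sensor independently detects the attack."""
    p_missed_by_all = 1.0
    for p in probabilities:
        p_missed_by_all *= (1.0 - p)
    return 1.0 - p_missed_by_all

sensor_scores = {
    "web_proxy": 0.4,     # suspicious outbound requests
    "ids_ips": 0.5,       # exploit signature match
    "anti_malware": 0.3,  # low-confidence detection
}
priority = noisy_or(sensor_scores.values())
```

Each source on its own is a weak signal, but the fused score exceeds any single sensor’s – which is the intuition behind scoping related events from multiple sources into one incident.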

We have a long way to go, and many more exciting Respond Analyst skills and capabilities are on the way. I couldn’t be prouder of all the work we’ve done and the release of our Web Filtering model.

Way to go Respond team!

Join our growing community! Subscribe to our newsletter, the "First Responder Notebook," delivered straight to your inbox.