What’s Old is New: How Old Math Underpins the Future of A.I. in Security Operations

Most of us engineers know the truth—A.I. is just old math theory wrapped in a pretty package. The mathematical neuron model that underpins today’s deep learning and neural networks? Yep, you guessed it, it was first published in 1943!

For those of us in Security Operations, the underlying mathematics of probability will lead us into the future. Probability theory will automate human analysis, making real-time decisions on streaming data.

Probabilistic modeling will fill the gaps that our SecOps teams deal with today:  Too much data and not enough time. We humans have a very difficult time monitoring a live streaming console of security events.  We just can’t thread it all together with our limited knowledge, biases, and the small amount of time we have to interact with each new event.

Making instant decisions as data streams in real time is nearly impossible because we have:

    • too much information and data to process,
    • not enough meaning—we don’t understand what the data is telling us,
    • poor memories—we can’t remember what happened two hours ago, let alone days, weeks, or months before.

Enter Probability Theory

Watch my short video to learn how Probability Theory will fundamentally change the future of Security Operations by expanding our ability to analyze more data across our environments than ever before.

Click here to watch now.
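For the engineers who prefer code to video, here is a minimal, purely illustrative sketch of the core idea: sequential Bayesian updating over a stream of security events. The feature names and likelihood values below are made-up assumptions for illustration, not the model any particular product uses.

# Minimal sketch of probabilistic scoring over a stream of security events.
# The feature names and likelihood values are illustrative assumptions only.

def update_belief(prior, likelihood_malicious, likelihood_benign):
    """One Bayes update: P(malicious | evidence) from a prior and two likelihoods."""
    numerator = likelihood_malicious * prior
    return numerator / (numerator + likelihood_benign * (1.0 - prior))

# Assumed likelihoods: (P(feature | malicious), P(feature | benign))
FEATURE_LIKELIHOODS = {
    "ids_signature_fired": (0.60, 0.05),
    "rare_user_agent":     (0.30, 0.02),
    "off_hours_login":     (0.20, 0.08),
}

def score_event_stream(features, prior=0.001):
    """Fold a stream of observed features into a running probability of compromise."""
    belief = prior
    for feature in features:
        p_mal, p_ben = FEATURE_LIKELIHOODS.get(feature, (0.5, 0.5))  # uninformative default
        belief = update_belief(belief, p_mal, p_ben)
    return belief

print(round(score_event_stream(["ids_signature_fired", "rare_user_agent", "off_hours_login"]), 4))

Each new piece of evidence nudges the probability of compromise up or down, and the math never gets tired, biased, or forgetful.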

Jumping to a New Curve

In the business classic “The Innovator’s Dilemma,” author Clayton Christensen shows how jumping to a new productivity curve is difficult for incumbent leaders but valuable for new innovators. I think a lot about this concept for cybersecurity. The world has changed dramatically over the last 5-10 years, and the curve most enterprises are on results in lots of siloed detectors, rudimentary processing, people-centric processes, and high costs to maintain platforms. The solutions for these problems had great promise in the beginning but still can’t provide the level of productivity necessary to keep up with advances by the adversary. Workflow automation helps, but not enough to address the “orders of magnitude” problem that exists. The scales are definitely tipped in favor of the attackers. So how do we think out of the box to help companies jump to that new productivity curve?

Helping Customers Jump to a New Curve of Productivity

Three years ago, we started on a mission to help security operations teams right the balance between attackers and defenders. We are on the front lines, working to change the status quo and bring a new way of thinking to defending the enterprise.

At Respond Software, we strive to unlock the true potential of Man + Machine—without bankrupting security teams. We aim to elevate human analysts and incident responders to do what they do best (be curious, think outside the box, proactively take action) and let the machines do what machines do best (consistently analyze huge amounts of data thoroughly and accurately across hundreds of dimensions). In short, security teams can use modern processing and computing techniques to jump to a new curve and better defend their enterprise.

Today, our product, the Respond Analyst, is fulfilling that mission for customers around the globe. In fact, over the last 30 days, our Robotic Decision Automation product actively monitored billions of live events, vetted those into tens of thousands of cases, and escalated (only!) hundreds of incidents to our customers’ incident responders. What’s more, our security operations software customers were able to give the Respond Analyst feedback on what they liked, what they didn’t like and how to improve the results.  They now have an analyst on their team that can plow through the alerts and invoke expert judgement to group and prioritize them into incidents. This eliminates a huge amount of time wasted chasing false positives while freeing analysts to focus on threat hunting, deeper investigations, and proactive security measures.  What a change for those teams!

New $20 Million Investment = More Status Quo Busting

To continue these efforts and to expand to meet increasing demand, we are pleased to announce our $20M Series B round of financing.  The round was led by new investor ClearSky Security, with additional investment from our existing investors, CRV and Foundation Capital.

We are extremely pleased to add ClearSky Security to our team. ClearSky’s depth of cybersecurity knowledge and experience—both personally amongst the partners and from backing successful companies such as Demisto and Cylance—will be extremely helpful as we look to establish our innovative robotic decision automation software in more security operations teams. On top of that, we get Jay Leek, current ClearSky Managing Director and former CISO at Blackstone, joining our Board. See our press release (and the accompanying video) for more details and his perspective.

I’d also like to thank the entire group of Responders whose hard work and dedication got us to where we are today. As I recently told the team, I’m certainly psyched to get the endorsement and funding from three world-class investors. Even more so, I look forward to using the funds to work with ClearSky to further innovate, provide service to customers, and expand our reach to help more security operations teams take the fight to the adversaries…and save money while they do it. It’s time for security operations to bust through the status quo and jump to a new curve of productivity, capability and job satisfaction.

It’s time for the next phase of Respond Software.

Watch and Read More:


Video:  Jay Leek shares his reasons for investing in Respond Software (on the way to the airport in an Uber)!

Press Release:  Respond Software Raises $20 Million to Meet Growing Demand for Robotic Decision Automation in Security Operations


Introducing “Inferred Context,” or How to Enjoy a Spring Day

Moving at a brisk pace across the campus of your company, laptop stowed under your arm, you hardly have a moment to admire the beauty of an early spring day. During the short trip, and perhaps unbeknownst to you, your computer has changed IP addresses multiple times. This common practice helps IT organizations centrally and automatically manage IP addresses, resulting in improved reliability and reduced network administration overhead.

However, constant IP address changes can create havoc for Security Analysts because each address will appear as an independent system when a security alert occurs. For instance, an Analyst may start investigating an event based on an IP address and an attack name. The next step is to identify what has happened in association with that IP address, as well as what other systems may be involved in the attack. Depending on the information returned, the Analyst can reach a completely inaccurate conclusion about how to resolve the event. If you cannot determine the location or owner of the target machine, you can’t fix the problem.

The decision-making process is further impacted by incomplete, outdated or inaccurate critical asset lists. This is an all too common occurrence that contributes to high numbers of false positives and even worse, false negatives.

Inferred Context – Advanced decision-making skills

Watch video on Inferred Context

The latest release of the Respond Analyst comes equipped with a new set of features called Inferred Context. Inferred Context improves the Respond Analyst’s ability to make informed, accurate decisions that lead to faster incident response times.

Two examples of how applying Inferred Context will result in better security decisions:

Dynamic Host Configuration Protocol (DHCP)

The first component of Inferred Context automatically and intelligently maintains an up-to-date mapping of IP address to hostname through ingestion of Dynamic Host Configuration Protocol (DHCP) information. This enhances the accuracy of the Respond Analyst’s findings by attributing all relevant events to the infected/targeted asset (and only that asset!) and enabling reasoning across data sources where one source includes IP addresses (such as network IDS/IPS events). The result is fewer false positives, more accurate prioritization of events and faster time to resolution.
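To make the bookkeeping concrete, here is a rough sketch of time-aware IP-to-hostname attribution built from DHCP lease events. It illustrates the general idea only, with assumed field names; it is not Respond Software’s implementation.

# Illustrative sketch: time-aware IP-to-hostname attribution from DHCP lease events.
from bisect import bisect_right
from collections import defaultdict

class DhcpLeaseMap:
    """Maps an IP address to the hostname that held it at a given point in time."""

    def __init__(self):
        # ip -> list of (lease_start_timestamp, hostname), kept sorted by start time
        self._leases = defaultdict(list)

    def record_lease(self, ip, hostname, start_ts):
        """Ingest one DHCP acknowledgement: ip was assigned to hostname at start_ts."""
        self._leases[ip].append((start_ts, hostname))
        self._leases[ip].sort()

    def hostname_at(self, ip, event_ts):
        """Return the hostname that held this IP when the event occurred, if known."""
        leases = self._leases.get(ip, [])
        starts = [ts for ts, _ in leases]
        idx = bisect_right(starts, event_ts) - 1
        return leases[idx][1] if idx >= 0 else None

leases = DhcpLeaseMap()
leases.record_lease("10.0.4.7", "laptop-alice", start_ts=1000.0)
leases.record_lease("10.0.4.7", "laptop-bob",   start_ts=2000.0)
print(leases.hostname_at("10.0.4.7", 1500.0))  # laptop-alice
print(leases.hostname_at("10.0.4.7", 2500.0))  # laptop-bob

The important detail is that attribution is evaluated at the event’s timestamp, so an alert from this morning is tied to the laptop that actually held the address this morning.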

Critical Assets – Shades of Gray

Many customers are challenged in keeping up-to-date lists of systems and their level of criticality. To address this, the second component of Inferred Context ingests vulnerability scan data that includes information about each host, such as its operating system and which ports are open. Because applications communicate over open ports, the Respond Analyst infers that an application is running there. For example, a Simple Mail Transfer Protocol (SMTP) server runs on port 25, so if that port is open, the Respond Analyst will infer that the host is a mail server, which is considered a critical asset.
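As a rough illustration of that inference (the port-to-service mapping and criticality tiers below are assumptions made for the sake of example, not the product’s actual rules):

# Illustrative only: infer likely services and a coarse criticality tier from the
# open ports reported by a vulnerability scan.

PORT_TO_SERVICE = {
    25:   "smtp_mail_server",
    53:   "dns_server",
    389:  "ldap_directory",
    1433: "mssql_database",
    3306: "mysql_database",
}

CRITICAL_SERVICES = {"smtp_mail_server", "ldap_directory", "mssql_database", "mysql_database"}

def infer_asset_profile(open_ports):
    """Map a host's open ports to inferred services and a criticality guess."""
    services = {PORT_TO_SERVICE[p] for p in open_ports if p in PORT_TO_SERVICE}
    criticality = "critical" if services & CRITICAL_SERVICES else "standard"
    return {"inferred_services": sorted(services), "criticality": criticality}

print(infer_asset_profile([22, 25, 443]))
# -> {'inferred_services': ['smtp_mail_server'], 'criticality': 'critical'}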

Inferred Context is supported on all of the models listed on the Respond Software Integrations page.  Additionally, this release is expanding support to give security teams more visibility into their existing alerting telemetries for the following systems:

● Endpoint Protection Platforms: Trend Micro Deep Security, Trend Micro OfficeScan, Palo Alto Networks Traps
● Web proxy/URL filtering: McAfee Secure Web Gateway
● Network IDS/IPS: Check Point

Inferred Context helps the Respond Analyst quickly find the target of an attack, so security teams can resolve incidents quickly. Analysts will no longer have to investigate a multitude of false positives or manually search for the affected system, system owner and/or system name. Instead of staring at a screen filled with endless events, let the Respond Analyst automate the process so you can get out there and enjoy a spring day.

For more on Inferred Context, please read our recent press release or better yet, check out our YouTube channel.

The Respond Analyst’s decision-making skills are continuously expanding. To learn more about what the Respond Analyst can do for your organization, or to gain access to future Early Access programs, contact tellmemore@respond-software.com.

The Science of Detection Part 3: A Closer Look at the “Detectors” You Rely on When Hunting for Evidence

This is the third blog in my science of detection series. In the previous parts, we examined the key elements of a data source and considered integrated reasoning. Today, I’ll be taking a closer look at the signal quality we get from the various “detectors” that we use to find malicious activities in our environment.

Be sure to check back in the coming weeks to see the next blogs in this series. In part four, I’ll be talking about architectural approaches to detection, and looking at how we collect and aggregate information so that it’s useful to our security programs. I’ll be making some predictions about the progress we’ll see in this area in the future, because I think the old way of doing things has reached a dead end.

Security analysts have many different information sources—“detectors”—to consider when making decisions about whether or not they see malicious activity taking place in their environment. Each detector has a purpose, and each contributes some degree of differential value to the ultimate decision, but only a few of them were specifically designed for security applications. That complicates things.

What’s interesting about these information sources is that each must be interpreted and analyzed in a different way in order to assemble enough information to get a truly comprehensive picture of what’s taking place in the environment. They also operate at different levels of abstraction (for example, signatures are much more abstract than raw data), which means that a key task in analyzing any attack is assembling a corroborative summary using as many diverse information sources as possible.

Assembling such a summary involves multidimensional analysis. It’s tremendously important that we bring the latest advances in analytical reasoning and mathematical and scientific research to bear on our security programs and how we leverage information within them.

With this in mind, let’s talk about the information sources we use, explain their most common applications, and put them into context.

Raw Data

Network packets are all the communications that transit your network. Very often they’re encrypted. The highest-end security programs might include complete packet capture, but that gets very expensive quickly. Packet capture offers the highest-fidelity but most diluted set of information for incident detection. A short-term packet capture solution (one that holds data for 30-60 days) often ends up being of little use forensically because incidents are most often detected later in their lifecycle. The next best thing to complete packet capture is probably a combination of NetFlow and network security sensors.

Logs, at their most basic, are just records of system or user activity. Some of them are relevant for security detection purposes, but most are not. Historically speaking, logs were usually written to manage application and system problems, and they tend to be highly inconsistent in their content, their format, and their usefulness for security.

When a specific security control is violated, or an attempt to violate it is made, a log event is generated. There’s always some chance that the activity is malicious in nature. How big is this chance? Well, it’s different for every single log message and log source. This makes the aggregation and timeline of logs more important than any single log event when it comes to inferring or understanding malicious activity.

This is why we use rules. Rules help us interpret and contextualize logs, and thus slightly improve their utility for detection purposes.

The problem is: how many failed logins does it take before you know you have a hijacked account instead of a forgetful user? How different is the number of failed logins it would take to raise our suspicion on a Monday morning from what it’d take on a Wednesday afternoon? Sometimes we do see security avoidance behaviors in logs (for instance, clearing them), but user mistakes can and do explain these things most often, and it’s hard to know when to dig in.

Metadata

Network flow data show the connection details and the amount of data transferred between hosts on your network (and out to the Internet). They’re like the network equivalent of monitoring who’s calling whose cell phone within a criminal syndicate. Network graph analysis and visualization are useful approaches to understanding NetFlow data.
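As a toy example of that graph-oriented view, here is a sketch that builds a “who talks to whom” picture from flow records and ranks hosts by distinct peers and bytes sent; the record fields are assumed for illustration.

# Toy sketch: build a "who talks to whom" view from flow records and rank hosts by
# the number of distinct peers and bytes sent.
from collections import defaultdict

flows = [
    {"src": "10.0.1.5", "dst": "10.0.2.9",     "bytes": 4200},
    {"src": "10.0.1.5", "dst": "203.0.113.7",  "bytes": 98000},
    {"src": "10.0.1.5", "dst": "198.51.100.3", "bytes": 1200},
    {"src": "10.0.3.8", "dst": "10.0.2.9",     "bytes": 300},
]

peers = defaultdict(set)   # host -> set of distinct destinations
egress = defaultdict(int)  # host -> total bytes sent

for flow in flows:
    peers[flow["src"]].add(flow["dst"])
    egress[flow["src"]] += flow["bytes"]

for host in sorted(peers, key=lambda h: len(peers[h]), reverse=True):
    print(host, "->", len(peers[host]), "distinct peers,", egress[host], "bytes out")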

Indicators (of malicious or suspicious activity)

Signatures of known attacks and other indicators of malicious code may be detected by sensors monitoring network communications. These are short, hexadecimal character sequences known to be contained within attack payloads. To ensure a match when an attack occurs, signatures are written loosely; even when they target a highly specific sequence of bytes, they rarely account for benign occurrences of the same sequence elsewhere in a data stream, so they produce a large number of false alerts. There are currently over 57,000 IDS signatures in existence, and only a tiny subset of these are relevant at any given moment in time. This produces a high volume of false or nuanced alerts, further obscuring valuable detection signals. Signatures benefit from being analyzed by machines rather than humans because of the depth of analysis needed to separate out the relevant information. It’s also very important to consider where and how you place sensors, because their value is directly related to their visibility.
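A minimal sketch of why loosely written signatures misfire: a signature is ultimately just a byte sequence, so the same bytes appearing in benign traffic will trigger the alert too. The pattern and payloads below are invented for illustration.

# Minimal sketch of loose signature matching over payload bytes. All values invented.

SIGNATURES = {
    "example-shellcode-marker": bytes.fromhex("90909090eb"),  # illustrative pattern
}

def match_signatures(payload):
    """Return the names of every signature whose byte pattern appears in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

malicious_payload = bytes.fromhex("00ff90909090eb1f")                 # attack-like bytes
benign_payload    = b"firmware blob: " + bytes.fromhex("90909090eb")  # same bytes, harmless

print(match_signatures(malicious_payload))  # ['example-shellcode-marker']
print(match_signatures(benign_payload))     # ['example-shellcode-marker']  <- false alert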

Threat intelligence is another indicator. Yes, it also suffers from a volume problem, and its volume problem is almost as bad as that of network security sensors. Threat intelligence lists err on the side of not omitting potentially malicious activity, and thus produce a high volume of alerts that are hard for humans to analyze. Threat intelligence includes lists of IP addresses, domains and known bad file hashes. I consider known good file hashes to be valuable intelligence, too. Once again, combinations of threat indicators offer much higher fidelity as evidence of real threat activity.

Heuristics are behavioral indicators. For example, an alert might be generated when a piece of software takes an action that’s not normal for that software, such as spawning an additional process outside of user-approved space. Heuristics are a library of past incident observations, and as such, are completely historically focused. Although it’s valuable not to fall for the same thing twice, these tend to have a short lifespan when it comes to high accuracy.

First Order Processing

Rules follow a predictable structure (Activity — Threshold — Context — Action) to identify known suspicious activity. Known suspicious activities are described using Boolean logic or nested searches, a threshold is set, and if this is reached or crossed, a notification is sent to a monitoring channel for human evaluation.
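A minimal sketch of that Activity / Threshold / Context / Action structure, using the failed-login question raised earlier; the window size, threshold, and field names are assumptions for illustration.

# Minimal sketch of the Activity / Threshold / Context / Action structure for a
# failed-login rule.
from collections import defaultdict, deque

WINDOW_SECONDS = 600
THRESHOLD = 10  # failed logins within the window before we notify

recent_failures = defaultdict(deque)  # username -> timestamps of recent failures

def notify(message):
    print("ALERT:", message)

def on_log_event(event):
    # Activity: this rule only cares about failed authentication events.
    if event.get("type") != "auth_failure":
        return
    user, ts = event["user"], event["ts"]
    window = recent_failures[user]
    window.append(ts)
    while window and ts - window[0] > WINDOW_SECONDS:
        window.popleft()
    # Threshold + Context: enough failures, and not a known noisy service account.
    if len(window) >= THRESHOLD and not event.get("is_service_account", False):
        # Action: send a notification to the monitoring channel for human review.
        notify(f"{len(window)} failed logins for {user} in the last {WINDOW_SECONDS}s")

The hard part, as noted above, isn’t writing the rule; it’s knowing whether ten failures on a Monday morning mean the same thing as ten on a Wednesday afternoon.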

At the most atomic level, there are fewer than 130 rules in regular use. In fact, in most organizations fewer than 45 are implemented. Rules are most valuable when they’re used to enforce logic that’s specific to your company’s unique business challenges, such as possible fraud scenarios.

Context—additional information about the entities being investigated and the relationship between them—can help you answer questions about the potential impact of attacks in progress and your vulnerability to them. It’s a key component in initial processing.

Statistics and metrics are important in guiding your operations: self-reflection and dispassionate measurement are critical to the effective application of detection science. You can measure attributes like coverage and performance, or calculate cost- or time-per-detection by data source and use this information to guide you in deploying your sensor architecture. Statistical analysis can be a powerful tool for uncovering attackers’ latest stealth techniques. Any activity that’s too close to the center of a normal bell curve might be hiding something in the noise—says the ever-suspicious security investigator.

Second Order Processing

Behaviors, patterns, and baselines are very commonly used to measure and score users’ stealthy or suspicious behaviors. The goal is to identify the users who either pose an insider threat or whose machines have been compromised by malicious code. Maintaining a library of first-order information that you’ve collected over time and conducting periodic calculations against it can help you pinpoint things that might be suspicious. “Repeat offender” is a catchphrase for a reason.
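As an illustrative sketch of that kind of periodic, second-order calculation (the severity weights and decay half-life are assumptions): score each entity by its prior alerts, letting older observations count for less.

# Illustrative "repeat offender" calculation run periodically over a store of
# first-order alerts.
import math
import time
from collections import defaultdict

def repeat_offender_scores(alert_history, now=None, half_life_days=30.0):
    """Score each entity by its prior alerts, with older alerts counting for less."""
    now = now if now is not None else time.time()
    decay = math.log(2) / (half_life_days * 86400)
    scores = defaultdict(float)
    for alert in alert_history:
        age_seconds = now - alert["ts"]
        scores[alert["entity"]] += alert.get("severity", 1.0) * math.exp(-decay * age_seconds)
    return dict(scores)

history = [
    {"entity": "workstation-17", "ts": time.time() - 2 * 86400,  "severity": 3.0},
    {"entity": "workstation-17", "ts": time.time() - 40 * 86400, "severity": 5.0},
    {"entity": "laptop-ceo",     "ts": time.time() - 1 * 86400,  "severity": 1.0},
]
print(repeat_offender_scores(history))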

Nth Order Processing

Anomalies, clusters, affinity groups, and network graphs can reveal some very nuanced attacks. Running advanced algorithms across large amounts of data can yield interesting results.

A common fallacy is that anomalies are more likely to be malicious. That’s simply not true. The way our networks are interconnected today makes for all sorts of anomalies in all layers of the technology stack. These provide investigators the same sort of analytical puzzle as network security signatures do.

Some of these algorithms have well-understood security applications. One example is clustering: when you cluster IDS data, what you find most often are false positives, because they occur in highly predictable ways. When a particular signature generates alerts for what’s actually regular business traffic, the same alert will be triggered every time that business process takes place. It thus produces a very obvious cluster that you can exclude when looking for malicious activity.
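Here is a rough sketch of that clustering idea: group IDS alerts by signature, source, and destination, and treat large clusters that fire at near-constant intervals as candidate false positives from routine business processes. The thresholds are assumptions for illustration.

# Rough sketch: group IDS alerts by (signature, source, destination) and flag
# large, highly regular clusters as candidate false positives.
from collections import defaultdict
from statistics import mean, pstdev

def find_benign_clusters(alerts, min_size=50, max_jitter_seconds=120):
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[(alert["signature"], alert["src"], alert["dst"])].append(alert["ts"])

    candidates = []
    for key, timestamps in clusters.items():
        if len(timestamps) < min_size:
            continue
        timestamps.sort()
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        # A near-constant firing interval suggests a scheduled business process.
        if gaps and pstdev(gaps) < max_jitter_seconds:
            candidates.append({"cluster": key,
                               "count": len(timestamps),
                               "mean_interval_s": round(mean(gaps), 1)})
    return candidates

Anything this function returns is a candidate for exclusion, not proof of innocence; the point is to clear away the predictable noise so the rest stands out.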

The more information known to be unimportant that we can remove, the more clearly we can see what else is going on. This is where analytical detection comes into its own. Very often, we run algorithms on security data simply to see if a subject matter expert can interpret the outcome. Possessing both domain expertise and knowledge of data science is critical if you want to understand what advanced algorithms are telling you.

Visualization and hunting are nth order processing tasks. Using tools that allow you to pivot and display related datasets is the ultimate form of security threat hunting, and it’s also the most fun. You can derive some detection value from considering any layer of detectors through the lens of a visual tool.

Do you think I’m about to tell you there’s another layer called “artificial intelligence”? If so, you’re wrong.

The next layer is simply making a decision: has something malicious occurred? The more information we have to feed into the decision-making process, the more effective and better informed the decision will be. All of the information sources listed above have something of value to contribute.

But you have to ask yourself: how many of these factors can analysts consider in real time as they watch events streaming across a console?

If you’d like to make it possible for your security operations team to incorporate input from a greater variety of detectors and information sources into their decision-making processes and workflows, consider adding the Respond Analyst to your team. Built to integrate with a broad array of today’s most popular sensors, platforms and solutions, the Respond Analyst brings specialized threat intelligence and detailed local contextual information to bear on every decision it makes about which events to escalate. Quite simply, it’ll give your ability to interpret and analyze detection data a boost—and allow your analysts to consider a far wider variety of sources.

To learn more about how the Respond Analyst can help your business become more thorough and derive greater insight from the detectors in your environment, contact us to schedule a demo today.

2019 Security Predictions: Real or Bogus? Respond Experts Weigh In

Where is cybersecurity heading and what should security teams focus on in 2019?

We thought it would be helpful to examine the most common cybersecurity predictions for 2019. Business press, trade publications and of course vendors had a lot to say about emerging technology trends, but what predictions or trends will matter most? More importantly, how can security teams and CISOs turn these predictions into advantages for their business?

I sat down with Chris Calvert, an industry expert who often says, “If you have a problem with how today’s SOCs work, it’s partially my fault and I’m working to solve that issue!” With over 30 years of experience in information security, Chris has worked for the NSA, the DOD Joint Staff and held leadership positions in both large and small companies, including IBM and Hewlett Packard Enterprise. He has designed, built and managed security operations centers and incident response teams for eight of the global Fortune 50.

During our conversation, we discuss questions like:

  • Will we see an increase in crime, espionage and sabotage by rogue nation-states?
  • How will malware sophistication change how we protect our networks?
  • Will utilities become a primary target for ransomware attacks?
  • A new type of fileless malware will emerge, but what is it? (think worms)
  • And finally, will cybersecurity vendors deliver on the true promise of A.I.?

You can listen to his expert thoughts and opinions on the podcast here!

Want to be better prepared for 2019?

The Respond Analyst is trained as an expert cybersecurity analyst that combines human reasoning with machine power to make complex decisions with 100% consistency. As an automated cybersecurity analyst, the Respond Analyst processes millions of alerts as they stream, allowing your team to focus on higher-priority tasks like threat hunting and incident response.

Here’s some other useful information:

Neither SIEM nor SOAR–Can Security Decisions be Automated? Patrick Gray and Mike Armistead Discuss

We’ve asked the question before, but we’ll ask it again: how much time does your security team spend staring at monitors? How about investigating false positives escalated from an MSSP? More importantly, how are small security teams expected to cope with the growing amount of security data?

The world of security operations is changing. Extra processing power combined with faster mathematical computation means security monitoring and event triage can now be performed at machine scale and speed. With new innovations that leverage decision automation, security organizations can analyze incidents more efficiently than ever before. Security teams no longer have to tune down or ignore low-signal events. Instead, technologies can now recognize patterns to identify malicious attacks that may have otherwise been overlooked.

So how will these new technologies impact security operations moving forward?

Mike Armistead, Respond Software CEO, recently sat down with Patrick Gray, from Risky Business, to discuss the state of information security today. In the 30-minute podcast, Mike and Patrick shed light on the future of security operations, discussing the limitations of traditional security monitoring/analysis techniques and the power of new technologies, like decision automation to change security forever.

During this podcast you’ll learn to:

  • Identify the biggest mistakes security teams make today and how to avoid them.
  • Manage the onslaught of data.
  • Increase your team’s capacity.
  • Stop wasting time chasing false-positives.

Listen to the full podcast, here!

Learn more about what the Respond Analyst can do for you!

Mid-sized Enterprises: Want Robust, Sustainable SecOps? Remember 3 C’s

Cybersecurity is tricky business for the mid-sized enterprise.

Attacks targeting mid-sized companies are on the rise, but their security teams are generally resource constrained and have a tough time covering all the potential threats.

There are solutions that provide sustainable security infrastructures, but the vendor landscape is confusing and difficult to navigate. With smaller teams and more than 1,200 cybersecurity vendors in the market, it’s no wonder mid-sized enterprise IT departments often stick with “status quo” solutions that provide bare-minimum coverage. The IT leaders I talk to secretly tell me they know bare-bones security is a calculated risk, but often the executive support for resources is just not there. These are tradeoffs that smaller security teams should not have to make.

Here’s the good news. Building a solid enterprise-scale security program without tradeoffs is possible. To get started, IT leaders should consider the 3 C’s of a sustainable security infrastructure: Coverage, Context, and Cost.

In part 1 of this 3-part blog series, we will deep-dive into the first “C”: Coverage.

When thinking about coverage, there are two challenges to overcome. The first challenge is to achieve broad visibility into your sensors. There is a wide array of security sensors and it’s easy to get overwhelmed by the avalanche of data they generate. Customers often ask me: Do we have to monitor everything? Where do I begin? Are certain sensor alerts better indications of compromise than others?

Take the first step: Achieve visibility with appropriate sensor coverage

To minimize blind spots, start by achieving basic 24 x 7 coverage with continuous monitoring of Network Intrusion Detection & Prevention (NIDS/NIPS) and Endpoint Protection Platform (EPP) activity. NIDS/NIPS solutions leverage signatures to detect a wide variety of threats within your network, alerting on unauthorized inbound, lateral, and outbound network communications. Vendors like Palo Alto Networks, Trend Micro and Cisco have solid solutions. Suricata and Snort are two popular open-source alternatives. EPP solutions (Symantec, McAfee, Microsoft) also leverage signatures to detect a variety of threats (e.g., Trojans, ransomware, spyware), and their alerts are strong indicators of known malware infections.

Both NIDS/NIPS and EPP technologies use signatures to detect threats and provide broad coverage of a variety of attacks; however, they do not cover everything. To learn more on this topic, read our eBook: 5 Ingredients to Help your Security Team Perform at Enterprise-Scale

To gain deeper visibility, IT departments can eventually start to pursue advanced coverage.

With advanced coverage, IT teams can augment basic 24 x 7 data sensor coverage by monitoring web proxy, URL filtering, and/or endpoint detection and response (EDR). These augmented data sources offer opportunities to gain deeper visibility into previously unknown attacks because they report on raw activity and do not rely on attack signatures like NIDS/NIPS and EPP. Web proxy and URL filtering solutions log all internal web browsing activity and, as a result, provide in-depth visibility into one of the most commonly exploited channels that attackers use to compromise internal systems. In addition, EDR solutions act as a DVR on the system, recording every operation performed by the operating system—including all operations initiated by adversaries or malware. Of course, the hurdle to overcome with these advanced coverage solutions is managing the vast amounts of data they produce.

This leads to the second coverage challenge to overcome—obtaining the required expertise and capacity necessary to analyze the mountains of data generated.

As sensor coverage grows, each sensor type generates more data, each with its own unique challenges. Some sensors are extremely noisy and generate massive amounts of data. Others generate less data but are highly specialized and require a great deal more skill to analyze. To deal with the volume, a common approach is to ‘tune down’ sensors, which filters out potentially valuable data. This type of filtering is tempting since it reduces the workload of a security team to a more manageable level. In doing so, however, clues to potential threats stay hidden in the data.

Take the second step: Consider security automation to improve coverage with resource-constrained teams.

Automation effectively offers smaller security teams the same capability that a full-scale Security Operations Center (SOC) team provides a larger organization, at a fraction of the investment and hassle.

Automation improves the status quo and stops the tradeoffs that IT organizations make every day. Smaller teams benefit from advanced security operations. Manual monitoring stops. Teams can keep up with the volume of data and can ensure that the analysis of each and every event is thorough and consistent. Security automation also provides continuous and effective network security monitoring and reduces time to respond. Alert collection, analysis, prioritization, and event escalation decisions can be fully or partially automated.

So to close, more Coverage for smaller security teams is, in fact, possible: First, find the right tools to gain more visibility across the network and endpoints. Second, start to think about solutions that automate the expert analysis of the data that increased visibility produces.

But, remember, ‘Coverage’ is just 1 part of this 3-part puzzle. Be sure to check back next month for part 2 of my 3 C’s (Coverage, Context, Cost) blog series. My blog on “Context” will provide a deeper dive into automation and will demonstrate how mid-sized enterprise organizations can gain more insights from their security data—ultimately finding more credible threats.

In the meantime, please reach out if you’d like to talk to one of our Security Architects to discuss coverage in your environment.

A new tool for defenders – Real-time analysis of Web Proxy data

When I got back into the office after taking a short break to recharge my batteries, I was really excited to be speaking with my colleagues at Respond Software about the upcoming release of our web filtering model for the Respond Analyst. You see, over the last few months we’ve been working tirelessly to build a way to analyze web filtering event data in real-time. Now that I’m sitting down to write this blog, the fruit of all the hard work our team has put into making this a reality is really sinking in. We’ve done it! It’s now available as part of the Respond Analyst!

This was no small feat, as most of you in the security operations world would know.

You may ask why we chose to take this challenge on. The answer is quite simple: there is a ton of valuable information in web filtering data, and it’s extremely difficult for security teams to analyze these events in real-time due to the sheer volume of data generated by enterprises. What a perfect opportunity for us to show off the Respond Analyst’s intelligence and capability.

Up until now, security operations and IR teams have pivoted to using web filtering data for investigations once they’ve already been alerted to an attack through threat hunting or some other form of detection.  Processing all of the web filtering data for an organization in a SIEM or similar has just been way too expensive to do. In fact, most organizations can’t even afford to store this data for a “reasonable” amount of time for investigators to dig through.

Think about it for a second: each web page visited can generate a number of new web requests to pull back content from different sources. Then picture each employee using the internet for most of the day, navigating the web for their day-to-day tasks plus a few personal items between meetings; all of this amounts to hundreds of web page visits each day. If you have a few hundred employees, the volume of data generated by the web filtering solution quickly becomes unmanageable. Well, now we’re able to process all of these events in real-time.

Consider the questions you are able to ask of the data without even taking the assigned web filtering category into account…

  • Analyze each component of the HTTP header
  • Perform user agent analysis
  • Take a look at how suspicious the requested domain is
  • Perform URL string comparisons to all other requests over an extended period of time
  • Compare each attribute to information you’ve gathered in your threat intel database
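To ground a couple of the simpler checks above, here is a rough sketch of user-agent rarity and a crude domain-suspicion score applied to a single proxy record. The field names, the rarity table, and the scoring heuristics are all invented for illustration; they are not the Respond Analyst’s actual logic.

# Rough sketch of two checks on a single web proxy record: user-agent rarity and a
# crude domain-suspicion score.
import math
from collections import Counter

# Pretend this table was built from the last 30 days of proxy logs.
USER_AGENT_COUNTS = Counter({
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)": 1200000,
    "python-requests/2.31.0": 40,
})

def user_agent_rarity(user_agent):
    """Higher means rarer in this environment (0 would be extremely common)."""
    total = sum(USER_AGENT_COUNTS.values())
    seen = USER_AGENT_COUNTS.get(user_agent, 1)
    return round(math.log10(total / seen), 2)

def domain_suspicion(domain):
    """Crude heuristic: long, digit-heavy, many-label domains earn a higher score."""
    labels = domain.split(".")
    score = 0
    score += 1 if len(domain) > 30 else 0
    score += 1 if sum(c.isdigit() for c in domain) > 5 else 0
    score += 1 if len(labels) > 4 else 0
    return score

record = {"host": "a1b2c3d4e5f6.update-check.example-cdn.net",
          "user_agent": "python-requests/2.31.0"}
print(user_agent_rarity(record["user_agent"]), domain_suspicion(record["host"]))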

But why stop there…

  • What about looking at whether the pattern of behavior across a set of requests is indicative of exploit kit delivery?
  • Maybe you suspect that these requests are related to command-and-control activity
  • What about the upload of documents to a file-sharing service: is that data exfiltration or simply everyday user activity?

Web filtering data can also leverage the power of integrated reasoning.  When web filtering data is combined with IDS/IPS sensors, Anti-malware technology and contextual sources like vulnerability data and critical asset lists, you are able to form an objective view of your enterprise’s threat landscape.  Beyond the analysis of each of these data sources, the Respond Analyst accurately scopes all events related to the same security incident together for a comprehensive incident overview.  The Respond Analyst then assigns an appropriate priority to that incident and documents all the details of the situation and presents this information to you.  This is, by far, the most efficient way to reduce attacker dwell time.

We have a long way to go and many more exciting Respond Analyst skills & capabilities on the way. I couldn’t be prouder of all the work we’ve achieved and the release of our Web Filtering model.

Way to go Respond team!

When Currency is Time, Spend it Threat Hunting

“Time is what we want most, but what we use worst.”
– William Penn

How many valuable cybersecurity tasks have you put aside due to the pressures of time? Time is currency and we spend it every moment we’re protecting our enterprises.

When we are constantly tuning, supporting and maintaining our security controls or chasing down an alert from an MSSP, only to discover it’s yet another false positive, we spend precious currency. When we create new correlation logic in our SIEM or decide which signatures to tune down to lower the volume of events to make it more manageable for our security team, we spend precious currency. When we analyze events from a SIEM to determine if they’re malicious and actionable or if a SIEM rule needs additional refinement, we spend precious currency. When we hire and train new analysts to cover churn, then watch them leave for a new opportunity – we waste currency and the investment hurts.

You can spend your “currency” doing pretty much anything, which is a blessing and a curse. We can (and do) waste an inordinate amount of time going down rabbit holes chasing false positives. We are forced to make choices: do we push back a request while we investigate the MSSP escalations or do we delay an investigation to provide the service agility the enterprise requires?

Both options are important, and both need addressing, forcing us to make a choice. In our gut we think the escalation is another false positive, but as cybersecurity professionals, we wait for the sword of Damocles to fall. It’s only a matter of time before one of these escalations is related to the thing we worry about most in our environments. Either way, something gets delayed… hopefully just lunch.

Basing decisions on what we can neglect is reactive and unsustainable. It’s a matter of time until we choose to postpone the wrong thing.

We need to use our time more wisely.

Organizations need to spend precious “currency” focusing on higher-value tasks, like threat hunting, that motivate their talent and provide value to the organization. But they also need to keep two hands on the wheel of the lower-value tasks that still need attention.

Organizations should implement automation tools to handle the lower-value, repetitive tasks such as high-volume network security monitoring. Generating and receiving alerts from your security controls is easy; making sense of them and determining if they’re malicious and actionable is a different story. The decision to escalate events is typically inconsistent and relies heavily on the analyst making the decision. Factor in the amount of time required to gather supporting evidence and then make a decision, while doing this an additional 75 times an hour. As a defender, you don’t have enough “currency of time” to make consistent, highly accurate decisions. Tasking security analysts with monitoring high-noise, low-signal event feeds is a misallocation of time that only leads to a lack of job satisfaction and burnout.

There is another way.

Employing the Respond Analyst is like adding a virtual team of expert, superhuman analysts, and it allows your team to bring their talent and expertise to threat hunting. Adding the Respond Analyst lets your talent focus on higher-value tasks and more engaging work, so you can combat analyst burnout, training drains, and churn.

You Don’t Have SOC Analysts, You Have SOC Synthesists

In my nearly 20 years in the Security Operations field, being a “SOC Analyst,” building and helping with SOC design, evolving SOCs, and everything in between, I’ve been calling my team members by the wrong title. Worse yet, none of my colleagues ever corrected me, mostly because they didn’t know either. Hard to believe, but I suspect it’s because none of us ever really thought much about what the “SOC Analyst” really does for a living.

The word “analyst” means a person who conducts an analysis. “Analysis” has its roots in the Greek word “analyein,” which translates to “to break up.” This implies breaking the problem into pieces.

It begins…

“SOC Analysts” typically come into work at the start of their long shifts and sit down in front of a console to look at alerts of some kind. These alerts carry data points from a single security product’s telemetry, such as an IDS sensor. These alerts usually do not contain enough information on their own to decide whether they are dealing with a security incident or threat.

Now what?

The “SOC Analyst” then wants more information, so they collect data points from other sources and piece them together (combining corroborating pieces of evidence) to form as complete a picture as possible of what is going on in their environment and how likely it is to be malicious.

Then what?

Now it’s decision time. Does the picture paint a portrait of something nefarious, meaning Security Incident Responders need to be engaged, or is it just a misconfigured company application running amok, meaning a server admin needs to be notified?

Finally.

The decision is made and it’s on to the next alert.  Wash, rinse, repeat for the next 11+ hours.

What we just walked through was an individual taking one piece of evidence and trying to add more evidence to create a whole picture.  So let’s look at the definition of “analysis” below from dictionary.com:

Analysis

noun, plural a·nal·y·ses [uh-nal-uh-seez].

1. The separating of any material or abstract entity into its constituent elements (as opposed to synthesis).

2. This process as a method of studying the nature of something or of determining its essential features and their relations.

“The separating”? Wait, what now? We just walked through how a Security Analyst is combining things, not breaking them apart!

Now let’s look at the definition of the word Synthesis, again from dictionary.com:

Synthesis

noun, plural syn·the·ses [sin-thuh-seez].

1. The combining of the constituent elements of separate material or abstract entities into a single or unified entity (opposed to analysis).

2. A complex whole formed by combining.

This definition fits what our “SOC Analysts” do every day much better than analysis, now doesn’t it?

Wouldn’t it be great if your “SOC Analysts” had the time to synthesize all the contextual evidence they could collect around an initial alert to formulate a hypothesis? THEN had even more time to turn around and break down all the possible permutations of the evidence to test the hypothesis and reaffirm or change their mind on each decision they made? Yes, it would be awesome to have the time to do both!

Wouldn’t you rather synthesize AND analyze before making a decision to alert your incident responders, or just let them sleep another hour……


Join our growing community! Subscribe to our newsletter, the "First Responder Notebook," delivered straight to your inbox.