The Science of Detection Part 3: A Closer Look at the “Detectors” You Rely on When Hunting for Evidence

This is the third blog in my science of detection series. In the previous parts, we examined the key elements of a data source and considered integrated reasoning. Today, I’ll be taking a closer look at the signal quality we get from the various “detectors” that we use to find malicious activities in our environment.

Be sure to check back in the coming weeks to see the next blogs in this series. In part four, I’ll be talking about architectural approaches to detection, and looking at how we collect and aggregate information so that it’s useful to our security programs. I’ll be making some predictions about the progress we’ll see in this area in the future, because I think the old way of doing things has reached a dead end.

Security analysts have many different information sources—“detectors”—to consider when making decisions about whether or not they see malicious activity taking place in their environment. Each detector has a purpose, and each contributes some degree of differential value to the ultimate decision, but only a few of them were specifically designed for security applications. That complicates things.

What’s interesting about these information sources is that each must be interpreted and analyzed in a different way in order to assemble enough information to get a truly comprehensive picture of what’s taking place in the environment. They also operate at different levels of abstraction (for example, signatures are much more abstract than raw data), which means that a key task in analyzing any attack is assembling a corroborative summary using as many diverse information sources as possible.

Assembling such a summary involves multidimensional analysis. It’s tremendously important that we bring the latest advances in analytical reasoning and mathematical and scientific research to bear on our security programs and how we leverage information within them.

With this in mind, let’s talk about the information sources we use, explain their most common applications, and put them into context.

Raw Data

Network packets are all the communications that transit your network. Very often they’re encrypted. The highest-end security programs might include complete packet capture, but that gets expensive quickly. Packet capture offers the highest-fidelity but most diluted information for incident detection. A short-term packet capture solution (one that holds data for 30-60 days) often ends up being of little forensic use, because incidents are most often detected later in their lifecycle. The next best thing to complete packet capture is probably a combination of NetFlow and network security sensors.

Logs, at their most basic, are just records of system or user activity. Some of them are relevant for security detection purposes, but most are not. Historically speaking, logs were usually written to manage application and system problems, and they tend to be highly inconsistent in their content, their format, and their usefulness for security.

When a specific security control is violated, or an attempt to violate it is made, a log event is generated. There’s always some chance that the activity is malicious in nature. How big is this chance? Well, it’s different for every single log message and log source. This makes the aggregation and timeline of logs more important than any single log event when it comes to inferring or understanding malicious activity.

This is why we use rules. Rules help us interpret and contextualize logs, and thus slightly improve their utility for detection purposes.

The problem is: how many failed logins does it take before you know you have a hijacked account instead of a forgetful user? How different is the number of failed logins it would take to raise our suspicion on a Monday morning from what it’d take on a Wednesday afternoon? Sometimes we do see security avoidance behaviors in logs (for instance, clearing them), but user mistakes can and do explain these things most often, and it’s hard to know when to dig in.

Meta-Data

Network flow data show the connection details and the amount of data transferred between hosts on your network (and out to the Internet). They’re like the network equivalent of monitoring who’s calling whose cell phone within a criminal syndicate. Network graph analysis and visualization are useful approaches to understanding NetFlow data.
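
To make that concrete, here is a minimal sketch of the graph idea, written in Python with the networkx library. It is my own illustration rather than any particular product’s approach, and the flow records and field names (src, dst, bytes) are invented for the example.

    # A toy illustration of graph analysis over NetFlow-style records.
    # The records and field names (src, dst, bytes) are invented for this sketch.
    import networkx as nx

    flows = [
        {"src": "10.0.0.5", "dst": "10.0.0.9",    "bytes": 1_200},
        {"src": "10.0.0.5", "dst": "203.0.113.7", "bytes": 48_000_000},
        {"src": "10.0.0.9", "dst": "203.0.113.7", "bytes": 900},
    ]

    g = nx.DiGraph()
    for f in flows:
        # Accumulate the bytes transferred on the edge between the two hosts.
        if g.has_edge(f["src"], f["dst"]):
            g[f["src"]][f["dst"]]["bytes"] += f["bytes"]
        else:
            g.add_edge(f["src"], f["dst"], bytes=f["bytes"])

    # Hosts with unusually many peers or unusually large outbound volumes are
    # the "interesting phone numbers" in the call-record analogy above.
    for host in g.nodes:
        out_bytes = sum(d["bytes"] for _, _, d in g.out_edges(host, data=True))
        print(f"{host}: {g.out_degree(host)} outbound peers, {out_bytes:,} bytes out")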

Indicators (of malicious or suspicious activity)

Signatures of known attacks and other indicators of malicious code may be detected through sensors when monitoring network communications. These are short hexadecimal byte sequences known to be contained within attack payloads. Even when a signature is written with a highly specific byte sequence in mind, it rarely accounts for every non-malicious occurrence of that same sequence in a data stream; to ensure a match when an attack does occur, signatures are written loosely, and they therefore produce a large number of false alerts. There are currently over 57,000 IDS signatures in existence, and only a tiny subset of these are relevant at any given moment in time. This produces a high volume of false or nuanced alerts, further obscuring valuable detection signals. Signatures benefit from being analyzed by machines rather than humans because of the depth of analysis needed to separate out the relevant information. It’s also very important to consider where and how you place sensors, because their value is directly related to their visibility.
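
As a toy illustration of why loose signatures misfire, the Python below checks for a short byte pattern in two payloads; the pattern and payloads are made up, but note that the benign stream triggers the same alert as the malicious one.

    # Toy example: a loosely written byte-sequence "signature" fires on benign data too.
    # The pattern and both payloads are invented for illustration.
    signature = bytes.fromhex("4d5a9000")  # a short, common byte sequence

    payloads = {
        "exploit traffic": b"\x00\x01" + bytes.fromhex("4d5a9000") + b"\x90\x90",
        "benign file transfer": b"legitimate update containing " + bytes.fromhex("4d5a9000"),
    }

    for name, payload in payloads.items():
        if signature in payload:
            print(f"ALERT: signature matched in {name}")  # both alerts fire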

Threat intelligence is another indicator. Yes, it also suffers from a volume problem, and its volume problem is almost as bad as that of network security sensors. Threat intelligence lists err on the side of inclusion rather than risk omitting a potential attack, and thus produce a high volume of alerts that are hard for humans to analyze. Threat intelligence includes lists of IP addresses, domains and known bad file hashes. I consider known good file hashes to be valuable intelligence, too. Once again, combinations of threat indicators offer much higher fidelity as evidence of real threat activity.

Heuristics are behavioral indicators. For example, an alert might be generated when a piece of software takes an action that’s not normal for that software, such as spawning an additional process outside of user-approved space. Heuristics are a library of past incident observations, and as such, are completely historically focused. Although it’s valuable not to fall for the same thing twice, these tend to have a short lifespan when it comes to high accuracy.

First Order Processing

Rules follow a predictable structure (Activity — Threshold — Context — Action) to identify known suspicious activity. Known suspicious activities are described using Boolean logic or nested searches, a threshold is set, and if this is reached or crossed, a notification is sent to a monitoring channel for human evaluation.
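
As a sketch of that structure (my own, not any vendor’s rule language), here is a failed-login rule expressed in Python. The events, field names and threshold are illustrative.

    # Activity -> Threshold -> Context -> Action, illustrated with invented events.
    from collections import Counter

    events = [
        {"type": "auth_failure", "user": "jsmith", "host": "vpn01"},
        {"type": "auth_failure", "user": "jsmith", "host": "vpn01"},
        {"type": "auth_failure", "user": "jsmith", "host": "vpn01"},
        {"type": "auth_success", "user": "asmith", "host": "mail02"},
    ]

    THRESHOLD = 3  # failed logins per user in the window; choosing this is the hard part

    # Activity: select the known suspicious activity with simple Boolean logic.
    failures = Counter(e["user"] for e in events if e["type"] == "auth_failure")

    for user, count in failures.items():
        if count >= THRESHOLD:                                # Threshold
            context = {"user": user, "failed_logins": count}  # Context: add asset/user criticality here
            print("notify monitoring channel for human evaluation:", context)  # Action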

At the most atomic level, there are fewer than 130 rules in regular use. In fact, in most organizations fewer than 45 are implemented. Rules are most valuable when they’re used to enforce logic that’s specific to your company’s unique business challenges, such as possible fraud scenarios.

Context—additional information about the entities being investigated and the relationship between them—can help you answer questions about the potential impact of attacks in progress and your vulnerability to them. It’s a key component in initial processing.

Statistics and metrics are important in guiding your operations: self-reflection and dispassionate measurement are critical to the effective application of detection science. You can measure attributes like coverage and performance, or calculate cost- or time-per-detection by data source and use this information to guide you in deploying your sensor architecture. Statistical analysis can be a powerful tool for uncovering attackers’ latest stealth techniques. Any activity that’s too close to the center of a normal bell curve might be hiding something in the noise—says the ever-suspicious security investigator.
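
For instance, a rough sketch of the cost-per-detection calculation might look like the following; every figure is invented purely to show the arithmetic.

    # Invented figures: annual sensor cost, confirmed detections and alert volume per data source.
    sources = {
        "netflow": {"annual_cost": 50_000,  "true_detections": 12, "alerts": 40_000},
        "nids":    {"annual_cost": 80_000,  "true_detections": 30, "alerts": 900_000},
        "edr":     {"annual_cost": 120_000, "true_detections": 55, "alerts": 15_000},
    }

    for name, s in sources.items():
        cost_per_detection = s["annual_cost"] / max(s["true_detections"], 1)
        alerts_per_detection = s["alerts"] / max(s["true_detections"], 1)
        print(f"{name}: ${cost_per_detection:,.0f} per detection, "
              f"{alerts_per_detection:,.0f} alerts reviewed per detection")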

Second Order Processing

Behaviors, patterns, and baselines are very commonly used to measure and score users’ stealthy or suspicious behaviors. The goal is to identify the users who either pose an insider threat or whose machines have been compromised by malicious code. Maintaining a library of first-order information that you’ve collected over time and conducting periodic calculations against it can help you pinpoint things that might be suspicious. “Repeat offender” is a catchphrase for a reason.
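
A minimal sketch of that kind of periodic re-scoring, with invented event categories, weights and history, might look like this:

    # Second order processing sketch: re-score a library of first order observations.
    from collections import defaultdict

    WEIGHTS = {"ids_alert": 1, "dlp_violation": 3, "log_cleared": 5}  # assumed categories

    history = [
        ("2018-11-01", "jdoe",   "ids_alert"),
        ("2018-11-08", "jdoe",   "dlp_violation"),
        ("2018-11-20", "jdoe",   "log_cleared"),
        ("2018-11-05", "asmith", "ids_alert"),
    ]

    scores = defaultdict(int)
    for _date, user, category in history:
        scores[user] += WEIGHTS.get(category, 1)

    # "Repeat offenders" surface at the top each time the library is re-scored.
    for user, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(user, score)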

Nth Order Processing

Anomalies, clusters, affinity groups, and network graphs can reveal some very nuanced attacks. Running advanced algorithms across large amounts of data can yield interesting results.

A common fallacy is that anomalies are more likely to be malicious. That’s simply not true. The way our networks are interconnected today makes for all sorts of anomalies in all layers of the technology stack. These provide investigators the same sort of analytical puzzle as network security signatures do.

Some of these algorithms have well-understood security applications. One example is clustering: when you cluster IDS data, what you find most often are false positives, because they occur in highly predictable ways. When a particular signature generates alerts for what’s actually regular business traffic, the same alert will be triggered every time that business process takes place. It thus produces a very obvious cluster that you can exclude when looking for malicious activity.
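
A hedged sketch of that idea: group alerts by signature and host pair, then flag the large recurring clusters as candidates for exclusion. The signature names, addresses and cutoff below are invented.

    # Group IDS alerts by (signature, source, destination); big recurring clusters
    # are usually the same business process tripping the same loose signature.
    from collections import Counter

    alerts = [
        ("policy-cleartext-credentials", "10.1.1.20", "10.1.9.5"),
        ("policy-cleartext-credentials", "10.1.1.20", "10.1.9.5"),
        ("policy-cleartext-credentials", "10.1.1.20", "10.1.9.5"),
        ("possible-beacon",              "10.1.4.77", "198.51.100.9"),
    ]

    clusters = Counter(alerts)
    CLUSTER_MIN = 3  # illustrative cutoff for an "obvious cluster"

    for key, size in clusters.most_common():
        if size >= CLUSTER_MIN:
            label = "candidate for exclusion (recurring business traffic?)"
        else:
            label = "worth a closer look"
        print(size, key, "->", label)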

The more information known to be unimportant that we can remove, the more clearly we can see what else is going on. This is where analytical detection comes into its own. Very often, we run algorithms on security data simply to see if a subject matter expert can interpret the outcome. Possessing both domain expertise and knowledge of data science is critical if you want to understand what advanced algorithms are telling you.

Visualization and hunting are nth order processing tasks. Using tools that allow you to pivot and display related datasets is the ultimate form of security threat hunting, and it’s also the most fun. You can derive some detection value from considering any layer of detectors through the lens of a visual tool.

Do you think I’m about to tell you there’s another layer called “artificial intelligence”? If so, you’re wrong.

The next layer is simply making a decision: has something malicious occurred? The more information we have to feed into the decision-making process, the more effective and deeper the decision will be. All of the information sources listed above have something of value to contribute.

But you have to ask yourself: how many of these factors can analysts consider in real time as they watch events streaming across a console?

If you’d like to make it possible for your security operations team to incorporate input from a greater variety of detectors and information sources into their decision-making processes and workflows, consider adding the Respond Analyst to your team. Built to integrate with a broad array of today’s most popular sensors, platforms and solutions, the Respond Analyst brings specialized threat intelligence and detailed local contextual information to bear on every decision it makes about which events to escalate. Quite simply, it’ll give your ability to interpret and analyze detection data a boost—and allow your analysts to consider a far wider variety of sources.

To learn more about how the Respond Analyst can help your business become more thorough and derive greater insight from the detectors in your environment, contact us to schedule a demo today.

Neither SIEM nor SOAR–Can Security Decisions be Automated? Patrick Gray and Mike Armistead Discuss

We’ve asked this question before, but we’ll ask it again: how much time does your security team spend staring at monitors? How about investigating false positives escalated from an MSSP? More importantly, how are small security teams expected to cope with the growing amount of security data?

The world of security operations is changing. Extra processing power, combined with faster mathematical computation, means security monitoring and event triage can now happen at machine scale and speed. With new innovations that leverage decision automation, security organizations can analyze incidents more efficiently than ever before. Security teams no longer have to tune down or ignore low-signal events. Instead, technologies can now recognize patterns to identify malicious attacks that may otherwise have been overlooked.

So how will these new technologies impact security operations moving forward?
Mike Armistead, Respond Software CEO, recently sat down with Patrick Gray of Risky Business to discuss the state of information security today. In the 30-minute podcast, Mike and Patrick shed light on the future of security operations, discussing the limitations of traditional security monitoring and analysis techniques and the power of new technologies, like decision automation, to change security forever.

During this podcast you’ll learn to:

  • Identify the biggest mistakes security teams make today and how to avoid them.
  • Manage the onslaught of data.
  • Increase your team’s capacity.
  • Stop wasting time chasing false positives.

Listen to the full podcast here!

Learn more about what the Respond Analyst can do for you!

3 Reasons Understaffed Security Teams Can Now Sleep at Night

If you feel overwhelmed with security operations, you’re not alone. As a matter of fact, it’s a common theme we hear all the time: “We’re overloaded and need help!” We’ve been in the trenches building security operations for mid to large enterprises, so we understand the unique pressure IT and security teams feel. It’s not easy balancing it all—especially for mid-sized enterprises with resource-constrained security teams.

Cybersecurity in mid-sized companies has unique challenges. With fewer resources and tighter budgets, IT teams are spread thin while wearing multiple hats. Unfortunately, sometimes security projects accumulate, leaving teams exposed and overwhelmed. But it doesn’t have to be this way—there is a viable solution.

Here are the three biggest challenges security teams face and why The Respond Analyst helps them sleep soundly at night.
Reason #1 – We don’t have enough time
Our customers need to free up time to work on priority projects and initiatives. We designed our product to provide expert intrusion analysis without all the fuss of deploying extensive technology stacks that require significant upfront and continued investment. We’re here to simplify the process, not add complexity. Security event console monitoring is a thing of the past: we free our customers from staring at security consoles and move them toward higher-value tasks and initiatives.

Within seven days, The Respond Analyst has learned its environment and is finding actionable incidents for our customers. The setup process is simple: 1) deploy a virtual appliance or install our software, 2) direct security feeds to our software and 3) add simple context. No significant time commitment or in-depth security operations expertise is required.
Reason #2 – We need additional security expertise
One of the biggest challenges our customers face is finding the right people and retaining them. This challenge is expected to grow in an ever more competitive job market, resulting in higher wages and more movement at a time when organizations are trying to implement steady security programs. To say it’s difficult is an understatement.

We don’t expect our customers to be experts in intrusion analysis and security operations—that is why they’ve partnered with us. The Respond Analyst is an expert system that automates the decision making of a front-line security analyst. This pre-packaged intelligence requires no security expertise to deploy. There is no use case development, programming of rules, or tagging of event data. Well-vetted incidents, without all the fuss, are the result of a well-designed expert system.
Reason #3 – We don’t have the time, money or desire to build a legacy SOC
Many organizations understand that the old way of building a legacy SOC around a SIEM is not the future. Indeed, it’s not even keeping up with today’s threats. Not only is it less effective than solutions such as The Respond Analyst, it also costs significantly more and results in a far lengthier return-on-investment timeframe.

The process of building a SIEM with 80+ data sources (where most organizations really only look at 5 or fewer), hiring, training and retaining experienced intrusion analysts, and implementing a sophisticated process to keep it all glued together is outdated. Of course, this was the best we could do given the technology and understanding we had at the time, but now we have a better way. Old models have since been replaced, and our customers avoid the frustration and high cost by using a pre-packaged expert system.

Times have changed, and with the emergence of expert systems like The Respond Analyst, we have brought technology to an area that traditionally required large investments and lengthy, time-intensive projects. The result is that mid-sized enterprise customers now have an option to operate at maturity levels beyond large traditional enterprise operations by leveraging expert systems. This new approach frees up time, provides needed expertise and saves our customers the headache and cost of legacy solutions. Better yet, our customers gain relief from the stress of understaffed resources and can relax knowing we have their security operations covered.


Must-Attend December 2018 Information Security Events & Webinars

Security Geek is back with the top recommendations for upcoming cybersecurity events in December! I picked these events and conferences because they provide a wealth of information, knowledge, and learning materials to help your security team improve its efficiency and effectiveness in defending your environment.

Here are the top shows to attend:

DataConnectors: December 5, 2018 | Dallas, TX

DataConnectors: December 6, 2018 | Washington, D.C.

DataConnectors: December 12, 2018 | Chicago, IL

DataConnectors: December 13, 2018 | Fort Lauderdale, FL

The Dallas, D.C., Chicago & Fort Lauderdale Cyber Security Conferences feature 40-60 vendor exhibits and 8-12 educational speaker sessions discussing current cybersecurity issues such as cloud security, email security, VoIP, LAN security, wireless security & more. Meet with industry veterans and learn about emerging cybersecurity technologies.

My favorite part about the DataConnectors events – they’re free!


Cloud Security Conference: December 10-12, 2018 | Orlando, FL
The Cloud Security Alliance event welcomes world-leading security experts and cloud providers to discuss global governance, the latest trends in technology, the threat landscape, security innovations and best practices in order to help organizations address the new frontiers in cloud security.

IANS: December 12, 2018 | Webinar

In this webinar, IANS Research Director Bill Brenner and IANS Faculty Member Dave Shackleford look back at the biggest security news trends of 2018, what made them significant and what it all could mean for the year ahead.

 

Carbon Black: December 19, 2018 | Webinar

Learn how CB Defense, a real-time security operations solution, enables organizations to ask questions on all endpoints and take action to remediate attacks in real-time.

To stay up-to-date on where the Respond Software team is heading, check out our events calendar! The subject matter experts and industry professionals at Respond are always in attendance and ready to share their knowledge and expertise!

Mid-sized Enterprises: Want Robust, Sustainable SecOps? Remember 3 C’s

Cybersecurity is tricky business for the mid-sized enterprise.

Attacks targeting mid-sized companies are on the rise, but their security teams are generally resource constrained and have a tough time covering all the potential threats.

There are solutions that provide sustainable security infrastructures, but the vendor landscape is confusing and difficult to navigate. With smaller teams and more than 1,200 cybersecurity vendors in the market, it’s no wonder mid-sized enterprise IT departments often stick with “status quo” solutions that provide bare-minimum coverage. The IT leaders I talk to secretly tell me they know bare-bones security is a calculated risk, but executive support for resources is often just not there. These are tradeoffs that smaller security teams should not have to make.

Here’s the good news: building a solid enterprise-scale security program without tradeoffs is possible. To get started, IT leaders should consider the 3 C’s of a sustainable security infrastructure: Coverage, Context, and Cost.

In part 1 of this 3-part blog series, we will deep-dive into the first “C”: Coverage.

When thinking about coverage, there are two challenges to overcome. The first challenge is to achieve broad visibility into your sensors. There is a wide array of security sensors and it’s easy to get overwhelmed by the avalanche of data they generate. Customers often ask me: Do we have to monitor everything? Where do I begin? Are certain sensor alerts better indications of compromise than others?

Take the first step: Achieve visibility with appropriate sensor coverage

To minimize blind spots, start by achieving basic 24 x 7 coverage with continuous monitoring of Network Intrusion Detection & Prevention (NIDS/NIPS) and Endpoint Protection Platform (EPP) activity. NIDS/NIPS solutions leverage signatures to detect a wide variety of threats within your network, alerting on unauthorized inbound, lateral, and outbound network communications. Vendors like Palo Alto Networks, TrendMicro and Cisco have solid solutions. Suricata and Snort are two popular open-source alternatives. EPP solutions (Symantec, McAfee, Microsoft) also leverage signatures to detect a variety of threats (e.g. Trojans, Ransomware, Spyware, etc) and their alerts are strong indicators of known malware infections.

Both NIDS/NIPS and EPP technologies use signatures to detect threats and provide broad coverage of a variety of attacks; however, they do not cover everything. To learn more on this topic, read our eBook: 5 Ingredients to Help your Security Team Perform at Enterprise-Scale

To gain deeper visibility, IT departments can eventually start to pursue advanced coverage.

With advanced coverage, IT teams can augment basic 24 x 7 data sensor coverage by monitoring web proxy, URL filtering, and/or endpoint detection and response (EDR). These augmented data sources offer opportunities to gain deeper visibility into previously unknown attacks because they report on raw activity and do not rely on attack signatures like NIDS/NIPS and EPP. Web proxy and URL filtering solutions log all internal web browsing activity, and as a result provide in-depth visibility into one of the most commonly exploited channels that attackers use to compromise internal systems. In addition, EDR solutions act as a DVR on the system, recording every operation performed by the operating system—including all operations initiated by adversaries or malware. Of course, the hurdle to overcome with these advanced coverage solutions is managing the vast amounts of data they produce.

This leads to the second coverage challenge to overcome—obtaining the required expertise and capacity necessary to analyze the mountains of data generated.

As sensor coverage grows, more data is generated by each sensor type, and each type of data brings unique challenges. Some sensors are extremely noisy and generate massive amounts of data. Others generate less data but are highly specialized and require a great deal more skill to analyze. To deal with the volume of data, a common approach is to ‘tune down’ sensors, which filters out potentially valuable data. This type of filtering is tempting since it reduces the workload of a security team to a more manageable level. In doing so, however, clues to potential threats stay hidden in the data.

Take the second step: Consider security automation to improve coverage with resource-constrained teams.

Automation effectively offers smaller security teams the same capability that a full-scale Security Operations Center (SOC) team provides a larger organization, at a fraction of the investment and hassle.

Automation improves the status quo and stops the tradeoffs that IT organizations make every day. Smaller teams benefit from advanced security operations. Manual monitoring stops. Teams can keep up with the volume of data and can ensure that the analysis of each and every event is thorough and consistent. Security automation also provides continuous and effective network security monitoring and reduces time to respond. Alert collection, analysis, prioritization, and event escalation decisions can be fully or partially automated.

So to close, more Coverage for smaller security teams is, in fact, possible: First, find the right tools to gain more visibility across the network and endpoints. Second, start to think about solutions that automate the expert analysis of the data that increased visibility produces.

But, remember, ‘Coverage’ is just 1 part of this 3-part puzzle. Be sure to check back next month for part 2 of my 3 C’s (Coverage, Context, Cost) blog series. My blog on “Context” will provide a deeper dive into automation and will demonstrate how mid-sized enterprise organizations can gain more insights from their security data—ultimately finding more credible threats.

In the meantime, please reach out if you’d like to talk to one of our Security Architects to discuss coverage in your environment.

November Information Security Events You Don’t Want to Miss

Your favorite Security Geek is back with some great news – a list of upcoming cybersecurity shows and conferences you need to be aware of during the month of November!

There are numerous information security events happening every month, and it can be difficult to work out which ones provide value and which are a waste of time. This is where we can help you out.

We’ve outlined a few of the top shows you should be looking at below!

FS-ISAC Summit: Nov 11-14 | Chicago, IL

Are you in the financial services industry? Well, then this is the show for you!

As partners in the information security community, we have all been challenged in 2018 with the onslaught of DDoS and phishing campaigns carrying payloads that have included credential-stealing malware, destructive malware and ransomware. These challenges are expanding the responsibilities placed upon us as security professionals and requiring us to ensure we are following best practices.

The FS-ISAC conferences provide information and best practices on how cybersecurity teams in banking and financial institutions can help protect their networks.

DataConnectors: Nov 15, 2018 | Nashville, TN
DataConnectors: Nov 29, 2018 | Phoenix, AZ

The Nashville and Phoenix Cyber Security Conferences feature 40-60 vendor exhibits and 8-12 educational speaker sessions discussing current cyber-security issues such as cloud security, email security, VoIP, LAN security, wireless security & more.

The best part of the DataConnectors events – they’re free! Meet with industry veterans and learn about emerging cybersecurity technologies.

Cyber Security & Cloud Expo 2018: Nov 28 – 29, 2018 | Santa Clara, CA

The Cyber Security & Cloud Expo North America 2018 will host two days of top-level discussion around cybersecurity and cloud, and the impact they are having on industries including government, energy, financial services, healthcare and more. Chris Calvert, Co-Founder and VP of Product Strategy at Respond Software, will discuss the current state of security operations and the emerging trends that are changing how teams operate.

 

Cyber Security Summit: November 29, 2018 | Los Angeles, CA

The annual Cyber Security Summit: Los Angeles connects C-Suite & Senior Executives responsible for protecting their companies’ critical infrastructures with innovative solution providers and renowned information security expertise.

Each one of these conferences provides a wealth of information, knowledge and learning material to help your security team improve its efficiency and effectiveness in cyber threat hunting. To stay up-to-date on where the Respond Software team is heading, check out our events calendar! The subject matter experts and industry professionals at Respond are always in attendance and ready to share their expertise!

Why It’s Time to Go Back To The Basics of SOC Design

The average SOC is no more prepared to solve its cybersecurity issues today than it was 10 to 20 years ago. Many security applications have been developed to help protect your network, but SOC design has traditionally remained the same.

Yes, it’s true we have seen advancements like improved management of data with SIEMs and orchestration of resolutions, but these tools haven’t resolved the fundamental challenges. The data generated from even the most basic security alerts and incidents is overwhelming and still plagues the most advanced security organizations.

Which begs the question: How are smaller, resource-constrained security organizations expected to keep up when even enterprise-sized organizations can’t?

According to a recent article in Computer Weekly, the issue is that most organizations, even with the tools & the know-how, are still getting the basics all wrong.

“Spending on IT security is at an all-time high. The volume of security offerings to cover every possible facet of security is unparalleled…The reason so many organisations suffer breaches is simply down to a failure in doing the very basics of security. It doesn’t matter how much security technology you buy, you will fail. It is time to get back to basics.”

The article mentions that security operations teams need to focus on these four key areas to see a positive impact on their SOC design:

  1. Security Strategy
  2. Security Policy
  3. User Awareness
  4. User Change

But is it as simple as this?

The answer is a resounding YES!

There is no question that it’s still possible to cover the basics in security strategy and achieve enterprise security results. Our recommendation? Start with the most tedious and time-consuming part of the security analyst role: analysis and triage of all collected security data. Let your team focus on higher-priority tasks like cyber threat hunting. It’s where you’ll get the biggest bang for your buck.

How Automating Long Tail Analysis Helps Security Incident Response

Today’s cybersecurity solutions must scale to unparalleled levels due to constantly expanding attack surfaces, resulting in enormous volumes of diverse data to be processed. Scale issues have migrated from just the sheer volume of traffic, such as IoT-led DDoS attacks and the traffic from multiple devices, to the need for absolute speed in identifying and catching the bad guys.

Long tail analysis comes down to looking for very weak signals from attackers who are technologically savvy enough to stay under your radar and remain undetected.

But what’s the best and most efficient way to accomplish what can be a time-consuming and highly repetitive task?

What is Long Tail Analysis?

You might be wondering what the theory is behind long tail analysis, even though you’re familiar with the term and may already perform these actions frequently in your security environment. The term “Long Tail” first emerged in 2004, coined by Wired editor-in-chief Chris Anderson to describe “the new marketplace.” His theory is that our culture and economy are increasingly shifting away from a focus on a relatively small number of “hits” (mainstream products and markets) at the head of the demand curve and toward a huge number of niches in the tail.

In a nutshell, and from a visual standpoint, this is how we explain long tail analysis in cybersecurity: you’re hunting for the least common events, which are often the most useful for understanding anomalous behaviour in your environment.

Finding Needles in Stacks of Needles

Consider the mountains of data generated from all your security sources. It’s extremely challenging to extract weak signals while avoiding all the false positives. Our attempt to resolve this challenge is to provide analysts with banks of monitors displaying different dashboards they need to be familiar with in order to detect malicious patterns.  As you know, this doesn’t scale.  We cannot expect a person to react to these dashboards consistently.  Nor do we expect them to “do all the things”.

Instead, experienced analysts enjoy digging into the data. They’ll pivot into one of the many security solutions used to combat cybersecurity threats, such as log management solutions, packet analysis platforms, and even some endpoint agents, all designed to record and play back a historical record. We break down common behaviours looking for the outliers. We zero in on these ‘niche’ activities and understand them one at a time. Unfortunately, we can’t always get to each permutation, and those are left unresolved.

Four Long Steps of Long Tail Analysis in the SOC

If you are unfamiliar with long tail analysis, here are the four steps a typical analyst works through:

Step 1: First, you identify events of interest, like user authentications or web site connections. Then, you determine how to aggregate the events in a way that provides enough meaning for analysis. Example: graph user accounts by the number of authentication events, or web domains by the number of connections.

Step 2: Once the aggregated data is grouped together, the distribution might be skewed in a particular direction with a long tail either to the left or right.  You might be particularly interested in the objects that fall within that long tail.  These are the objects that are extracted, in table format, for further analysis.

Step 3: For each object, you investigate as required. For authentications, you would look at the account owner, the number of authentication events, and the purpose of the account, all with the goal of understanding why that specific behaviour is occurring.

Step 4: You then decide what actions to take and move on to the next object.  Typically, the next steps include working with incident responders or your IT team.  Alternatively, you might decide to simply ignore the event and repeat Step 3 with the next object.
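
As a small illustration of Steps 1 and 2 (my own sketch, with invented data and an arbitrary cutoff), the snippet below aggregates authentication events per account and extracts the accounts in the long tail for the Step 3 investigation.

    # Long tail sketch: count authentication events per account, then pull the
    # rarest accounts into a table for manual follow-up. The data are invented.
    from collections import Counter

    auth_events = [
        "svc_backup", "jdoe", "jdoe", "jdoe", "asmith", "asmith",
        "jdoe", "asmith", "tmp_admin",
    ]

    counts = Counter(auth_events)
    TAIL_MAX = 1  # accounts seen this many times or fewer sit in the tail

    tail = [(account, n) for account, n in counts.items() if n <= TAIL_MAX]
    for account, n in sorted(tail, key=lambda kv: kv[1]):
        print(f"{account:12} {n:3}  <- Step 3: who owns it, and why is it so rare?")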

Is There a Better Solution?

At Respond Software, we’re confident that long tail analysis can be automated to make your team more efficient at threat hunting. As we continue to build Respond Analyst modules, we move closer to delivering on that promise — and dramatically improve your ability to defend your business.

4 Reasons Your MSSP Might Not Be Providing Dependable Security Monitoring

Unless your goal with your Managed Security Service Provider is simply to check your audit requirement box, you are likely not getting the dependable security monitoring you are looking for.

Reason #1 – One Size Doesn’t Fit All
The first reason is the general “one size fits all/most” model that MSSPs are forced to work in so they can make a profit. My introduction to this model goes back to when I started in cybersecurity and worked for a large Tier-1 MSSP. We applied “recommended signature sets” to provide higher-fidelity alerting, which was somewhat of a self-serving tale told by MSSPs to justify the event funnel, where events are filtered out and never presented to an analyst. While this helps keep super-noisy signatures from reaching the console (who would have the time to weed through them to find the needle in that haystack?), it also creates a significant visibility gap. The event funnel also helped keep our SIEM from tipping over.

Filtering is something we as an industry have unfortunately come to accept as the solution to address the exponential problem of data growth and lack of skilled analysts. This is mainly due to technology and human limitations. This is where expert systems, AI and ML can be a big help.

Reason #2 – False Positive Headaches
How many times have you been woken up at 2:00 AM by your MSSP for an escalation that turned out to be a false positive? Consider how many hours you have spent chasing down an escalation that was nothing. When an escalation comes in from your MSSP, do you jump right up knowing there is a very high probability it is malicious and actionable, or do you finish your lunch believing it will likely be another waste of time? Chasing down false positives is not only a drain on time and resources, it is also an emotional drain for security incident responders. People want to do work that adds value; expending cycles only to find out it was a waste of time is disappointing. I have yet to come across any organization that is OK with the level of false escalations from its MSSP.

Reason #3 – Generic Analysis
The third reason your MSSP might not be providing the value you need is that its analysts are not focused solely on your business. With a typical MSSP, you get a general set of SIEM intrusion detection content (e.g. correlation rules, queries) built to address a very generalized set of use cases that can apply to most, if not all, customers. If you want custom detection content, your only option has generally been to pay for a managed SIEM dedicated to you. You may be sending logs from a set of data sources to your MSSP, but do they have the proper detection content to evaluate those logs? In my years of SOC consulting, I have had an insider view of some of the detection content used by MSSPs; my impression was that the content was generalized and basic. There was no cross-telemetry correlation to speak of, and very little content that could be considered advanced or line-of-business focused. Without this level of visibility, I question how dependable the analysis results will be.

Reason #4 – Tribal Knowledge
The challenge of knowing all the subtle nuances of your enterprise is something an MSSP will never overcome. Understanding account types and which assets are more critical than others is unique to each enterprise, and this information changes over time. How is an outsider that may have dozens or even several hundred other customers supposed to know the nuances of your users, systems, or specific business practices? A myriad of unwritten knowledge is necessary to effectively monitor and accurately decide which security events are worth escalating for response, and MSSPs often do not have the company context to make good decisions for their customers.

If you are outsourcing your security monitoring or considering it to reduce cost or add capacity, take a look at Respond Analyst. You can manage your own Security Monitoring and Triage program with our pre-built expert decision system – no staffing required. Respond Analyst is like having your own team of Security Analysts working for you, 24×7 regardless of your company size or maturity.

Ripping off the Bandage: How AI is Changing the SOC Maturity Model

The introduction of virtual analysts, artificial intelligence and other advanced technologies into the Security Operations Center (SOC) is changing how we should think about maturity models. AI is replacing traditional human tasks, and when those tasks are automated the code effectively becomes the procedure. Is that a -1 or a +10 for security operations? Let’s discuss that.

To see the big picture here, we should review what a maturity model is and why we use them in formal security operations. A maturity model is a process methodology that drives good documentation, repeatability, metrics and continuous improvement, the assumption being that these are a proxy for effectiveness and efficiency. The most common model used in security operations is a variant of Carnegie Mellon’s Capability Maturity Model Integration (CMMI). Many process methods focus on defect management; this is even more evident in the CMMI, since it originated in the software industry.

In the early 2000s, we started using CMMI at IBM: Big Blue insisted that we couldn’t offer a commercial service that wasn’t on a maturity path, and it had adopted CMMI across the entire company at that point. We had, at that time, what seemed like a never-ending series of failures in our security monitoring services, and for each failure a new “bandage” in the form of a process or procedure was applied. After a few years we had an enormous list of processes and procedures, each connected to the other in a PERT chart of SOC formality. Most of these “bandages” were intended to provide guidance and support to analysts as they conducted security monitoring and to prevent predictable failures, so we could offer a consistent and repeatable service across shifts and customers.

To understand this better, let’s look at the 5 levels of the CMMI model:

  1. Initial (ad hoc)
  2. Managed (can be repeated)
  3. Defined (is repeated)
  4. Measured (is appropriately measured)
  5. Self-optimizing (measurements leads to improvements)

This well-defined approach seemed to be perfect. It allowed us to take junior analysts and empower them to deliver a consistent level of service. We could repeat ourselves across customers. We might not deliver the most effective results, but we could at least be reasonably consistent. As it turns out, people don’t like working in such structured roles because there’s little room for creativity or curiosity. Not surprisingly, this gave rise to the 18-24 month security analyst turnover phenomenon. Many early analysts came from help desk positions and were escaping “call resolution” metrics in the first place.

Our application of SOC maturity morphed over the years from solving consistency problems into consistently repeating the wrong things because they could be easily measured. When failures happened, we were now in the habit of applying the same “bandages” over and over. Meanwhile, the bad guys had moved on to new and better attack techniques. I have seen security operations teams follow maturity guidelines right down a black hole, where, for example, a minor SIEM content change can take months, not the few hours it should.

According to the HPE Security Operations Maturity report, the industry median maturity score is 1.4, or slightly better than ad-hoc. I’m only aware of 2 SOCs in the world that are CMMI 3.0.  So, while across the industry we are measuring our repeatability and hoping that it equates to effectiveness and efficiency, we are still highly immature, and this is reflected in the almost daily breaches being reported. You can also see this in the multi-year sine wave of SOC capability many organizations experience; it goes something like this:

  1. Breach
  2. Response
  3. New SOC or SOC rebuild
  4. Delivery challenges
  5. Maturity program
  6. Difficulty articulating ROI
  7. Cost reductions
  8. Outsourcing
  9. Breach
  10. Repeat

With a virtual analyst, your SOC can now leap to CMMI level 5 for what was traditionally a human-only task. An AI-based virtual analyst, like the Respond Analyst, conducts deep analysis in a consistent fashion and learns rationally from experience. This approach provides effective monitoring in real time and puts EVERY SINGLE security-relevant event under scrutiny. Not only that, you liberate your people from rigorous process control, and allow them to hunt for novel or persistent attackers using their creativity and curiosity.

This will tip the balance towards the defender and we need all the help we can get!

Join our growing community! Subscribe to our newsletter, the "First Responder Notebook," delivered straight to your inbox.