Why we’re supporting the next generation of cybersecurity professionals with (ISC)²

The RSA security conference halls at Moscone Center were abuzz last week with conversations ranging from nation-state attacks to speculation about which household brand will be breached next.

But one conversation stood out like an ominous black cloud on the horizon—the cybersecurity skills gap.

At Respond Software we believe the talent shortage has grown far beyond what people alone can tackle. The solution of the future is one that augments human capabilities with Robotic Decision Automation (RDA), which complements the security analyst's skill set and expands what they can accomplish. Our aim is to raise the bar for security expertise, and we understand that the need for highly skilled cybersecurity experts is more critical than ever.

This year at RSA we decided to take a slightly different approach by giving back to help expand our cybersecurity community. Starting at RSA and continuing at events throughout 2019, in lieu of giving away hundreds of coffee mugs, pens, or other swag that ends up in landfills, Respond Software will route a portion of our promotional budget to the (ISC)² Cybersecurity Scholarship program. Attendees at several hand-picked Respond Software events throughout the year will have the option to add their name to our donation roster.

To kick this off, 90% of the attendees at our sponsored RSA ISE VIP Luncheon last week agreed to support this effort! Today we will send our first donation check, in the amount of $2,000, to (ISC)². We are aiming to donate upwards of $10,000 over the course of 2019 to support the valuable work the scholarship program does and to give back to our industry.

About the (ISC)² Cybersecurity Scholarship Program
Each year, (ISC)², the world’s leading cybersecurity and IT security professional organization, and the Center for Cyber Safety and Education, partner to offer scholarships to students around the world.

The Respond Analyst
Last year, the Respond Analyst, our flagship product pre-built with decision-making skills, expanded the capacity of security teams by adding the equivalent of 14 ‘human’ analysts to every team, which shines a light on how quickly automation can help close the skills gap.

The Science of Detection Part 3: A Closer Look at the “Detectors” You Rely on When Hunting for Evidence

This is the third blog in my science of detection series. In the previous parts, we examined the key elements of a data source and considered integrated reasoning. Today, I’ll be taking a closer look at the signal quality we get from the various “detectors” that we use to find malicious activities in our environment.

Be sure to check back in the coming weeks to see the next blogs in this series. In part four, I’ll be talking about architectural approaches to detection, and looking at how we collect and aggregate information so that it’s useful to our security programs. I’ll be making some predictions about the progress we’ll see in this area in the future, because I think the old way of doing things has reached a dead end.

Security analysts have many different information sources—“detectors”—to consider when making decisions about whether or not they see malicious activity taking place in their environment. Each detector has a purpose, and each contributes some degree of differential value to the ultimate decision, but only a few of them were specifically designed for security applications. That complicates things.

What’s interesting about these information sources is that each must be interpreted and analyzed in a different way in order to assemble enough information to get a truly comprehensive picture of what’s taking place in the environment. They also operate at different levels of abstraction (for example, signatures are much more abstract than raw data), which means that a key task in analyzing any attack is assembling a corroborative summary using as many diverse information sources as possible.

Assembling such a summary involves multidimensional analysis. It’s tremendously important that we bring the latest advances in analytical reasoning and mathematical and scientific research to bear on our security programs and how we leverage information within them.

With this in mind, let’s talk about the information sources we use, explain their most common applications, and put them into context.

Raw Data

Network packets are all the communications that transit your network. Very often they're encrypted. The highest-end security programs might include complete packet capture, but that gets expensive quickly. Packet capture offers the highest-fidelity but most diluted set of information for incident detection. A short-term packet capture solution (one that holds data for 30-60 days) often ends up being of little use forensically, because incidents are most often detected later in their lifecycle. The next best thing to complete packet capture is probably a combination of NetFlow and network security sensors.

Logs, at their most basic, are just records of system or user activity. Some of them are relevant for security detection purposes, but most are not. Historically speaking, logs were usually written to manage application and system problems, and they tend to be highly inconsistent in their content, their format, and their usefulness for security.

When a specific security control is violated, or an attempt to violate it is made, a log event is generated. There’s always some chance that the activity is malicious in nature. How big is this chance? Well, it’s different for every single log message and log source. This makes the aggregation and timeline of logs more important than any single log event when it comes to inferring or understanding malicious activity.

This is why we use rules. Rules help us interpret and contextualize logs, and thus slightly improve their utility for detection purposes.

The problem is: how many failed logins does it take before you know you have a hijacked account instead of a forgetful user? And how different is the number that would raise our suspicion on a Monday morning from the number it would take on a Wednesday afternoon? Sometimes we do see security-avoidance behaviors in logs (for instance, clearing them), but user mistakes explain these things most of the time, and it's hard to know when to dig in.
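
To make that concrete, here's a minimal sketch (in Python, with entirely made-up thresholds; nothing here comes from any particular product) of a failed-login rule whose threshold shifts with business hours:

```python
from datetime import datetime

# Hypothetical thresholds: off-hours activity warrants more suspicion.
BUSINESS_HOURS_THRESHOLD = 25   # Monday-morning password fumbles are common
OFF_HOURS_THRESHOLD = 5         # a burst of failures at 3 a.m. is harder to explain

def failed_login_threshold(ts: datetime) -> int:
    """Return the number of failed logins that should raise suspicion at this time."""
    is_weekday = ts.weekday() < 5
    is_business_hours = 8 <= ts.hour < 18
    return BUSINESS_HOURS_THRESHOLD if (is_weekday and is_business_hours) else OFF_HOURS_THRESHOLD

def is_suspicious(failed_count: int, ts: datetime) -> bool:
    return failed_count >= failed_login_threshold(ts)

print(is_suspicious(8, datetime(2019, 3, 11, 3, 0)))    # True: 8 failures at 3 a.m.
print(is_suspicious(8, datetime(2019, 3, 11, 9, 30)))   # False: Monday-morning noise
```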

Meta-Data

Network flow data show the connection details and the amount of data transferred between hosts on your network (and out to the Internet). They’re like the network equivalent of monitoring who’s calling whose cell phone within a criminal syndicate. Network graph analysis and visualization are useful approaches to understanding NetFlow data.
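
As an illustration of that graph view, here's a minimal sketch (in Python, using the networkx library, with hypothetical flow records) that builds a directed graph from flow tuples and ranks internal hosts by how much data they send:

```python
import networkx as nx

# Hypothetical flow records: (source_ip, destination_ip, bytes_transferred)
flows = [
    ("10.0.0.5", "10.0.0.9", 12_000),
    ("10.0.0.5", "203.0.113.7", 850_000_000),   # large outbound transfer
    ("10.0.0.8", "10.0.0.9", 4_000),
]

g = nx.DiGraph()
for src, dst, nbytes in flows:
    # Accumulate bytes on the edge so repeated connections between the same hosts sum up.
    if g.has_edge(src, dst):
        g[src][dst]["bytes"] += nbytes
    else:
        g.add_edge(src, dst, bytes=nbytes)

# Rank hosts by total bytes sent: a crude "who is talking the most" view of the network.
def bytes_sent(node):
    return sum(data["bytes"] for _, _, data in g.out_edges(node, data=True))

top_talkers = sorted(g.nodes, key=bytes_sent, reverse=True)
print(top_talkers[:5])
```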

Indicators (of malicious or suspicious activity)

Signatures of known attacks and other indicators of malicious code may be detected by sensors monitoring network communications. These are short hexadecimal character sequences known to appear within attack payloads. To ensure they match when an attack actually occurs, signatures are written loosely; even when built around a highly specific sequence of bytes, they can't account for every non-malicious occurrence of that same sequence in a data stream, so they produce a large number of false alerts. There are currently over 57,000 IDS signatures in existence, and only a tiny subset of them is relevant at any given moment. This produces a high volume of false or nuanced alerts, further obscuring valuable detection signals. Signatures benefit from being analyzed by machines rather than humans because of the depth of analysis needed to separate out the relevant information. It's also very important to consider where and how you place sensors, because their value is directly related to their visibility.

Threat intelligence is another indicator. Yes, it also suffers from a volume problem, and its volume problem is almost as bad as that of network security sensors. Threat intelligence lists try not to omit any potentially malicious activity, and so they produce a high volume of alerts that are hard for humans to analyze. Threat intelligence includes lists of IP addresses, domains, and known bad file hashes. I consider known good file hashes to be valuable intelligence, too. Once again, combinations of threat indicators offer much higher fidelity as evidence of real threat activity.
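
Here's a rough sketch of that corroboration idea (the indicator lists and event fields are hypothetical; real feeds are far larger and change constantly): escalate only when more than one independent indicator type matches.

```python
# Hypothetical threat-intel lists for illustration only.
BAD_IPS = {"198.51.100.23"}
BAD_DOMAINS = {"evil-updates.example"}
BAD_HASHES = {"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"}

def indicator_hits(event: dict) -> int:
    """Count how many independent indicator types match this event."""
    hits = 0
    hits += event.get("dst_ip") in BAD_IPS
    hits += event.get("domain") in BAD_DOMAINS
    hits += event.get("file_hash") in BAD_HASHES
    return hits

def escalate(event: dict) -> bool:
    # A single indicator match is weak evidence; corroboration across types is stronger.
    return indicator_hits(event) >= 2

print(escalate({"dst_ip": "198.51.100.23"}))                                      # False
print(escalate({"dst_ip": "198.51.100.23", "domain": "evil-updates.example"}))    # True
```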

Heuristics are behavioral indicators. For example, an alert might be generated when a piece of software takes an action that’s not normal for that software, such as spawning an additional process outside of user-approved space. Heuristics are a library of past incident observations, and as such, are completely historically focused. Although it’s valuable not to fall for the same thing twice, these tend to have a short lifespan when it comes to high accuracy.

First Order Processing

Rules follow a predictable structure (Activity — Threshold — Context — Action) to identify known suspicious activity. Known suspicious activities are described using Boolean logic or nested searches, a threshold is set, and if this is reached or crossed, a notification is sent to a monitoring channel for human evaluation.
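
As a rough sketch of that structure (the event fields, threshold, and asset names are hypothetical, not drawn from any specific product), a rule can be modeled as four pluggable pieces:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    activity: Callable[[dict], bool]   # which events the rule cares about
    threshold: int                     # how many matching events before we act
    context: Callable[[dict], bool]    # extra conditions, e.g. "is this a critical asset?"
    action: Callable[[list], None]     # what to do when the threshold is crossed

def evaluate(rule: Rule, events: list) -> None:
    matches = [e for e in events if rule.activity(e) and rule.context(e)]
    if len(matches) >= rule.threshold:
        rule.action(matches)

# Hypothetical use: notify a monitoring channel after 10 denied connections to a critical host.
rule = Rule(
    activity=lambda e: e.get("type") == "connection_denied",
    threshold=10,
    context=lambda e: e.get("dst_host") in {"payroll-db-01"},
    action=lambda ms: print(f"ALERT: {len(ms)} denied connections to a critical asset"),
)

events = [{"type": "connection_denied", "dst_host": "payroll-db-01"}] * 12
evaluate(rule, events)   # prints the alert because 12 >= 10
```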

At the most atomic level, there are fewer than 130 rules in regular use. In fact, in most organizations fewer than 45 are implemented. Rules are most valuable when they’re used to enforce logic that’s specific to your company’s unique business challenges, such as possible fraud scenarios.

Context—additional information about the entities being investigated and the relationship between them—can help you answer questions about the potential impact of attacks in progress and your vulnerability to them. It’s a key component in initial processing.

Statistics and metrics are important in guiding your operations: self-reflection and dispassionate measurement are critical to the effective application of detection science. You can measure attributes like coverage and performance, or calculate cost- or time-per-detection by data source and use this information to guide you in deploying your sensor architecture. Statistical analysis can be a powerful tool for uncovering attackers’ latest stealth techniques. Any activity that’s too close to the center of a normal bell curve might be hiding something in the noise—says the ever-suspicious security investigator.
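
For example, here's a minimal sketch of a cost-per-detection calculation (the dollar figures and detection counts are made up purely for illustration):

```python
# Hypothetical annual figures per data source: (operating cost in USD, true detections produced)
sources = {
    "ids_sensors":    (120_000, 48),
    "proxy_logs":     (60_000, 30),
    "packet_capture": (400_000, 12),
}

for name, (cost, detections) in sources.items():
    # Guard against dividing by zero for sources that produced no detections.
    cost_per_detection = cost / detections if detections else float("inf")
    print(f"{name}: ${cost_per_detection:,.0f} per detection")
```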

Second Order Processing

Behaviors, patterns, and baselines are very commonly used to measure and score users’ stealthy or suspicious behaviors. The goal is to identify the users who either pose an insider threat or whose machines have been compromised by malicious code. Maintaining a library of first-order information that you’ve collected over time and conducting periodic calculations against it can help you pinpoint things that might be suspicious. “Repeat offender” is a catchphrase for a reason.
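
A crude sketch of that idea (hypothetical daily counts; a real baseline would span far more history and dimensions) is to score each user's activity against their own past behavior:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical first-order history: daily counts of suspicious events per user.
history = defaultdict(list)
history["alice"] = [0, 1, 0, 0, 2, 0, 1]
history["bob"]   = [5, 7, 6, 9, 8, 7, 12]   # a "repeat offender" pattern

def score_today(user: str, todays_count: int) -> float:
    """Crude z-score of today's activity against the user's own baseline."""
    baseline = history[user]
    if len(baseline) < 2 or stdev(baseline) == 0:
        return float(todays_count)
    return (todays_count - mean(baseline)) / stdev(baseline)

print(score_today("alice", 6))   # far above her baseline, worth a look
print(score_today("bob", 8))     # within his (noisy) norm
```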

Nth Order Processing

Anomalies, clusters, affinity groups, and network graphs can reveal some very nuanced attacks. Running advanced algorithms across large amounts of data can yield interesting results.

A common fallacy is that anomalies are more likely to be malicious. That’s simply not true. The way our networks are interconnected today makes for all sorts of anomalies in all layers of the technology stack. These provide investigators the same sort of analytical puzzle as network security signatures do.

Some of these algorithms have well-understood security applications. One example is clustering: when you cluster IDS data, what you find most often are false positives, because they occur in highly predictable ways. When a particular signature generates alerts for what’s actually regular business traffic, the same alert will be triggered every time that business process takes place. It thus produces a very obvious cluster that you can exclude when looking for malicious activity.
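
A simple stand-in for that clustering step (hypothetical alerts, and exact grouping by alert tuple rather than a full clustering algorithm) looks something like this:

```python
from collections import Counter

# Hypothetical IDS alerts: (signature_id, src_ip, dst_ip)
alerts = [
    ("SIG-2101", "10.0.0.4", "10.0.1.9"),
    ("SIG-2101", "10.0.0.4", "10.0.1.9"),
    # ... thousands more from the same recurring business process ...
    ("SIG-3344", "198.51.100.7", "10.0.2.2"),
]

clusters = Counter(alerts)

# Very large, highly regular clusters are usually a benign business process tripping
# the same signature; they become candidates to exclude from hunting views.
NOISE_THRESHOLD = 1_000
noise = {key for key, count in clusters.items() if count >= NOISE_THRESHOLD}
remaining = [a for a in alerts if a not in noise]
```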

The more information known to be unimportant that we can remove, the more clearly we can see what else is going on. This is where analytical detection comes into its own. Very often, we run algorithms on security data simply to see if a subject matter expert can interpret the outcome. Possessing both domain expertise and knowledge of data science is critical if you want to understand what advanced algorithms are telling you.

Visualization and hunting are nth order processing tasks. Using tools that allow you to pivot and display related datasets is the ultimate form of security threat hunting, and it's also the most fun. You can derive some detection value from considering any layer of detectors through the lens of a visual tool.

Do you think I’m about to tell you there’s another layer called “artificial intelligence”? If so, you’re wrong.

The next layer is simply making a decision: has something malicious occurred? The more information we have to feed into the decision-making process, the more effective and deeper the decision will be. All of the information sources listed above have something of value to contribute.

But you have to ask yourself: how many of these factors can analysts consider in real time as they watch events streaming across a console?

If you’d like to make it possible for your security operations team to incorporate input from a greater variety of detectors and information sources into their decision-making processes and workflows, consider adding the Respond Analyst to your team. Built to integrate with a broad array of today’s most popular sensors, platforms and solutions, the Respond Analyst brings specialized threat intelligence and detailed local contextual information to bear on every decision it makes about which events to escalate. Quite simply, it’ll give your ability to interpret and analyze detection data a boost—and allow your analysts to consider a far wider variety of sources.

To learn more about how the Respond Analyst can help your business become more thorough and derive greater insight from the detectors in your environment, contact us to schedule a demo today.

3 Reasons Understaffed Security Teams Can Now Sleep at Night

If you feel overwhelmed with security operations, you're not alone. As a matter of fact, it's a common theme we hear all the time: "We're overloaded and need help!" We've been in the trenches building security operations for mid-to-large enterprises, so we understand the unique pressure IT and security teams feel. It's not easy balancing it all—especially for mid-sized enterprises with resource-constrained security teams.

Cybersecurity in mid-sized companies has unique challenges. With fewer resources and tighter budgets, IT teams are spread thin while wearing multiple hats. Unfortunately, sometimes security projects accumulate, leaving teams exposed and overwhelmed. But it doesn’t have to be this way—there is a viable solution.

Here are the three biggest challenges security teams face and why The Respond Analyst helps them sleep soundly at night.
Reason #1 – We don’t have enough time
Our customers need to free up time to work on priority projects and initiatives. We designed our product to provide expert intrusion analysis without the fuss of deploying extensive technology stacks that require significant upfront and continued investment. We're here to simplify the process, not add complexity. Security event console monitoring is a thing of the past: we free our customers from staring at security consoles and move them toward higher-value tasks and initiatives.

Within seven days, The Respond Analyst has learned its environment and is finding actionable incidents for our customers. The setup process is simple: 1) deploy a virtual appliance or install our software, 2) direct security feeds to our software and 3) add simple context. There are no significant time commitments, and no in-depth security operations expertise is required.
Reason #2 – We need additional security expertise
One of the biggest challenges our customers face is finding the right people and retaining them. This challenge is expected to grow in an ever more competitive job market, resulting in higher wages and more movement at a time when organizations are trying to implement steady security programs. To say it's difficult is an understatement.

We don't expect our customers to be experts in intrusion analysis and security operations—that is why they've partnered with us. The Respond Analyst is an expert system that automates the decision making of a front-line security analyst. This pre-packaged intelligence requires no security expertise to deploy. There is no use-case development, programming of rules, or tagging of event data. Well-vetted incidents, without all the fuss, are the result of a well-designed expert system.
Reason #3 – We don’t have the time, money or desire to build a legacy SOC
Many organizations understand that the old way of building a legacy SOC around a SIEM is not the future. Indeed, it's not even keeping up with today's threats. Not only is it less effective than solutions such as The Respond Analyst, it also costs significantly more and results in a far lengthier return-on-investment timeframe.

The process of building a SIEM with 80+ data sources (when most teams really only look at 5 or fewer), hiring, training and retaining experienced intrusion analysts, and implementing a sophisticated process to keep it all glued together is outdated. Of course, this was the best we could do given the technology and understanding we had at the time, but now we have a better way. Old models have since been replaced, and our customers avoid that frustration and high cost by using a pre-packaged expert system.

Times have changed. With the emergence of expert systems like The Respond Analyst, we have brought technology to an area that traditionally demanded large investments and lengthy, time-intensive projects. The result is that mid-sized enterprise customers can now operate at maturity levels beyond large traditional enterprise operations by leveraging expert systems. This new approach frees up time, provides needed expertise and saves our customers the headache and cost of legacy solutions. Better yet, our customers gain relief from the stress of understaffed resources and can relax knowing we have their security operations covered.

A new tool for defenders – Real-time analysis of Web Proxy data

When I got back into the office after taking a short break to recharge my batteries, I was excited to be speaking with my colleagues at Respond Software about the upcoming release of the web filtering model for the Respond Analyst. You see, over the last few months we've been working tirelessly to build a way to analyze web filtering event data in real time. Now that I'm sitting down to write this blog, the fruit of all the hard work our team has put into making this a reality is really sinking in. We've done it! It's now available as part of the Respond Analyst!

This was no small feat, as most of you in the security operations world would know.

You may ask why we chose to take this challenge on. The answer is quite simple: there is a ton of valuable information in web filtering data, and it's extremely difficult for security teams to analyze these events in real time due to the sheer volume of data generated by enterprises. What a perfect opportunity for us to show off the Respond Analyst's intelligence and capability.

Up until now, security operations and IR teams have pivoted to web filtering data for investigations only after they've already been alerted to an attack through threat hunting or some other form of detection. Processing all of an organization's web filtering data in a SIEM or similar tool has simply been far too expensive. In fact, most organizations can't even afford to store this data for a "reasonable" amount of time for investigators to dig through.

Think about it for a second: each web page visited can generate a number of new web requests to pull back content from different sources. Then picture each employee using the internet for most of the day, navigating the web through their day-to-day tasks and a few personal items between meetings; it all amounts to hundreds of web page visits per employee each day. If you have a few hundred employees, the volume of data generated by the web filtering solution quickly becomes unmanageable. Well, now we're able to process all of these events in real time.
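
As a rough, purely illustrative calculation (the figures are assumptions, not measurements): if each page visit triggers on the order of 20 individual requests and each of 300 employees visits around 300 pages a day, that's roughly 300 × 300 × 20 ≈ 1.8 million web filtering events per day, before counting background traffic from applications and software updates.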

Consider the questions you are able to ask of the data without even taking the assigned web filtering category into account (a rough sketch of a few of these checks follows the lists below)…

  • Analyze each component of the HTTP header
  • Perform user agent analysis
  • Take a look at how suspicious the requested domain is
  • Perform URL string comparisons to all other requests over an extended period of time
  • Compare each attribute to information you’ve gathered in your threat intel database

But why stop there…

  • What about looking at whether the pattern of behavior across a set of requests is indicative of exploit kit delivery?
  • Maybe you suspect that these requests are related to command-and-control activity
  • What about the upload of documents to a file-sharing service: is that data exfiltration or simply everyday user activity?
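
Here's a rough sketch of a few of those checks applied to a single proxy record (the domain list, user-agent baseline, and scoring weights are all hypothetical illustrations, not the Respond Analyst's actual logic):

```python
import re

# Hypothetical threat-intel and baseline data for illustration only.
KNOWN_BAD_DOMAINS = {"evil-updates.example"}
COMMON_USER_AGENTS = {"Mozilla/5.0", "Chrome/120.0"}  # stand-in for a real enterprise baseline

def score_proxy_event(event: dict) -> int:
    """Toy scoring of a single web-filtering record; higher means more suspicious."""
    score = 0

    # User-agent analysis: agents nobody else in the enterprise uses stand out.
    ua = event.get("user_agent", "")
    if not any(ua.startswith(common) for common in COMMON_USER_AGENTS):
        score += 2

    # Domain suspicion: threat-intel match, or browsing straight to a raw IP address.
    domain = event.get("domain", "")
    if domain in KNOWN_BAD_DOMAINS:
        score += 5
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", domain):
        score += 2

    # Header sanity: a request with no referrer to an unusual destination is odder still.
    if not event.get("referrer"):
        score += 1

    return score

print(score_proxy_event({"user_agent": "curl/7.1", "domain": "203.0.113.9"}))  # 5
```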

Web filtering data can also leverage the power of integrated reasoning.  When web filtering data is combined with IDS/IPS sensors, Anti-malware technology and contextual sources like vulnerability data and critical asset lists, you are able to form an objective view of your enterprise’s threat landscape.  Beyond the analysis of each of these data sources, the Respond Analyst accurately scopes all events related to the same security incident together for a comprehensive incident overview.  The Respond Analyst then assigns an appropriate priority to that incident and documents all the details of the situation and presents this information to you.  This is, by far, the most efficient way to reduce attacker dwell time.

We have a long way to go and many more exciting Respond Analyst skills & capabilities on the way. I couldn’t be prouder of all the work we’ve achieved and the release of our Web Filtering model.

Way to go Respond team!

How Automating Long Tail Analysis Helps Security Incident Response

Today's modern cybersecurity solutions must scale to unparalleled levels as constantly expanding attack surfaces produce enormous volumes of diverse data to process. Scale issues have migrated from the sheer volume of traffic, such as IoT-led DDoS attacks and traffic from a multitude of devices, to the need for absolute speed in identifying and catching the bad guys.

Long tail analysis comes down to looking for the very weak signals from attackers who are technologically savvy enough to stay under your radar and remain undetected.

But what's the most efficient way to accomplish what can be a time-consuming and highly repetitive task?

What is Long Tail Analysis?

You might be wondering about the theory behind long tail analysis, even if you're familiar with the term and may already be performing these actions frequently in your security environment. The term "Long Tail" was coined in 2004 by Wired editor-in-chief Chris Anderson to describe "the new marketplace." His theory is that our culture and economy are increasingly shifting away from a focus on a relatively small number of "hits" (mainstream products and markets) at the head of the demand curve and toward a huge number of niches in the tail.

In a nutshell and from a visual standpoint, this is how we explain long tail analysis in cybersecurity:  You’re threat hunting for those least common events that will be the most useful in understanding anomalous behaviour in your environments.

Finding Needles in Stacks of Needles

Consider the mountains of data generated by all your security sources. It's extremely challenging to extract weak signals while avoiding all the false positives. Our attempt to resolve this challenge has been to give analysts banks of monitors displaying the different dashboards they need to be familiar with in order to detect malicious patterns. As you know, this doesn't scale. We cannot expect a person to react to these dashboards consistently, nor do we expect them to "do all the things".

Instead, experienced analysts enjoy digging into the data. They'll pivot into one of the many security solutions used to combat cybersecurity threats, such as log management solutions, packet analysis platforms, and even some endpoint agents, all designed to record and play back a historical record. We break down common behaviours looking for the outliers, zero in on these 'niche' activities, and understand them one at a time. Unfortunately, we can't always get to each permutation, and some are left unresolved.

Four Long Steps of Long Tail Analysis in the SOC

If you are unfamiliar with long tail analysis, here are the four steps a typical analyst works through:

Step 1: First, you identify events of interest, such as user authentications or website connections. Then you determine how to aggregate the events in a way that provides enough meaning for analysis. Example: graph user accounts by the number of authentication events, or web domains by the number of connections.

Step 2: Once the aggregated data is grouped together, the distribution might be skewed in a particular direction with a long tail either to the left or right.  You might be particularly interested in the objects that fall within that long tail.  These are the objects that are extracted, in table format, for further analysis.

Step 3: For each object, you investigate as required. For authentications, you would look at the account owner, the number of authentication events, and the purpose of the account, all with the goal of understanding why that specific behaviour is occurring.

Step 4: You then decide what actions to take and move on to the next object.  Typically, the next steps include working with incident responders or your IT team.  Alternatively, you might decide to simply ignore the event and repeat Step 3 with the next object.
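
For Steps 1 and 2, a minimal sketch (in Python with pandas, using a hypothetical authentication log) might look like this:

```python
import pandas as pd

# Hypothetical authentication log: one row per authentication event.
events = pd.DataFrame({
    "account": ["svc_backup"] * 900 + ["jsmith"] * 40 + ["old_admin"] * 2 + ["tmp_vendor"] * 1,
})

# Step 1: aggregate events of interest by account.
counts = events["account"].value_counts()

# Step 2: extract the long tail; here, accounts with only a handful of events.
TAIL_THRESHOLD = 5
long_tail = counts[counts <= TAIL_THRESHOLD]
print(long_tail)   # rare accounts like "old_admin" and "tmp_vendor" surface for Step 3
```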

Is There a Better Solution?

At Respond Software, we’re confident that long tail analysis can be automated to make your team more efficient at threat hunting. As we continue to build Respond Analyst modules, we move closer to delivering on that promise — and dramatically improve your ability to defend your business.

Ripping off the Bandage: How AI is Changing the SOC Maturity Model

The introduction of virtual analysts, artificial intelligence and other advanced technologies into the Security Operations Center (SOC) is changing how we should think about maturity models. AI is replacing traditional human tasks, and when those tasks are automated the code effectively becomes the procedure. Is that a -1 or a +10 for security operations? Let’s discuss that.

To see the big picture here, we should review what a maturity model is and why we use one for formal security operations. A maturity model is a process methodology that drives good documentation, repeatability, metrics and continuous improvement, the assumption being that these are a proxy for effectiveness and efficiency. The most common model used in security operations is a variant of the Carnegie Mellon Capability Maturity Model Integration (CMMI). Many process methods focus on defect management; this is even more evident in CMMI, since it originated in the software industry.

In the early 2000s, we started using CMMI at IBM. Big Blue insisted that we couldn't offer a commercial service that wasn't on a maturity path, and the company had adopted CMMI across the board at that point. At the time we had what seemed like a never-ending series of failures in our security monitoring services, and for each failure a new "bandage" in the form of a process or procedure was applied. After a few years we had an enormous list of processes and procedures, each connected to the others in a PERT chart of SOC formality. Most of these "bandages" were intended to provide guidance and support to analysts as they conducted security monitoring and to prevent predictable failures, so we could offer a consistent and repeatable service across shifts and customers.

To understand this better, let’s look at the 5 levels of the CMMI model:

  1. Initial (ad hoc)
  2. Managed (can be repeated)
  3. Defined (is repeated)
  4. Measured (is appropriately measured)
  5. Self-optimizing (measurement leads to improvement)

This well-defined approach seemed perfect. It allowed us to take junior analysts and empower them to deliver a consistent level of service. We could repeat ourselves across customers. We might not deliver the most effective results, but we could at least be reasonably consistent. As it turns out, people don't like working in such structured roles because there's little room for creativity or curiosity. Not surprisingly, this gave rise to the 18-24 month security analyst turnover phenomenon. Many early analysts came from help desk positions and were escaping "call resolution" metrics in the first place.

Our application of SOC maturity morphed over the years from solving consistency problems into consistently repeating the wrong things because they could be easily measured. When failures happened, we were now in the habit of applying the same "bandages" over and over. Meanwhile, the bad guys had moved on to new and better attack techniques. I have seen security operations teams follow maturity guidelines right down a black hole, where, for example, a minor SIEM content change can take months instead of the few hours it should.

According to the HPE Security Operations Maturity report, the industry median maturity score is 1.4, or slightly better than ad-hoc. I’m only aware of 2 SOCs in the world that are CMMI 3.0.  So, while across the industry we are measuring our repeatability and hoping that it equates to effectiveness and efficiency, we are still highly immature, and this is reflected in the almost daily breaches being reported. You can also see this in the multi-year sine wave of SOC capability many organizations experience; it goes something like this:

  1. Breach
  2. Response
  3. New SOC or SOC rebuild
  4. Delivery challenges
  5. Maturity program
  6. Difficulty articulating ROI
  7. Cost reductions
  8. Outsourcing
  9. Breach
  10. Repeat

With a virtual analyst, your SOC can now leap to CMMI level 5 for what was traditionally a human-only task. An AI-based virtual analyst, like the Respond Analyst, conducts deep analysis in a consistent fashion and learns rationally from experience. This approach provides effective monitoring in real time and puts EVERY SINGLE security-relevant event under scrutiny. Not only that, you liberate your people from rigorous process control, and allow them to hunt for novel or persistent attackers using their creativity and curiosity.

This will tip the balance towards the defender and we need all the help we can get!

When Currency is Time, Spend it Threat Hunting

“Time is what we want most, but what we use worst.”
– William Penn

How many valuable cybersecurity tasks have you put aside due to the pressures of time? Time is currency and we spend it every moment we’re protecting our enterprises.

When we are constantly tuning, supporting and maintaining our security controls or chasing down an alert from an MSSP, only to discover it’s yet another false positive, we spend precious currency. When we create new correlation logic in our SIEM or decide which signatures to tune down to lower the volume of events to make it more manageable for our security team, we spend precious currency. When we analyze events from a SIEM to determine if they’re malicious and actionable or if a SIEM rule needs additional refinement, we spend precious currency. When we hire and train new analysts to cover churn, then watch them leave for a new opportunity – we waste currency and the investment hurts.

You can spend your “currency” doing pretty much anything, which is a blessing and a curse. We can (and do) waste an inordinate amount of time going down rabbit holes chasing false positives. We are forced to make choices: do we push back a request while we investigate the MSSP escalations or do we delay an investigation to provide the service agility the enterprise requires?

Both options are important, and both need addressing, forcing us to make a choice. In our gut we think the escalation is another false positive, but as cybersecurity professionals we wait for the sword of Damocles to fall. It's only a matter of time before one of these escalations is related to the thing we worry about most in our environments. Either way, something gets delayed… hopefully just lunch.

Basing decisions on what we can neglect is reactive and unsustainable. It’s a matter of time until we choose to postpone the wrong thing.

We need to use our time more wisely.

Organizations need to spend precious "currency" on higher-value tasks, like threat hunting, that motivate their talent and provide value to the organization. But they also need to keep two hands on the wheel for the lower-value tasks that still need attention.

Organizations should implement automation tools to handle the lower-value, repetitive tasks such as high-volume network security monitoring. Generating and receiving alerts from your security controls is easy; making sense of them and determining whether they're malicious and actionable is a different story. The decision to escalate events is typically inconsistent and relies heavily on the analyst making the decision. Now factor in the amount of time required to gather supporting evidence and make a decision, while doing this an additional 75 times an hour. As a defender, you don't have enough "currency of time" to make consistent, highly accurate decisions. Tasking security analysts with monitoring high-noise, low-signal event feeds is a misallocation of time that leads only to a lack of job satisfaction and burnout.

There is another way.

Employing the Respond Analyst is like adding a virtual team of expert, superhuman analysts, and it allows your team to bring their talent and expertise to threat hunting. Adding the Respond Analyst lets your talent focus on higher-value tasks and more engaging work, so you can combat analyst burnout, training drains, and churn.
