Fight Fire with Fire:
How Security Automation Can Close the Vulnerability Gap Facing Industrial Operations

“Be stirring as the time; be fire with fire; threaten the threatener and outface the brow of bragging horror.”
William Shakespeare, King John

…or as Metallica once sang in 1984, Fight Fire with Fire!

There is a fire alight in our cyber world.  Threats are pervasive, the tech landscape is constantly changing, and now industrial companies are increasingly vulnerable with the advent of automation within their operations.  Last week a ransomware attack halted operations at Norsk Hydro ASA in both the U.S. and Europe, and just days later two U.S. chemical companies were also affected by a network security incident.


As manufacturing processes become increasingly complex and spread out around the world,
more companies will have to navigate the risk of disruption from cyber attacks. 

Bloomberg Cybersecurity


Industrial control systems (ICS), in particular, were not designed with cybersecurity in mind. Historically, they weren’t even connected to the internet or the IT network, but this is no longer the case. Automation and connectivity are essential for today’s industrial companies to thrive but this has also made them more vulnerable to attacks.


The more automation you introduce into your systems, the more you need to protect them. Along with other industries, you may potentially start to see a much stronger emphasis on cybersecurity.
Bloomberg Cybersecurity


Adding to the problem is a shortage of trained security staff to monitor the large volumes of data generated across the network, a gap that inevitably leaves a plant’s operations even more vulnerable.

Fight the vulnerabilities that ICS automation causes with security automation

To close the vulnerability gap, industrial companies can fight fire with fire by embracing security automation. Extending automation tools beyond the industrial operations and into a plant’s security operations center can reduce the risk of a cyber attack. Security automation arms security teams with information to quickly identify threats so human analysts can act before a potential threat causes undue harm.

At Respond Software, we’re helping companies realize the power of automation with a new category of software called Robotic Decision Automation (RDA) for security operations. By augmenting their teams with a “virtual analyst” called the Respond Analyst, security organizations can quickly automate frontline security operations (monitoring and triage). Only the incidents with the highest probability of being malicious and actionable are escalated to human analysts for further investigation and response.

We believe that by combining human expertise with decision automation, industrial organizations can reduce their vulnerability risk profile. The Respond Analyst can do the heavy lifting of covering the deluge of data generated each day, freeing human analysts to focus on the creative work of remediating and containing threats faster.

There’s no question that industrial companies will continue to be targeted by bad actors. But now, with front-line security automation, these organizations can also proactively safeguard operations against threats.

Be fire with fire.
W.S.

Read more:
3 Trends That Make Automation a Must for Securing Industrial Control Systems

The Science of Detection Part 3: A Closer Look at the “Detectors” You Rely on When Hunting for Evidence

This is the third blog in my science of detection series. In the previous parts, we examined the key elements of a data source and considered integrated reasoning. Today, I’ll be taking a closer look at the signal quality we get from the various “detectors” that we use to find malicious activities in our environment.

Be sure to check back in the coming weeks to see the next blogs in this series. In part four, I’ll be talking about architectural approaches to detection, and looking at how we collect and aggregate information so that it’s useful to our security programs. I’ll be making some predictions about the progress we’ll see in this area in the future, because I think the old way of doing things has reached a dead end.

Security analysts have many different information sources—“detectors”—to consider when making decisions about whether or not they see malicious activity taking place in their environment. Each detector has a purpose, and each contributes some degree of differential value to the ultimate decision, but only a few of them were specifically designed for security applications. That complicates things.

What’s interesting about these information sources is that each must be interpreted and analyzed in a different way in order to assemble enough information to get a truly comprehensive picture of what’s taking place in the environment. They also operate at different levels of abstraction (for example, signatures are much more abstract than raw data), which means that a key task in analyzing any attack is assembling a corroborative summary using as many diverse information sources as possible.

Assembling such a summary involves multidimensional analysis. It’s tremendously important that we bring the latest advances in analytical reasoning and mathematical and scientific research to bear on our security programs and how we leverage information within them.

With this in mind, let’s talk about the information sources we use, explain their most common applications, and put them into context.

Raw Data

Network packets are all the communications that transit your network. Very often they’re encrypted. The highest-end security programs might include complete packet capture, but that gets very expensive quickly. Packet capture offers the highest-fidelity but most diluted set of information for incident detection. A short-term packet capture solution (one that holds data for 30-60 days) often ends up being of little use forensically, because incidents are most often detected later in their lifecycle. The next best thing to complete packet capture is probably a combination of NetFlow and network security sensors.

Logs, at their most basic, are just records of system or user activity. Some of them are relevant for security detection purposes, but most are not. Historically speaking, logs were usually written to manage application and system problems, and they tend to be highly inconsistent in their content, their format, and their usefulness for security.

When a specific security control is violated, or an attempt to violate it is made, a log event is generated. There’s always some chance that the activity is malicious in nature. How big is this chance? Well, it’s different for every single log message and log source. This makes the aggregation and timeline of logs more important than any single log event when it comes to inferring or understanding malicious activity.

This is why we use rules. Rules help us interpret and contextualize logs, and thus slightly improve their utility for detection purposes.

The problem is: how many failed logins does it take before you know you have a hijacked account instead of a forgetful user? How different is the number of failed logins it would take to raise our suspicion on a Monday morning from what it’d take on a Wednesday afternoon? Sometimes we do see security avoidance behaviors in logs (for instance, clearing them), but user mistakes can and do explain these things most often, and it’s hard to know when to dig in.

Meta-Data

Network flow data show the connection details and the amount of data transferred between hosts on your network (and out to the Internet). They’re like the network equivalent of monitoring who’s calling whose cell phone within a criminal syndicate. Network graph analysis and visualization are useful approaches to understanding NetFlow data.
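The phone-records view described above can be pictured as a simple aggregation sketch. This is illustrative only; the record fields (`src`, `dst`, `bytes`) are assumed for the example and don’t reflect any particular NetFlow schema.

```python
from collections import Counter

def flow_graph(flows):
    """Aggregate bytes transferred per (src, dst) pair: who talks to whom, and how much."""
    edges = Counter()
    for f in flows:
        edges[(f["src"], f["dst"])] += f["bytes"]
    return edges

# Invented sample records for illustration
flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 1200},
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 800},
    {"src": "10.0.0.7", "dst": "10.0.0.5", "bytes": 300},
]
graph = flow_graph(flows)
print(graph[("10.0.0.5", "203.0.113.9")])  # 2000
```

The resulting edge weights are exactly the kind of structure that graph analysis and visualization tools consume.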

Indicators (of malicious or suspicious activity)

Signatures of known attacks and other indicators of malicious code may be detected through sensors when monitoring network communications. These are short, hexadecimal character sequences known to be contained within attack payloads. To ensure a match whenever an attack occurs, signatures are written loosely; even one crafted around a highly specific sequence of bytes rarely accounts for non-malicious occurrences of the same sequence in a data stream, so signatures produce a large number of false alerts. There are currently over 57,000 IDS signatures in existence; only a tiny subset of these are relevant at any given moment in time. This produces a high volume of false or nuanced alerts, further obscuring valuable detection signals. Signatures benefit from being analyzed by machines rather than humans because of the depth of analysis needed to separate out the relevant information. It’s also very important to consider where and how you place sensors, because their value is directly related to their visibility.
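At its core, signature matching is a search for known byte sequences in a payload. The toy sketch below makes that concrete; the signature names and hex patterns are invented, and real IDS engines are far more sophisticated.

```python
# Hypothetical signature library: name -> byte pattern (patterns are made up)
SIGNATURES = {
    "example-shellcode-nop-sled": bytes.fromhex("90909090"),
    "example-exploit-marker": bytes.fromhex("deadbeef"),
}

def match_signatures(payload: bytes):
    """Return the names of all signatures whose byte pattern appears in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

benign = b"GET /index.html HTTP/1.1\r\n"
suspicious = b"\x00\x90\x90\x90\x90\x00"
print(match_signatures(benign))      # []
print(match_signatures(suspicious))  # ['example-shellcode-nop-sled']
```

Note that nothing stops these same byte sequences from appearing in perfectly legitimate traffic, which is exactly why loose signatures generate so many false alerts.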

Threat intelligence is another indicator. Yes, it also suffers from a volume problem, and its volume problem is almost as bad as that of network security sensors. Threat intelligence lists try not to omit potential malicious attacks and thus produce a high volume of alerts, which are hard for humans to analyze. Threat intelligence includes lists of IP addresses, domains and known bad file hashes. I consider known good file hashes to be valuable intelligence, too. Once again, combinations of threat indicators offer much higher fidelity as evidence of real threat activity.

Heuristics are behavioral indicators. For example, an alert might be generated when a piece of software takes an action that’s not normal for that software, such as spawning an additional process outside of user-approved space. Heuristics are a library of past incident observations, and as such, are completely historically focused. Although it’s valuable not to fall for the same thing twice, these tend to have a short lifespan when it comes to high accuracy.

First Order Processing

Rules follow a predictable structure (Activity — Threshold — Context — Action) to identify known suspicious activity. Known suspicious activities are described using Boolean logic or nested searches, a threshold is set, and if this is reached or crossed, a notification is sent to a monitoring channel for human evaluation.
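The Activity — Threshold — Context — Action structure can be sketched in a few lines. This is a minimal illustration using the failed-login scenario; the event field names and the threshold value are assumptions chosen for the example.

```python
from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 10  # assumed value; tuned per environment in practice

def evaluate_rule(events, threshold=FAILED_LOGIN_THRESHOLD):
    """Return accounts whose failed-login count reached or crossed the threshold."""
    counts = defaultdict(int)
    for event in events:
        # Activity: Boolean logic matching the known suspicious activity
        if event["type"] == "login" and event["outcome"] == "failure":
            counts[event["account"]] += 1

    notifications = []
    for account, n in counts.items():
        # Threshold: fire only when the count is reached or crossed
        if n >= threshold:
            # Context + Action: attach detail and send to a monitoring
            # channel for human evaluation
            notifications.append({"account": account, "failed_logins": n})
    return notifications

events = [{"type": "login", "outcome": "failure", "account": "alice"}] * 12
print(evaluate_rule(events))  # [{'account': 'alice', 'failed_logins': 12}]
```

The hard part, as discussed below, is not the mechanism but choosing the threshold.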

At the most atomic level, there are fewer than 130 rules in regular use. In fact, in most organizations fewer than 45 are implemented. Rules are most valuable when they’re used to enforce logic that’s specific to your company’s unique business challenges, such as possible fraud scenarios.

Context—additional information about the entities being investigated and the relationship between them—can help you answer questions about the potential impact of attacks in progress and your vulnerability to them. It’s a key component in initial processing.

Statistics and metrics are important in guiding your operations: self-reflection and dispassionate measurement are critical to the effective application of detection science. You can measure attributes like coverage and performance, or calculate cost- or time-per-detection by data source and use this information to guide you in deploying your sensor architecture. Statistical analysis can be a powerful tool for uncovering attackers’ latest stealth techniques. Any activity that’s too close to the center of a normal bell curve might be hiding something in the noise—says the ever-suspicious security investigator.
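A minimal sketch of this kind of dispassionate measurement: score each host’s daily event count by how far it sits from the population mean. The sample data and the z-score cutoff are invented for illustration.

```python
import statistics

def z_scores(counts):
    """Map each host to its z-score relative to the population of counts."""
    mean = statistics.mean(counts.values())
    stdev = statistics.pstdev(counts.values())
    return {host: (n - mean) / stdev for host, n in counts.items()}

def outliers(counts, cutoff=1.0):
    """Hosts whose activity sits unusually far from the center of the curve."""
    return [host for host, z in z_scores(counts).items() if abs(z) > cutoff]

daily_counts = {"hostA": 100, "hostB": 102, "hostC": 340}
print(outliers(daily_counts))  # ['hostC']
```

The same machinery can be inverted, per the ever-suspicious investigator’s remark: activity that hugs the center of the curve too tightly may be deliberately hiding in the noise.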

Second Order Processing

Behaviors, patterns, and baselines are very commonly used to measure and score users’ stealthy or suspicious behaviors. The goal is to identify the users who either pose an insider threat or whose machines have been compromised by malicious code. Maintaining a library of first-order information that you’ve collected over time and conducting periodic calculations against it can help you pinpoint things that might be suspicious. “Repeat offender” is a catchphrase for a reason.
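One way to sketch the “repeat offender” idea: keep a running library of per-user observations and periodically surface users who keep showing up. The class, data, and thresholds here are all invented for illustration.

```python
from collections import defaultdict

class Baseline:
    """Library of first-order observations, scored periodically."""

    def __init__(self):
        self.history = defaultdict(list)  # user -> suspicious-event count per period

    def record_period(self, counts):
        """Append one period's suspicious-event counts to the library."""
        for user, n in counts.items():
            self.history[user].append(n)

    def repeat_offenders(self, periods=2, min_count=1):
        """Users with suspicious activity in each of the last N periods."""
        return [user for user, h in self.history.items()
                if len(h) >= periods and all(n >= min_count for n in h[-periods:])]

b = Baseline()
b.record_period({"mallory": 3, "bob": 0})
b.record_period({"mallory": 2, "bob": 1})
print(b.repeat_offenders())  # ['mallory']
```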

Nth Order Processing

Anomalies, clusters, affinity groups, and network graphs can reveal some very nuanced attacks. Running advanced algorithms across large amounts of data can yield interesting results.

A common fallacy is that anomalies are more likely to be malicious. That’s simply not true. The way our networks are interconnected today makes for all sorts of anomalies in all layers of the technology stack. These present investigators with the same sort of analytical puzzle as network security signatures do.

Some of these algorithms have well-understood security applications. One example is clustering: when you cluster IDS data, what you find most often are false positives, because they occur in highly predictable ways. When a particular signature generates alerts for what’s actually regular business traffic, the same alert will be triggered every time that business process takes place. It thus produces a very obvious cluster that you can exclude when looking for malicious activity.
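The clustering observation above can be sketched very simply: alerts triggered by a recurring business process form large, highly regular clusters that can be set aside. The alert tuples and the cluster-size cutoff below are illustrative assumptions, not output from any real IDS.

```python
from collections import Counter

def frequent_clusters(alerts, min_size=3):
    """Identify (signature, src, dst) combinations that recur often enough
    to look like a repeating business process rather than an attack."""
    clusters = Counter((a["signature"], a["src"], a["dst"]) for a in alerts)
    return {key for key, n in clusters.items() if n >= min_size}

alerts = (
    [{"signature": "SQLi-generic", "src": "10.0.0.8", "dst": "10.0.0.20"}] * 5
    + [{"signature": "SQLi-generic", "src": "198.51.100.4", "dst": "10.0.0.20"}]
)
noisy = frequent_clusters(alerts)
remaining = [a for a in alerts
             if (a["signature"], a["src"], a["dst"]) not in noisy]
print(len(remaining))  # 1
```

After excluding the obvious cluster, the one-off alert from the unfamiliar external source is what’s left to investigate.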

The more information known to be unimportant that we can remove, the more clearly we can see what else is going on. This is where analytical detection comes into its own. Very often, we run algorithms on security data simply to see if a subject matter expert can interpret the outcome. Possessing both domain expertise and knowledge of data science is critical if you want to understand what advanced algorithms are telling you.

Visualization and hunting are nth order processing tasks. Using tools that allow you to pivot and display related datasets is the ultimate form of security threat hunting, and it’s also the most fun. You can derive some detection value from considering any layer of detectors through the lens of a visual tool.

Do you think I’m about to tell you there’s another layer called “artificial intelligence”? If so, you’re wrong.

The next layer is simply making a decision: has something malicious occurred? The more information we have to feed into the decision-making process, the more effective and deeper the decision will be. All of the information sources listed above have something of value to contribute.

But you have to ask yourself: how many of these factors can analysts consider in real time as they watch events streaming across a console?

If you’d like to make it possible for your security operations team to incorporate input from a greater variety of detectors and information sources into their decision-making processes and workflows, consider adding the Respond Analyst to your team. Built to integrate with a broad array of today’s most popular sensors, platforms and solutions, the Respond Analyst brings specialized threat intelligence and detailed local contextual information to bear on every decision it makes about which events to escalate. Quite simply, it’ll give your ability to interpret and analyze detection data a boost—and allow your analysts to consider a far wider variety of sources.

To learn more about how the Respond Analyst can help your business become more thorough and derive greater insight from the detectors in your environment, contact us to schedule a demo today.

3 Top Cybersecurity Trends for Channel Partners to Watch

We all know the next big IT shift towards AI and intelligent automation is on the horizon. Over the last few years, vendors and press have focused on the human-to-machine automation transformation. Many vendors promise solutions—but often those solutions are complex and not optimized for the channel.

The good news is that cybersecurity is primed and ready for automation now. But the question for Partners remains: How can VARs, Integrators, and MSSPs find the right solution that provides true human-to-machine technology to simplify life for their customers?

Here are 3 cybersecurity trends driving the industry towards automation and 1 simple recommendation that Channel Partners can leverage to get ahead of the game immediately:

Trend 1: Traditional console monitoring is ineffective

Security teams are spending too much time monitoring alerts that provide little value for their efforts. Sifting through endless alerts with a high percentage of false positives is ineffective at best. It burns out analysts and puts us in a continuous cycle of hiring and training new ones. The analysts interviewed for the Voice of the Analyst (VOA) Survey help to inform us on where analyst time is better spent and what activities we should automate first. Automating workflow to increase analyst efficiency is important, but automating level 1 alert monitoring itself? That’s downright disruptive.

Cyentia Institute: Voice of the Analyst Survey, October 2017

Figure 1: We asked analysts to score their daily activities on a number of dimensions. One key finding is that analysts spend the most time monitoring, but it provides low value in finding malicious and actionable security threats. (Download VOA Survey here)

Trend 2: People shortage

Most security teams don’t complain about a lack of tools. They complain about a lack of people. Whether the budget won’t allow it or skilled resources are in too high demand to find (or retain), we’ve reached a point where supply has been outstripped by demand. What choice do we have? Leverage the power of machines to augment our security teams. This is finally possible with the advent of decision-automation tools that can off-load the task of console monitoring.

Bitdefender: CISOs’ Toughest Dilemma: Prevention Is Faulty, yet Investigation Is a Burden, April 2018

Figure 2. People shortage is a significant trend in our industry, forcing us to re-think how we’ll actively monitor our environments.

Trend 3: Too many tools

“Too many tools” is a regular complaint in organizations. Did you know most large organizations have on average 75+ security tools? Small organizations are not far behind. It’s all we can do to deploy these necessary security tools and maintain them, let alone review the endless alerts that they generate. What’s even more challenging is that we’ve seen an industry trend toward platform-based tools (e.g. SIEM or SOAR) that require engineering resources with the expertise to build and maintain platform content such as correlation rules and playbooks. Many organizations are overwhelmed by this task. In contrast, tools with expertise built in, intelligent applications if you will, are what’s needed, and they will change the way we think about platforms going forward.

Momentum Cyber February 2017 CYBERscape

Figure 3. Most organizations have dozens of tools to deploy and maintain.

An industry transformation is underway: Automation will disrupt the way cybersecurity is performed

We think 2019 will be the year of automation for cybersecurity. Customers will require automation to address the top 3 trends. They need to scale with the growing number of alerts and the increased complexity of monitoring today’s hybrid environments. Adding more people is not the answer. Finding ways to automate to off-load cumbersome tasks typically performed by humans is the answer.

This presents exciting new revenue opportunities for Channel Partners and also explains why we are experiencing increased momentum with VARs, Integrators, and even MSSPs. Respond Software is at the forefront of the industry transformation—applying machines to roles traditionally executed by humans.

One simple recommendation to gain a competitive advantage: the Respond Analyst

The Respond Analyst software is a scalable, plug-and-play “virtual analyst” that perfectly complements any security detection tool sale: Channel partners can increase revenue by providing both the tools and the Respond Analyst to monitor them.

This provides a unique selling opportunity for our Partners. Partnering with Respond Software gives customers—especially the mid-size enterprise ($50M-$1B revenue)—simple solutions with fast results. Partners can also take advantage of recurring revenue, fast installations, and the potential to increase opportunities to sell more sensors.

To all of our potential partners: Please reach out if you’re interested in learning more about our solution and our partner program by registering at our partner page. Here’s an opportunity to bring new value to your customers and join us on our journey to bring automated security monitoring to the world.

For more information, read the Global Channel Partner Program Press Release

Must-Attend December 2018 Information Security Events & Webinars

Security Geek is back with the top recommendations for upcoming cybersecurity events in December! I picked these events and conferences because they provide a wealth of information, knowledge, and learning materials to help your security team improve its efficiency and effectiveness in defending your environment.

Here are the top shows to attend:

DataConnectors: December 5, 2018 | Dallas, TX

DataConnectors: December 6, 2018 | Washington, D.C.

DataConnectors: December 12, 2018 | Chicago, IL

DataConnectors: December 13, 2018 | Fort Lauderdale, FL

The Dallas, D.C., Chicago & Fort Lauderdale Cyber Security Conferences feature 40-60 vendor exhibits and 8-12 educational speaker sessions discussing current cybersecurity issues such as cloud security, email security, VoIP, LAN security, wireless security & more. Meet with industry veterans and learn about emerging cybersecurity technologies.

My favorite part about the DataConnectors events – they’re free!


Cloud Security Conference: December 10-12, 2018 | Orlando, FL
The Cloud Security Alliance event welcomes world-leading security experts and cloud providers to discuss global governance, the latest trends in technology, the threat landscape, security innovations and best practices in order to help organizations address the new frontiers in cloud security.

IANS: December 12, 2018 | Webinar

In this webinar, IANS Research Director Bill Brenner and IANS Faculty Member Dave Shackleford look back at the biggest security news trends of 2018, what made them significant and what it all could mean for the year ahead.


Carbon Black: December 19, 2018 | Webinar

Learn how CB Defense, a real-time security operations solution, enables organizations to ask questions on all endpoints and take action to remediate attacks in real-time.

To stay up-to-date on where the Respond Software team is heading, check out our events calendar! The subject matter experts and industry professionals at Respond are always in attendance and ready to share their knowledge and expertise!

November Information Security Events You Don’t Want to Miss

Your favorite Security Geek is back with some great news – a list of upcoming cybersecurity shows and conferences you need to be aware of during the month of November!

There are numerous information security events happening every month, and sometimes it can be difficult to determine which ones provide value and which are a waste of time. This is where we can help you out.

We’ve outlined a few of the top shows you should be looking at below!

FS-ISAC Summit: Nov 11-14 | Chicago, IL

Are you in the financial services industry? Well, then this is the show for you!

As Partners in the Information Security community, we have all been challenged in 2018 by an onslaught of DDoS and phishing campaigns whose payloads have included credential-stealing malware, destructive malware, and ransomware. These challenges are expanding the responsibilities placed upon us as security professionals and requiring us to ensure we are following best practices.

The FS-ISAC conferences provide information and best practices on how cybersecurity teams in banking and financial institutions can help protect their networks.

DataConnectors: Nov 15, 2018 | Nashville, TN
DataConnectors: Nov 29, 2018 | Phoenix, AZ

The Nashville and Phoenix Cyber Security Conferences feature 40-60 vendor exhibits and 8-12 educational speaker sessions discussing current cyber-security issues such as cloud security, email security, VoIP, LAN security, wireless security & more.

The best part of the DataConnectors events – they’re free! Meet with industry veterans and learn about emerging cybersecurity technologies.

Cyber Security & Cloud Expo 2018: Nov 28 – 29, 2018 | Santa Clara, CA

The Cyber Security & Cloud Expo North America 2018 will host two days of top-level discussion around cybersecurity and cloud, and the impact they are having on industries including government, energy, financial services, healthcare and more. Chris Calvert, Co-Founder and VP of Product Strategy at Respond Software, will discuss the current state of security operations and emerging trends that are changing how teams operate.


Cyber Security Summit: November 29, 2018 | Los Angeles, CA

The annual Cyber Security Summit: Los Angeles connects C-Suite & Senior Executives responsible for protecting their companies’ critical infrastructures with innovative solution providers and renowned information security expertise.

Each one of these conferences provides a wealth of information, knowledge and learning material to help your security team improve its efficiency and effectiveness in cyber threat hunting. To stay up-to-date on where the Respond Software team is heading, check out our events calendar! The subject matter experts and industry professionals at Respond are always in attendance and ready to share their expertise!

Cybersecurity is Complicated, Here’s a Little Help

If you’re like me, continuously listening to webinars & podcasts to broaden your knowledge of the security industry, emerging trends, and new threats – you’re always looking for reliable, thought-provoking sources to learn and educate yourself.

I guess you could call me a “Security Geek”!

I have always found podcasts to be a phenomenal resource for learning about industry trends and the products or services that are revolutionizing how teams operate. Not only do you get a chance to listen to subject matter experts and thought-leaders talk about their industry knowledge, but you also learn about an application’s benefits and the value it brings to solving everyday challenges.

The best part, they are free learning-sessions from industry experts on new trends and applications you and your team can utilize!

Below are the top 4 podcast channels I frequently visit each week to stay updated on the cybersecurity industry, trends and useful advice – including our new Respond Software podcast.

  1. The Risky Business podcast, hosted by award-winning journalist Patrick Gray, features news and in-depth commentary from security industry luminaries. Risky Biz is a phenomenal source for staying updated weekly on the latest cybersecurity news and trends.
  2. The Unsupervised Learning Podcast series, hosted by cybersecurity professional Daniel Miessler, discusses current cybersecurity news, emerging technologies, and provides opinions and advice on the latest trends in security.
  3. The Defensive Security podcast, hosted by Jerry Bell and Andrew Kalat, provides a fun take on recent security news. One of the intriguing aspects of their podcast is that they offer feedback and advice on what businesses can do to keep their networks secure. Their perspective on best practices is fascinating.
  4. The Respond Software podcast series covers a wide range of topics and issues – providing a fantastic way to learn about emerging threats and trends, challenges in security operations, and opinions from industry experts. One of its primary focuses is the role of humans and technology in the cybersecurity space. The series also features prominent industry leaders like Raffy Marty, VP of Corporate Strategy at Forcepoint. In a recent podcast, Raffy discusses the cybersecurity challenges that exist today, what technologies can help improve existing processes, and how cybersecurity has changed over the years.

By listening to these podcasts, I have learned a tremendous amount about the cybersecurity industry, trends, threats and new technology that revolutionizes how teams operate.

If you’re waiting for our next podcast to be released and want to learn more about the cybersecurity industry and discover how Respond Analyst can help your team – register for our upcoming webinar on the new Respond Analyst Web Filter Module on November 7th! You will learn how real-time analysis and triage of web filter data, during network and endpoint analysis, gives security teams an edge in reducing response times and limiting the impact of some of the most stealthy attacks!

A new tool for defenders – Real-time analysis of Web Proxy data

When I got back into the office after taking a short break to recharge my batteries, I was really excited to speak with my colleagues at Respond Software about the upcoming release of our web filtering model for the Respond Analyst. You see, over the last few months we’ve been working tirelessly to build a way to analyze web filtering event data in real-time. Now that I’m sitting down to write this blog, the fruit of all the hard work our team has put into making this a reality is really sinking in. We’ve done it! It’s now available as part of the Respond Analyst!

This was no small feat, as most of you in the security operations world would know.

You may ask why we chose to take this challenge on. The answer is quite simple: there is a ton of valuable information in web filtering data, and it’s extremely difficult for security teams to analyze these events in real-time due to the sheer volume of data generated by enterprises. What a perfect opportunity for us to show off the Respond Analyst’s intelligence and capability.

Up until now, security operations and IR teams have pivoted to using web filtering data for investigations once they’ve already been alerted to an attack through threat hunting or some other form of detection.  Processing all of the web filtering data for an organization in a SIEM or similar has just been way too expensive to do. In fact, most organizations can’t even afford to store this data for a “reasonable” amount of time for investigators to dig through.

Think about it for a second: each web page visited can generate a number of new web requests to pull back content from different sources. Then picture each employee using the internet for most of the day, navigating the web through their day-to-day tasks plus a few personal items between meetings; it all amounts to hundreds of web page visits per employee each day. If you have a few hundred employees, the volume of data generated by the web filtering solution quickly becomes unmanageable. Well, now we’re able to process all of these events in real-time.

Consider the analysis you can perform on the data without even taking the assigned web filtering category into account…

  • Analyze each component of the HTTP header
  • Perform user agent analysis
  • Take a look at how suspicious the requested domain is
  • Perform URL string comparisons to all other requests over an extended period of time
  • Compare each attribute to information you’ve gathered in your threat intel database

But why stop there…

  • What about looking at whether the pattern of behavior across a set of requests is indicative of exploit kit delivery?
  • Maybe you suspect that these requests are related to command-and-control activity
  • What about the upload of documents to a filesharing service, is that data exfiltration or simply everyday user activity?
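A few of the per-request checks above can be sketched as simple heuristics over a web-filter log entry. Everything here is an assumption made for illustration: the field names, the “rare user agent” watchlist, the suspicious-TLD list, and the URL-length cutoff are invented, and none of this reflects how the Respond Analyst actually scores events.

```python
# Hypothetical watchlists and cutoffs, invented for this sketch
RARE_USER_AGENTS = {"python-requests/2.0", "curl/7.1"}
SUSPICIOUS_TLDS = {".zip", ".top"}
LONG_URL_CUTOFF = 200

def score_request(entry):
    """Return the list of reasons a single web request looks suspicious."""
    reasons = []
    if entry["user_agent"] in RARE_USER_AGENTS:
        reasons.append("rare user agent")
    if any(entry["domain"].endswith(tld) for tld in SUSPICIOUS_TLDS):
        reasons.append("suspicious domain")
    if len(entry["url"]) > LONG_URL_CUTOFF:
        reasons.append("unusually long URL")
    return reasons

entry = {"user_agent": "curl/7.1", "domain": "updates.example.top",
         "url": "/a?q=" + "x" * 250}
print(score_request(entry))
# ['rare user agent', 'suspicious domain', 'unusually long URL']
```

No single one of these checks is a verdict on its own; the value comes from combining many weak signals across the full request volume, which is exactly the integrated reasoning discussed next.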

Web filtering data can also leverage the power of integrated reasoning.  When web filtering data is combined with IDS/IPS sensors, Anti-malware technology and contextual sources like vulnerability data and critical asset lists, you are able to form an objective view of your enterprise’s threat landscape.  Beyond the analysis of each of these data sources, the Respond Analyst accurately scopes all events related to the same security incident together for a comprehensive incident overview.  The Respond Analyst then assigns an appropriate priority to that incident and documents all the details of the situation and presents this information to you.  This is, by far, the most efficient way to reduce attacker dwell time.

We have a long way to go and many more exciting Respond Analyst skills & capabilities on the way. I couldn’t be prouder of all the work we’ve achieved and the release of our Web Filtering model.

Way to go Respond team!

4 Reasons Your MSSP Might Not Be Providing Dependable Security Monitoring

Unless your goal with your Managed Security Service Provider (MSSP) is simply to check your audit-requirement box, you are likely not getting the dependable security monitoring you are looking for.

Reason #1 – One Size Doesn’t Fit All
The first reason is the general “one size fits all/most” model that MSSPs are forced to work in so they can make a profit. My introduction to this model goes back to when I started in cybersecurity and worked for a large Tier-1 MSSP. We applied “recommended signature sets,” billed as a way to provide higher-fidelity alerting, but that framing was somewhat of a self-serving tale told by MSSPs to justify the event funnel, where events are filtered out and never presented to an analyst for analysis. While this keeps super-noisy signatures from coming to the console (who would have the time to weed through them to find the needle in that haystack?), it also creates a significant visibility gap. The event funnel also helped keep our SIEM from tipping over.

Filtering is something we as an industry have unfortunately come to accept as the solution to address the exponential problem of data growth and lack of skilled analysts. This is mainly due to technology and human limitations. This is where expert systems, AI and ML can be a big help.

Reason #2 – False Positive Headaches
How many times have you been woken up at 2:00 AM by your MSSP for an escalation that turned out to be a false positive? Consider how many hours you have spent chasing down an escalation that was nothing. When an escalation comes in from your MSSP, do you jump right up knowing there is a very high probability it is malicious and actionable, or do you finish your lunch believing it will likely be another waste of time? Chasing down false positives is not only a drain on time and resources, but also an emotional drain for security incident responders. People want to do work that adds value; expending cycles only to find out it was a waste of time is disappointing. I have yet to come across any organization that is OK with the level of false escalations from their MSSP.

Reason #3 – Generic Analysis
The third reason your MSSP might not be providing the value you need is that its analysts are not focused solely on your business. With a typical MSSP, you get a general set of SIEM intrusion detection content (e.g. correlation rules, queries) built to address a generalized set of use cases that can apply to most, if not all, customers. If you want custom detection content, your only option has generally been to pay for a managed SIEM dedicated to you. You may be sending logs from a set of data sources to your MSSP, but do they have the proper detection content to evaluate those logs? In my years of SOC consulting, I have had an insider view of some of the detection content used by MSSPs, and my impression was that it was generalized and basic. There was no cross-telemetry correlation to speak of, and very little content that could be considered advanced or line-of-business focused. Without this level of visibility, I question how dependable the analysis results will be.

Reason #4 – Tribal Knowledge
The challenge of knowing all the subtle nuances of your enterprise is something an MSSP will never overcome. Understanding account types and which assets are more critical than others is unique to each enterprise, and this information changes over time. How is an outsider who may have dozens or even several hundred other customers supposed to know the nuances of your users, systems, or specific business practices? There is a myriad of unwritten knowledge necessary to effectively monitor and accurately decide which security events are worth escalating for response, and MSSPs often do not have the company context to make good decisions for their customers.

If you are outsourcing your security monitoring or considering it to reduce cost or add capacity, take a look at Respond Analyst. You can manage your own Security Monitoring and Triage program with our pre-built expert decision system – no staffing required. Respond Analyst is like having your own team of Security Analysts working for you, 24×7 regardless of your company size or maturity.

Respond Software Named Top 25 CyberSecurity Innovators

As the new Product Marketing Manager at Respond Software, I knew when joining the team that they were doing outstanding work: simplifying the complexity of network security monitoring and triage and giving hope to small security teams working to defend their businesses.

The hard work and dedication from the team has been paying off!

We are proud to announce that Respond Software has been selected as one of the Top 25 CyberSecurity innovators by the Accenture Innovation Awards! The 25 leading innovations consist of a diverse batch of cutting-edge concepts, developed by pioneers across Accenture’s eight global themes. These innovations are reshaping our world and unlocking new value and benefits for all parties.

I tip my hat to the amazing product and engineering teams that have developed Respond Analyst to tackle some of the complexity in security operations.

Thank you, Accenture Innovation Awards for recognizing Respond Software as a top CyberSecurity innovator! We are excited to be a part of such an amazing and forward-thinking group!

When Currency is Time, Spend it Threat Hunting

“Time is what we want most, but what we use worst.”
– William Penn

How many valuable cybersecurity tasks have you put aside due to the pressures of time? Time is currency and we spend it every moment we’re protecting our enterprises.

When we are constantly tuning, supporting and maintaining our security controls or chasing down an alert from an MSSP, only to discover it’s yet another false positive, we spend precious currency. When we create new correlation logic in our SIEM or decide which signatures to tune down to lower the volume of events to make it more manageable for our security team, we spend precious currency. When we analyze events from a SIEM to determine if they’re malicious and actionable or if a SIEM rule needs additional refinement, we spend precious currency. When we hire and train new analysts to cover churn, then watch them leave for a new opportunity – we waste currency and the investment hurts.

You can spend your “currency” doing pretty much anything, which is a blessing and a curse. We can (and do) waste an inordinate amount of time going down rabbit holes chasing false positives. We are forced to make choices: do we push back a request while we investigate the MSSP escalations or do we delay an investigation to provide the service agility the enterprise requires?

Both options are important, and both need addressing, forcing us to make a choice. In our gut we think the escalation is another false positive, but as cybersecurity professionals, we wait for the sword of Damocles to fall. It’s only a matter of time before one of these escalations is related to the thing we worry about most in our environments. Either way, something gets delayed… hopefully just lunch.

Basing decisions on what we can neglect is reactive and unsustainable. It’s a matter of time until we choose to postpone the wrong thing.

We need to use our time more wisely.

Organizations need to spend precious “currency” on higher-value tasks, like threat hunting, that motivate their talent and provide value to the organization. But they also need to keep a hand on the wheel of the lower-value tasks that still require attention.

Organizations should implement automation tools to handle the lower-value, repetitive tasks such as high-volume network security monitoring. Generating and receiving alerts from your security controls is easy; making sense of them and determining whether they’re malicious and actionable is a different story. The decision to escalate events is typically inconsistent and depends heavily on the analyst making it. Factor in the time required to gather supporting evidence before each decision, then repeat that an additional 75 times an hour. As a defender, you don’t have enough “currency of time” to make consistent, highly accurate decisions. Tasking security analysts with monitoring high-noise, low-signal event feeds is a misallocation of time that only leads to a lack of job satisfaction and burnout.

There is another way.

Employing Respond Analyst is like adding a virtual team of expert, superhuman analysts, allowing your team to bring their talent and expertise to threat hunting. Adding Respond Analyst lets your talent focus on higher-value tasks and more engaging work, so you can combat analyst burnout, training drains, and churn.

Join our growing community! Subscribe to our newsletter, the "First Responder Notebook," delivered straight to your inbox.