What’s Old is New: How Old Math Underpins the Future of A.I. in Security Operations

Most of us engineers know the truth—A.I. is just old math wrapped in a pretty package. The neural network model underpinning today’s deep learning algorithms? Yep, you guessed it, it was first described in 1943!

For those of us in Security Operations, the underpinning mathematics of probability will lead us into the future. Probability theory will automate human analysis, making real-time decisions on streaming data.

Probabilistic modeling will fill the gaps that our SecOps teams deal with today: too much data and not enough time. We humans have a very difficult time monitoring a live streaming console of security events. We just can’t thread it all together with our limited knowledge, biases, and the small amount of time we have to interact with each new event.

Making instant decisions as data streams in real time is nearly impossible because we have:

    • too much information and data to process,
    • not enough meaning—we don’t understand what the data is telling us,
    • poor memories—we can’t remember what happened two hours ago, let alone days, weeks, or months before.

Enter Probability Theory

Watch my short video to learn how Probability Theory will fundamentally change the future of Security Operations by expanding our ability to analyze more data across our environments than ever before.

Click here to watch now.
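
To make the idea concrete, here is a minimal sketch (the event types, prior, and likelihoods are invented for illustration, not taken from any real model) of how Bayesian updating can fold each new event in a stream into a running probability that a host is compromised.

```python
# Minimal illustration of Bayesian updating over a stream of security events.
# All priors and likelihoods below are made-up numbers for illustration only.

def update(prior, p_event_given_malicious, p_event_given_benign):
    """Return P(malicious | event) given the prior and the event likelihoods."""
    numerator = p_event_given_malicious * prior
    denominator = numerator + p_event_given_benign * (1.0 - prior)
    return numerator / denominator

# Hypothetical likelihoods: how often each event type is seen on a
# compromised host vs. a benign one.
LIKELIHOODS = {
    "ids_signature_hit":   (0.30, 0.05),
    "failed_login_burst":  (0.40, 0.10),
    "new_outbound_domain": (0.25, 0.15),
}

def score_stream(events, prior=0.001):
    """Fold a stream of events into a running probability of compromise."""
    p = prior
    for event in events:
        p_mal, p_ben = LIKELIHOODS[event]
        p = update(p, p_mal, p_ben)
    return p

if __name__ == "__main__":
    stream = ["ids_signature_hit", "failed_login_burst", "new_outbound_domain"]
    print(f"P(compromise) after stream: {score_stream(stream):.4f}")
```

Each new event nudges the probability up or down, and the running score is what a machine can maintain across millions of events per day while a human console-watcher cannot.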

Jumping to a New Curve

In the business classic “The Innovator’s Dilemma,” author Clayton Christensen shows how jumping to a new productivity curve is difficult for incumbent leaders but valuable for new innovators. I think about this concept a lot for cybersecurity. The world has changed dramatically over the last 5-10 years, and the curve most enterprises are on results in lots of siloed detectors, rudimentary processing, people-centric processes, and high costs to maintain platforms. The solutions to these problems showed great promise in the beginning but still can’t provide the level of productivity necessary to keep up with advances by the adversary. Workflow automation helps, but not enough to address the “orders of magnitude” problem that exists. The scale is definitely tipped in favor of the attackers. So how do we think outside the box to help companies jump to that new productivity curve?

Helping Customers Jump to a New Curve of Productivity

Three years ago, we started on a mission to help security operations teams right the balance between attackers and defenders. We are on the front-lines to change the status quo and to bring in a new way of thinking to defend the enterprise.

At Respond Software, we strive to unlock the true potential of Man + Machine—without bankrupting security teams. We aim to elevate the human analysts/incident responders to do what they do best (be curious, think outside the box, proactively take action) and let the machines do what machines do best (consistently analyze huge amounts of data thoroughly and accurately based on hundreds of dimensions). In short, security teams can use modern processing and computing techniques to help jump to a new curve and better defend their enterprise.

Today, our product, the Respond Analyst, is fulfilling that mission for customers around the globe. In fact, over the last 30 days, our Robotic Decision Automation product actively monitored billions of live events, vetted those into tens of thousands of cases, and escalated (only!) hundreds of incidents to our customers’ incident responders. What’s more, our customers were able to give the Respond Analyst feedback on what they liked, what they didn’t like, and how to improve the results. They now have an analyst on their team that can plow through the alerts and apply expert judgment to group and prioritize them into incidents. This eliminates a huge amount of time wasted chasing false positives while freeing analysts to focus on threat hunting, deeper investigations, and proactive security measures. What a change for those teams!

New $20 Million Investment = More Status Quo Busting

To continue these efforts and to expand to meet increasing demand, we are pleased to announce our $20M Series B round of financing.  The round was led by new investor ClearSky Security, with additional investment from our existing investors, CRV and Foundation Capital.

We are extremely pleased to add ClearSky Security to our team. ClearSky’s depth of cybersecurity knowledge and experience—both personally amongst the partners and from backing successful companies such as Demisto and Cylance—will be invaluable as we look to establish our innovative robotic decision automation software in more security operations teams. On top of that, Jay Leek, current ClearSky Managing Director and former CISO at Blackstone, is joining our Board. See our press release (and the accompanying video) for more details and his perspective.

I’d also like to thank the entire group of Responders for the hard work and dedication that got us to where we are today. As I recently told the team, I’m certainly psyched to get the endorsement and funding from three world-class investors. Even more so, I look forward to using the funds to work with ClearSky to further innovate, provide service to customers, and expand our reach to help more security operations teams take the fight to the adversaries…and save money while they do it. It’s time for security operations to bust through the status quo and jump to a new curve of productivity, capability and job satisfaction.

It’s time for the next phase of Respond Software.

Watch and Read More:


Video:  Jay Leek shares his reasons for investing in Respond Software (on the way to the airport in an Uber)!

Press Release:  Respond Software Raises $20 Million to Meet Growing Demand for Robotic Decision Automation in Security Operations


Fight Fire with Fire:
How Security Automation Can Close the Vulnerability Gap Facing Industrial Operations

“Be stirring as the time; be fire with fire; threaten the threatener and outface the brow of bragging horror.”
William Shakespeare 1592

…or as Metallica once sang in 1984, Fight Fire with Fire!

There is a fire alight in our cyber world.  Threats are pervasive, the tech landscape is constantly changing, and now industrial companies are increasingly vulnerable with the advent of automation within their operations.  Last week a ransomware attack halted operations at Norsk Hydro ASA in both the U.S. and Europe, and just days later two U.S. chemical companies were also affected by a network security incident.

As manufacturing processes become increasingly complex and spread out around the world, more companies will have to navigate the risk of disruption from cyber attacks.
Bloomberg Cybersecurity

Industrial control systems (ICS), in particular, were not designed with cybersecurity in mind. Historically, they weren’t even connected to the internet or the IT network, but this is no longer the case. Automation and connectivity are essential for today’s industrial companies to thrive, but they have also made these companies more vulnerable to attack.

The more automation you introduce into your systems, the more you need to protect them. Along with other industries, you may potentially start to see a much stronger emphasis on cybersecurity.
Bloomberg Cybersecurity

Adding to the problem is a shortage of trained security staff to monitor the large volumes of data generated across the network, a gap that inevitably makes a plant’s operations even more vulnerable.

Fight the vulnerabilities that ICS automation causes with security automation

To close the vulnerability gap, industrial companies can fight fire with fire by embracing security automation. Extending automation tools beyond the industrial operations and into a plant’s security operations center can reduce the risk of a cyber attack. Security automation arms security teams with information to quickly identify threats so human analysts can act before a potential threat causes undue harm.

At Respond Software, we’re helping companies realize the power of automation with a new category of software called Robotic Decision Automation (RDA) for security operations. By augmenting their teams with a ‘virtual analyst’ called the Respond Analyst, security teams can quickly automate frontline security operations (monitoring and triage). Only the incidents with the highest probability of being malicious and actionable are escalated to human analysts for further investigation and response.

We believe that by combining human expertise with decision automation, industrial organizations can reduce their vulnerability risk profile. The Respond Analyst can do the heavy lifting to cover the deluge of data generated each day, while human analysts focus on the creative work of containing and remediating threats faster.

There’s no question that industrial companies will continue to be targeted by bad actors. But now, with front-line security automation, these organizations can also proactively safeguard operations against threats.

Be fire with fire.
W.S.

Read more:
3 Trends That Make Automation a Must for Securing Industrial Control Systems

The Science of Detection Part 3: A Closer Look at the “Detectors” You Rely on When Hunting for Evidence

This is the third blog in my science of detection series. In the previous parts, we examined the key elements of a data source and considered integrated reasoning. Today, I’ll be taking a closer look at the signal quality we get from the various “detectors” that we use to find malicious activities in our environment.

Be sure to check back in the coming weeks to see the next blogs in this series. In part four, I’ll be talking about architectural approaches to detection, and looking at how we collect and aggregate information so that it’s useful to our security programs. I’ll be making some predictions about the progress we’ll see in this area in the future, because I think the old way of doing things has reached a dead end.

Security analysts have many different information sources—“detectors”—to consider when making decisions about whether or not they see malicious activity taking place in their environment. Each detector has a purpose, and each contributes some degree of differential value to the ultimate decision, but only a few of them were specifically designed for security applications. That complicates things.

What’s interesting about these information sources is that each must be interpreted and analyzed in a different way in order to assemble enough information to get a truly comprehensive picture of what’s taking place in the environment. They also operate at different levels of abstraction (for example, signatures are much more abstract than raw data), which means that a key task in analyzing any attack is assembling a corroborative summary using as many diverse information sources as possible.

Assembling such a summary involves multidimensional analysis. It’s tremendously important that we bring the latest advances in analytical reasoning and mathematical and scientific research to bear on our security programs and how we leverage information within them.

With this in mind, let’s talk about the information sources we use, explain their most common applications, and put them into context.

Raw Data

Network packets are all the communications that transit your network. Very often they’re encrypted. The highest-end security programs might include complete packet capture, but that gets very expensive quickly. Packet capture offers the highest-fidelity but most diluted set of information for incident detection. A short-term packet capture solution (one that holds data for 30-60 days) often ends up being of little use forensically, because incidents are most often detected later in their lifecycle. The next best thing to complete packet capture is probably a combination of NetFlow and network security sensors.

Logs, at their most basic, are just records of system or user activity. Some of them are relevant for security detection purposes, but most are not. Historically speaking, logs were usually written to manage application and system problems, and they tend to be highly inconsistent in their content, their format, and their usefulness for security.

When a specific security control is violated, or an attempt to violate it is made, a log event is generated. There’s always some chance that the activity is malicious in nature. How big is this chance? Well, it’s different for every single log message and log source. This makes the aggregation and timeline of logs more important than any single log event when it comes to inferring or understanding malicious activity.

This is why we use rules. Rules help us interpret and contextualize logs, and thus slightly improve their utility for detection purposes.

The problem is: how many failed logins does it take before you know you have a hijacked account instead of a forgetful user? How different is the number of failed logins it would take to raise our suspicion on a Monday morning from what it’d take on a Wednesday afternoon? Sometimes we do see security avoidance behaviors in logs (for instance, clearing them), but user mistakes can and do explain these things most often, and it’s hard to know when to dig in.
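
As a thought experiment (all counts, rates, and the prior below are invented, not tuned from real data), that question can be framed probabilistically: compare how likely the observed burst of failures is under a “forgetful user” model versus a “hijacked account” model, conditioned on when it happens.

```python
import math

def poisson_pmf(k, lam):
    """P(observing k events) under a Poisson model with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hypothetical failure rates per hour: forgetful users fail more on Monday
# mornings (post-weekend password changes) than on midweek afternoons.
BENIGN_RATE = {"monday_morning": 6.0, "wednesday_afternoon": 1.5}
ATTACK_RATE = 20.0          # a brute-force attempt fails a lot, any time
PRIOR_ATTACK = 0.01         # assumed prior that any given burst is an attack

def p_hijacked(failed_logins, time_of_week):
    """Posterior probability the burst is an attack rather than a forgetful user."""
    p_benign = poisson_pmf(failed_logins, BENIGN_RATE[time_of_week])
    p_attack = poisson_pmf(failed_logins, ATTACK_RATE)
    num = p_attack * PRIOR_ATTACK
    return num / (num + p_benign * (1 - PRIOR_ATTACK))

if __name__ == "__main__":
    for when in ("monday_morning", "wednesday_afternoon"):
        print(when, round(p_hijacked(10, when), 3))
```

With these made-up rates, the same ten failed logins look routine on a Monday morning but highly suspicious on a Wednesday afternoon, which is exactly the kind of time-conditioned judgment a static rule cannot express.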

Meta-Data

Network flow data show the connection details and the amount of data transferred between hosts on your network (and out to the Internet). They’re like the network equivalent of monitoring who’s calling whose cell phone within a criminal syndicate. Network graph analysis and visualization are useful approaches to understanding NetFlow data.
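
As a small, self-contained sketch (the flow records are made up), this is the kind of graph view that makes flow data useful: build an adjacency of who talks to whom and how much, then surface hosts with an unusually wide fan-out or volume.

```python
from collections import defaultdict

# Hypothetical flow records: (source, destination, bytes_transferred)
flows = [
    ("10.0.0.5", "10.0.0.9",     12_000),
    ("10.0.0.5", "203.0.113.7",  80_000),
    ("10.0.0.5", "198.51.100.2", 95_000),
    ("10.0.0.8", "10.0.0.9",      4_000),
]

# Build a simple directed graph: source -> {peer: total bytes}
graph = defaultdict(lambda: defaultdict(int))
for src, dst, nbytes in flows:
    graph[src][dst] += nbytes

# "Who is calling whom, and how much": fan-out and volume per source host.
for src, peers in sorted(graph.items()):
    total = sum(peers.values())
    print(f"{src}: {len(peers)} peers, {total} bytes out")
    # A host suddenly talking to many new external peers, or moving far more
    # data than its baseline, is worth a closer look.
```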

Indicators (of malicious or suspicious activity)

Signatures of known attacks and other indicators of malicious code may be detected by sensors monitoring network communications. These are short hexadecimal character sequences known to be contained within attack payloads. To ensure a match when an attack occurs, signatures are written loosely; even when a highly specific sequence of bytes is targeted, they rarely account for non-malicious occurrences of the same sequence in a data stream, so they produce a large number of false alerts. There are currently over 57,000 IDS signatures in existence, and only a tiny subset of these is relevant at any given moment in time. This produces a high volume of false or nuanced alerts, further obscuring valuable detection signals. Signatures benefit from being analyzed by machines rather than humans because of the depth of analysis needed to separate out the relevant information. It’s also very important to consider where and how you place sensors, because their value is directly related to their visibility.
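
A toy example of why loose signatures generate false alerts (the byte pattern and payloads here are fabricated): the same sequence a signature looks for can legitimately appear in benign traffic.

```python
# Toy illustration of signature matching: the same byte sequence can appear
# in an exploit payload and in harmless data, producing a false positive.
SIGNATURE = bytes.fromhex("90909090")   # made-up "NOP sled"-style pattern

def matches(payload: bytes) -> bool:
    """A naive content match, the essence of a loosely written signature."""
    return SIGNATURE in payload

exploit_payload      = bytes.fromhex("9090909031c0")      # fabricated attack bytes
benign_firmware_blob = bytes.fromhex("ffff90909090ffff")  # padding bytes, not an attack

print(matches(exploit_payload))        # True  -> true positive
print(matches(benign_firmware_blob))   # True  -> false positive
```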

Threat intelligence is another indicator. Yes, it also suffers from a volume problem, and its volume problem is almost as bad as that of network security sensors. Threat intelligence lists try not to omit potentially malicious activity and thus produce a high volume of alerts, which are hard for humans to analyze. Threat intelligence includes lists of IP addresses, domains and known bad file hashes. I consider known good file hashes to be valuable intelligence, too. Once again, combinations of threat indicators offer much higher fidelity as evidence of real threat activity.

Heuristics are behavioral indicators. For example, an alert might be generated when a piece of software takes an action that’s not normal for that software, such as spawning an additional process outside of user-approved space. Heuristics are a library of past incident observations, and as such, are completely historically focused. Although it’s valuable not to fall for the same thing twice, these tend to have a short lifespan when it comes to high accuracy.

First Order Processing

Rules follow a predictable structure (Activity — Threshold — Context — Action) to identify known suspicious activity. Known suspicious activities are described using Boolean logic or nested searches, a threshold is set, and if this is reached or crossed, a notification is sent to a monitoring channel for human evaluation.
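
A rule in that Activity / Threshold / Context / Action shape can be sketched in a few lines. The field names, threshold value, and notify() target below are illustrative, not drawn from any particular product.

```python
# Sketch of the Activity -> Threshold -> Context -> Action rule structure.
from collections import Counter

THRESHOLD = 5  # arbitrary example value; real thresholds are tuned per environment

def matches_activity(event):
    """Activity: Boolean logic describing the known suspicious behavior."""
    return event["type"] == "auth" and event["outcome"] == "failure"

def notify(context):
    """Action: send the finding to a monitoring channel for human evaluation."""
    print(f"ALERT: {context['count']} failed logins from {context['src_ip']}")

def evaluate(events):
    failures = Counter(e["src_ip"] for e in events if matches_activity(e))
    for src_ip, count in failures.items():
        if count >= THRESHOLD:                            # Threshold
            context = {"src_ip": src_ip, "count": count}  # Context for the analyst
            notify(context)                               # Action

if __name__ == "__main__":
    sample = [{"type": "auth", "outcome": "failure", "src_ip": "10.0.0.7"}] * 6
    evaluate(sample)
```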

At the most atomic level, there are fewer than 130 rules in regular use. In fact, in most organizations fewer than 45 are implemented. Rules are most valuable when they’re used to enforce logic that’s specific to your company’s unique business challenges, such as possible fraud scenarios.

Context—additional information about the entities being investigated and the relationship between them—can help you answer questions about the potential impact of attacks in progress and your vulnerability to them. It’s a key component in initial processing.

Statistics and metrics are important in guiding your operations: self-reflection and dispassionate measurement are critical to the effective application of detection science. You can measure attributes like coverage and performance, or calculate cost- or time-per-detection by data source and use this information to guide you in deploying your sensor architecture. Statistical analysis can be a powerful tool for uncovering attackers’ latest stealth techniques. Any activity that’s too close to the center of a normal bell curve might be hiding something in the noise—says the ever-suspicious security investigator.

Second Order Processing

Behaviors, patterns, and baselines are commonly used to measure and score stealthy or suspicious user activity. The goal is to identify the users who either pose an insider threat or whose machines have been compromised by malicious code. Maintaining a library of first-order information that you’ve collected over time and conducting periodic calculations against it can help you pinpoint things that might be suspicious. “Repeat offender” is a catchphrase for a reason.
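
A minimal sketch of that idea (the histories and scores are invented): keep a rolling library of first-order observations per user, then score today’s activity against each user’s own baseline.

```python
import statistics

# Hypothetical first-order library: suspicious-event counts per user per day.
history = {
    "alice": [1, 0, 2, 1, 1, 0, 1],   # steady, low-level noise
    "bob":   [0, 1, 0, 0, 1, 0, 2],   # quiet baseline until today
}

def baseline_score(user, today_count):
    """How far today's activity sits above the user's own baseline (z-score)."""
    past = history[user]
    mean = statistics.mean(past)
    stdev = statistics.pstdev(past) or 1.0   # avoid divide-by-zero on flat baselines
    return (today_count - mean) / stdev

if __name__ == "__main__":
    for user, today in (("alice", 2), ("bob", 12)):
        print(user, round(baseline_score(user, today), 2))
```

Bob’s sudden spike scores far above his own history even though the raw count would look unremarkable on a busier account, which is why per-entity baselines beat global thresholds.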

Nth Order Processing

Anomalies, clusters, affinity groups, and network graphs can reveal some very nuanced attacks. Running advanced algorithms across large amounts of data can yield interesting results.

A common fallacy is that anomalies are more likely to be malicious. That’s simply not true. The way our networks are interconnected today makes for all sorts of anomalies in all layers of the technology stack. These provide investigators the same sort of analytical puzzle as network security signatures do.

Some of these algorithms have well-understood security applications. One example is clustering: when you cluster IDS data, what you find most often are false positives, because they occur in highly predictable ways. When a particular signature generates alerts for what’s actually regular business traffic, the same alert will be triggered every time that business process takes place. It thus produces a very obvious cluster that you can exclude when looking for malicious activity.
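
As a minimal illustration (with fabricated alerts), even a simple grouping by signature and traffic pattern makes those business-as-usual clusters visible so they can be reviewed and excluded:

```python
from collections import Counter

# Fabricated IDS alerts: (signature, source, destination)
alerts = [
    ("SQL injection attempt", "10.0.1.4", "10.0.2.10"),  # a nightly ETL job that
    ("SQL injection attempt", "10.0.1.4", "10.0.2.10"),  # trips the same signature
    ("SQL injection attempt", "10.0.1.4", "10.0.2.10"),  # every time it runs
    ("Suspicious PowerShell",  "10.0.3.7", "10.0.2.10"),
]

clusters = Counter(alerts)

# Big, repetitive clusters are usually a business process masquerading as an
# attack; the rare, unclustered alerts are where the interesting signal hides.
for (signature, src, dst), count in clusters.most_common():
    label = "likely benign cluster" if count >= 3 else "review individually"
    print(f"{count:3d}x {signature:25s} {src} -> {dst}  [{label}]")
```

Real clustering would work over many more features than this, but the principle is the same: predictable repetition is evidence of a business process, not an attacker.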

The more information known to be unimportant that we can remove, the more clearly we can see what else is going on. This is where analytical detection comes into its own. Very often, we run algorithms on security data simply to see if a subject matter expert can interpret the outcome. Possessing both domain expertise and knowledge of data science is critical if you want to understand what advanced algorithms are telling you.

Visualization and hunting are nth order processing tasks. Using tools that allow you to pivot and display related datasets is the ultimate form of security threat hunting, and it’s also the most fun. You can derive some detection value from considering any layer of detectors through the lens of a visual tool.

Do you think I’m about to tell you there’s another layer called “artificial intelligence”? If so, you’re wrong.

The next layer is simply making a decision: has something malicious occurred? The more information we have to feed into the decision-making process, the more effective and deeper the decision will be. All of the information sources listed above have something of value to contribute.
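
One simple way to picture that combination (the detectors and weights below are invented for illustration) is to treat each detector’s output as evidence and sum log-likelihood ratios into a single escalation decision.

```python
import math

# Invented log-likelihood-ratio weights per detector observation: positive
# values favor "malicious", negative values favor "benign".
EVIDENCE_WEIGHTS = {
    "ids_signature":        1.2,
    "threat_intel_match":   1.8,
    "odd_netflow_fanout":   0.9,
    "known_good_hash":     -2.5,
}

def decide(observations, prior_log_odds=-3.0, threshold=0.0):
    """Combine detector observations into one escalate / don't-escalate decision."""
    log_odds = prior_log_odds + sum(EVIDENCE_WEIGHTS[o] for o in observations)
    probability = 1.0 / (1.0 + math.exp(-log_odds))
    return probability, log_odds > threshold

if __name__ == "__main__":
    obs = ["ids_signature", "threat_intel_match", "odd_netflow_fanout"]
    p, escalate = decide(obs)
    print(f"P(malicious) ~ {p:.3f}, escalate: {escalate}")
```

No single detector in that list is decisive on its own; it is the corroboration across sources that pushes the score over the line.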

But you have to ask yourself: how many of these factors can analysts consider in real time as they watch events streaming across a console?

If you’d like to make it possible for your security operations team to incorporate input from a greater variety of detectors and information sources into their decision-making processes and workflows, consider adding the Respond Analyst to your team. Built to integrate with a broad array of today’s most popular sensors, platforms and solutions, the Respond Analyst brings specialized threat intelligence and detailed local contextual information to bear on every decision it makes about which events to escalate. Quite simply, it’ll give your ability to interpret and analyze detection data a boost—and allow your analysts to consider a far wider variety of sources.

To learn more about how the Respond Analyst can help your business become more thorough and derive greater insight from the detectors in your environment, contact us to schedule a demo today.

Neither SIEM nor SOAR–Can Security Decisions be Automated? Patrick Gray and Mike Armistead Discuss

We’ve asked this question before, but we’ll ask it again: how much time does your security team spend staring at monitors? How about investigating false positives escalated from an MSSP? More importantly, how are small security teams expected to cope with the growing amount of security data?

The world of security operations is changing. Extra processing power combined with faster mathematical computation means security monitoring and event triage can now happen at machine scale and speed. With new innovations that leverage decision automation, security organizations can analyze incidents more efficiently than ever before. Security teams no longer have to tune down or ignore low-signal events. Instead, technologies can now recognize patterns to identify malicious attacks that may have otherwise been overlooked.

So how will these new technologies impact security operations moving forward?

Mike Armistead, Respond Software CEO, recently sat down with Patrick Gray from Risky Business to discuss the state of information security today. In the 30-minute podcast, Mike and Patrick shed light on the future of security operations, discussing the limitations of traditional security monitoring and analysis techniques and the power of new technologies like decision automation to change security forever.

During this podcast you’ll learn to:

  • Identify the biggest mistakes security teams make today and how to avoid them.
  • Manage the onslaught of data.
  • Increase your team’s capacity.
  • Stop wasting time chasing false-positives.

Listen to the full podcast here!

Learn more about what the Respond Analyst can do for you!

3 Reasons Understaffed Security Teams Can Now Sleep at Night

If you feel overwhelmed with security operations, you’re not alone. Matter of fact, it’s a common theme we hear all the time: “We’re overloaded and need help!” We’ve been in the trenches, building security operations for mid to large enterprises, so we understand the unique pressure IT and security teams feel. It’s not easy balancing it all—especially for mid-sized enterprises with resource-constrained security teams.

Cybersecurity in mid-sized companies has unique challenges. With fewer resources and tighter budgets, IT teams are spread thin while wearing multiple hats. Unfortunately, sometimes security projects accumulate, leaving teams exposed and overwhelmed. But it doesn’t have to be this way—there is a viable solution.

Here are the three biggest challenges security teams face and why The Respond Analyst helps them sleep soundly at night.

Reason #1 – We don’t have enough time

Our customers need to free up time to work on priority projects and initiatives. We designed our product to provide expert intrusion analysis without all the fuss of deploying extensive technology stacks that require significant upfront and continued investment. We’re here to simplify the process, not add complexity. Security event console monitoring is a thing of the past: we free our customers from staring at security consoles and move them toward higher-value tasks and initiatives.

Within seven days, The Respond Analyst has learned its environment and is finding actionable incidents for our customers. The setup process is simple: 1) deploy a virtual appliance or install our software, 2) direct security feeds to our software and 3) add simple context. There are no significant time commitments, and no in-depth security operations expertise is required.

Reason #2 – We need additional security expertise

One of the biggest challenges our customers face is finding the right people and retaining them. This challenge is expected to grow with an ever more competitive job market, resulting in higher wages and more movement at a time when organizations are trying to implement steady security programs. To say it’s difficult is an understatement.

We don’t expect our customers to be experts in intrusion analysis and security operations—that is why they’ve partnered with us. The Respond Analyst is an expert system that automates the decision making of a front line security analyst. This pre-packaged intelligence requires no security expertise to deploy. There is no use case development, programming of rules, or tagging of event data. Well vetted incidents, without all the fuss, are the result of a well designed expert system.

Reason #3 – We don’t have the time, money or desire to build a legacy SOC

Many organizations understand the old way of building the legacy SOC with SIEM is not the future. Indeed, it’s not even keeping up with today’s threats. Not only is it less effective than solutions such as The Respond Analyst, but it is also significantly more expensive and results in a far longer return-on-investment timeframe.

The process of building a SIEM with 80+ data sources (where most really only look at 5 or fewer), hiring, training and retaining experienced intrusion analysts, and implementing a sophisticated process to keep it all glued together is outdated. Of course, this was the best we could do given the technology and understanding we had at the time, but now we have a better way. Old models have since been replaced, and our customers avoid the frustration and high cost by using a pre-packaged expert system.

Times have changed. With the emergence of expert systems like The Respond Analyst, we have brought technology to an area that traditionally demanded large investments and lengthy, time-intensive projects. The result is that mid-sized enterprise customers now have an option to operate at maturity levels beyond large traditional enterprise operations by leveraging expert systems. This new approach frees up time, provides needed expertise and saves our customers the headache and cost of legacy solutions. Better yet, our customers gain relief from the stress of understaffed resources and can relax knowing we have their security operations covered.


Must-Attend December 2018 Information Security Events & Webinars

Security Geek is back with the top recommendations for upcoming cybersecurity events in December! I picked these events and conferences because they provide a wealth of information, knowledge, and learning materials to help your security team improve its efficiency and effectiveness in defending your environment.

Here are the top shows to attend:

DataConnectors: December 5, 2018 | Dallas, TX

DataConnectors: December 6, 2018 | Washington, D.C.

DataConnectors: December 12, 2018 | Chicago, IL

DataConnectors: December 13, 2018 | Fort Lauderdale, FL

The Dallas, D.C., Chicago & Fort Lauderdale Cyber Security Conferences feature 40-60 vendor exhibits and 8-12 educational speaker sessions discussing current cybersecurity issues such as cloud security, email security, VoIP, LAN security, wireless security & more. Meet with industry veterans and learn about emerging cybersecurity technologies.

My favorite part about the DataConnectors events – they’re free!


Cloud Security Conference: December 10-12, 2018 | Orlando, FL
The Cloud Security Alliance event welcomes world-leading security experts and cloud providers to discuss global governance, the latest trends in technology, the threat landscape, security innovations and best practices in order to help organizations address the new frontiers in cloud security.

IANS: December 12, 2018 | Webinar

In this webinar, IANS Research Director Bill Brenner and IANS Faculty Member Dave Shackleford look back at the biggest security news trends of 2018, what made them significant and what it all could mean for the year ahead.


Carbon Black: December 19, 2018 | Webinar

Learn how CB Defense, a real-time security operations solution, enables organizations to ask questions on all endpoints and take action to remediate attacks in real-time.

To stay up-to-date on where the Respond Software team is heading, check out our events calendar! The subject matter experts and industry professionals at Respond are always in attendance and ready to share their knowledge and expertise!

Mid-sized Enterprises: Want Robust, Sustainable SecOps? Remember 3 C’s

Cybersecurity is tricky business for the mid-sized enterprise.

Attacks targeting mid-sized companies are on the rise, but their security teams are generally resource constrained and have a tough time covering all the potential threats.

There are solutions that provide sustainable security infrastructures, but the vendor landscape is confusing and difficult to navigate. With smaller teams and more than 1,200 cybersecurity vendors in the market, it’s no wonder mid-sized enterprise IT departments often stick with “status quo” solutions that provide bare-minimum coverage. The IT leaders I talk to secretly tell me they know bare-bones security is a calculated risk, but often the executive support for resources just isn’t there. These are tradeoffs that smaller security teams should not have to make.

Here’s the good news.  Building a solid enterprise-scale security program without tradeoffs is possible. To get started IT leaders should consider the 3 C’s of a sustainable security infrastructure: Coverage, Context, and Cost.

In part 1 of this 3-part blog series, we will deep-dive into the first “C”: Coverage.

When thinking about coverage, there are two challenges to overcome. The first challenge is to achieve broad visibility into your sensors. There is a wide array of security sensors and it’s easy to get overwhelmed by the avalanche of data they generate. Customers often ask me: Do we have to monitor everything? Where do I begin? Are certain sensor alerts better indications of compromise than others?

Take the first step: Achieve visibility with appropriate sensor coverage

To minimize blind spots, start by achieving basic 24 x 7 coverage with continuous monitoring of Network Intrusion Detection & Prevention (NIDS/NIPS) and Endpoint Protection Platform (EPP) activity. NIDS/NIPS solutions leverage signatures to detect a wide variety of threats within your network, alerting on unauthorized inbound, lateral, and outbound network communications. Vendors like Palo Alto Networks, TrendMicro and Cisco have solid solutions. Suricata and Snort are two popular open-source alternatives. EPP solutions (Symantec, McAfee, Microsoft) also leverage signatures to detect a variety of threats (e.g. Trojans, Ransomware, Spyware, etc) and their alerts are strong indicators of known malware infections.

Both NIDS/NIPS and EPP technologies use signatures to detect threats and provide broad coverage of a variety of attacks; however, they do not cover everything. To learn more on this topic, read our eBook: 5 Ingredients to Help your Security Team Perform at Enterprise-Scale

To gain deeper visibility IT departments can eventually start to pursue advanced coverage.

With advanced coverage, IT teams can augment basic 24 x 7 data sensor coverage by monitoring web proxy, URL filtering, and/or endpoint detection and response (EDR). These augmented data sources offer opportunities to gain deeper visibility into previously unknown attacks because they report on raw activity and do not rely on attack signatures like NIDS/NIPS and EPP. Web proxy and URL filtering solutions log all internal web browsing activity and, as a result, provide in-depth visibility into one of the most commonly exploited channels that attackers use to compromise internal systems. In addition, EDR solutions act as a DVR on the system, recording every operation performed by the operating system—including all operations initiated by adversaries or malware. Of course, the hurdle to overcome with these advanced coverage solutions is managing the vast amounts of data they produce.

This leads to the second coverage challenge to overcome—obtaining the required expertise and capacity necessary to analyze the mountains of data generated.

As sensor coverage grows, more data is generated, and each sensor type brings its own challenges. Some sensors are extremely noisy and generate massive amounts of data. Others generate less data but are highly specialized and require a great deal more skill to analyze. To deal with the volume, a common approach is to ‘tune down’ sensors, which literally filters out potentially valuable data. This type of filtering is tempting since it reduces a security team’s workload to a more manageable level. In doing so, however, clues to potential threats stay hidden in the data.

Take the second step: Consider security automation to improve coverage with resource-constrained teams.

Automation effectively offers smaller security teams the same capability that a full-scale Security Operations Center (SOC) team provides a larger organization, at a fraction of the investment and hassle.

Automation improves the status quo and stops the tradeoffs that IT organizations make every day. Smaller teams benefit from advanced security operations. Manual monitoring stops. Teams can keep up with the volume of data and can ensure that the analysis of each and every event is thorough and consistent. Security automation also provides continuous and effective network security monitoring and reduces time to respond. Alert collection, analysis, prioritization, and event escalation decisions can be fully or partially automated.
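
To make “fully or partially automated” concrete, here is a toy pipeline sketch (the stages, scoring weights, and cutoff are illustrative placeholders, not a description of any particular product) that collects alerts, scores them, and escalates only those above a priority cutoff.

```python
# Toy triage pipeline: collect -> analyze -> prioritize -> escalate.
# The scoring weights and cutoff below are illustrative placeholders.

SEVERITY = {"low": 1, "medium": 3, "high": 5}
ASSET_VALUE = {"workstation": 1, "server": 2, "domain_controller": 4}
ESCALATION_CUTOFF = 8

def collect(sources):
    """Collect: pull alerts from every configured source."""
    for source in sources:
        yield from source  # each source is any iterable of alert dicts

def analyze(alert):
    """Analyze: attach a simple priority score based on severity and asset value."""
    score = SEVERITY[alert["severity"]] * ASSET_VALUE[alert["asset_type"]]
    return {**alert, "score": score}

def triage(sources):
    """Prioritize and escalate: rank everything, keep only what crosses the cutoff."""
    analyzed = sorted((analyze(a) for a in collect(sources)),
                      key=lambda a: a["score"], reverse=True)
    return [a for a in analyzed if a["score"] >= ESCALATION_CUTOFF]

if __name__ == "__main__":
    nids = [{"severity": "high", "asset_type": "domain_controller", "sig": "lateral movement"}]
    epp  = [{"severity": "low",  "asset_type": "workstation", "sig": "adware"}]
    for incident in triage([nids, epp]):
        print("ESCALATE:", incident["sig"], incident["score"])
```

Only the high-severity alert on a critical asset reaches a human here; the low-value noise is handled, not ignored, which is the whole point of automating the front line.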

So to close, more Coverage for smaller security teams is, in fact, possible: First, find the right tools to gain more visibility across the network and endpoints. Second, start to think about solutions that automate the expert analysis of the data that increased visibility produces.

But, remember, ‘Coverage’ is just 1 part of this 3-part puzzle. Be sure to check back next month for part 2 of my 3 C’s (Coverage, Context, Cost) blog series. My blog on “Context” will provide a deeper dive into automation and will demonstrate how mid-sized enterprise organizations can gain more insights from their security data—ultimately finding more credible threats.

In the meantime, please reach out if you’d like to talk to one of our Security Architects about coverage in your environment.

November Information Security Events You Don’t Want to Miss

Your favorite Security Geek is back with some great news – a list of upcoming cybersecurity shows and conferences you need to be aware of during the month of November!

There are numerous information security events happening every month, and it can be difficult to work out which ones provide value and which are a waste of time. This is where we can help you out.

We’ve outlined a few of the top shows you should be looking at below!

FS-ISAC Summit: Nov 11-14 | Chicago, IL

Are you in the financial services industry? Well, then this is the show for you!

As Partners in the Information Security community, we have all been challenged in 2018 with the onslaught of DDoS and phishing campaigns with payloads that have included credential stealing malware, destructive malware and ransomware. These challenges are expanding the responsibilities placed upon us as security professionals and requiring us to ensure we are following best practices.

The FS-ISAC conferences provide information and best practices on how cybersecurity teams in banking and financial institutions can help protect their networks.

DataConnectors: Nov 15, 2018 | Nashville, TN
DataConnectors: Nov 29, 2018 | Phoenix, AZ

The Nashville and Phoenix Cyber Security Conferences feature 40-60 vendor exhibits and 8-12 educational speaker sessions discussing current cyber-security issues such as cloud security, email security, VoIP, LAN security, wireless security & more.

The best part of the DataConnectors events – they’re free! Meet with industry veterans and learn about emerging cybersecurity technologies.

Cyber Security & Cloud Expo 2018: Nov 28 – 29, 2018 | Santa Clara, Ca

The Cyber Security & Cloud Expo North America 2018 will host two days of top-level discussion around cybersecurity and cloud, and the impact they are having on industries including government, energy, financial services, healthcare and more. Chris Calvert, Co-Founder and VP of Product Strategy at Respond Software, will discuss the current state of security operations and emerging trends that are changing how teams operate.


Cyber Security Summit: November 29, 2018 | Los Angeles, CA

The annual Cyber Security Summit: Los Angeles connects C-Suite & Senior Executives responsible for protecting their companies’ critical infrastructures with innovative solution providers and renowned information security expertise.

Each one of these conferences provides a wealth of information, knowledge and learning material to help your security team improve its efficiency and effectiveness in cyber threat hunting. To stay up-to-date on where the Respond Software team is heading, check out our events calendar! The subject matter experts and industry professionals at Respond are always in attendance and ready to share their expertise!

Why It’s Time to Go Back To The Basics of SOC Design

The average SOC is no better prepared to solve its cybersecurity issues today than it was 10 to 20 years ago. Many security applications have been developed to help protect your network, but SOC design has traditionally remained the same.

Yes, it’s true we have seen advancements like improved management of data with SIEMs and orchestration of resolutions, but these tools haven’t resolved the fundamental challenges. The data generated from even the most basic security alerts and incidents is overwhelming and still plagues the most advanced security organizations.

Which begs the question: How are smaller, resource-constrained security organizations expected to keep up when even enterprise-sized organizations can’t?

According to a recent article in Computer Weekly, the issue is that most organizations, even with the tools & the know-how, are still getting the basics all wrong.

“Spending on IT security is at an all-time high. The volume of security offerings to cover every possible facet of security is unparalleled…The reason so many organisations suffer breaches is simply down to a failure in doing the very basics of security. It doesn’t matter how much security technology you buy, you will fail. It is time to get back to basics.”

The article mentions that security operations teams need to focus on these four key areas to really see a positive impact on their SOC design:

  1. Security Strategy
  2. Security Policy
  3. User Awareness
  4. User Change

But is it as simple as this?

The answer is a resounding YES!

There is no question that it’s still possible to cover the basics in security strategy and achieve enterprise security results. Our recommendation? Start with the most tedious and time-consuming part of the security analyst role — analysis and triage of all collected security data. Let your team focus on higher-priority tasks like cyber threat hunting. It’s where you’ll get the biggest bang for your buck.

Join our growing community! Subscribe to our newsletter, the "First Responder Notebook," delivered straight to your inbox.