I recently visited Norwich University, the oldest private military college in America and an NSA and DHS Center of Academic Excellence for Cyber Defense. I had the opportunity to speak with some of the students in their Computer Security and Information Assurance program, and Keely Eubank asked me about the ability of attackers to leverage stealth techniques to hide from algorithms. It was an insightful question, and it got me thinking about the topic of “Algorithmic Stealth.”
Information security has always been an arms race, and getting around more advanced cyber-defenses has always required stealth techniques. For example, I remember the days when almost every attack happened on Friday afternoon around 3:00pm before a 3-day weekend, because the system administration team was having Friday beers and wouldn’t be back at work before Tuesday. That left plenty of time to break in, steal what you were after, and clean up before they returned. My pager (yeah, that’s old school) went off every 3-day weekend for years.
Eventually the bad guys took a statistics course and realized that defenders had a real challenge with volume, so they switched their loudest attacks to Wednesday at 10:00am. That is when network traffic peaks for the week, and they realized it was easier to hide in all that noise than in the quiet of the weekend. Once we were able to suppress some of the noise, they moved to a “living off the land” model: they looked like regular administrators and used tools that were natural to the environment. Tools like TeamViewer and PowerShell were particular favorites. They were both hiding in the noise AND not introducing new binaries into the environment, so they were that much harder to detect.
Now that advanced math and algorithms are becoming the detection methods of choice, there is an inevitable progression for attackers to research techniques for “algorithmic stealth.” Luckily for us, there are many approaches to math-based detection being developed at the moment. Each of these would need to be specifically defeated, so a combination of them would be incredibly difficult to completely circumvent. When machines do the security monitoring, human behavior, bias, and decision making can no longer be the main targets of stealth technique development. Attackers have to hide from the algorithms.
This means a couple of things for us defenders. Basic anomaly detection is the weakest of the modern approaches, as it suffers from the same signal-to-noise ratio problem that traditional signature detection methods do. The modern enterprise is full of anomalies due to the complex and poorly coordinated way our enterprise technology is cobbled together. However, signatures and anomalies can each reduce uncertainty by some amount, and that is still valuable. The optimal detection operation will “see” simultaneously across multiple telemetries (by analogy: visible light, infrared, thermal, radio waves, and so on) AND leverage diverse mathematical methods that provide a Venn diagram of opportunities to recognize malicious activity. Each phase of an attack can be targeted for detection differently. At Respond Software we call this Integrated Reasoning.
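To make the "each method reduces uncertainty by some amount" idea concrete, here is a minimal sketch of one general way to fuse evidence from diverse, individually weak detectors: naive-Bayes-style combination of likelihood ratios. This is an illustrative toy, not Respond Software's actual Integrated Reasoning method; the function name and the likelihood-ratio values are assumptions chosen for the example, and the independence assumption is a simplification.

```python
import math

def fuse_evidence(prior_odds, likelihood_ratios):
    """Naive-Bayes-style fusion: multiply the prior odds of "malicious"
    by each detector's likelihood ratio (how much more likely the
    observation is under "malicious" than "benign"), assuming the
    detectors are independent. Returns a posterior probability."""
    log_odds = math.log(prior_odds)
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)  # convert odds back to a probability

# One weak signal barely moves the needle against a low base rate...
weak = fuse_evidence(prior_odds=1 / 1000, likelihood_ratios=[5.0])

# ...but several weak, *diverse* signals together are far harder for an
# attacker to evade, since every one must be defeated simultaneously.
combined = fuse_evidence(prior_odds=1 / 1000,
                         likelihood_ratios=[5.0, 8.0, 12.0])

print(f"one signal: {weak:.4f}, three signals: {combined:.3f}")
```

The design point is the one in the paragraph above: a stealth technique that beats any single detector still has to beat all the others, because the combination, not any individual score, drives the verdict.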
I’d like to thank Norwich University’s Applied Research Institute (NUARI) and the Norwich faculty for the chance to learn something cool from their amazing students. As we develop new tools and techniques to defend our organizations, I am reminded that our very depth of experience (in the status quo) can blind us to new approaches, and that listening closely to the questions of the next generation of defenders can open new avenues of defensive research. Understanding and defeating algorithmic stealth is a new frontier for security research and active defense. Welcome to AI in security; I think you knew it wouldn’t be all fun.