Why We’ll See AI in Security Operation Centers Sooner Rather Than Later

I’ve had a few debates with InfoSec colleagues of mine about the current and future efficacy of AI within the security field.

Their general stance is that AI for InfoSec is crap, garbage, and snake oil, and that it will continue to be so for the foreseeable future. They say it's basically too hard a problem: emulating the complexity and creativity of what an analyst does.

I agree with them that this is the current state, but I believe it will change very quickly. I also can't help but notice that these are the exact same noises that were made about Chess, Go, and Poker, and in the span of about 11 seconds we've seen those challenges go from insurmountable to trivial.

They have the advantage of having wrestled with a lot of these bad products in the real world, while I haven't. My advantage is having read a ton of books about AI and watching the field closely, and not just reading the material myself, but consuming what the best minds are saying about how quickly AI is improving. I recommend What to Think About Machines That Think for some of that perspective. I've also been a security analyst myself, and I deal constantly with the challenges of logging and response.

Anyway, let’s call that a draw for the sake of argument. I think I have a move that makes it somewhat irrelevant.

The standard for AI to become useful (and therefore prolific) within InfoSec is not being better than humans—it’s being able to do just about anything at all.

Just as with satellite imagery analysis, audio recording analysis, security camera monitoring, log data analysis—and other similar disciplines—the case against humans (and for AI) is multidimensional.

  1. First, and most importantly—there aren't enough humans to look at the content. Exabytes of data are being produced, and only a small handful of people are available to look at it.

  2. The marginal cost of training each additional human is the same as training the first one, whereas the marginal cost of adding another AI is virtually zero.

  3. Humans are trained inconsistently.

  4. Humans get tired and bored.

  5. Humans have biases that can vary their analysis even when the training is consistent.

The list goes on, but the most important points are that there aren’t nearly enough people to look at the content that needs to be seen, and even if there were we wouldn’t be able to look at it as consistently as a fleet of algorithms.

The straw man everyone is attacking is the idea of AI security agents becoming smarter and more creative than a fully trained L1 or L2 analyst. It’s a straw man because I don’t know anyone who’s arguing that. That could take 5, 10, or 20 years—or it could never happen at all. I think it could happen much sooner, but I’m agnostic on this point.

Ultimately, though, it doesn’t really matter.

What matters is the value that AI can bring to the thousands upon thousands of companies generating terabytes of business exhaust data that nobody is looking at.

If AI agents can be unleashed on all that data, find something (really, anything of value in the mess), and then surface it to a human, they'll be invaluable, and the market around AI security analysts will thrive.
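
To make that concrete, here's a minimal sketch of what such an agent could look like: an unsupervised outlier pass over raw log lines, built on scikit-learn's IsolationForest. Everything here is illustrative; the filename, feature size, and contamination rate are placeholder assumptions, not a claim about how any real product works.

```python
# Toy triage pass: featurize raw log lines, flag statistical outliers,
# and surface only those few lines to a human analyst.
import sys

from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import HashingVectorizer


def triage(log_lines, contamination=0.01):
    """Return the small slice of log lines a human should actually look at."""
    # Hash each line into sparse numeric features; no labels or prior
    # training data required, which matters for logs nobody has ever read.
    vectorizer = HashingVectorizer(n_features=2**12, alternate_sign=False)
    X = vectorizer.transform(log_lines)

    # Unsupervised outlier detection: fit_predict returns -1 for the rare,
    # structurally odd lines and 1 for everything else.
    forest = IsolationForest(contamination=contamination, random_state=0)
    flags = forest.fit_predict(X)

    return [line for line, flag in zip(log_lines, flags) if flag == -1]


if __name__ == "__main__":
    # "exhaust.log" is a hypothetical stand-in for whatever data is piling up.
    path = sys.argv[1] if len(sys.argv) > 1 else "exhaust.log"
    with open(path) as f:
        lines = [line for line in f.read().splitlines() if line.strip()]
    for finding in triage(lines):
        print("REVIEW:", finding)  # the only part a human ever has to see
```

The point isn't that thirty lines of Python replaces an analyst. It's that even a crude pass like this looks at 100 percent of the data and hands a human a short list instead of nothing, which is exactly the bar being described.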

In short, it’s a low bar because of how much data is currently going unanalyzed, and because that bar is so low I believe we’ll hit it sooner than most think.

So my prediction for this is that we’ll see companies using AI analyst technologies pointed at IT and IS exhaust data in significant numbers within five years. That doesn’t mean replacing L1 analysts. It means needing to hire fewer of them, or hiring them to be L2 analysts instead.

And—most importantly—it means a whole lot more of the data produced within a company will be seen by someone—even if that someone is an algorithm.

Notes

  1. By the way, this is also going to be a boon for the other spaces I mentioned, like listening to audio recordings, looking at image data, etc.—basically anyplace where there’s too much data to look at and far too few trained people to do the analysis.
