My former “colleagues” have written several serious pieces of research about why a SOC without humans will never happen (“Predict 2025: There Will Never Be an Autonomous SOC”, “The “Autonomous SOC” Is A Pipe Dream”, “Stop Trying To Take Humans Out Of Security Operations”). But I wanted to write a funny companion piece titled “How to Talk to Idiots Who Believe in ‘Humanless SOC’.” Here it is, but it is definitely a rant and not technical guidance, mind you.
I think most of us will encounter people who believe that a Security Operations Center (SOC) fully staffed by machines and with no humans anywhere will actually happen. Now, I think those people are delusional, but it is interesting to try to study those delusions. Try to psychoanalyze them, perhaps. Maybe this points to some suppressed childhood trauma, I dunno…
Years ago, I had an old and wise mentor who explained everything weird in the (human) universe as a unique (for each occurrence) blend of two forces: corruption and stupidity. Perhaps this applies here as well? Some may believe this out of ignorance (see more on this below), while others choose to believe it because their VC funding depends on it…
Anyhow, let’s look at the extreme fringe of a fringe. You may meet people who think that artificial intelligence today is so advanced that human presence inside the SOC is not necessary. Today! They actually think AI can already replace all humans in a SOC! Some of them even have a demo ready, powered by … ahem … “a demo-ready AI” that works — you guessed it! — in a demo. Sadly, it will never deliver even a tiny fraction of the promised benefits once confronted with real-world, messy environments full of outdated systems, API-less data stores, tribal knowledge, junior IT people, and sprinkled with human incompetence…
Similarly, some people have never seen how a large enterprise functions, so they make assumptions about automation possibilities that are just wildly off. They struggle to grasp the complexity of a “typical” (ha! as if!) enterprise “layered cake” environment, with its layers of technology ranging from 1970s mainframes to modern serverless and gen AI systems.
To elaborate on the lack of enterprise environment knowledge, what makes it even worse is the common reliance on tribal knowledge of unique systems — knowledge that only exists in the minds of specific individuals. It’s very difficult, if not impossible, for any automated system (whether AI-powered or not) to make decisions based on context that simply isn’t present in computers…
In other cases, what comes up is an utter lack of understanding of how modern (and especially not-so-modern) security operations centers and detection and response teams operate. Some snake-oil sellers of a “humanless SOC” rely on things like “this needs a current asset list, we will just query the CMDB or an Attack Surface Manager.” Ah, a CMDB that was last updated in 2008, and an ASM that covers a third of the environment … suuure. They often promise to “fix these issues before deployment” (or, worse, ask the customer to!), failing to acknowledge that some of these issues have persisted for decades. “Decades, Karl!” That’s like 10+ years! 🙂
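To make the stale-asset-data point concrete, here is a minimal sketch (with a hypothetical CMDB endpoint and field names, purely illustrative and not any real product’s API) of the sanity check that any “just query the CMDB” automation should run before trusting the answer:

```python
from datetime import datetime, timedelta, timezone

import requests  # third-party HTTP client; pip install requests

# Hypothetical endpoint and fields -- not any real product's API.
CMDB_URL = "https://cmdb.example.internal/api/assets"
STALE_AFTER = timedelta(days=365)  # a generous definition of "current"


def fetch_assets() -> list[dict]:
    """Pull asset records; assume a JSON list with hostname and last_updated."""
    resp = requests.get(CMDB_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()


def stale_ratio(assets: list[dict]) -> float:
    """Fraction of records not updated within STALE_AFTER.

    Assumes last_updated is an ISO 8601 timestamp with timezone info.
    """
    now = datetime.now(timezone.utc)
    stale = sum(
        1
        for a in assets
        if now - datetime.fromisoformat(a["last_updated"]) > STALE_AFTER
    )
    return stale / len(assets) if assets else 1.0


if __name__ == "__main__":
    ratio = stale_ratio(fetch_assets())
    print(f"{ratio:.0%} of CMDB records are stale; any 'autonomous' triage "
          "built on top of them inherits that staleness.")
```

The punchline, of course, is that in many real environments this number is high enough to end the “humanless” conversation all by itself.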
Yet another category of people believes in a humanless SOC based on their complete lack of understanding of threats. In fact, they shift their AI so far right (“AI SOC = better alert triage”) and neglect bad detection content altogether… And, yes, threat actors sometimes know the environment better than the defenders do. I’m optimistic that in the long term, with the wider adoption of cloud computing, the occasional attacker advantage will vanish. Defenders will collect more data on their environments and be able to keep it updated (well, I can hope, can’t I?). Today, however, it is just not the case.
Now, what about trying to match the quality of a bad SOC, like one run by a low-end MSSP vendor? As I alluded to before, artificial intelligence today seems close to matching the quality of a bad SOC without any humans. To this, I add: if you lower the bar enough, you can match the quality of a bad SOC even without AI. Just connect your SIEM alerts to an alert distribution mechanism like email. Done! You have a really, really, really, really bad SOC, and without any humans. And without AI too!
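In case you doubt how low that bar sits, here is a minimal sketch of that entire “SOC” (hypothetical SMTP relay and alert payload, purely illustrative):

```python
import json
import smtplib
from email.message import EmailMessage

# Hypothetical settings -- substitute your own SIEM webhook consumer and relay.
SMTP_HOST = "smtp.example.internal"
ALERT_RECIPIENT = "everyone@example.com"  # behold: the "SOC"


def forward_alert(alert: dict) -> None:
    """The whole 'humanless SOC': take a SIEM alert, email it, done."""
    msg = EmailMessage()
    msg["Subject"] = f"[SIEM] {alert.get('rule', 'unknown rule')}"
    msg["From"] = "siem@example.internal"
    msg["To"] = ALERT_RECIPIENT
    msg.set_content(json.dumps(alert, indent=2))
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)  # no triage, no context, no humans, no AI


if __name__ == "__main__":
    forward_alert({"rule": "Possible Lateral Movement", "host": "srv-042"})
```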
So using this argument (“I can replicate a really bad SOC with AI”) is essentially cheating. (More seriously, if one can replicate a “mediocre+” MDR without any human “butts in seats,” that can be a decent business!)
Finally, there is one delusion that’s actually worthy of deeper analysis: the belief that AI will soon advance so rapidly and so massively that it will replace all humans in the SOC. Let’s not turn this into “are LLMs a path to AGI?”; actual AI experts can debate that one. We will focus on the SOC.
Let’s start this discussion with good news. Several years ago (2021), I was a long-term optimist, but a short-term skeptic about AI in security. Now, I’m even more optimistic in the long term and cautiously optimistic in the short term. Despite my optimism, I don’t see a short-to-medium-term trajectory for AI that would lead to a humanless SOC. I do see a lot of AI use in the SOC, to be sure, but a SOC run by humans!
Notably, when we developed Autonomic Security Operations (ASO), we stressed that humans are central to modern security operations (as they are with our own D&R capabilities). We also mentioned the many tools used in such operations, including of course AI.
Where can you go from here? We can discuss what’s possible, and increased automation of your security operations center is definitely on that list. We can also explore the potential pathways that might eventually (EVENTUALLY!) lead to a humanless SOC. However, this is the world of tomorrow…
… and we are back to today!
Here are my Top Reasons Why a SOC Without Humans Will Not Happen:
- Tribal Knowledge: Crucial knowledge for alert triage, investigation, and detection authoring often exists only in someone’s head, not in any automated or even any digital system (your gen AI “agent” may read the pages of an analog notebook, to be sure, but a human is needed to shove said notebook in front of the robot’s all-seeing eye…)
- Adaptable Attackers: Creative attackers will continue to outsmart automated (including gen AI-powered) defenses, as they possess the ingenuity and adaptability that machines currently lack (this argument very much applies to the short-to-medium term, and I make no promises for the long term, mind you; AGI FTW … but LATER!)
- Security Data Quality: Many AI projects are limited by the quality of their data. Building an excellent “AI SOC” requires vast amounts of high-quality data, which is often unavailable, and this is doubly so for company-specific data (we can debate how attack-surface-agnostic you can make this in later blogs…)
These are just a few of the main reasons why a fully automated (humanless, fully autonomous, etc.) SOC is not feasible in the near future. If you encounter someone who believes in this fallacy, remind them of the importance of tribal knowledge, expert intuition, attacker adaptability, and the limitations of current AI technology due to insufficient data quality. These challenges remain largely insurmountable, even with projected technological advancements.
Finally…
A critical challenge in writing this blog is my unwavering belief in the relentless pursuit of automation within the detection and response domain. Ideas like ASO (and its origins) have demonstrated that an engineering mentality and a drive to automate more activities are crucial for building a modern SOC. In fact, an SRE’s job is to “automate yourself out of your job,” but here lies a paradox: humans are needed to automate humans out of a human job, and this loop is endless…