Cybersecurity is constantly evolving. Attackers are continuously innovating, which means security operations center (SOC) teams must also refine their tactics. Most recently, the rapid adoption of AI and GenAI-powered tools in the enterprise is creating new, highly accessible attack surfaces for threat actors, and legacy tools are struggling to keep pace.
Last year, GenAI was predominantly used in consumer-facing applications, but it has now entered the enterprise, embedding itself in business processes and software. Companies like Microsoft, Google and Meta have integrated it into their products and are charging for it like any other service or offering. Some companies are starting to bring the technology in-house to build their own large language model (LLM)-powered tools.
While these tools promise to increase efficiency and productivity for SOC teams, they afford attackers the same opportunities, enabling them to carry out more sophisticated attacks at scale and speed. GenAI will commoditize the sophisticated attack: techniques once reserved for highly skilled hackers will be accessible to anyone.
Attackers were quick to jump on the GenAI bandwagon. They’re manipulating LLMs to exploit vulnerabilities in enterprise systems and supply chains and deploying identity attacks that take advantage of GenAI’s ability to create compelling fakes at scale. To keep pace, SOC teams need to fight AI with AI.
The new battleground — fighting AI with AI
GenAI-powered tools are being used by SOC teams to streamline processes and augment productivity, and attackers are using them the same way, automating parts of their workflow to move faster. Take a phishing email: using GenAI, attackers can draft emails free of the typos, grammatical errors and robotic language that once gave them away. These emails are more convincing and more humanlike, making them harder to spot. Attackers are also using GenAI to mimic user behavior or generate tailored methods to bypass defenses like multi-factor authentication. As attackers leverage AI to scale their efforts, security teams must counter with equally powerful AI-driven solutions to identify and defend against attacks.
Armed with AI-powered solutions, SOC teams can move at the same scale and speed as attackers. With the shift to hybrid and multi-cloud environments, organizations are defending a hybrid attack surface that spans cloud, endpoints, network, identity, SaaS and GenAI. Behavior-based AI tools help SOC teams detect threats across this surface, triaging, correlating and prioritizing them with a high degree of accuracy so defenders can isolate and contain real attacks. AI enables defenders to sift through information and threat signals to determine whether indicators of attacker behavior are present in their organization's environment. Because traditional threat detection tools generate a flood of alerts, AI helps cut through the clutter to uncover real threats with greater efficiency and accuracy.
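The behavior-based triage described above can be sketched in miniature: score each entity's current activity against its own historical baseline and surface only the strongest deviations to an analyst. This is an illustrative stdlib-only sketch, not any vendor's implementation; the entity names, counts and threshold are hypothetical.

```python
# Minimal sketch of behavior-based alert triage: score each entity's
# activity against its own historical baseline and surface outliers.
# Entities, counts and the threshold below are illustrative assumptions.
from statistics import mean, stdev

def anomaly_score(history: list[float], observed: float) -> float:
    """Z-score of the observed activity vs. the entity's own baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (observed - mu) / sigma

def triage(signals: dict[str, tuple[list[float], float]],
           threshold: float = 3.0) -> list[tuple[str, float]]:
    """Rank entities whose behavior deviates most from baseline."""
    scored = {
        entity: anomaly_score(history, observed)
        for entity, (history, observed) in signals.items()
    }
    # Prioritization: only deviations above the threshold reach an analyst.
    return sorted(
        ((e, s) for e, s in scored.items() if s > threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical daily login counts per account, plus today's count.
signals = {
    "svc-backup": ([2, 3, 2, 3, 2], 40),       # sudden spike: suspicious
    "alice":      ([20, 22, 19, 21, 20], 21),  # normal variation: ignored
}
print(triage(signals))  # only svc-backup is flagged
```

Real systems model far richer behavior than login counts, but the shape is the same: per-entity baselines turn a flood of raw events into a short, prioritized list.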
Lessons learned from early GenAI
As adoption of GenAI across the cybersecurity industry continues, there are important lessons to keep in mind from early implementations. One key takeaway is the need for robust human oversight to mitigate risk, as AI can introduce new vulnerabilities if not properly managed. Two others: there is no universal approach to AI, and AI alone isn't enough to defend against GenAI-driven attackers.
There is no one-size-fits-all approach to AI
Not all AI is created equal, as different problems require specific techniques and approaches. Applying the wrong technique — like using a neural network for a task better suited to decision trees — can lead to inefficiencies and suboptimal results.
Understanding the nuances of each problem is essential for selecting the right AI method. For instance, knowing which AI techniques to apply to “width problems” is crucial because not all models are equally effective at handling complex, multi-dimensional data. Width problems involve datasets with diverse features, where the model must capture relationships across many variables. Techniques such as wide & deep learning, feature selection, attention mechanisms, and ensemble methods are commonly used to address these challenges, ensuring that AI models perform effectively across varied and broad data landscapes.
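One of the ensemble techniques mentioned above can be shown in a toy form: many weak, one-feature classifiers vote, so no single feature dominates a wide, multi-dimensional record. The feature names and thresholds below are hypothetical assumptions chosen for illustration.

```python
# Illustrative sketch of an ensemble method for "wide" data: many weak
# per-feature decision stumps vote, and a quorum of them must agree.
# Feature names and thresholds are hypothetical, not from any real model.
from typing import Callable

Record = dict[str, float]

def make_stump(feature: str, threshold: float) -> Callable[[Record], int]:
    """A one-feature decision stump: votes 1 when the feature exceeds threshold."""
    return lambda r: 1 if r.get(feature, 0.0) > threshold else 0

def ensemble_vote(stumps: list[Callable[[Record], int]],
                  record: Record, quorum: float = 0.5) -> bool:
    """Flag the record when at least `quorum` of the stumps fire."""
    votes = sum(stump(record) for stump in stumps)
    return votes / len(stumps) >= quorum

# Hypothetical alert features describing one event.
stumps = [
    make_stump("failed_logins", 5),
    make_stump("bytes_uploaded_mb", 500),
    make_stump("new_countries", 1),
]
alert = {"failed_logins": 9, "bytes_uploaded_mb": 750, "new_countries": 0}
print(ensemble_vote(stumps, alert))  # True: 2 of 3 stumps agree
```

Production ensembles (random forests, gradient boosting) learn their splits from data rather than hand-set thresholds, but the voting structure, which lets diverse features contribute without any one dominating, is the same.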
Organizations can’t just hire AI experts
Security threats evolve rapidly, driven by human behaviors and sophisticated attack techniques, and AI alone isn't enough to defend against them. Without an understanding of those behaviors, an AI-powered system can miss critical nuances or flag too many false positives. That is why organizations can't just hire data scientists or AI experts to apply AI to these challenges. The most effective AI-driven cybersecurity solutions integrate deep security research and human behavioral insight with data science techniques. AI can automate and analyze large volumes of data quickly, but it's the context provided by security experts that makes those insights actionable.
It’s not enough to just apply machine learning algorithms or anomaly detection to cybersecurity data; the models need to be trained and refined by professionals who understand the behaviors and tactics of attackers. This combination of expertise ensures that AI tools are not only efficient but also precise in identifying real threats and reducing noise, leading to a more resilient security posture.
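One way this combination of model output and expert knowledge can look in practice is a raw anomaly score adjusted by analyst-authored context rules. This is a hedged sketch of that idea; the rule predicates, weights and event fields are hypothetical assumptions, not a real product's rule set.

```python
# Sketch: blend a model's raw anomaly score with analyst-authored context
# rules so that expert knowledge suppresses noise and boosts real threats.
# Rule predicates, weights and event fields are hypothetical assumptions.

EXPERT_RULES = [
    # (predicate over the event, score adjustment, analyst rationale)
    (lambda e: e.get("asset") == "domain_controller", +0.3,
     "attackers prioritize identity infrastructure"),
    (lambda e: e.get("change_window", False), -0.4,
     "planned maintenance explains unusual activity"),
]

def contextual_score(model_score: float, event: dict) -> float:
    """Adjust a raw anomaly score with expert context, clamped to [0, 1]."""
    score = model_score
    for predicate, adjustment, _rationale in EXPERT_RULES:
        if predicate(event):
            score += adjustment
    return max(0.0, min(1.0, score))

# The same raw model score yields very different priorities with context.
noisy = contextual_score(0.7, {"asset": "laptop", "change_window": True})
real = contextual_score(0.7, {"asset": "domain_controller"})
print(noisy, real)  # 0.3 1.0
```

The point of the sketch is the division of labor the paragraph describes: the model supplies the raw score at scale, while rules encoding attacker tactics and operational context decide what actually deserves an analyst's attention.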
With more organizations using GenAI tools, SOC teams face a new attack surface, one that can only be protected with AI. Looking ahead, the most promising uses of AI in cybersecurity will be in investigation, triage, prioritization and response. The industry is also likely to see more sophisticated multi-model approaches that combine GenAI with classic machine learning. As attackers continue to leverage AI to scale and sharpen their attacks, only equally intelligent defenses can effectively safeguard critical systems.