Why GenAI fails at full SOC automation – Security Boulevard


A rapidly growing number of organizations are exploring the use of generative AI tools to transform business processes, improve customer interactions, and enable a variety of new and innovative use cases. But technology leaders who hope to harness GenAI tools to build a completely autonomous security operations center (SOC) might need to keep their expectations in check.

The reasons, Forrester Research analysts Allie Mellen and Rowan Curran say in a new research note, have to do with the fact that AI-enabled tools, just like traditional ones, have limitations that keep autonomous security decision making out of reach.

“[There’s] a deeper issue at play here that is as fundamental to security as time itself: enterprise data consolidation and access is an absolute bear of a problem that is unsolved. Put more simply, security tools can’t ingest, store, and interpret all enterprise data. And more than that, security tools don’t play nice together anyway.”
—Allie Mellen and Rowan Curran

Here’s what you need to know about the limitations of GenAI in automating your SOC.




Data and integration challenges 

Getting all of an enterprise’s data into one place is challenging and costly, as the security information and event management (SIEM) market has demonstrated, the two researchers said:

“Further, continuous training on this data is expensive and resource intensive. These two factors make this approach nearly impossible if accuracy and timeliness are important, which in this instance they are.” 

Difficulty with security tool integration is another major issue, the researchers noted. Until organizations are able to seamlessly get their security tools talking with one another, GenAI tools will be somewhat limited in their ability to analyze and interpret all available data and make smart autonomous decisions, they said.

“Using LLMs [Large Language Models] to support querying large, complex data architectures simply isn’t feasible today — anomaly detection, predictive modeling, etc. are still required.” 
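The classical techniques the researchers say are still required can be as simple as statistical baselining. As a minimal illustration (not drawn from the research note, and with the threshold chosen arbitrarily), a z-score check over hourly event counts flags volume spikes without any model having to "understand" the data:

```python
from statistics import mean, stdev

def zscore_anomalies(counts, threshold=2.0):
    """Flag hourly event counts that deviate sharply from the baseline.

    The threshold is a tunable assumption; real deployments calibrate it
    against historical false-positive rates.
    """
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts)
            if sigma > 0 and abs(c - mu) / sigma > threshold]

# Steady login volume with one burst (e.g., a credential-stuffing spike)
hourly_logins = [100, 98, 102, 101, 99, 100, 950, 103]
print(zscore_anomalies(hourly_logins))  # → [6], the burst hour
```

Deterministic checks like this remain cheap, explainable, and auditable — the qualities the researchers note are still out of reach for LLM-driven querying of large data architectures.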

Reality does not match the high expectations of AI

Business and technology leaders have high expectations for the potential for GenAI tools such as ChatGPT, GitHub CoPilot, AlphaCode, Claude, and Gemini to radically transform their operations in the next few years. A study by McKinsey Global showed that business leaders expect AI-enabled tools to help increase corporate profits by up to $4.4 trillion a year by making decisions that are “remarkably human.”

McKinsey Global expects generative AI tools to make the biggest difference in areas such as high technology, banking, and life sciences. In a more recent survey by EY, 43% of the 1,405 enterprise organizations polled said they had already invested in GenAI technology for use cases such as employee training and collaboration and customer sales and service. Many of those who have already invested in AI tools are still at the proof-of-concept stage, while 20% have implemented pilot projects.

AI is making a difference on the cybersecurity front as well, with a growing number of vendors integrating AI features into their products, especially in areas such as anomaly detection and behavior analysis. Gartner expects that, by 2027, GenAI will help organizations reduce false-positive rates for application security testing (AST) and threat detection by 30%. But the analyst firm also believes that attacks leveraging AI will force organizations to deploy more human resources, not fewer, in response:

“Security operation chatbots [will] make it easier to surface insights from SOC tools. But experienced analysts are still needed to assess the quality of the outputs, detect potential hallucination and take appropriate actions according to the organization’s requirements.” 

Taking it a step at a time

So how can organizations harness AI in the SOC — and what can they reasonably expect by way of operational and efficiency gains? Ali Khan, field CISO at ReversingLabs, expects AI to speed up analysis in the SOC significantly. Organizations can expect AI to improve key metrics such as mean time to detect (MTTD), mean time to respond (MTTR), and mean time to contain (MTTC) a cyber-incident.

“If you can reduce [an] IR playbook down to minutes from a ticket opening to closing, autonomously, then you have achieved nirvana.”
—Ali Khan
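The metrics Khan points to are simple averages over incident milestones. A minimal sketch of how a SOC might track them, using entirely hypothetical incident records and field names:

```python
from datetime import datetime

# Hypothetical incident records: when each incident occurred, was detected,
# was responded to, and was contained.
incidents = [
    {"occurred": datetime(2024, 5, 1, 9, 0),
     "detected": datetime(2024, 5, 1, 9, 45),
     "responded": datetime(2024, 5, 1, 10, 15),
     "contained": datetime(2024, 5, 1, 12, 0)},
    {"occurred": datetime(2024, 5, 3, 14, 0),
     "detected": datetime(2024, 5, 3, 14, 15),
     "responded": datetime(2024, 5, 3, 14, 30),
     "contained": datetime(2024, 5, 3, 15, 0)},
]

def mean_minutes(records, start, end):
    """Average elapsed minutes between two incident milestones."""
    deltas = [(r[end] - r[start]).total_seconds() / 60 for r in records]
    return sum(deltas) / len(deltas)

print("MTTD:", mean_minutes(incidents, "occurred", "detected"))   # 30.0
print("MTTR:", mean_minutes(incidents, "detected", "responded"))  # 22.5
print("MTTC:", mean_minutes(incidents, "occurred", "contained"))  # 120.0
```

Whatever tooling produces them, these are the numbers an AI-assisted SOC should be able to demonstrably push down.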

However, because of the many process, cultural, and engineering changes involved, getting there will be a challenge for legacy organizations that are adopting AI for the first time, Khan said. He recommended that SOC leaders start by identifying their biggest gaps and the challenges they encountered in addressing those gaps. Starting with the “why” can help stakeholders arrive at better decisions about the generative AI tool they need, he said.

“Do a trial run on a small set of hosts to identify how it would actually work in a simulated environment before opening up Pandora’s box.”
—Ali Khan

Khan said IT and business leaders need to make decisions based on their organization’s existing technology, explaining that if current processes are not already enabled for autonomy, then the returns from GenAI could be less than optimal.

“Autonomous SOCs are like autonomous highways: If every single car on the highway has full self-driving, then the margin of error is very small.” 
—Ali Khan

Starting small and keeping expectations in check is an approach that Gartner recommends as well. Gartner identifies the first wave of AI-enabled security tools as giving SOC leaders a way to replace existing query-based search processes with conversational prompts.

The analyst firm expects these tools to be especially useful in threat analysis and threat hunting by enabling better alert enrichment and improved alert scoring, and to give a boost to areas such as attack surface summarization, threat summarization, and mitigation assistance. In the second phase, starting this year, Gartner expects security vendors to start adding features that will allow organizations to enable a more automated defense capability.

However, because these tools cannot explain their generated responses to an unfolding situation, security leaders are unlikely to trust them enough to fully automate their defenses right away, Gartner said:

“Mandatory approval workflows and detailed documentation will be necessary until the organization gains enough trust in the engine to incrementally increase the level of automation.”
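The approval workflow Gartner describes can be reduced to a simple gate in front of automated response actions. The sketch below is illustrative only — action names and the trust mechanism are assumptions, not anything Gartner specifies:

```python
# Actions the organization has, over time, come to trust enough to automate.
AUTO_APPROVED = set()

def execute_action(action, approver=None):
    """Run a response action only if it is pre-trusted or explicitly approved."""
    if action in AUTO_APPROVED:
        return f"executed {action} automatically"
    if approver is None:
        # Nothing runs without a human in the loop.
        return f"queued {action} for human approval"
    # Record who approved the action for the audit trail, then run it.
    return f"executed {action} (approved by {approver})"

print(execute_action("isolate-host"))                        # queued
print(execute_action("isolate-host", approver="analyst-1"))  # runs, with audit trail
AUTO_APPROVED.add("block-ip")  # trust is granted incrementally, per action
print(execute_action("block-ip"))                            # now fully automatic
```

Widening the `AUTO_APPROVED` set one action at a time mirrors the incremental increase in automation Gartner anticipates.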

AI and your organization’s security: It’s a question of trust

Patrick Tiquet, vice president of security and architecture at Keeper Security, said there are multiple use cases for GenAI technology that make it a boon in the SOC, including the ability of AI tools to analyze massive datasets for anomalies faster than any team of humans can.

However, using GenAI for complete SOC automation raises some issues that organizations need to contend with, he said. One significant limitation with GenAI models in security is their tendency to hallucinate — to come up with assessments that sound plausible and even accurate but that the model cannot explain. While the recommendation could well be something worth investigating, it’s risky to allow fully automated decision making based on the information alone.

The implementation of AI-powered cybersecurity tools in the SOC requires a comprehensive strategy that includes other technologies as well as human expertise to provide a layered defense against evolving threats, Tiquet said. He advocates for organizations paying attention to the basics before considering advanced detection methods using AI.

“For example, does your organization have a cybersecurity control framework in place? Do you have password management handled? While these may not be the most exciting controls to talk about, these are the ones that stop the vast majority of breaches.” 

Reality-check AI in your organization

For the moment, GenAI works best in providing human analysts with better information for decision making, said Chris Morales, chief information security officer at Netenrich. Machine intelligence can augment human intelligence, particularly in areas such as data management, detection engineering, and security analysis, he said.

It’s still unclear, however, how AI use will evolve in the SOC over the next few years. But there are ways in which organizations can prepare now, Morales said.

“Learn and promote the use of prompt engineering into daily routines. It is still too soon to fully realize what is going to be possible, but it is better to get started now to embrace whatever that future might bring.”
—Chris Morales
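In practice, the prompt engineering Morales recommends often starts with reusable templates that feed alert context to a model in a consistent shape. The template below is purely hypothetical — the field names and wording are illustrative, not from Netenrich or any vendor:

```python
# Illustrative alert-triage prompt template; all field names are hypothetical.
TRIAGE_PROMPT = """You are assisting a SOC analyst.
Alert: {alert_name}
Source host: {host}
Raw event: {event}

Summarize the likely cause in two sentences, list the top three
follow-up queries an analyst should run, and state your confidence."""

prompt = TRIAGE_PROMPT.format(
    alert_name="Suspicious PowerShell execution",
    host="ws-042",
    event="powershell.exe -enc <base64 payload>",
)
print(prompt)
```

Standardizing prompts this way makes model outputs easier to compare and review — which matters given that, as the analysts quoted above stress, a human still has to judge every answer.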

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Jai Vijayan. Read the original post at: https://www.reversinglabs.com/blog/why-genai-fails-at-full-soc-automation

