AI security challenges in generative AI adoption

In their rush to leverage generative artificial intelligence, many firms prioritize innovation over security despite recognizing its importance, an IBM Corp. study reveals, compounding the security challenges that come with AI adoption.

Organizations need to prioritize the protection of large language models and generative AI from emerging threats, according to Jake King (pictured), head of threat and security intelligence at Elasticsearch B.V.

Elasticsearch’s Jake King talks to theCUBE about AI security challenges.

“We’re surfing the wave of LLMs and technology with AI,” he said. “I think as practitioners gain experience and understanding of where their risks are in these systems, it’s going to be important to keep up with the capabilities to monitor and observe those risks and be able to do something about it. It’s a pretty interesting challenge.”

King spoke with theCUBE Research’s Dave Vellante and Shelly Kramer at the RSA Conference, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how companies that want to protect large language models need to apply detection rules, use AI to keep up with model diversity and focus on rapidly responding to threats. (* Disclosure below.)

Navigating AI security challenges for safer innovation

Operators and creators of models should consider original open-source research, apply detection rules to surface risks in their environment and use AI to keep up with the diversity of LLMs. They also must discern patterns and ensure unification across the system, according to King.

“While some models provide things like latency information, others may provide whether the question was potentially hitting a data set or meta-condition. So, applying these kinds of standards allows us to write generic rules that apply to multiple vendors,” King said. “As we add integrations, I’m sure there are ways of using AI to help that integration process — not that we’re doing that today, but it certainly is an interesting premise and definitely a use case I can see being used.”
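To make that idea concrete, here is a minimal, hypothetical Python sketch, not Elastic’s actual schema or detection rules: the vendor names, field names and thresholds are invented for illustration. Each vendor’s telemetry is normalized into one shared record type, and a single generic rule then runs against every vendor.

```python
# Hypothetical sketch: normalize per-vendor LLM telemetry into a common
# schema so one detection rule covers all integrations. All names and
# thresholds here are illustrative assumptions, not a real product schema.
from dataclasses import dataclass

@dataclass
class LlmEvent:
    """Vendor-neutral record for one model invocation."""
    vendor: str
    latency_ms: float | None  # some models expose latency...
    dataset_hit: bool | None  # ...others flag whether a data set was touched

def normalize_vendor_a(raw: dict) -> LlmEvent:
    # Hypothetical vendor A reports latency but says nothing about data sets.
    return LlmEvent(vendor="a", latency_ms=raw["response_time_ms"], dataset_hit=None)

def normalize_vendor_b(raw: dict) -> LlmEvent:
    # Hypothetical vendor B flags data-set hits but omits latency.
    return LlmEvent(vendor="b", latency_ms=None, dataset_hit=raw["touched_dataset"])

def generic_rule(event: LlmEvent) -> list[str]:
    """One rule, written once against the shared schema, covers every vendor."""
    alerts = []
    if event.latency_ms is not None and event.latency_ms > 5_000:
        alerts.append("unusually slow response; possible resource abuse")
    if event.dataset_hit:
        alerts.append("prompt reached a protected data set")
    return alerts

# The same rule fires on events from either vendor.
for event in (normalize_vendor_a({"response_time_ms": 7_200}),
              normalize_vendor_b({"touched_dataset": True})):
    print(event.vendor, generic_rule(event))
```

The design choice mirrors what King describes: the normalization layer absorbs the differences between vendors, so the detection logic only has to be written once.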

The debate over building security models on-premises versus in the cloud is ongoing, with different use cases and no one right answer. However, the trend is likely moving toward secure hosted or cloud-delivered solutions and the influence of standards such as the NIST Cybersecurity Framework, King explained.

“Just seeing standards like the NIST standard … will be really interesting,” King said. “I think we’ll start to see not necessarily perspectives, but actual opinions on why a customer may choose to run a model internally versus using a hosted model with a cloud provider.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of the RSA Conference:

[embedded content]

(* Disclosure: Elasticsearch B.V. sponsored this segment of theCUBE. Neither Elasticsearch nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
