The Shift to Continuous AI Model Security and Pen Testing


Governance & Risk Management
RSA Conference

Aaron Shilts of NetSPI on Security Challenges, Threats of AI Models

Aaron Shilts, president and CEO, NetSPI

The widespread adoption of AI models has brought a paradigm shift in enterprise security, along with new challenges in protecting the proprietary data those models contain. Adversaries are exploiting vulnerabilities in AI models, employing techniques such as "jailbreaking" to extract or manipulate proprietary information, said Aaron Shilts, president and CEO of NetSPI.


Jailbreaking could pose serious threats, particularly in sensitive industries like healthcare, where patient records and health data must remain confidential, he said.

“There are different techniques that bad actors can use to get the wrong information out and that leads to a data breach. Another example is using an AI model to generate something nefarious that you don’t want it to create. For instance, information on weapons or making drugs and things like that,” Shilts said. “You don’t necessarily want an AI model to inform a malicious actor on what they could do. So putting guardrails in there is important.”
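The guardrails Shilts describes are typically enforced before a prompt ever reaches the model. As a purely illustrative sketch — a hypothetical keyword deny-list, not NetSPI's approach and far simpler than the trained safety classifiers production systems use — the idea looks roughly like this:

```python
# Hypothetical, minimal sketch of an input guardrail: a keyword deny-list
# checked before the prompt reaches the model. Production guardrails use
# trained safety classifiers and policy models, not substring matching.

BLOCKED_TOPICS = ("build a weapon", "make explosives", "synthesize drugs")

def passes_guardrail(prompt: str) -> bool:
    """Return False if the prompt touches a blocked topic."""
    text = prompt.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def answer(prompt: str) -> str:
    """Refuse blocked requests; otherwise hand off to the model (stubbed)."""
    if not passes_guardrail(prompt):
        return "Request refused by policy."
    return f"[model response to: {prompt}]"
```

A simple filter like this is easy to bypass with rephrasing — which is exactly why Shilts argues for continuous testing of AI models rather than one-time checks.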

In this video interview with Information Security Media Group at RSA Conference 2024, Shilts also discussed:

  • The shortage of skilled professionals in AI security;
  • The need for continuous security assessments over one-time security audits;
  • The importance of asset discovery and full visibility into IT infrastructure to prevent data breaches.

In his more than 20 years of industry leadership, Shilts has built innovative and high-performing organizations. Prior to joining NetSPI, he was the executive vice president of worldwide services at Optiv, where he led one of the industry’s largest mergers.

