Evolving red teaming for AI environments – Security Intelligence

4 minutes, 20 seconds Read

Evolving red teaming for AI environments

The businessman holding a tablet in the red lab
The businessman holding a tablet in the red lab

As AI becomes more ingrained in businesses and daily life, the importance of security grows more paramount. In fact, according to the IBM Institute for Business Value, 96% of executives say adopting generative AI (GenAI) makes a security breach likely in their organization in the next three years. Whether it’s a model performing unintended actions, generating misleading or harmful responses or revealing sensitive information, in the AI era security can no longer be an afterthought to innovation.

AI red teaming is emerging as one of the most effective first steps businesses can take to ensure safe and secure systems today. But security teams can’t approach testing AI the same way they do software or applications. You need to understand AI to test it. Bringing in knowledge of data science is imperative — without that skill, there’s a high risk of ‘false’ reports of safe and secure AI models and systems, widening the window of opportunity for attackers.

And while important, testing in the AI era needs to consider more than just model and weight extraction. Right now, not enough attention is being placed on securing AI applications, platforms and training environments, which have direct access or sit adjacent to an organization’s crown jewel data. To close that gap, AI testing should also consider machine learning security — machine learning security operations or ‘MLSecOps.’ This approach helps assess attacks against the machine learning pipeline and those originating from backdoored models and code execution within GenAI applications.

Ushering in a new era of red teaming

That’s why IBM X-Force Red’s new Testing Service for AI is delivered by a team with deep expertise across data science, AI red teaming and application penetration testing. By understanding algorithms, data handling and model interpretation, testing teams can better anticipate vulnerabilities, safeguard against potential threats and uphold the integrity of AI systems in an increasingly AI-powered digital landscape.

The new service simulates the most realistic and relevant risks facing AI models today, including direct and indirect prompt injections, membership interference, data poisoning, model extraction and adversarial evasion to help businesses uncover and remediate potential risks. Specifically, the testing offering covers four main areas:

  1. GenAI application testing
  2. AI platform security testing
  3. MLSecOps pipeline security testing
  4. Model safety and security testing

AI and generative AI technologies will continue to develop at breakneck speed, and new risks will be introduced along the way. Meaning, red teaming AI will need to adapt to match this innovation in motion.

X-Force Red’s unique AI testing methodology is developed by a cross-team of data scientists, AI red teamers, and cloud, container and application security consultants, who are regularly creating and updating methodologies. This unique approach incorporates both automated and manual testing techniques and includes methodology from NIST, MITRE ATLAS and OWASP Top 10 for Large Language Model Applications.

Red teaming is a necessary step in securing AI, but it’s not the only step. IBM’s Framework for Securing AI details the likeliest attacks against AI and the defensive approaches most important to help secure AI initiatives quickly.

If you’re attending the RSA Conference in San Francisco, come to IBM’s booth # 5445 on Tuesday, May 7 at 2:00 p.m. to learn more about AI testing and how it differs from traditional approaches.

Register for the webinar

More from Offensive Security

AI vs. human deceit: Unravelling the new age of phishing tactics

7 min readAttackers seem to innovate nearly as fast as technology develops. Day by day, both technology and threats surge forward. Now, as we enter the AI era, machines not only mimic human behavior but also permeate nearly every facet of our lives. Yet, despite the mounting anxiety about AI’s implications, the full extent of its potential misuse by attackers is largely unknown. To better understand how attackers can capitalize on generative AI, we conducted a research project that sheds light on…

X-Force identifies vulnerability in IoT platform

4 min readThe last decade has seen an explosion of IoT devices across a multitude of industries. With that rise has come the need for centralized systems to perform data collection and device management, commonly called IoT Platforms. One such platform, ThingsBoard, was the recent subject of research by IBM Security X-Force. While there has been a lot of discussion around the security of IoT devices themselves, there is far less conversation around the security of the platforms these devices connect with.…

When the absence of noise becomes signal: Defensive considerations for Lazarus FudModule

7 min readIn February 2023, X-Force posted a blog entitled “Direct Kernel Object Manipulation (DKOM) Attacks on ETW Providers” that details the capabilities of a sample attributed to the Lazarus group leveraged to impair visibility of the malware’s operations. This blog will not rehash analysis of the Lazarus malware sample or Event Tracing for Windows (ETW) as that has been previously covered in the X-Force blog post. This blog will focus on highlighting the opportunities for detection of the FudModule within the…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.

Subscribe today

This post was originally published on 3rd party site mentioned in the title of this site

Similar Posts