NIST A.I. Security Report: 3 Key Takeaways for Tech Pros – Dice Insights


The interest in artificial intelligence, and generative A.I. specifically, continues to grow as platforms such as OpenAI’s ChatGPT and Google Bard look to upend multiple industries over the next several years. A recent report by research firm IDC found that spending on generative A.I. topped $19 billion in 2023; that spending is expected to double this year and reach $151 billion by 2027.

For tech professionals looking to take advantage of the lucrative career opportunities this developing field offers, understanding how these A.I. models work is essential. While many of these conversations focus on how the platforms can automate manual processes and streamline operations, there is growing concern about how A.I. can be corrupted and manipulated, and it is critical for tech professionals to understand these aspects of the technologies, as well.

To shed additional light on these issues, the National Institute of Standards and Technology (NIST) released a new paper titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” which delves into security and privacy issues organizations can face when deploying A.I. and machine learning (ML) technologies. The document details several troubling security concerns, including scenarios such as corrupt or manipulated data used to train large language models (a technique known as “poisoning”), vulnerabilities in the supply chain and breaches involving personal or corporate data.

“Despite the significant progress that A.I. and machine learning (ML) have made in a number of different application domains, these technologies are also vulnerable to attacks that can cause spectacular failures with dire consequences,” the NIST report warns in its introduction.

While the NIST report is primarily written for A.I. developers, other tech and cybersecurity pros can benefit from reading the document and incorporating its lessons into their skill sets, especially as A.I. becomes a greater part of their day-to-day responsibilities.

“Understanding the evolving threat landscape and the techniques adversaries are using to manipulate A.I. is key and critical for defenders to be able to test these use cases against their own models to effectively secure their A.I. systems and to defend against A.I.-powered attacks,” said Nicole Carignan, vice president of strategic cyber A.I. at security firm Darktrace.

The NIST report offers a guideline for how tech professionals should approach A.I., which can make them more valuable to their current organization or potential employers. Several security experts and industry insiders offered their views on the three key takeaways from the document.

A.I. Security Matters Right Now

The NIST paper outlines several significant security issues A.I. and generative A.I. technologies are vulnerable to, whether introduced by a malicious actor or by bad data used to train the models.

The four major threats NIST identifies include:

  • Evasion attacks: These occur after an A.I. model is deployed, when an attacker subtly alters an input to change how the system responds to it.
  • Poisoning attacks: This technique introduces corrupt or malicious data during training to damage—or poison—the model before deployment.
  • Privacy attacks: These incidents involve gathering private personal data or sensitive company information by exploiting weaknesses in the model.
  • Abuse attacks: In this scenario, an attacker inserts incorrect or dubious information into a legitimate source (such as a webpage or online document) that an A.I. then absorbs as part of its training.

“There are many opportunities for bad actors to corrupt this data—both during an A.I. system’s training period and afterward, while the A.I. continues to refine its behaviors by interacting with the physical world,” the NIST authors note. “This can cause the A.I. to perform in an undesirable manner. Chatbots, for example, might learn to respond with abusive or racist language when their guardrails get circumvented by carefully crafted malicious prompts.”
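The poisoning scenario NIST describes can be illustrated with a deliberately tiny toy model. The sketch below (a hypothetical example, not taken from the NIST report) trains a one-dimensional nearest-centroid classifier twice: once on clean labels, and once after an attacker flips two training labels, dragging one class centroid toward the other and degrading test accuracy.

```python
# Toy data-poisoning demonstration: a label-flipping attack against a
# minimal nearest-centroid classifier. All data is made up for illustration.

def train_centroids(points, labels):
    """Compute the mean feature value (centroid) for each class label."""
    centroids = {}
    for cls in set(labels):
        members = [p for p, l in zip(points, labels) if l == cls]
        centroids[cls] = sum(members) / len(members)
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda cls: abs(centroids[cls] - x))

def accuracy(centroids, test_points, test_labels):
    hits = sum(predict(centroids, x) == y
               for x, y in zip(test_points, test_labels))
    return hits / len(test_points)

train_x = [0, 1, 2, 3, 10, 11, 12, 13]
clean_y = [0, 0, 0, 0, 1, 1, 1, 1]
# The attacker flips the labels of the points at 10 and 11 to class 0,
# pulling class 0's centroid from 1.5 up to 4.5.
poisoned_y = [0, 0, 0, 0, 0, 0, 1, 1]

test_x, test_y = [1, 2, 8, 12], [0, 0, 1, 1]

clean_acc = accuracy(train_centroids(train_x, clean_y), test_x, test_y)
poisoned_acc = accuracy(train_centroids(train_x, poisoned_y), test_x, test_y)
print(clean_acc, poisoned_acc)  # prints 1.0 0.75
```

Flipping just two of eight training labels is enough to misclassify the borderline test point at 8, which is the essence of the threat: the model still "works," but its decision boundary has quietly moved in the attacker's favor.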

The rapid emergence of various A.I. tools over the past year demonstrates how quickly things can change for the cybersecurity workforce and why tech pros need to remain up-to-date, especially regarding security, said Dean Webb, cybersecurity solutions engineer with Merlin Cyber.

“While A.I. defensive tools will help to counterbalance most A.I.-driven attacks, the A.I.-enhanced generation of phishing and other social engineering attacks goes up directly against often-untrained humans,” Webb told Dice. “We will have to find better means of automating defenses on corporate as well as personal emails, texts and chatbots to help us hold the line when it comes to A.I.-enhanced social engineering.”

While large companies such as OpenAI and Microsoft can deploy specialized red teams to test their A.I. products for vulnerabilities, other organizations don’t have the experience or resources to do so. Still, with generative A.I. becoming more popular, enterprises will need security teams that understand the technologies and their vulnerabilities.

“As A.I. is used in more and more software systems, the task of securing A.I. against Adversarial Machine Learning (AML) attacks may increasingly fall under the responsibility of organizational security departments,” said Theus Hossman, director of data science at Ontinue. “In anticipation of this shift, it’s important that CISOs and security experts acquaint themselves with these emerging threats and integrate this knowledge into their broader security strategies.”

Build Secure A.I. Code and Applications

The NIST report details how generative A.I. LLMs can be corrupted during the training process.

The possibility of corruption during development also demonstrates that tech professionals, developers and even cybersecurity workers need to take the same approach to A.I. as they would when creating secure code for any other type of application.

“A.I. safety and A.I. innovation go hand-in-hand. Historically, security was an afterthought in the development of A.I. models, leading to a skills gap between security practitioners and A.I. developers,” Darktrace’s Carignan told Dice. “As we continue to embark on the A.I. revolution, innovation research and information sharing across the industry is essential for both A.I. developers and security practitioners to expand their knowledge.”

As the technology becomes more ingrained in organizations’ infrastructure, developing A.I. models and anticipating how they can be corrupted will be an essential skill for developers and for security teams hunting for vulnerabilities, noted Mikhail Kazdagli, head of A.I. at Symmetry Systems.

“When A.I. algorithms are trained on data that is incorrect, biased, or unrepresentative, they can develop flawed patterns and biases. This can lead to inaccurate predictions or decisions, perpetuating existing biases or creating new ones,” Kazdagli told Dice. “In extreme cases, if the data is maliciously tampered with, it can lead to unpredictable or harmful behavior in A.I. systems. This is particularly significant when the A.I. is employed in decision-making processes. The integrity and quality of the data are thus critical in ensuring that A.I. systems function as intended and produce fair and reliable outcomes.”

Adversaries Understand A.I. … and Tech Pros Should, Too

Since ChatGPT’s release in November 2022, researchers have warned that adversaries—whether cybercriminals or sophisticated nation-state actors—are likely to take advantage of these new platforms.

Already, phishing and other cyber threats have been linked to the malicious use of generative A.I. technologies, and these trends are likely to increase, the NIST paper noted. This means tech and cybersecurity pros must know about the vulnerabilities inherent in A.I. models and how adversaries exploit these flaws.

“Threat actors and adversaries are not only seeking to utilize A.I. to optimize their operations, but geopolitical threat actors are also looking to gain valuable A.I. intellectual property. Adversaries are looking for vulnerabilities to obtain valuable IP—like models or weights used within models—or the ability to extract the sensitive data the model was trained on,” Carignan explained. “Attackers could have various AML goals like poisoning the competition, reducing accuracy to outperform competitors or to control the processing or output of a machine learning system to be used maliciously or to impact critical use cases of A.I.”

As A.I. and machine learning applications become more commonplace, not only will tech and cybersecurity pros need to understand what they can and cannot do, but that knowledge will need to be disseminated throughout an organization, noted Gal Ringel, CEO at Mine, a data privacy management firm.

This will require knowing how attackers are exploiting the technology and what defenses can prevent threats from spiraling out of control.

“For those unaware of the full extent of new attack techniques, putting an infrastructure in place that is agile and flexible enough to hold up to them will be virtually impossible,” Ringel told Dice. “Considering the evolution of deepfakes and audio cloning, among other things, a baseline of A.I. literacy is going to become a must for everyone using the internet in a few years, and frameworks like an updated NIST can provide a baseline for the first wave of people educating themselves on the topic.”
