The flurry of AI-related activity has certainly begun. Earlier this week, the Department of Commerce’s National Institute of Standards and Technology (NIST) released new draft guidance to improve the safety, security and trustworthiness of AI systems. The guidance responds to several directives and 180-day goals set forth in President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order), issued on October 30, 2023.
Big Picture
NIST issued three draft publications to assist organizations in (1) addressing the unique risks posed by generative AI, (2) implementing secure software development practices for generative AI and dual-use foundation models and (3) combating the rise of “synthetic” content (content created or altered by AI) by promoting transparency in digital content. NIST also issued a fourth draft publication focused on driving worldwide development and implementation of AI standards. These four publications are initial drafts that NIST released to solicit public comment before publishing final versions later this year. Each publication identifies specific topics on which NIST seeks feedback. Comments on each publication are due June 2, 2024.
In addition to these four drafts, NIST announced “NIST GenAI,” a program to “assess generative AI technologies developed by the research community from around the world.” Finally, as part of the Department of Commerce’s AI initiative, the U.S. Patent and Trademark Office (USPTO) separately published a request for comment on April 30, 2024, soliciting feedback on the impact of AI on the definitions and interpretations of (1) prior art, (2) the knowledge of a person having ordinary skill in the art (PHOSITA) and (3) the patentability of claimed inventions. The USPTO must receive written comments by July 29, 2024.
Regardless of an organization’s industry focus, it is clear that federal and state governments are keen to ensure the safe, transparent and fair use of AI. While the NIST guidance is not yet final, these documents, together with other AI-related laws, regulations and guidelines, both proposed and finalized, signal where government oversight and expectations are heading. It would be prudent for organizations to begin developing a framework for evaluating both the development and the deployment of AI.
The Details
The first draft publication NIST issued is the Generative AI Profile (GAI Profile), intended to help organizations identify the unique risks posed by generative AI. The GAI Profile proposes actions for managing those risks according to an organization’s unique goals and priorities. It identifies 12 risks unique to or exacerbated by generative AI: (1) access to Chemical, Biological, Radiological, and Nuclear (CBRN) information; (2) confabulation, also known as “hallucinations”; (3) dangerous or violent recommendations; (4) data privacy; (5) environmental impacts; (6) human-AI configuration that introduces human foibles or bias; (7) information integrity; (8) information security; (9) violation of intellectual property rights; (10) obscene, degrading and/or abusive content; (11) toxicity, bias and homogenization in data inputs and outputs; and (12) lack of transparency that undermines value chain and component integration. The GAI Profile is meant to be a companion to the broader AI Risk Management Framework (AI RMF) that NIST released last year, and it includes a roughly 50-page grid of action items that organizations can implement to help mitigate the generative AI risks listed above.
The second draft NIST publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (GAI SSDF Guide), is designed to help companies implement secure software development practices for generative AI and dual-use foundation models. The guide augments the Secure Software Development Framework that NIST published in 2022, expanding on best practices recommended throughout the software development life cycle and adding notes and recommendations for those who produce, acquire or use generative AI or dual-use foundation models. It provides a starting point for planning and implementing a risk-based approach to software design for AI models, and it discusses how developers and AI users should address training data, data collection processes and quality assurance metrics to prevent AI poisoning, bias, homogeneity and tampering.
The third draft NIST publication, Reducing Risks Posed by Synthetic Content, offers technical advice on how to promote transparency in digital content and combat the rise of “synthetic” content, which has been created or altered by AI. The guide analyzes existing standards, tools, methods and practices, and it proposes the development of further standards and techniques to authenticate content and track its provenance, label synthetic content, prevent generative AI from producing child sexual abuse material or non-consensual imagery of real individuals, test software, and audit and maintain synthetic content. These techniques are intended to improve public trust in digital content through science-backed standards.
The fourth NIST publication, A Plan for Global Engagement on AI Standards (Engagement Plan), focuses on driving worldwide development and implementation of AI standards. The Executive Order previously directed the U.S. government to seek the benefits of AI while mitigating risks such as “exacerbating inequality, threatening human rights, and causing other harms” to people around the world. In that spirit, the Engagement Plan outlines how the U.S. government will coordinate with allies and partners to drive the development and implementation of AI-related consensus standards and to foster cooperation and information-sharing.
Separately, as noted above, NIST also announced “NIST GenAI.” By centrally evaluating generative AI tools, the NIST GenAI program seeks to (1) monitor the creation of benchmark datasets, (2) facilitate the development of content authenticity detection technologies for different modalities (e.g., text, audio, imaging, video, code), (3) conduct different analytical tests and (4) promote the development of technologies that can identify the source of fake or misleading information.
NIST GenAI’s first pilot aims to measure and understand how well AI systems can differentiate between synthetic and human-generated content in “text-to-text” and “text-to-image” cases. NIST GenAI welcomes diverse teams, including academic, industry and other research labs, to contribute to generative AI research through the program. Pilot participants will either test their system’s ability to generate synthetic content that is indistinguishable from human-produced content or be tested on their system’s ability to detect synthetic content created by large language models and generative AI models. Registration for pilot participation opens in May 2024, and submissions are due in August 2024.
Manatt will continue to closely monitor federal and state developments related to AI and offer guidance on the evolving regulatory landscape, as we expect activity to continue over the latter half of this year. Meanwhile, businesses that intend to implement generative AI models should stay vigilant about changing guidelines and regulations and apply recommended best practices.
For more information and resources, please visit our dedicated Artificial Intelligence webpage.