The actions were among several announced by the Department of Commerce at the roughly six-month mark after Biden’s executive order on artificial intelligence.

United States Department of Commerce Building (Photo by James Leynse/Corbis via Getty Images)

The National Institute of Standards and Technology announced a new program to evaluate generative AI and released several draft documents on the use of the technology Monday, as the government hit a milestone on President Joe Biden’s AI executive order.

The Department of Commerce’s NIST was among multiple agencies that on Monday announced actions corresponding with the October order, marking 180 days since its issuance. The announcements were largely focused on mitigating the risks of AI and included several steps aimed specifically at generative AI.

“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” Commerce Secretary Gina Raimondo said in a statement. “With these resources and the previous work on AI from the Department, we are continuing to support responsible innovation in AI and America’s technological leadership.”

Among the four documents released by NIST on Monday was a draft version of a publication aimed at helping organizations identify generative AI risks and strategies for using the technology. That document will serve as a companion to the agency’s already-published AI risk management framework, as outlined in the order, and was developed with input from a public working group with more than 2,500 members, according to a release from the agency.


The agency also released a draft of a companion resource to its Secure Software Development Framework that outlines software development practices for generative AI tools and dual-use foundation models. The EO defined dual-use foundation models as those that are “trained on broad data,” are “applicable across a wide range of contexts,” and “exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters,” among other things. 

“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation,” Laurie E. Locascio, NIST director and undersecretary of commerce for standards and technology, said in a statement.

NIST also released draft documents on reducing risks of synthetic content — that which was AI-created or altered — and a plan for developing global AI standards. All four documents have a comment period that ends June 2, according to the Commerce release.

Notably, the agency also announced its “NIST GenAI” program for evaluating generative AI technologies. According to the release, that will “help inform the work of the U.S. AI Safety Institute at NIST.” Registration for a pilot of those evaluations opens in May.

The program will evaluate generative AI with a series of “challenge problems” that will test the capabilities of the tools and use that information to “promote information integrity and guide the safe and responsible use of digital content,” the release said. “One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording.”


The release and focus on generative AI come as other agencies similarly took action Monday on federal use of such tools. The Office of Personnel Management released its guidance for federal workers’ use of generative AI tools, and the General Services Administration released a resource guide for federal acquisition of generative AI tools.

Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.