The U.S. Department of Commerce released a guiding document Tuesday for operations of its AI Safety Institute, run out of the National Institute of Standards and Technology. The institute was created in November 2023 to support the mandates given to the Department of Commerce under President Joe Biden’s October 2023 executive order.
The new Strategic Vision contains three focal goals: advance the science of AI safety; articulate, demonstrate and disseminate the practices of AI safety; and support institutions and entities coordinating AI safety protocols.
“Safety breeds trust, trust provides confidence in adoption, and adoption accelerates innovation,” the Strategic Vision says.
A key challenge the Strategic Vision aims to confront is the lack of global standards and testing metrics to effectively evaluate safety in AI systems. It also looks to bring more coordinated global effort to developing testing and validation metrics for AI systems, as well as asking for national laboratory and other federal agency participation.
Conducting assessments, A/B testing and red teaming efforts in emerging systems to assess various security threats is among the methods the Strategic Vision champions to ensure AI systems are rights- and security-preserving prior to deployment.
Three overarching words — “possible,” “actionable,” and “sustainable” — will also serve as guiding principles in AISI’s ongoing work to better evaluate the societal impact of advanced AI systems.
The AISI will then work to disseminate the methodologies and metric systems that arise from thoroughly testing fledgling AI technologies.
“AISI’s projects may contribute to scientific reports, articles, guidance, and practices to help ensure that rigorous AI safety research, testing, and guidance inform major domestic AI safety legislation or policy,” the guidance reads.
Just last month, the AISI added several new members to its leadership team.