NIST launches initiatives to enhance AI safety and security

On Monday, the National Institute of Standards and Technology (NIST) announced efforts to help the public improve the safety, security and trustworthiness of artificial intelligence. The efforts include guidance on detecting, authenticating and labeling synthetic content, and competitions to create tools that do just that.

NIST’s efforts were accompanied by the U.S. Patent and Trademark Office publishing a request for public comment (RFC) on how AI could affect the patentability of inventions. The office released guidance in February on the patentability of inventions created with the assistance of AI. The newest request focuses on the more technical questions of what counts as “prior art” and how generative AI changes the level of skill attributed to “a person having ordinary skill in the art,” two concepts central to patent law.

Two NIST documents released Monday address AI risks. While banks are not specifically required to maintain AI risk management frameworks, regulators generally expect them to maintain cybersecurity risk frameworks, which can cover risks posed by AI. For banks large enough to develop their own software and AI systems, one of the new documents could also serve as a template for AI risk management, much as many banks base their cybersecurity frameworks on NIST’s.

The efforts directly respond to President Joe Biden’s October executive order on AI, which among other things called for regulatory agencies to create and clarify regulations on AI. In the six months since that executive order, the Commerce Department, which houses both NIST and the patent office, has been working to research and develop guidance needed to safely harness the potential of AI while minimizing its risks, according to U.S. Secretary of Commerce Gina Raimondo.

“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” Raimondo said. 

As part of the announcement, NIST issued four guidance documents on Monday. The first two address the risks of generative AI, the technology behind chatbots and tools that create images and video from text, and serve as companions to NIST’s AI Risk Management Framework and Secure Software Development Framework.

The first document provides a full overview of the risks associated with generative AI; the second focuses on secure development of the models. The guidance could inform banks’ risk assessments of generative AI, for example when weighing the risks and rewards of using it in customer service.

The third publication seeks to reduce the risks of synthetic content generated or altered by generative AI. Specifically, it evaluates the existing practices that generative AI companies deploy to disclose the provenance of the content their models generate. These methods include digital watermarking, which involves invisibly modifying generated images to mark them as created by generative AI.
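
The NIST document does not prescribe a particular watermarking technique, but a toy example helps illustrate the idea. The sketch below hides a short provenance tag in the least significant bits of an image’s red channel, one simple way to “invisibly modify” a generated image. The tag string, function names, and the choice of the Pillow and NumPy libraries are illustrative assumptions, not anything NIST specifies; real provenance systems use far more sophisticated schemes.

```python
# Minimal sketch of least-significant-bit (LSB) image watermarking.
# Illustrative only; not a NIST-specified method.
import numpy as np
from PIL import Image

MARK = "AI-GENERATED"  # hypothetical provenance tag, chosen for this example


def embed_watermark(image: Image.Image, message: str = MARK) -> Image.Image:
    """Hide `message` in the lowest bit of each red-channel pixel."""
    pixels = np.array(image.convert("RGB"))
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    red = pixels[..., 0].flatten()
    if len(bits) > red.size:
        raise ValueError("image too small to hold the message")
    for i, bit in enumerate(bits):
        red[i] = (red[i] & 0xFE) | int(bit)  # overwrite the lowest bit
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)


def extract_watermark(image: Image.Image, length: int = len(MARK)) -> str:
    """Read `length` bytes back out of the red channel's low bits."""
    red = np.array(image.convert("RGB"))[..., 0].flatten()
    bits = "".join(str(p & 1) for p in red[: length * 8])
    data = bytes(int(bits[i : i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")


if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), color=(128, 128, 128))
    marked = embed_watermark(img)
    print(extract_watermark(marked))  # prints "AI-GENERATED"
```

A marker embedded this naively is trivial to strip: re-encoding, resizing, or cropping the image destroys it, which is why watermarking research focuses on robustness.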

While watermarks in particular have shortcomings, strategies like them could improve banks’ ability to identify doctored images of photo IDs and checks. The document also surveys the current state of the art in detecting AI-generated images, which could help banks seeking to spot deepfakes.

The fourth publication is a plan for developing global AI standards. It calls for coordinating with other countries so that, for example, they adopt the same or compatible standards for testing the safety of AI systems and measuring their energy consumption. If followed, the plan could help global banks reduce compliance costs and deliver the same AI-powered products to multiple customer bases.

NIST also opened registration for two parallel competitions on Monday: one soliciting generative AI models that can summarize a set of 25 related documents in fewer than 250 words, and another soliciting models that can determine whether such a summary was written by a human or by AI. The point of the competitions is to better understand how well AI can imitate humans, how well AI can discriminate between human-written and AI-written text, and what factors play into the success of these systems.

NIST promises a similar competition “coming soon” for text-to-image systems, which create images from a prompt. That competition will focus on identifying whether an image was created by a human (as a photograph or drawing, for example) or generated synthetically by AI. It could improve public understanding of how companies can distinguish fake images from real ones.

AI presents risks that are “significantly different from those we see with traditional software,” according to NIST director Laurie E. Locascio, necessitating separate but parallel efforts on secure software development and cybersecurity as a whole.

“These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation,” Locascio said.
