US Senate Proposes New Bill to address AI security concerns – MediaNama.com


A new Bill, the Secure Artificial Intelligence Act, has been proposed in the US Senate. The Bill aims to tackle security vulnerabilities associated with artificial intelligence systems. It proposes creating a database of all confirmed or attempted security attacks on significant AI systems, and establishing a “Security Centre” at the National Security Agency (NSA) to engage in security research for AI systems.

The Bill defined an artificial intelligence security vulnerability as “a weakness in an artificial intelligence system that could be exploited by a third party to subvert, without authorization, the confidentiality, integrity, or availability of an artificial intelligence system.” Examples include data poisoning (maliciously modifying a model’s training dataset), evasion attacks (altering inputs to change a model’s behaviour), privacy-based attacks, and abuse attacks. The Bill also distinguished between artificial intelligence safety incidents, in which a person can be harmed, and artificial intelligence security incidents, in which information can be extracted from a model.

The Bill, introduced by Senators Mark Warner and Thom Tillis, will have to pass through committee before it can be taken up by the full Senate.

Create a database to track vulnerabilities

The Bill called for the National Institute of Standards and Technology and the Cybersecurity and Infrastructure Security Agency to update the existing “National Vulnerability Database” to function as a public repository of artificial intelligence security vulnerabilities. The database must allow private sector entities, public sector organizations, civil society groups, and academic researchers to report such incidents. It is required to contain all confirmed or suspected artificial intelligence security and safety incidents while maintaining the confidentiality of the affected party. The Bill also required that incidents be classified in a manner that makes them easier to access. It called to prioritise incidents concerning models used in critical infrastructure, safety-critical systems, and large-scale commercial or public sector entities, as well as incidents that could have a “catastrophic impact on the people or economy of the United States.”

Additionally, the Bill proposed updating the “Common Vulnerabilities and Exposures Program”, the current reference guide and classification system for information security vulnerabilities sponsored by the Cybersecurity and Infrastructure Security Agency. It called for an assessment of how the program’s process for documenting AI security and safety incidents can be improved.

Establish an Artificial Intelligence Security Centre

The Security Centre, to be established by the NSA, must make a research test-bed available to private sector and academic researchers on a subsidized basis for artificial intelligence security research. The centre must develop guidance to prevent “counter-artificial intelligence techniques”, which are techniques meant to extract information from a model or modify its behaviour. The Bill also proposed promoting the use of artificial intelligence for national security purposes and making the models available to other federal agencies on a cost-recovery basis. It further called to provide researchers with resources as outlined in President Joe Biden’s executive order on ensuring AI safety.

Evaluate consensus standards and supply chain risks

The Bill also acknowledged the need to update certain practices in light of AI. It called to “evaluate whether existing voluntary consensus standards for vulnerability reporting effectively accommodate artificial intelligence security vulnerabilities.” In other words, the Bill postulates that the widely accepted standards for reporting security vulnerabilities may need to be updated with the rise of artificial intelligence.

Further, it called for a reevaluation of best practices concerning supply chain risks associated with training and maintaining artificial intelligence models. These could include risks associated with:

  • reliance on a remote workforce and foreign labour for tasks like data collection, cleaning, and labelling
  • human feedback systems used to refine AI systems
  • inadequate documentation of training data and test data storage, as well as limited provenance of training data
  • the use of large-scale, open-source datasets by public and private sector developers in the United States
  • the use of proprietary datasets containing sensitive or personally identifiable information.
