Senators this week introduced a new bill that would update cybersecurity information-sharing programs to better incorporate AI systems, in an effort to improve the tracking and processing of security incidents and risks associated with AI.
With both private sector companies and U.S. government agencies trying to better understand the security risks and threats associated with generative AI and the deployment of AI systems across various industries, the Secure Artificial Intelligence Act of 2024 would focus specifically on collecting more information about the vulnerabilities and security incidents associated with AI. Currently, the existing processes for vulnerability information sharing – including the National Institute of Standards and Technology’s (NIST) National Vulnerability Database (NVD) and the CISA-sponsored Common Vulnerabilities and Exposures (CVE) program – “do not reflect the ways in which AI systems can differ dramatically from traditional software,” Sens. Mark Warner (D-Va.) and Thom Tillis (R-N.C.) said in the overview of their new bill.
“When it comes to security vulnerabilities and incidents involving artificial intelligence (AI), existing federal organizations are poised to leverage their existing cyber expertise and capabilities to provide critically needed support that can protect organizations and the public from adversarial harm,” according to the overview of the bill. “The Secure Artificial Intelligence Act ensures that existing procedures and policies incorporate AI systems wherever possible – and develop alternative models for reporting and tracking in instances where the attributes of an AI system, or its use, render existing practices inapt or inapplicable.”
Under the new bill, these existing databases would need to better incorporate AI-related vulnerabilities, or a new process would need to be created to track the unique risks associated with AI, including data poisoning, evasion attacks and privacy-based attacks. Researchers have already identified various flaws in and around the infrastructure used to develop AI models, and in several cases these have been tracked through known databases and programs. Last year, for instance, the NVD added critical flaws in platforms used for hosting and deploying large language models (LLMs), such as an OS command injection bug (CVE-2023-6018) and an authentication bypass (CVE-2023-6014) in MLflow, a platform that streamlines machine learning development.
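To make concrete why such attacks differ from the memory-corruption and injection bugs that traditional vulnerability databases were built around, here is a minimal, purely illustrative sketch of a label-flipping data-poisoning attack. The dataset, model and numbers are stand-ins, not anything described in the bill.

```python
# A minimal sketch (not from the bill) of a label-flipping data-poisoning
# attack, one of the AI-specific risks the legislation names. Everything
# here is illustrative; the dataset and model are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: an attacker who controls part of the training pipeline
# flips 30% of the training labels before the model is fit.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

In this toy setup the poisoned model's test accuracy drops well below the clean baseline even though no code was ever modified, which is the kind of degradation an AI-specific tracking process would need to capture.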
Another priority is to establish a voluntary public database that would track reports of safety and security incidents related to AI. The reported incidents would involve AI systems widely used in the commercial or public sectors, or AI systems used in critical infrastructure or safety-critical systems, where an incident would result in “high-severity or catastrophic impact to the people or economy of the United States.”
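The bill leaves the structure of such a database to the implementing agencies. As a rough, hypothetical illustration, a single report record might capture fields like the affected system, the sector and an assessed severity; every field name below is an assumption for illustration only.

```python
# A hypothetical sketch of one record in a voluntary AI incident database.
# The bill does not define a schema; every field name here is an assumption.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncidentReport:
    reported_on: date
    system_name: str          # the affected model or platform
    sector: str               # commercial, public, or critical infrastructure
    incident_type: str        # e.g., "data poisoning", "evasion", "privacy"
    severity: str             # e.g., "high-severity" or "catastrophic"
    description: str = ""
    cve_ids: list[str] = field(default_factory=list)  # if also tracked in the NVD/CVE

report = AIIncidentReport(
    reported_on=date(2024, 5, 1),
    system_name="example-llm-serving-platform",
    sector="critical infrastructure",
    incident_type="evasion",
    severity="high-severity",
    cve_ids=["CVE-2023-6018"],
)
print(report)
```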
The bill would also establish an Artificial Intelligence Security Center at the NSA, which would serve as an AI research testbed for private sector researchers and help the industry develop guidance on AI security best practices. Part of this work would be to develop an approach to what the bill calls “counter-artificial intelligence”: tactics for manipulating an AI system in order to subvert the confidentiality, integrity or availability of that system. Additionally, it would direct CISA, NIST and the Information and Communications Technology Supply Chain Risk Management Task Force to create a “multi-stakeholder process” for developing best practices related to supply chain risks associated with training and maintaining AI models.
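As an illustration of the “counter-artificial intelligence” tactics the bill describes, the sketch below shows a simple gradient-based evasion attack (in the style of the fast gradient sign method) against a linear classifier. It is a toy example under assumed parameters, not anything the bill specifies.

```python
# A minimal sketch of an evasion-style "counter-AI" tactic: an FGSM-style
# perturbation against a linear classifier. Purely illustrative; the
# epsilon value and model are assumptions, not drawn from the bill.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, label, eps=0.5):
    # The gradient of the logistic loss w.r.t. the input is (p - y) * w,
    # so stepping along its sign pushes the sample across the boundary.
    p = 1 / (1 + np.exp(-(x @ w + b)))
    return x + eps * np.sign((p - label) * w)

X_adv = np.array([fgsm(x, label) for x, label in zip(X, y)])
print("accuracy on clean inputs:    ", model.score(X, y))
print("accuracy on perturbed inputs:", model.score(X_adv, y))
```

Small, targeted input perturbations like these can sharply reduce a model's accuracy while leaving the underlying software untouched, which is why the bill treats counter-AI as distinct from conventional exploitation.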
The Secure Artificial Intelligence Act of 2024 joins an influx of legislative proposals over the past year, and an overall flurry of government activity such as the White House’s AI executive order in 2023, aimed at better understanding the security risks associated with AI. The Testing and Evaluation Systems for Trusted AI Act, proposed in October 2023 by Sens. Jim Risch (R-Idaho) and Ben Ray Lujan (D-N.M.), would require NIST and the Department of Energy to develop testbeds for assessing AI tools and supporting “safeguards and systems to test, evaluate, and prevent misuse of AI systems.” Warner has also introduced previous bills centered on AI security, including the Federal Artificial Intelligence Risk Management Act in November 2023, which would establish guidelines for mitigating AI-related risks within the federal government.
“As we continue to embrace all the opportunities that AI brings, it is imperative that we continue to safeguard against the threats posed by – and to – this new technology, and information sharing between the federal government and the private sector plays a crucial role,” said Warner in a statement. “By ensuring that public-private communications remain open and up-to-date on current threats facing our industry, we are taking the necessary steps to safeguard against this new generation of threats facing our infrastructure.”