RSAC AI is a double-edged sword: the government can see ways the technology can be used to protect Americans, and ways it can be used to attack them, says US Homeland Security Secretary Alejandro Mayorkas.
On the one hand, artificial intelligence can automate network defenses so they become more efficient and smarter at fending off threats to US critical infrastructure and protecting the nation’s citizens from harm, Mayorkas told the RSA Conference in San Francisco during a keynote session.
However, terrorists and criminals are apparently also tapping the tech to automate attacks against those same critical assets and to perpetrate crimes including child sexual exploitation and abuse, some of the very things Homeland Security is working to defend against.
Whenever government officials talk about using AI for day-to-day operations, the usual concerns spring to mind, such as the potential for agents to misuse surveillance technologies as well as the biases inherent in machine learning.
Heading off those fears, Mayorkas said his department’s Office for Civil Rights and Civil Liberties is there, at an “institutionalization” level, to balance the interests of protecting the Land of the Free with the need to respect citizens’ privacy.
Last month, Homeland Security also set up its own Artificial Intelligence Safety and Security Board. And in February, it rolled out the AI Corps initiative, which aims to hire 50 tech experts in the field this year.
That AI safety and security board met for the first time this week, he said. The board has drawn criticism for being essentially stacked with Big Tech big cheeses, who may be inclined to put profits before people’s privacy and safety, but Mayorkas and responsible AI advocate Rumman Chowdhury, who also sits on the board, pushed back on that critique during their joint keynote.
“What we heard yesterday is an articulation of the fact that the civil liberties, civil rights implications of AI really are part and parcel to safety,” Mayorkas said. “We cannot consider safety and safe use of AI to be a potential perpetuation of implicit bias, for example.”
He also pointed to three pilot programs underway at Homeland Security that, he said, help the department advance its mission. One involves using large language models (LLMs) to assist in Homeland Security investigations.
“We may have a task force that is investigating a narcotics case on the West Coast, and a different group of agents working an international money laundering scheme on the East Coast. And there is no perceived connectivity between the work of the two,” Mayorkas explained.
“But we are now ingesting their criminal investigative reports into a database, and we will be able to use AI to identify connections that we otherwise would not be aware of,” he added.
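Mayorkas didn't describe the plumbing behind that connection-finding, but one plausible shape for it is to turn each report into a vector and flag pairs that sit unusually close together. The toy sketch below, with invented case names and an arbitrary threshold, uses crude bag-of-words vectors and cosine similarity so it runs standalone; a real pipeline would use learned embeddings and proper entity resolution, and nothing here should be read as the department's actual system.

```python
# Toy illustration of cross-report connection-finding; not DHS's system.
# A real pipeline would use learned embeddings and entity resolution;
# bag-of-words vectors are used here so the example runs standalone.
from collections import Counter
import math

# Hypothetical case summaries, invented for this sketch
reports = {
    "west-coast-narcotics": "wire transfers to shell company Acme Holdings and fentanyl shipments via the port",
    "east-coast-laundering": "shell company Acme Holdings layering wire transfers through casinos",
    "unrelated-fraud": "phishing emails harvesting retail gift card credentials",
}

def vectorize(text: str) -> Counter:
    """Crude stand-in for a document embedding: lowercase term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Flag report pairs whose similarity crosses an arbitrary threshold
names = list(reports)
for i, left in enumerate(names):
    for right in names[i + 1:]:
        score = cosine(vectorize(reports[left]), vectorize(reports[right]))
        if score > 0.3:
            print(f"possible connection: {left} <-> {right} (score {score:.2f})")
```

Run as-is, the sketch links only the two cases that share the shell company and the wire-transfer pattern, which is the kind of otherwise-invisible overlap Mayorkas was gesturing at.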
Another pilot will assist the Federal Emergency Management Agency (better known as FEMA) to help “resource-poor, target-rich communities apply for grants to make sure they are not disenfranchised,” Mayorkas said.
These are federal funds made available for pre- and post-emergency and disaster relief, to aid communities hit by natural disasters and other critical events.
And the third program will use LLMs to train Homeland Security officers who work with refugees and asylum seekers applying for citizenship in the US.
“A refugee officer in training can now actually pose questions to a machine that has been trained to act as a refugee, both substantively and stylistically,” Mayorkas said.
By “stylistically,” he explained: “Very often, people who have gone through trauma tend not to be forthcoming in revealing traumatic experiences, and we have trained the machine to be similarly reticent.”
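The department hasn't published how that simulated refugee is built, but persona role-play of this kind typically comes down to a system prompt that pins both the backstory and the disclosure behavior. The sketch below is only a guess at that shape: the persona text is invented, and call_llm() is a canned stand-in for whatever model endpoint the trainer would actually use.

```python
# Hypothetical sketch of an interview-training persona; DHS has not
# published how its pilot is built. call_llm() is a canned stand-in
# for a real model endpoint so the loop runs without external services.
SYSTEM_PROMPT = """You are role-playing an asylum applicant for officer training.
Persona: you fled your home country after threats against your family.
Style: you are guarded. Do not volunteer traumatic details; reveal them
only gradually, when the interviewer earns trust with specific, gentle
follow-up questions."""

def call_llm(messages: list[dict]) -> str:
    # Stand-in: a real implementation would send `messages` to a model API
    return "I... would rather not talk about that part yet."

def training_session() -> None:
    """Simple turn-taking loop: officer asks, simulated applicant answers."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        question = input("officer> ")
        if not question:
            break
        messages.append({"role": "user", "content": question})
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        print(f"applicant> {reply}")

if __name__ == "__main__":
    training_session()
</parameter>
```

Encoding the reticence in the instructions rather than in fine-tuning data would be the simplest way to get the gradual-disclosure behavior Mayorkas describes, though the department may well do it differently. ®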