Top cyber and intelligence officials told a Senate panel Wednesday that the U.S. is prepared to handle election interference threats later this year, but stressed that AI-generated content will make it harder for authorities to identify and debunk sham material.
The remarks came just under six months before the November U.S. election, which is taking place alongside dozens of other elections around the world this year.
“Since 2016, we’ve seen declassified intelligence assessments name a whole host of influence actors who have engaged in, or at least contemplated, election influence and interference activities — including not only Iran, Russia and the PRC, but also Cuba, Venezuela, Hezbollah and a range of foreign hacktivists and profit-motivated cybercriminals,” said Senate Intelligence Committee chair Mark Warner, D-Va., in opening remarks.
“Have we thought through the process of what we do when one of these [election interference] scenarios occurs?” said vice chair Marco Rubio, R-Fla.
“If tomorrow there was a … very convincing video of a candidate that … comes out within 72 hours to go before election day of that candidate saying some racist comment or doing something horrifying, but it’s fake — who is in charge of letting people know this thing is fake?” he said.
Director of National Intelligence Avril Haines touted the intelligence community’s tools for detecting and dismantling fake election content, including a DARPA-backed multimedia authentication tool.
CISA Director Jen Easterly also said her agency has been working directly with AI firms like OpenAI to handle election threats, encouraging them to direct users to web pages run by the National Association of Secretaries of State that provide election resources in a bipartisan manner.
She said Americans should be confident in the security of the coming election but stressed the U.S. can’t be complacent, calling the threats facing voters in November “more complex than ever.”
The hearing underscored the practical challenges of handling election information and results: whom should Americans trust on the final vote, and if false information proliferates on social media, which U.S. authority should tell Americans the content is a sham?
Lawmakers butted heads with Haines over the process for notifying the public about where fake information originates, and whether ODNI should act as an arbiter that polices content or simply attribute it to malicious actors.
Sen. James Risch, R-Idaho, brought up a contested 2020 letter in which former intelligence officials suggested the infamous Hunter Biden laptop story was Russian disinformation, calling it “deplorable.”
He asked Haines who would speak up to say the letter was “obviously false.”
“I don’t think that it’s appropriate for me to be determining what is true and what is false in such circumstances,” Haines replied, arguing it is not her job to pass judgment on what current or former intelligence officials declare.
Sen. Angus King, I-Maine, said ODNI should focus on whether election claims are part of foreign disinformation operations, which could sometimes mean declassifying intelligence community information to warn the public.
“I don’t want the U.S. government to be the truth police,” he said. “That’s not the job of the U.S. government.”
Consumer-facing AI tools have given ordinary people a trove of ways to increase productivity in their workplaces and day-to-day lives, but researchers and officials for months have expressed fears over how the platforms can be used to sow political discord through technologies like voice cloning and image creation.
Tech and AI companies in February committed to watermarking AI-generated content linked to elections, though some critics question whether the voluntary measures are strong enough to curb false and misleading images and text disseminated over social media.
Officials fear that a loss of faith in electoral systems at home could fuel a repeat of the widespread voter fraud claims that followed the 2020 presidential election and culminated in the January 6 attack on the U.S. Capitol.
On the domestic front, election workers worry they will face threats of violence from voters who refuse to accept the results.
Key federal agencies in March resumed discussions with social media firms about removing disinformation from their sites as the November election nears, Warner said last week, a stark reversal after the Biden administration froze communications with the platforms for months amid a pending First Amendment case before the Supreme Court.
“If the bad guy started to launch AI-driven tools that would threaten election officials in key communities, that clearly falls into the foreign interference category,” he said at the time, though he noted such activity may not fit a formal definition of misinformation and could be deemed a “whole other vector of attack.”
AI companies have been found sweeping chat logs to root out malicious actors and hackers seeking to refine their methods for breaking into networks, Nextgov/FCW previously reported. Among other uses, foreign propagandists have sharpened their disinformation campaigns by using generative AI to make fraudulent English-language posts sound more natural.
“We’re going to count on you,” Warner told the witnesses in closing remarks. “This is the most important election ever.”