Some privacy advocates say they’re terrified by Google’s announcement this week that it’s testing a way to scan people’s phone calls in real time for signs of financial scams.
Google unveiled the idea Tuesday at Google I/O, its conference for software developers. Dave Burke, a Google vice president for engineering, said the company is trying out a feature that uses artificial intelligence to detect patterns associated with scams and then alert Android phone users when suspected scams are in progress.
Burke described the idea as a security feature and provided an example. Onstage, he got a demonstration call from someone posing as a bank representative, who suggested that he move his savings to a new account to keep it safe. Burke’s phone flashed a notification: “Likely scam: Banks will never ask you to move your money to keep it safe,” with an option to end the call.
“Gemini Nano alerts me the second it detects suspicious activity,” Burke said, using the name of a Google-developed AI model. He didn’t specify what signals the software uses to determine a conversation is suspicious.
The demonstration drew applause from the conference’s in-person audience in Mountain View, California, but some privacy advocates said the idea threatened to open a Pandora’s box as tech companies race to one-up one another on AI-enabled features for consumers. In interviews and in statements online, they said there were numerous ways the software could be abused by private surveillance companies, government agents, stalkers or others who might want to eavesdrop on other people’s phone calls.
Burke said onstage that the feature wouldn’t transfer data off phones, providing what he said was a layer of potential protection “so the audio processing stays completely private.”
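Google hasn’t published how the detection works, and nothing about the feature’s internals has been confirmed. Conceptually, though, a feature like the one Burke described amounts to a loop that transcribes call audio locally, scores each window of transcript with an on-device model and raises an alert when the score passes a threshold. The Kotlin sketch below illustrates that shape only; every name, phrase list and threshold in it is hypothetical, not Google’s.

```kotlin
// Hypothetical sketch of an on-device scam-detection loop.
// None of these names are Google's; Gemini Nano's real interface is unpublished.

// Stand-in for an on-device model that scores a transcript window
// for scam likelihood, from 0.0 (benign) to 1.0 (likely scam).
interface OnDeviceClassifier {
    fun scamScore(transcriptWindow: String): Double
}

// Toy classifier keyed to phrases like the one in Burke's demo.
// A real system would use a learned model, not a keyword list.
class KeywordClassifier : OnDeviceClassifier {
    private val redFlags = listOf(
        "move your money",
        "new account to keep it safe",
        "gift cards"
    )

    override fun scamScore(transcriptWindow: String): Double {
        val lowered = transcriptWindow.lowercase()
        val hits = redFlags.count { lowered.contains(it) }
        return minOf(1.0, hits / 2.0)
    }
}

// Checks each new slice of locally transcribed audio. Nothing in this
// path sends data off the device, matching the privacy claim onstage.
fun checkCallAudio(
    classifier: OnDeviceClassifier,
    transcriptWindow: String,
    threshold: Double = 0.5
) {
    if (classifier.scamScore(transcriptWindow) >= threshold) {
        // A real implementation would post a system notification
        // with an "end call" action, as in the onstage demo.
        println("Likely scam: Banks will never ask you to move your money to keep it safe")
    }
}

fun main() {
    val classifier = KeywordClassifier()
    checkCallAudio(
        classifier,
        "You should move your money to a new account to keep it safe."
    )
}
```

In a production version, the keyword stub would presumably be replaced by a model like Gemini Nano, and the alert would appear as a phone notification with the option to hang up, as in Burke’s demonstration.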
But privacy advocates said on-device processing could still be vulnerable to intrusion by determined hackers, acquaintances with access to phones or government officials with subpoenas demanding audio files or transcripts.
Burke didn’t say what kind of security controls Google would have, and Google didn’t respond to requests for additional information.
“J. Edgar Hoover would be jealous,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, an advocacy group based in New York. Hoover, who died in 1972, was director of the FBI for decades and used wiretaps extensively, including on civil rights figures.
Cahn said the implications of Google’s idea were “terrifying,” especially for vulnerable people such as political dissidents or people seeking abortions.
“The phone calls we make on our devices can be one of the most private things we do,” he said.
“It’s very easy for advertisers to scrape every search we make, every URL we click, but what we actually say on our devices, into the microphone, historically hasn’t been monitored,” he said.
It’s not clear when or whether Google would implement the idea. Burke said onstage that the company would have more to say in the summer. Tech companies frequently test ideas they never release to the public.
Google has wide reach in the mobile phone market because it makes the most widely used version of the Android mobile operating system. About 43% of mobile devices in the U.S. run on Android, and about 71% of mobile devices worldwide do so, according to the analytics firm StatCounter.
“Android can help protect you from the bad guys, no matter how they try to reach you,” Burke said.
Meredith Whittaker, a former Google employee, was among those to criticize the scam-detection idea. Whittaker is now president of the Signal Foundation, a nonprofit group that supports the privacy-centric messaging app Signal.
“This is incredibly dangerous,” Whittaker wrote on X.
“From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w/ seeking reproductive care’ or ‘commonly associated w/ providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing,’” she wrote.
When Google posted about the idea on X, it got hundreds of responses, including many positive ones. Some said the idea was clever, and others said they were tired of frequent phone calls from scammers.
Americans ages 60 and older lost $3.4 billion last year to reported digital fraud, according to the FBI.
Tech companies have sometimes resisted dragnet-style scanning of people’s data. Last year, Apple rejected a request to scan all cloud-based photos for child sexual abuse material, saying scanning for one type of content opens the door for “bulk surveillance,” Wired magazine reported.
But some tech companies do scan massive amounts of data for insights related to targeted online advertising. Google scanned the emails of non-paying Gmail users for advertising purposes until it ended the practice in 2017 under criticism from privacy advocates.
Kristian Hammond, a computer science professor at Northwestern University, said the Google call-scanning idea is the result of a “feature war” in which the big players in AI technology “are continually trying to one-up each other with the newest whiz-bang feature.”
“We have these micro-releases that are moving fast. And they’re not necessary, and they’re not consumer-focused,” he said.
He said the advances in AI models are legitimately exciting but that it was still too early to tell which ideas from tech companies would take off.
“They haven’t quite figured out what to do with this technology yet,” he said.