Hacker Stole Secrets From OpenAI

The New York Times reported on July 4, 2024, that OpenAI suffered an undisclosed breach in early 2023.

The NYT notes that the attacker did not access the systems used to house and build the AI, but did steal discussions from an internal employee forum. OpenAI neither publicly disclosed the incident nor informed the FBI because, it claims, no customer or partner information was stolen and the breach was not considered a threat to national security. The firm concluded that the attack was the work of a single individual with no known ties to any foreign government.

Nevertheless, the incident led to internal staff discussions over how seriously OpenAI was addressing security concerns.

“After the breach, Leopold Aschenbrenner, an OpenAI technical program manager, focused on ensuring that future A.I. technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets,” writes the NYT.

Earlier this year, he was fired, ostensibly for leaking information (but more likely because of the memo). Aschenbrenner offers a slightly different version of the official leak story. In a podcast with Dwarkesh Patel (June 4, 2024), he said: “OpenAI claimed to employees that I was fired for leaking. I and others have pushed them to say what the leak was. Here’s their response in full: Sometime last year, I had written a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI. I shared that with three external researchers for feedback. That’s the leak… Before I shared it, I reviewed it for anything sensitive. The internal version had a reference to a future cluster, which I redacted for the external copy.”

Clearly, OpenAI is not a happy ship, with many differing opinions on how it operates, how it should operate, and where it should be going. The concern is not so much about ChatGPT (which is gen-AI) but about the future of AGI (artificial general intelligence).

The former transforms knowledge it has already learned (generally by scraping the internet), while the latter will be capable of original reasoning. Gen-AI is not considered a threat to national security, although it may increase the scale and sophistication of current cyberattacks.

AGI is a different matter. It will be capable of developing new threats in cyber, on the kinetic battlefield, and in intelligence – and OpenAI, DeepMind, Anthropic, and other leading AI firms are all rushing to be first to market. The concern over the 2023 OpenAI breach is that it may reveal a lack of security preparedness that could genuinely endanger national security in the future.

“A lot of the drama comes from OpenAI really believing they’re building AGI. That isn’t just a marketing claim,” said Aschenbrenner, adding, “What gets people is the cognitive dissonance between believing in AGI and not taking some of the other implications seriously. This technology will be incredibly powerful, both for good and bad. That implicates national security issues. Are you protecting the secrets from the CCP? Does America control the core AGI infrastructure or does a Middle Eastern dictator control it?”

As we get closer to developing AGI, the cyber threats will shift from criminals to elite nation-state attackers – and we see time and again that our security is insufficient to defend against them. On the back of a relatively insignificant breach at OpenAI (and we must assume it was no worse than the firm told its employees), Aschenbrenner raised general and genuine concerns over security – and for that, it seems, he was fired.

Related: Former OpenAI Employees Lead Push to Protect Whistleblowers Flagging AI Risks

Related: OpenAI’s Altman Sidesteps Questions About Governance

Related: UN Adopts Resolution Backing Efforts to Ensure Artificial Intelligence is Safe

Related: AI Weights: Securing the Heart and Soft Underbelly of Artificial Intelligence
