Why Corporate Regulation Cannot Safeguard Against AI Existential Risks

As artificial intelligence continues to advance at an unprecedented pace, concerns about the existential risks posed by this transformative technology have received increasing attention. While existential risks to humanity may seem remote, the magnitude of the potential consequences makes them worth taking seriously. Yet the discourse surrounding AI risks is often ill-considered, focusing heavily on “Terminator scenarios” while neglecting the more pressing threat from bad actors.

Existential risks from AI can be broadly categorized into two types: unintended consequences and intentional misuse by malicious entities. Unintended consequences refer to scenarios in which increasingly sophisticated AI systems slip outside of human control and inadvertently cause catastrophic harm. Much like in the action film The Terminator, a “judgment day” scenario could lie ahead if AI technology ever becomes self-aware.

Intentional misuse, by contrast, involves bad actors such as terrorist groups or hostile nations deliberately weaponizing AI to inflict damage on a global scale.

Unintended consequences have received a disproportionate amount of the attention. On the one hand, there are some good reasons for this. Google botched the rollout of its Gemini chatbot when it produced overly politically correct imagery, and Google’s AI Overviews infamously recommended adding glue to pizza sauce. Such accidents are far from constituting existential threats, but they demonstrate an inability of corporations to fully control their own products.

On the other hand, companies have strong reputational and financial incentives to avoid harming their customers and often engage in months or years of pre-deployment testing. Even after product releases, they work diligently to correct problems as soon as they are identified. Furthermore, we are likely to witness a gradual escalation of problems before any catastrophic “existential” event occurs, providing ample opportunity for course correction.

In contrast, the threat posed by bad actors weaponizing AI has been largely relegated to the sidelines of current discourse, yet it is a near certainty that some will attempt it. Foreign adversaries like Iran, North Korea, Russia, and China want nothing more than to gain a “technological leg up” on the United States. Even so, it may be underground networks of radical extremists who are most dangerous. Anti-terrorism policy likely offers the best roadmap for addressing these national security concerns: a response will require involvement from multiple levels of government, with changes to everything from immigration and customs enforcement to intelligence gathering.

Yet proposed AI safety legislation, such as California’s draft AI bill, tends to focus primarily on regulating large technology companies rather than addressing the potential misuse of AI by bad actors. This misplaced focus may stem from an anti-business bias among those pushing the legislation, raising questions about their true motives. Rather than addressing the actual risks posed by AI, proponents may be more concerned with punishing big business. This is a sad development, as AI can indeed be used to cause harm if it falls into the wrong hands.

To effectively mitigate risks posed by AI, the national security establishment must recognize the need to prepare for the technological advancements that lie ahead. This challenge extends far beyond the borders of California and even the United States. While tech companies do bear some responsibility, and should develop stronger safeguards against hackers who might seek to access and weaponize their technologies, the notion that governments can simply dictate terms to big business and expect “AI safety” to be the end result is seriously misguided.

Existential risks posed by AI are not likely to come from the few large technology companies dominant in Silicon Valley. Rather, they are likely to come from bad actors, both state and non-state, and our policy response should begin with this in mind. By focusing too narrowly on regulating big business, we risk overlooking the most pressing threats to our national security and leaving ourselves vulnerable to catastrophic harm.
