The Biden administration has identified artificial intelligence and the cloud computing capabilities that power it as critical elements of national power. Its wide-ranging October 2023 Executive Order 14110 (“AI EO”) establishes reporting requirements for developers of dual-use foundation models and for cloud providers with foreign users. Though directionally correct, these requirements should be updated to better serve the administration’s national security objectives.
First, the AI EO applies an arbitrary threshold for computing power as an indicator of malicious activity and does not account for malevolent alternatives to the cloud, such as networks of connected devices. Second, the ambiguity of the reporting guidance will place a heavy administrative burden on companies and make it difficult for the government to sort through an overwhelming amount of information to identify the most concerning data. A better approach would be for the government to work closely with American technology companies to prioritize the most significant signals, preserve U.S. cloud dominance, and anticipate future challenges.
Background – The Race for AI Leadership
American and Chinese companies dominate the global market for cloud computing services and frequently compete in Africa, Southeast Asia, and elsewhere. In January, Google announced the opening of its first cloud region on the African continent in Johannesburg, South Africa. The project, which is expected to contribute over $2.1 billion to South Africa’s GDP and boost Africa’s digital transformation, solidified Google’s cloud foothold on the continent. Yet, four months before Google’s cloud region became operational, Chinese company Alibaba had already opened its own cloud region in South Africa in partnership with BCX, a subsidiary of the Telkom Group. The close-quarters clash between Google and Alibaba is one of many examples of how American and Chinese companies are jockeying for influence in emerging markets.
America reaps the benefits of its companies’ global leadership in artificial intelligence and cloud computing. Amazon Web Services, Microsoft Azure, and Google Cloud account for 67% of the worldwide cloud infrastructure market. Cloud computing has become a source of leverage, and many foreign governments depend on the critical systems American technology companies provide. In Ukraine, Microsoft has provided extensive assistance to Volodymyr Zelenskyy’s government by transitioning essential services to the cloud, better enabling the country’s resistance and efficient operations in the field. In Israel, Google and Amazon Web Services have collaborated on a $1.3 billion cloud services project for the Israeli government and military.
Limiting China’s ability to use advanced artificial intelligence is a major American objective that crosses party lines and has featured in the policy agendas of both the Trump and Biden administrations. In January 2021, the Trump administration issued Executive Order 13984 (“Cyber EO”) directing the Department of Commerce to create “Know Your Customer” (KYC) requirements for U.S. cloud providers and to consider conditions for foreign use of U.S. Infrastructure-as-a-Service (“IaaS”).
In October 2022, the Biden administration restricted exports of advanced chips and chipmaking equipment to China to slow that country’s ability to produce advanced semiconductors. The AI EO of 2023 largely extends the Cyber EO’s KYC requirements to foreign resellers of U.S. cloud services. Three months later, the Department of Commerce followed up on the AI EO by proposing a new rule that would impose significant KYC, monitoring, and reporting obligations on U.S. IaaS companies and their foreign partners. Each of these steps reflects a federal effort to use cloud computing as a lever for shaping potential adversaries’ access to advanced AI.
Revising the AI Executive Order
To address the threat from China and the broader AI arms race, the Biden administration should rework two important elements of the AI EO to better align with its national security objectives. First, the AI EO puts an exact number on the amount of computing power it considers worrying: any model trained with a quantity of computing power “greater than 10²⁶ integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10²³ integer or floating-point operations.” Companies developing models that meet these thresholds must report to the Department of Commerce. Such a specific threshold makes it easier for malicious actors to adjust their behavior, staying just below the line to avoid detection.
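For a sense of how that number maps onto real training runs, developers commonly approximate training compute as roughly six floating-point operations per model parameter per training token. The sketch below applies that rule of thumb to a hypothetical model and compares the result with the EO’s 10²⁶ threshold; the parameter count and token count are illustrative assumptions, not figures from the order.

```python
# Rough estimate of training compute using the common ~6 * parameters * tokens
# approximation for dense model training. Only the 1e26 figure comes from the
# executive order; the parameter and token counts below are hypothetical.

EO_THRESHOLD_FLOPS = 1e26  # reporting threshold in EO 14110

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute as 6 * N * D floating-point operations."""
    return 6.0 * parameters * training_tokens

# Illustrative example: a 500-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(parameters=5e11, training_tokens=1.5e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above EO reporting threshold:", flops > EO_THRESHOLD_FLOPS)
```

Under that approximation, even a very large training run can land just below the line, which illustrates how a precise cutoff invites training plans calibrated to stay beneath it.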
Vladimir Putin, for instance, has decades of sanctions evasion under his belt, and those lessons have made it that much easier for Russia to circumvent the restrictions imposed after its 2022 invasion of Ukraine. More significantly, Russia has become a model for other isolated authoritarian regimes: North Korea and Iran have pointed to Moscow’s penalty-dodging tactics as public examples of how to respond to the West’s predictable toolkit of punishments.
Outlining specific thresholds without a clear rationale has raised issues in other contexts. Anti-money laundering (“AML”) regulations, for example, often require banks and other financial businesses to file reports when transactions exceed $10,000 or some other set amount, yet research has shown that such compliance programs have a 95-98% false positive rate. The current AML regime is inefficient, but it has improved considerably through regular collaboration between banks and law enforcement and through smarter data analytics.
This issue is further complicated by the potential for AI systems to be trained on decentralized networks of connected devices rather than on large-scale cloud computing: a training run split across many smaller machines could stay below any single provider’s reporting threshold, as the sketch below illustrates.
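Here is a minimal sketch of federated averaging, one decentralized training approach in which many devices train locally on their own data and only exchange model parameters. The toy model, data, and device counts are invented for illustration; real systems would use a dedicated framework rather than raw NumPy.

```python
# Minimal federated-averaging sketch: each simulated device trains locally on
# its own data, and only averaged parameters are exchanged between rounds.
# Purely illustrative toy example.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_device_data(n=100):
    """Generate a small private dataset for one simulated device."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.05, steps=20):
    """Run a few gradient-descent steps on one device's local data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

devices = [make_device_data() for _ in range(10)]   # 10 simulated edge devices
w_global = np.zeros(2)

for round_ in range(5):                             # communication rounds
    local_models = [local_update(w_global.copy(), X, y) for X, y in devices]
    w_global = np.mean(local_models, axis=0)        # federated averaging step

print("Learned weights:", w_global)                 # approaches [2.0, -1.0]
```

Because each device sees only its own slice of the workload, no single cloud provider observes a training run anywhere near the reporting threshold.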
Second, the AI EO builds upon the Cyber EO by directing the Department of Commerce to require foreign resellers of U.S. IaaS products to put KYC requirements in place for the use of cloud computing. In January, the Department of Commerce proposed a rule outlining these requirements in response to both the Trump administration’s Cyber EO and the Biden administration’s AI EO.
The January Commerce rule, which is open for comment until the end of April, instructs IaaS providers and their foreign resellers to maintain comprehensive customer identification programs (CIPs), verify customer identities, and retain identifying information about their foreign customers. The financial and data-storage costs of such CIPs would be enormous: an estimated 2.3 billion people use personal cloud services, a figure that does not even account for company or organizational accounts. The rule could also put IaaS providers on a collision course with EU data privacy law, most notably the General Data Protection Regulation (GDPR).
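As a rough illustration of the record-keeping such a program entails, the sketch below defines a minimal customer identification record and a verification step. The field names, retention details, and verification logic are assumptions made for illustration, not language from the proposed rule.

```python
# Hypothetical sketch of a minimal customer identification record of the kind a
# CIP might keep for each foreign IaaS customer. Field names and verification
# logic are illustrative assumptions, not text from the proposed Commerce rule.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CustomerRecord:
    customer_id: str
    legal_name: str
    country: str                 # customer's jurisdiction
    government_id: str           # e.g., a business registration number
    contact_email: str
    verified: bool = False
    verification_date: Optional[date] = None

def verify_customer(record: CustomerRecord, documents_check_out: bool) -> CustomerRecord:
    """Mark a record as verified once its identifying documents have been reviewed."""
    if documents_check_out:
        record.verified = True
        record.verification_date = date.today()
    return record

record = CustomerRecord(
    customer_id="c-001",
    legal_name="Example Analytics Ltd.",
    country="ZA",
    government_id="REG-123456",
    contact_email="ops@example.com",
)
record = verify_customer(record, documents_check_out=True)
print(record.verified, record.verification_date)
```

Multiplying even a lightweight record like this across billions of accounts shows how quickly the storage and compliance burden compounds.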
The well-intentioned KYC requirement within the AI EO could be seen abroad as laying the groundwork for the U.S. to shape the behavior of foreign governments and companies by monitoring, and potentially one day limiting, access to cloud computing and artificial intelligence. That perception could have the unintended consequence of leading foreign companies to avoid U.S. cloud computing services, even those provided by resellers, and to develop alternative models of collaboration with cloud providers from China and elsewhere.
Such a dynamic would mirror the way countries worldwide, led by Russia and China, are building alternatives to the SWIFT international payments system in response to American sanctions. The proposed CIP and KYC rules could thus diminish the competitiveness of American cloud providers and undermine American national security objectives.
An Alternative Approach
A better approach to monitoring the computing power behind advanced AI systems, and to understanding when and how adversaries might be using it, is to foster strong collaboration with American cloud providers. The government should work with these companies to understand how they already detect and limit access to dangerous quantities of computing power, including use by adversaries. Cloud providers have spent decades building sophisticated internal warning systems for business and compliance reasons.
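As a hedged sketch of the kind of internal signal providers already generate, the example below flags accounts whose recent accelerator usage jumps far above their own historical baseline. The accounts, usage figures, and threshold are invented; this does not describe any actual provider’s system.

```python
# Illustrative anomaly flag: surface accounts whose recent accelerator usage
# far exceeds their own historical baseline. Thresholds and usage figures are
# invented; real provider systems combine many more signals than this.
from statistics import mean

# Hypothetical weekly GPU-hours per account: (account_id, history, current week)
usage = [
    ("acct-a", [120, 130, 110, 125], 140),
    ("acct-b", [15, 20, 18, 22], 900),      # sudden, large spike
    ("acct-c", [5000, 5200, 5100, 4900], 5300),
]

SPIKE_MULTIPLIER = 10   # flag if current usage exceeds 10x the account's baseline

def flag_anomalies(records):
    """Return accounts whose current usage dwarfs their historical average."""
    flagged = []
    for account_id, history, current in records:
        baseline = mean(history)
        if current > SPIKE_MULTIPLIER * baseline:
            flagged.append((account_id, baseline, current))
    return flagged

for account_id, baseline, current in flag_anomalies(usage):
    print(f"{account_id}: baseline {baseline:.0f} GPU-h/wk, current {current} GPU-h/wk")
```

Signals like this, shared with the government in aggregated form, would be far more targeted than blanket reporting keyed to a fixed compute threshold.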
In addition to informing U.S. policy approaches, building upon existing processes at major U.S. cloud providers can help foster the long-term collaboration necessary in this rapidly evolving field. The Microsoft Threat Intelligence Center (“MSTIC”) provides an effective model for what such public-private cooperation could look like. MSTIC shares information about hacking, fraud, and malicious web traffic with the government, better positioning authorities to respond to issues in an effective, proportionate, and timely manner.
Collaborating closely with U.S. cloud providers could also keep the government current with new AI developments, such as advances in edge-to-cloud and multi-cloud capabilities. Within the next two years, organizations around the world will likely have access to low-code or no-code (“LC/NC”) options that let them build their own augmented or automated processes without relying as heavily on cloud vendors. Rather than being boxed in by computing power thresholds and reacting to these developments, Washington can empower American businesses of all sizes to offer, sell, and scale LC/NC programs that serve foreign users while considering technical solutions to monitor their use at an appropriate level.
The AI EO is directionally correct and provides much-needed guidance to organizations and people. It also displays a nuanced understanding of AI’s technological, organizational, and market dynamics. But it can be improved. The Biden administration can better align it with national security objectives by updating the AI EO’s computing power threshold and KYC requirements.