US takes steps to restrict CCP’s access to advanced artificial intelligence software

According to a report from Reuters, the Biden administration is preparing to open a new front in its effort to keep America's most cutting-edge artificial intelligence software, such as the models behind ChatGPT, out of the hands of the Chinese Communist Party (CCP).

Three sources familiar with the matter told Reuters that the U.S. Department of Commerce is considering a new regulatory measure to restrict the export of proprietary, or closed-source, artificial intelligence models, whose software and training data are kept confidential.

Over the past two years, the U.S. has taken a series of measures to block the export of advanced artificial intelligence chips to China, aiming to slow the Chinese regime's development of advanced technology for military purposes. However, regulators have struggled to keep pace with the industry's rapid advances.

The challenge lies in the fact that leading U.S. artificial intelligence companies, including Microsoft-backed OpenAI, Alphabet's Google DeepMind, and their competitor Anthropic, have already developed some of the most powerful closed-source AI models and can sell them to almost anyone in the world without government oversight.

The U.S. government and private research institutions are concerned that adversaries, including the CCP, may exploit these models to mine vast amounts of text and images, synthesize information, generate content, launch aggressive cyber attacks, or even develop powerful biological weapons.

Consultants from Gryphon Scientific and the Rand Corporation stated that information provided by advanced AI models could aid adversaries in manufacturing biological weapons.

The Department of Homeland Security in its 2024 Homeland Threat Assessment Report stated that cyber actors are likely to utilize artificial intelligence to “develop new tools” for “larger scale, faster, more efficient, and stealthier cyber attacks.”

Sources told Reuters that, to impose export controls on AI models, the U.S. government may invoke a provision of the AI executive order issued last October, which requires developers to report to the Department of Commerce when the computing power used to develop and test a model reaches a certain threshold.

Two U.S. officials and another source told Reuters that this computing-power threshold could be used to determine which AI models are subject to export restrictions.

However, Epoch AI, a research institute that tracks AI trends, said that no existing model is currently considered to have reached this threshold, so even if the restriction were enforced, it would likely apply only to models yet to be released.

Google’s Gemini Ultra is believed to be nearing this threshold.

Tim Fist, an AI policy expert at the Center for a New American Security (CNAS) in Washington, mentioned that until better methods are developed to measure the capabilities and risks of new models, this threshold serves as “a good interim measure.”

Sources emphasized to Reuters that while no specific regulatory proposal has been finalized, the fact that such measures are under consideration shows the U.S. government is working to close the regulatory gap and counter the CCP's ambitions in AI development, despite the difficulty of regulating a fast-moving technology.

The sources also indicated that any new export rules may also target other countries.

Furthermore, the threshold is not set in stone: the Department of Commerce could adopt a lower one, or weigh other factors such as the type of data a model was trained on or its potential uses.

Whatever threshold is ultimately set, exports of AI models are expected to be difficult to control, since many models are open source and would therefore fall outside the scope of export controls.

Brian Holmes, an official in the Office of the Director of National Intelligence, said at an export-control meeting in March this year, "Given that the use and exploitation of AI may skyrocket, we are really struggling to keep up with this trend."

Previously, the U.S. implemented a rule requiring U.S. cloud-computing companies to promptly notify the government upon discovering foreign customers using their services to train powerful AI models that could be used for cyber attacks.

Alan Estevez, who oversees U.S. export policy at the Department of Commerce, said last December that the agency was exploring options for regulating the export of open-source large language models (LLMs) before seeking industry feedback.

Former National Security Council official Peter Harrell commented that, as the Biden administration weighs competition with China and the risks of sophisticated AI, AI models are "clearly one of the factors you need to consider and a potential chokepoint, but whether you can control it remains to be seen."