In a recent op-ed published in The Washington Post, Jack Crovitz, a strategic deployment expert at the Silicon Valley tech company Palantir Technologies, laid out how the United States can prevent the Chinese Communist Party from leveraging American artificial intelligence (AI) models for “anti-American” activities.
Palantir, known for its work in big-data analysis and AI, counts the Central Intelligence Agency (CIA) as its first client and received early investment from the CIA-backed venture fund In-Q-Tel. The company has been credited, among other reported successes, with helping to locate Osama bin Laden, the mastermind behind the 9/11 terrorist attacks, and with helping to unravel the Bernie Madoff fraud case on Wall Street, achievements that have earned it significant acclaim.
Crovitz highlighted recent incidents in which Chinese operatives exploited AI models from leading American companies such as OpenAI and Anthropic to spread “anti-American” Spanish-language news articles, infiltrate Vietnamese government institutions, and build “social media monitoring tools” that help Chinese authorities surveil Western social networks. More alarmingly, Anthropic reported that hackers backed by the Chinese government used its AI assistant Claude to launch cyberattacks against Western tech companies, banks, and government agencies.
“These events shed light on a blind spot in the US-China AI competition,” Crovitz wrote. “The real competition lies not just in capabilities but in control.”
He cautioned that as the US advances in AI development, the risk of adversaries exploiting these technologies for anti-American ends grows in step. If Silicon Valley and the US government invest billions in building the most advanced AI only for those systems to be weaponized by adversaries against American freedom and national security, Crovitz warned, the outcome would be tragic.
According to Crovitz, policymakers often mistakenly assume that leading in AI model development is the same as winning the competition. Even if American labs hold a technological edge in AI research, the US could still lose the broader strategic contest if China and other hostile actors can freely misuse cutting-edge American AI models for malicious purposes.
He stressed that unless the US takes immediate action to establish basic security standards for AI labs, the rampant misuse of AI models will continue to pose a serious threat to American freedoms and security.
To address these challenges, Crovitz proposed three measures to reverse this perilous trend. First, the US government should require all American AI labs to bar hostile regimes from using their models for malicious purposes and to report flagrant policy violations.
Analysts broadly agree that foreign governments such as Russia and China have the capability to infiltrate America’s largest AI labs and steal models or other intellectual property. A recent State Department-funded report found that many technologists at the forefront of AI research privately acknowledge that existing security measures cannot fend off persistent attacks aimed at stealing intellectual property.
Second, the US government should set clear objectives for preparing all leading labs to withstand coordinated attacks by sophisticated state-level adversaries seeking to steal model weights (the core parameters of an AI system) or other sensitive information. As President Trump has stated, safeguarding America’s leadership in AI is a matter of national security, and lax security at cutting-edge model labs cannot be tolerated.
“The federal government should incentivize major AI developers to adopt high-standard information security measures, whether through direct legislation, formal conditions for federal AI procurement, or informal advocacy by federal officials,” Crovitz suggested.
Third, policymakers must require AI companies to report significant cases of malicious misuse. An incident reporting system would carry low compliance costs while giving policymakers a full picture of the severity of the problem.
Such significant misuse incidents or cybersecurity events should be reported to the relevant federal agencies, such as the Center for AI Standards and Innovation, the Cybersecurity and Infrastructure Security Agency, and the Artificial Intelligence Security Center. Provisions like the “Cyber Incident Reporting” clause in the Defense Federal Acquisition Regulation Supplement offer a useful template for such requirements.
Crovitz emphasized that American taxpayers and investors have poured billions into ensuring the US develops the most advanced AI. “If this investment ultimately produces systems that are immediately weaponized by adversaries, eroding American freedom, prosperity, and national security, it would be a catastrophic strategic failure.”
“America must protect its most valuable technologies, secure its labs, and ensure that American innovation serves American interests,” he added.
