On Thursday, February 12, OpenAI submitted a memorandum to the U.S. House Select Committee on the Chinese Communist Party, accusing the Chinese AI company DeepSeek of continuously extracting outputs from OpenAI and other leading American AI models through a technique called “model distillation”. Members of the U.S. House of Representatives said the behavior fits China’s pattern of “stealing”.
In the memorandum, OpenAI stated, “We have observed accounts associated with DeepSeek employees developing methods to circumvent OpenAI’s access restrictions, including using third-party routers with obfuscation to disguise the source.” The memorandum also added, “DeepSeek employees have also developed code to systematically extract outputs from American AI models.”
An image circulated online illustrates DeepSeek’s distillation with two cats. One cat, representing OpenAI, fishes for big data from the shore and drops its catch into a bucket labeled ChatGPT. The other cat, DeepSeek, skips the fishing entirely and scoops fish straight from the ChatGPT bucket, obtaining training data far more quickly.
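The distillation described above amounts to querying a “teacher” model and recording its outputs as training targets for a “student” model. A minimal, self-contained sketch of that data-collection step follows; the teacher here is a hypothetical toy stand-in function, not any real model API:

```python
# Sketch of distillation-style data collection: query a "teacher" model
# and save its outputs as training targets for a "student" model.

def teacher_model(prompt: str) -> str:
    """Hypothetical teacher. In a real pipeline this would be a call to a
    large proprietary model's API; here it is a canned lookup table."""
    canned = {
        "What is 2 + 2?": "2 + 2 equals 4.",
        "Name a primary color.": "Red is a primary color.",
    }
    return canned.get(prompt, "I'm not sure.")

def collect_distillation_data(prompts):
    """Systematically collect (prompt, completion) pairs -- the dataset
    a student model would later be fine-tuned on."""
    return [{"prompt": p, "completion": teacher_model(p)} for p in prompts]

if __name__ == "__main__":
    dataset = collect_distillation_data(
        ["What is 2 + 2?", "Name a primary color."]
    )
    for row in dataset:
        print(row["prompt"], "->", row["completion"])
```

The point of the sketch is that distillation needs no access to the teacher’s weights: the student learns purely from the teacher’s visible outputs, which is why output-extraction at scale is the contested behavior.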
OpenAI emphasized that despite strengthened defenses and the proactive removal of violating accounts, such distillation activity (mostly tied to China, occasionally to Russia) persists and is growing more sophisticated. The company also noted that Chinese AI models are trained and deployed with deliberately lowered safety standards, which can strip away safety safeguards.
The committee’s chairman, John Moolenaar, responded the same day: “This is in line with the CCP’s usual tactics: theft, replication, destruction. Chinese companies continue to refine and profit from American AI models; DeepSeek is only the latest example.”
OpenAI declined to comment further on the memorandum, and a DeepSeek spokesperson did not immediately respond to a request for comment.
In a public statement on January 29, 2025, OpenAI said there was evidence that DeepSeek had engaged in “unlawful distillation” of OpenAI’s proprietary models, violating its terms of service and potentially raising intellectual-property issues.
DeepSeek’s R1 model has indeed been accused of bearing traces of ChatGPT left over from distillation, with many users noticing ChatGPT “footprints” in the model, sparking discussion.
When asked questions such as “Which model are you?”, R1 at times claimed to be GPT-4 or GPT-3.5 Turbo. Some Chinese netizens even found that DeepSeek’s editing suggestions for academic papers were identical to those ChatGPT had previously given, down to the formatting.
There were also instances where DeepSeek, asked to generate adult-content stories, replied with statements like “That violates OpenAI’s policy” or “We need to check OpenAI’s terms of use to ensure compliance.”
DeepSeek, based in Hangzhou, claims its large AI model was trained with modest resources yet performs on par with mainstream models such as ChatGPT, Gemini, Claude, and Grok. Chinese state media and other outlets have aggressively publicized DeepSeek as “overtaking” the United States, a narrative that briefly gripped markets and weighed on U.S. tech stocks.
However, the high-profile promotion by the CCP and DeepSeek has raised alarms within the U.S. government, which has begun imposing stricter controls on chip and technology exports to prevent advanced technologies from leaking to China.
After users and investigators uncovered security vulnerabilities in DeepSeek, several governments, including Australia, Taiwan, South Korea, the United States, Canada, Italy, the Netherlands, and Japan, along with numerous major corporations, banned its use on official devices.
Earlier in-depth security tests by AI safety experts found DeepSeek easier to “jailbreak” than ChatGPT, Gemini, or Claude: its built-in safety limits could be breached, potentially yielding dangerous, harmful, or illegal content.
