OpenAI Suspends Multiple Policy-Violating ChatGPT Accounts Suspected of Ties to the Chinese Communist Party

OpenAI released its latest threat report on Tuesday (October 7), confirming that it had shut down several ChatGPT accounts suspected of links to the Chinese Communist government. The accounts allegedly sought to use AI to develop large-scale surveillance tools and to assist with phishing and other activities that violate OpenAI’s usage policies on national security.

The report highlights the security risks posed by authoritarian regimes abusing generative AI technology, at a time when the United States and China are competing over both the development and the regulation of AI.

According to OpenAI’s findings, the suspended accounts had requested help building various surveillance tools aimed at conducting large-scale online and offline monitoring, in violation of the company’s policy prohibiting unauthorized surveillance using its models. Specific instances include:

– Requesting social media “listening” tools
– Some accounts asked ChatGPT for help designing promotional materials and project plans for an AI-driven social media listening tool, allegedly intended for government clients. The tool, referred to as a “probe” or “Detector,” was described as capable of crawling platforms such as X (formerly Twitter), Facebook, Instagram, Reddit, TikTok, and YouTube to scan for extremist rhetoric as well as ethnic, religious, and political content.

– Seeking Uyghur-related warning models
– Another account suspected of links to the Chinese Communist government asked ChatGPT to help draft a proposal for a “High-Risk Uyghur-Related Inflow Warning Model.” The proposed system would analyze transportation booking records and cross-reference them with police records to track the travel movements of the Uyghur population.

Beyond surveillance-related requests, OpenAI found that these China-linked accounts were engaged in other malicious activity and technical reconnaissance:

– Assisting with malware and phishing
– Several Chinese-speaking accounts used ChatGPT to support phishing campaigns and malware development.

– Researching DeepSeek
– These accounts also used ChatGPT to research the Chinese AI company DeepSeek, exploring whether additional automation capabilities could be achieved through DeepSeek’s models.

The report also documented malicious activity originating from other regions, illustrating that AI misuse has become a global issue.

OpenAI also disabled accounts linked to a suspected Russian criminal group that used ChatGPT to help develop malware, including remote-access trojans and credential-stealing tools.

Despite the ongoing malicious activity, OpenAI stated that since it began publicly releasing threat reports in February 2024, the company has disrupted and reported more than 40 networks of accounts that violated its usage policies. OpenAI added that its models successfully refused malicious requests and that there is no evidence its models have given criminals meaningfully enhanced capabilities for novel types of attacks.

The report noted that although the suspended accounts were suspected of ties to the Chinese government, OpenAI does not officially offer its services within mainland China. The company speculates that these users accessed its website through VPNs or other means.

OpenAI also warned in the report that China (the Chinese Communist Party) is “making substantial progress in advancing its authoritarian version of AI,” underscoring the significant risk of AI technology being abused under authoritarian rule.