CCP Launches “AI Modification” Crackdown; Upgraded Internet Controls Draw Attention

The Chinese Communist Party continues to tighten its control over short videos and artificial intelligence content. On January 1, 2026, the CCP’s State Administration of Radio, Film, and Television launched a one-month so-called network clean-up operation under the guise of combating “AI modification.” Creators of educational short-video content said that “AI-generated content sometimes deviates from official expectations and contains elements of independent thinking, which gives the authorities headaches.”

“AI modification” typically refers to using artificial intelligence technology to rework, convert styles, or substantially modify original images, videos, audio, or text.

The CCP’s party and state media released a notice on December 31, 2025, regarding the cleanup of “AI modification,” highlighting the targeted removal of content, including AI-altered versions of historical, revolutionary, and biographical films and series. According to reports from Beijing, some online accounts have been using AI face swapping, voice synthesis, and segment rearrangement to turn film and television content into short videos.

The cleanup operation, launched by the CCP’s State Administration of Radio, Film, and Television on January 1, 2026, requires platforms to implement a “review before release” mechanism and to crack down on “accounts with prominent chaos.” This means that internet accounts using generative AI tools to alter film and television content may face administrative penalties.

A short-video creator surnamed Zhou, who produces educational content, told Dajiyuan in an interview on New Year’s Day 2026 that he has seen many elementary school students use AI face-swapping tools to create animated clips with internet voice-overs added. He said the children mostly use free apps: “It only takes a few clicks to swap faces, and the video is ready in a minute. They can download the app from the app store and share it right away.”

He believes the crackdown is aimed not only at young users but also at guiding values, because AI-generated content sometimes does not align with the official narrative.

CCTV’s website reported in December that segments of revolutionary-themed films had been re-edited and distributed with internet voice-overs, criticizing them for “deviating from the original core spirit.” The report also said platforms are required to improve identification of such content and implement a “classified management, review before release” mechanism to prevent the unauthorized dissemination of modified content.

A technical worker surnamed Li, who has long worked in platform content review, told reporters, “The real challenge lies in logical subversion, not just face swapping.” He said current systems rely heavily on image recognition and cannot discern the meaning of content, so final judgments often fall to human reviewers.

He also reminded netizens that the Cyberspace Administration has been continuously issuing governance directives to platforms over the past six months, indicating that this kind of management will not stop with a single notice but will recur at irregular intervals and could come at any time.

Several creators interviewed expressed concern that the scope of the governance campaign might expand. Some practitioners wrote privately in their social circles, “It’s not the rules we fear, but their infinite extension.”

Between 2024 and 2025, restrictions on the dissemination of fabricated images of political figures tightened. In some cases, short videos that used AI to insert political figures’ faces into entertainment content circulated widely on social media before platforms took them down.

As early as 2023, the CCP issued regulations on the management of generative artificial intelligence. The “AI modification” governance campaign launched on January 1, 2026, is seen as an escalation of internet control.

Some cultural commentators believe that while the campaign is nominally framed as protecting minors, its actual implementation may target the space for expression itself. Beijing current affairs commentator He Fang said, “When editing, remixing, and reproduction all fall within regulatory boundaries, the permitted forms of expression will become increasingly limited.” In his view, this round of governance affects not only content production but also how the next generation understands creativity and imagination.

Observers note that such incidents are prompting regulatory authorities to treat artificial intelligence as a new category of content risk, which may push policy toward more centralized governance over time.

As of press time, the CCP has not disclosed how it will evaluate the effectiveness of this special governance campaign or what follow-up plans it has. The industry is watching whether its scope will expand into broader areas of secondary creation and private expression. This newspaper will continue to follow the issue closely.