Content creators on China’s WeChat Official Accounts platform are facing a growing set of unwritten rules and uncertainties. Recently, one author disclosed in an article that they had been subjected to platform review multiple times, summarizing the mechanism as a three-tier system of “red line, yellow line, gray line”. The author stated that, for a long time, online content creators and ordinary netizens alike have been under invisible surveillance.
According to the article recently published by China Digital Times, titled “Experience of Sensitivity Filtering in WeChat Official Account Articles”, the review process is divided into three tiers. The red-line tier covers sensitive topics such as national policies, violence, religion, and ethnicity; once touched, an article is banned outright. The yellow-line tier allows an article to be published but restricts its advertising. The gray-line tier is typically triggered by reader complaints; the platform tends to side with the complainant, which can lead to deletion, and authors have almost no channel for appeal.
On September 30, internet writer Mr. Zhao told a reporter from The Epoch Times, “This grading mechanism seems clear but is actually opaque. Authors cannot learn the specific trigger points or obtain official explanations, so they can only write by repeatedly testing the boundaries.” He believes that although the reviews appear to be the platform’s own operation, in reality the Communist Party authorities have outsourced censorship responsibilities to companies, forcing every author to self-regulate amid uncertainty.
Many online authors share similar sentiments. New-media writer Mr. Shao Kang commented on a social platform, “Writing for an official account is like walking through a minefield. You never know where the sensitive words are buried.”
The article also notes that the platform’s judgments rely heavily on automatic filtering against a database of sensitive words, with little contextual judgment, often producing a “rather block a thousand by mistake than let one through” outcome. Authors are frequently not told why a piece violated the rules and can only keep revising based on experience. One author said, “Many times, when an article is deleted, I don’t even know what the problem was.”
Mr. Chen Hao (a pseudonym), a network technician from Hebei, told reporters that this word-list-based review logic is very crude: the system does not understand context, and any word that matches the database sets off an alarm. With only limited human review, a great deal of harmless content gets blocked by mistake.
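The context-blindness Mr. Chen describes can be sketched in a few lines. This is an illustrative example only; the word list and sentences below are hypothetical and are not drawn from any real platform database.

```python
# Illustrative sketch of a naive keyword filter of the kind described
# above. BLOCKLIST is a hypothetical stand-in for a platform's
# sensitive-word database.

BLOCKLIST = {"protest", "strike"}

def flag(text: str) -> bool:
    """Flag text if any blocklisted word appears, ignoring all context."""
    words = text.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

# Because context is ignored, harmless uses trigger the same alarm:
print(flag("Workers plan a protest downtown."))        # True
print(flag("The umpire called a strike in baseball.")) # True: harmless, blocked anyway
print(flag("A recipe for dumplings."))                 # False
```

The second sentence shows the failure mode: a baseball “strike” matches exactly like a labor “strike”, which is why purely lexical filtering blocks so much innocuous content.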
The Communist Party’s internet censorship system has deep historical roots. Since the internet became widespread at the end of the last century, the authorities have gradually built multi-layered monitoring, from the Great Firewall (GFW) to automatic filtering on social media, forming one of the world’s largest censorship networks. In recent years, as the public-opinion environment has tightened, officials have demanded that platforms take on more “content review responsibility”. As a vast self-media platform, WeChat Official Accounts have naturally become a focus of supervision.
A scholar from Anhui pointed out that the logic of the Communist Party authorities can be summed up in one phrase: “ensuring political security”. “This is not the choice of any particular platform but a necessity of the system. To avoid risk, companies tend toward excessive review. So-called internet security has been elevated to political security.”
Research has also revealed the scale of this censorship system. According to insiders, major platforms maintain massive sensitive-word databases, some containing tens of thousands of entries covering politics, history, religion, and social events, which are updated rapidly before and after major meetings or incidents.
Citizen Lab at the University of Toronto in Canada has pointed out in earlier research reports that platforms such as WeChat commonly use dynamically updated word lists and tighten sanctions during politically sensitive periods. For example, before the CCP’s 19th National Congress, terms related to the leadership and meeting topics were widely blocked.
Since 2011, blocked terms have been continuously compiled and disclosed, forming a “sensitive word” archive. The data show that newly added banned words concentrate on mass incidents, social movements, rights-defense cases, and criticism of the system, with sharp spikes around major political junctures.
Many ordinary users have also shared their experiences. A netizen from Zhejiang said, “I merely shared a link to a foreign media report, and the next day my account was blocked without explanation and with no opportunity to appeal.”
Mr. Zhang from Shandong complained, “I vented a few grievances on my WeChat Moments, and the system blocked the post; my friends couldn’t see it at all. I didn’t realize Moments are also subject to review.”
A technician added, “Platforms not only rely on keyword matching but also use algorithms to grade user behavior. Once the system marks you as high-risk, even ordinary sentences may trigger a ban.”
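The interaction the technician describes, where a user’s risk grade changes how the same text is treated, can be sketched as follows. This is a hypothetical illustration: the word list, the risk weight, and the threshold are all assumed values, not actual platform parameters.

```python
# Illustrative sketch: keyword hits combined with a per-user "risk
# grade", as the technician describes. All constants are hypothetical.

BLOCKLIST = {"petition", "rally"}  # hypothetical sensitive-word database

def should_block(text: str, user_risk: float) -> bool:
    """Block when keyword hits plus weighted user risk cross a threshold.

    user_risk is assumed to range from 0.0 (low) to 1.0 (high),
    set elsewhere by behavioral scoring.
    """
    hits = sum(w.strip(".,!?") in BLOCKLIST for w in text.lower().split())
    score = hits + 2.0 * user_risk  # risk weight is an assumed constant
    return score >= 2.0

# The same ordinary sentence passes for a low-risk user...
print(should_block("Nice weather today.", user_risk=0.1))  # False
# ...but is blocked for a user the system has marked high-risk.
print(should_block("Nice weather today.", user_risk=1.0))  # True
```

This mirrors the claim that “even ordinary sentences may trigger bans”: for a sufficiently high risk grade, the text itself contributes nothing and the block fires anyway.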
In such an environment, many users gradually self-censor and refrain from commenting freely on social issues. Mr. Bi, a Sichuan writer who has long studied internet dynamics, said, “In most countries, platform review revolves around public safety and the law, but in China it is more about political stability. Many netizens understand this, and so they choose silence.”