TikTok Issues Large-Scale Bans, Possibly Due to Automated Review Malfunction

On December 16th, numerous TikTok users reported that their accounts had been suspended by the platform without a clear explanation, with some accounts marked as “permanently banned”. Many users shared screenshots on social media showing penalty notifications stating that they had violated “relevant laws and policies”, with the penalty listed as “permanent” or “indefinite”, but the system did not identify which content, behavior, or rule had triggered the punishment.

In WeChat and other social media chat groups, netizens discussed the widespread account suspensions on TikTok that occurred on the afternoon of the 16th. Some users mentioned that they did not receive any warnings or restrictions before their accounts were banned.

Mr. Wang, a WeChat and TikTok user, told Epoch Times on the 17th: “As far as I know, tens of thousands of accounts may have been suspended this time. It is not clear whether this was a platform-wide governance measure or an error in the automated system, but bans on this scale have not been seen before.” Wang explained that in the past, individual account penalties were usually isolated cases that ordinary users rarely noticed.

Mr. Wang added that previous account penalties typically targeted clear violations involving sensitive content: “Ordinary users would not notice it unless they actually used obviously sensitive words, such as ‘a certain person’ or ‘big big’.”

A TikTok blogger from Nanjing, Mr. Zhang, told reporters that two of his friends’ accounts were suddenly banned by the system that night, both with the same notification of “violating relevant laws and policies”. The suspended users submitted appeals, and about four hours later their accounts were restored to normal, but the platform offered no further explanation for the bans.

According to official data, as of March 2025 TikTok had approximately 1 billion users in China, around 700 million of them active. Several users reported that some accounts marked as “permanently banned” were gradually unblocked that night or early the next day, although the account penalty records page still showed the earlier suspension and restriction entries. Users noted that the platform issued no unified announcement explaining the reasons for the bans, the appeal process, or the unblocking mechanism.

Additionally, some users mentioned that during the account suspension period, they were unable to log in properly, and their wallet balances, group purchase orders, and refund operations were temporarily restricted, raising concerns about user fund security and appeal transparency.

According to media reports, in recent months some regions have accelerated the construction of public opinion monitoring centers and, in internal documents and meetings, have demanded “minute-by-minute handling” of online information during major sensitive periods. Meanwhile, the Cyberspace Administration has continued to promote its “clean internet” campaign, significantly increasing the volume of content that platforms must review.

A source, Mr. Qin Gang, told reporters two weeks ago that the authorities have held multiple coordination meetings in recent months, requiring local cyberspace administrations to report on the status of monitoring-node access and running stress tests at some pilot sites, covering keyword triggers, abnormal traffic spikes, and the duplication and spread of messages. He said that under these requirements, platforms often first tighten their risk-control trigger thresholds and restrict accounts, then correct misjudgments later through appeals or manual review.

Mr. Zou, a network technician at a company in Shenzhen, told Epoch Times that large content platforms currently rely on automated auditing systems for high-frequency risk scans. If keyword libraries, model parameters, or risk thresholds are adjusted within a short period, the change can trigger widespread, uniform penalties. He said: “If a misjudgment is later confirmed, platforms usually restore account permissions gradually through manual review or a system rollback. This time there were probably tens of thousands of misjudged users.”
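Mr. Zou’s point about threshold adjustments triggering uniform mass penalties can be illustrated with a minimal sketch. The keyword weights, threshold values, and function names below are illustrative assumptions chosen for exposition, not TikTok’s actual parameters or code:

```python
# Hypothetical sketch of threshold-based keyword risk scoring.
# All weights and thresholds here are invented for illustration.

KEYWORD_WEIGHTS = {
    "sensitive_term_a": 0.6,  # clearly flagged term
    "sensitive_term_b": 0.4,  # borderline term
    "ordinary_term": 0.1,     # common, mostly harmless term
}

def risk_score(post_tokens):
    """Sum the weights of any flagged keywords found in a post."""
    return sum(KEYWORD_WEIGHTS.get(tok, 0.0) for tok in post_tokens)

def moderate(posts, threshold):
    """Return the posts whose score meets or exceeds the threshold."""
    return [p for p in posts if risk_score(p) >= threshold]

posts = [
    ["ordinary_term", "hello"],             # benign chatter, score 0.1
    ["ordinary_term", "sensitive_term_b"],  # borderline, score 0.5
    ["sensitive_term_a"],                   # clear case, score 0.6
]

# Under the normal threshold, only the clear case is penalized.
assert len(moderate(posts, threshold=0.6)) == 1
# If an update abruptly lowers the threshold, borderline and even benign
# posts are swept up at once -- producing sudden, uniform mass penalties.
assert len(moderate(posts, threshold=0.1)) == 3
```

The same structure suggests why recovery is gradual: rolling the threshold back (or manually re-scoring each flagged account) reverses the misjudgments one batch at a time, consistent with the staggered unblocking users described.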

Epoch Times first reported on December 1st that as China’s internet governance intensifies, the Communist Party’s Cyberspace Administration and other departments are leading the development of an upgraded version of the network warning system that conducts high-frequency monitoring of social platforms and web content. Once the system identifies information considered sensitive, it will trigger an alert and attempt automatic interception.

Mr. Zou noted that the system restricted many accounts almost simultaneously, yet most users saw no specific violating content or trigger reason on the penalty page. They could only wait for the system to re-evaluate or request manual intervention through an appeal; in the meantime their account functions remained limited, and the related penalty records may still be retained.

Interviewees believe that the concentrated wave of account suspensions and subsequent unblockings reflects how the automated auditing mechanism actually performs under high-intensity operation.