Expert Analysis: Chinese Communist Party’s Internet Army Exposed for Large-Scale Interference in Japan’s Election

During Japan’s 2026 House of Representatives election, an investigation revealed that suspected China-linked AI fake accounts had been spreading negative messages about Prime Minister Konoe Sanae on social media platforms, raising concerns about Chinese interference in the election.

Experts warn that while such actions by the Chinese Communist Party may not directly affect election results, over the long term they could weaken trust in democratic societies and deepen divisions. They recommend that governments around the world build long-term immunity into their institutions and societies.

According to a February 22 report by Nikkei, roughly 400 accounts with suspected Chinese backgrounds on the social media platform X acted in coordination to spread negative messages about Prime Minister Konoe Sanae before and after the House of Representatives election on February 8.

These accounts mainly seized on Konoe’s 2025 statements about Taiwan’s military defense, portraying her as a leader who could provoke war and stoking public fear of a potential conflict with the Chinese Communist Party. They also tied this to issues such as rising prices and the depreciation of the yen, suggesting that her hardline policies toward China could invite economic retaliation.

The report noted that many of these accounts had previously been inactive or unrelated to politics, but suddenly began posting intensively two weeks before the election, an operating pattern resembling the Chinese Communist Party’s well-documented “Spamouflage” propaganda network.
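The dormant-then-burst pattern described in the report can be expressed as a simple detection heuristic. The sketch below is purely illustrative (the function name, thresholds, and data layout are assumptions, not anything from the Nikkei analysis): it flags accounts that posted little for months and then surged in the final two weeks before the vote.

```python
from datetime import datetime, timedelta

def flag_burst_accounts(posts_by_account, election_day,
                        quiet_days=180, burst_days=14, burst_threshold=20):
    """Flag accounts that were largely dormant and then posted heavily
    in the weeks before an election.

    Hypothetical heuristic: all parameter values are illustrative.
    posts_by_account maps an account id to a list of post datetimes.
    """
    burst_start = election_day - timedelta(days=burst_days)
    quiet_start = election_day - timedelta(days=quiet_days)
    flagged = []
    for account, timestamps in posts_by_account.items():
        # Activity in the long "quiet" window before the burst window
        quiet_count = sum(quiet_start <= t < burst_start for t in timestamps)
        # Activity in the final stretch before election day
        burst_count = sum(burst_start <= t <= election_day for t in timestamps)
        # Dormant beforehand, then a sudden surge of posting
        if quiet_count < 5 and burst_count >= burst_threshold:
            flagged.append(account)
    return flagged
```

A real detection pipeline would combine such timing signals with content similarity and coordination analysis; this sketch only captures the single behavioral cue the report highlights.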

Shen Mingshi, a scholar at Taiwan’s Institute for National Defense and Security Research, said Japan’s experience closely resembles Taiwan’s past encounters with Chinese propaganda delivered through varied channels and topics, usually targeting specific political parties. He noted that content lacking local context often fails to resonate with the public, which limits its effectiveness.

Xie Peixue, a cybersecurity expert at the same institute, added from a technical perspective that Chinese election interference through social media platforms follows a systematic pattern: it focuses not only on disseminating information but also on designing content around social emotions and trending issues.

This aligns with Nikkei’s description of the strategy, which involves stoking fears of war and economic anxieties to target swing voters.

Despite the fake accounts’ frequent posting, the Liberal Democratic Party led by Konoe still won a landslide victory with 316 seats. Analysts note that although the fake accounts were active, their influence was drowned out by genuine grassroots support and a shift among younger voters toward conservative positions, so they did not significantly affect the election outcome.

Xie emphasized that cognitive warfare is not designed around a single election but aims to shape public opinion over the long run. When spreading disinformation, what matters to the Chinese Communist Party is not whether the information is true but whether the misleading narrative comes to dominate society.

Observations from the Taiwan FactCheck Center and the Taiwan Information Environment Research Center (IORG) show that during Taiwan’s 2024 election, misinformation centered on war (cross-strait conflict and whether the US would intervene militarily), economic anxieties, and societal concerns such as social welfare, food safety, and transportation.

Such messages may not directly sway votes; rather, they fuel hatred, deepen dissatisfaction with the government, and breed distrust.

Shen stated bluntly that the involvement of suspected Chinese AI accounts in shaping Japanese public opinion is “a very normal technique for the Chinese Communist Party.” He added that content translated from Chinese mainly serves “domestic propaganda needs,” has limited influence on Japanese voters, and may even backfire.

Shen noted that “the Chinese Communist Party’s actions have instead prompted closer cooperation between Taiwan and Japan in the field of cybersecurity.” The Japanese government has recognized that it faces a cognitive-warfare threat similar to Taiwan’s and has expressed a desire to learn from Taiwan’s experience in countering it.

Xie pointed out that Taiwan is the chief testing ground for Chinese cognitive warfare, which operates there at the largest scale and with continuously evolving tactics. Analysis of roughly 45 million social media interactions over the past year found more than 700,000 suspected of being manipulated, potentially involving over ten thousand fake accounts.

He said that in Taiwan, the Chinese Communist Party often amplifies social conflict by spreading candidate scandals, highlighting government corruption, and manufacturing economic and national security fears. Attacks on the US, commonly known as “US-skepticism” narratives, are frequently used to weaken pro-US parties.

IORG and academic analyses also show that Chinese propaganda against Taiwan often ties the US to the notion of war, subtly conveying the message that “closer ties with China bring peace” to undermine Taiwanese trust in allies.

Regarding operations in the US, Xie highlighted that the strategy tends to incite inter-ethnic tensions by magnifying issues related to immigration, race, and resource allocation rather than directly supporting a specific candidate.

Xie emphasized that the ultimate goal of cognitive warfare is not to influence a single election but to create societal divisions over the long term, weaken trust in democratic systems, and normalize authoritarian narratives.

Shen also mentioned that cognitive warfare focuses on long-term impact; even if it doesn’t immediately affect elections, its effects can accumulate gradually over time and shake societal consensus.

At a Center for Strategic and International Studies (CSIS) seminar in April 2024, analysts noted that the core objective of Chinese information manipulation against Taiwan is to further polarize society rather than merely change voting outcomes.

Experts believe the Japan case reflects this trend: even if the interference did not succeed immediately, its long-term impact still warrants vigilance.

Furthermore, Xie warned that Chinese cognitive warfare has evolved from simple information dissemination to simulating social responses and precise deployment.

He cited a report from last year in which scholars from China’s Fudan University and engineers from Xiaohongshu drew on one million public X accounts and nine million Xiaohongshu accounts to build more than ten million AI virtual identities. These were used to simulate Taiwanese social conditions and test how different united-front strategies and rhetoric performed across scenarios.

This indicates that cognitive warfare has transitioned from one-way propaganda to precise opinion manipulation.

Facing information manipulation in the AI era, experts advocate for a multi-faceted defense approach.

Shen advocates that governments establish a “rapid response mechanism” to quickly clarify and disclose the source and purpose of fake news, block its spread, and proactively provide accurate information.

Xie suggested a three-tier defense: governments establishing legal frameworks and specialized agencies, social media platforms taking a more proactive stance against manipulative behavior, and society strengthening media literacy.

He noted that Japan implemented the Information Distribution Platform Act in 2025 and plans to strengthen its intelligence agencies, showing that democratic countries are gradually building systematic protections.

He also stressed that because the true goal of cognitive warfare is not to sway a single election but to sow division and weaken trust in democracy, defense mechanisms must amount to a continuous “social immunity building” project.

Overall, the suspected foreign AI account operations in Japan’s 2026 election, while failing to alter the result, underscore how information warfare is becoming normalized in democratic societies. From Taiwan and the US to Japan, cognitive warfare is spreading across borders through artificial intelligence and social media platforms.

Experts believe that strengthening institutions, deepening international cooperation, and enhancing citizens’ media literacy are crucial for democratic systems to maintain stability and trust in the digital age.