[Epoch Times, 19 March 2026] For a few dozen yuan and a wait of roughly two hours, a product that does not even exist can be recommended to consumers by multiple mainstream Chinese AI models. This “AI tampering” grey industry chain drew widespread attention after being exposed by the media. Cybersecurity experts warn that this is not merely commercial fraud but a dangerous precedent for information warfare in the AI era, and that existing large models have a fundamental flaw in how they evaluate information sources.
According to a report on the “315 Evening Gala” of China’s official broadcaster CCTV, a reporter purchased software called the “Liqing GEO Optimization System” on an e-commerce platform, then invented a smart wristband named “Apollo-9” and entered the fake product information into the system.
The system automatically generated more than ten promotional articles, complete with patently absurd feature claims such as a “quantum entanglement sensor” and “blood sugar measurement without blood sampling,” along with fabricated user reviews and industry rankings. It then logged in with preset accounts and published the articles on its own, with no human intervention at any point.
Just two hours later, when the reporter asked multiple mainstream AI models about the fictitious product, the models not only described the non-existent smart wristband in detail but also recommended it to “middle-aged and elderly users and health enthusiasts.”
According to a report by Ifeng News, after 11 false articles were published over three consecutive days, mainstream Chinese models such as DeepSeek and Doubao were already listing the fictitious product among the top results for “smart health wristband recommendations.”
Behind this operation, a complete commercial closed loop has formed.
The exposed GEO (Generative Engine Optimization) service packages carry annual fees of 2,980 to 16,980 yuan (RMB); the premium tier can generate up to 63 articles per day and runs around the clock. Platforms reportedly specialize in “article placement,” charging a few dozen yuan per article and publishing hundreds of pieces in bulk each day.
The head of one GEO service provider told reporters that his company had served more than 200 clients in a single year, spanning industries such as healthcare, education, security, and interior decoration, and claimed it could secure top-three rankings on any platform. Another manager said frankly that in the competition for AI recommendations, many major brands would consider “spending a few million to inject some poison,” while some businesses use the technique to run smear campaigns against competitors.
The core logic, as the service provider put it: “If people don’t know it’s an advertisement, they will trust the results generated by AI.”
According to media reports, well-known Chinese large models including DeepSeek, Doubao, Wenxin Yiyan, and Kimi all fall within the coverage claimed by the exposed service providers.
Xie Peixue, an associate researcher specializing in cybersecurity and decision-making simulation at Taiwan’s National Defense Research Institute, told the Epoch Times that this exposure carries profound structural significance.
He observed that a mature grey industry chain has long existed in China’s internet ecosystem, running from early Baidu SEO ranking manipulation and inflated WeChat public account view counts to fake e-commerce orders and sponsored-content hype on Xiaohongshu (Little Red Book): “every generation of information platform quickly gives birth to a corresponding manipulation industry.” Most of the providers offering GEO services are traditional content-marketing companies that have shifted seamlessly into AI manipulation, with almost no barrier to entry.
On the technical level, Xie Peixue pointed to a fundamental flaw in current large models: “Even a completely fabricated article, even propaganda as absurd as a ‘quantum entanglement sensor,’ is accepted at face value. This shows how poorly today’s large models assess the credibility of their information sources.” He also noted that competition in China’s large-model market is fierce, with every company racing to add real-time retrieval (RAG) to improve timeliness, thereby “sacrificing control over source quality for freshness” and amplifying the risk of manipulation.
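To make the flaw concrete, consider a minimal sketch of how a retrieval-augmented answer can be skewed. The ranker below is purely illustrative and is not the code of any named model: it scores documents only by keyword overlap and recency, so a burst of freshly planted promotional articles naturally rises to the top. All names, data, and weights in it are invented for illustration.

```python
# Illustrative sketch: a toy retrieval ranker that scores documents only by
# keyword overlap and recency, with no notion of source credibility.
# All names, data, and weights here are hypothetical.
from dataclasses import dataclass
from datetime import date


@dataclass
class Doc:
    source: str      # publishing site (ignored entirely by this naive ranker)
    published: date  # publication date
    text: str        # article text


def naive_rag_rank(query: str, docs: list[Doc], today: date) -> list[Doc]:
    """Rank documents by keyword overlap plus a freshness bonus.

    Nothing in this score asks whether the source is trustworthy, so a burst
    of freshly planted promotional articles outranks older, reliable material.
    """
    q_terms = set(query.lower().split())

    def score(d: Doc) -> float:
        overlap = len(q_terms & set(d.text.lower().split()))
        age_days = (today - d.published).days
        freshness = max(0.0, 1.0 - age_days / 30.0)  # articles under ~30 days old get a bonus
        return overlap + 2.0 * freshness             # freshness weighted heavily

    return sorted(docs, key=score, reverse=True)


if __name__ == "__main__":
    today = date(2026, 3, 19)
    docs = [
        Doc("planted-marketing-blog.example", date(2026, 3, 17),
            "Apollo-9 tops smart health wristband recommendations for 2026"),
        Doc("established-review-site.example", date(2024, 6, 1),
            "Independent lab review of leading smart health wristband models"),
    ]
    for d in naive_rag_rank("smart health wristband recommendations", docs, today):
        print(d.source)  # the freshly planted article prints first
```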
Xie Peixue emphasized that using AI for information manipulation goes beyond commercial fraud.
“The traditional logic of information warfare is ‘showing you what I want you to see’; users at least know they are reading information from different sources. In the AI era of information warfare, it becomes ‘let the AI judge for you, then manipulate the AI’s judgment,’ and users may not even realize there is commercial manipulation behind the answers they receive.”
He pointed out that large AI models are essentially tools that “reflect the data distribution they are exposed to”; when that data is systematically manipulated, the tool itself becomes distorted. The more fundamental problem, he said, is that “human society confers on AI a kind of authority it does not inherently possess,” and commercial, political, and ideological forces are all exploiting this false authority to serve their own interests.
He also noted that the official media’s decision to expose the issue reflects a regulatory logic at work: the State Administration for Market Regulation explicitly stated in its 2026 work plan that AI-generated advertising would be a focus of internet advertising supervision and that it would launch concentrated rectification campaigns.
Xie Peixue believes that countering AI information manipulation requires coordination on three levels: verification of information sources at the technical level, accountability mechanisms at the legal level, and stronger media literacy at the educational level.
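As a rough illustration of what “verification of information sources at the technical level” could mean in practice, the sketch below filters retrieved documents by a credibility score before they are handed to the model. It is a hypothetical example, not a description of any deployed system; the source names, scores, and threshold are invented, and a real system would need far richer signals such as domain age, editorial history, and cross-source corroboration.

```python
# Hypothetical sketch: attach a credibility score to each source and drop
# low-credibility documents before they reach the model. Source names,
# scores, and the threshold are invented for illustration only.
CREDIBILITY = {
    "established-review-site.example": 0.9,  # long track record, editorial review
    "planted-marketing-blog.example": 0.1,   # newly registered, no editorial record
}
DEFAULT_CREDIBILITY = 0.3  # unknown sources get a cautious default score
THRESHOLD = 0.5            # anything below this is excluded from generation


def filter_by_credibility(ranked_sources: list[str]) -> list[str]:
    """Keep only sources whose credibility score clears the threshold."""
    return [s for s in ranked_sources
            if CREDIBILITY.get(s, DEFAULT_CREDIBILITY) >= THRESHOLD]


if __name__ == "__main__":
    ranked = ["planted-marketing-blog.example", "established-review-site.example"]
    print(filter_by_credibility(ranked))  # ['established-review-site.example']
```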
“The root of the problem is not whether a specific AI is trustworthy, but whether human society can establish effective response mechanisms.”
He concluded that in the AI era, critical thinking and the ability to judge data quality are “possibly more critical than ever before.”
