Searching for Former President Li Yuanhong on Chinese AI App “Dou Bao” Returns Images of Comedian Fan Wei

Recently, the topic of ByteDance’s AI tool “Dou Bao” launching a paid version hit the trending list on Weibo. Soon afterward, a second topic also trended: searching for Li Yuanhong on Dou Bao returned photoshopped parody images of actor Fan Wei.

That an AI application could make such a basic mistake has sparked controversy.

On May 4th, Dou Bao’s App Store page displayed service terms for its paid tiers: a standard version at 68 yuan per month, an enhanced version at 200 yuan per month, and a professional version at 500 yuan per month. “Dou Bao paid” quickly shot to the top of Weibo’s trending searches.

Subsequently, some netizens posted on social media that when they used Dou Bao to search for Li Yuanhong, the second president of the Republic of China, the system returned a photoshopped parody image of mainland actor Fan Wei taken from one of his films.

The incident drew public attention, and related topics also climbed the trending rankings. Many users found that Dou Bao’s search results varied: some saw only images of Fan Wei, while others saw both Fan Wei’s picture and historical photos of Li Yuanhong. Some netizens joked, “I guess Fan Wei is now a standard feature of the free version of Dou Bao.”

Many netizens raised questions: once the service is paid, should the AI model be held responsible for the results it generates? If users pay money but receive incorrect answers, can that count as a fair transaction?

When asked why such a basic mistake occurred, the Dou Bao system responded that Li Yuanhong and actor Fan Wei look very similar, and that the parody image circulated far more widely than authentic historical photos of Li Yuanhong; online sources also frequently mix the two up, leading to recognition errors. A Dou Bao representative said the issue has since been addressed.

On May 6th, The Paper published a commentary questioning the responsibility of AI platforms that move to a paid model. It argued that if users pay for a service, the platform should guarantee the basic accuracy of its content; if basic errors persist, disputes over consumer rights, business ethics, and allocation of responsibility are inevitable. Going forward, AI service providers may need not only to prominently disclose their service functions but also to ensure the basic reliability of the information they provide.

The article noted that the most direct observation is that current AI applications can still make basic mistakes. An error like “search for Li Yuanhong, get Fan Wei” is one that even search engines from twenty years ago would rarely have made.