AI Unaware of Beijing Military Parade, Acknowledges Deficiencies and Explains Three Major Issues

On July 17th, ChatGPT, the globally renowned AI, suddenly seemed to have lost its usual intelligence: it appeared unaware of the official announcement of the Beijing September 3rd military parade. Not only did the AI make this basic mistake, it also claimed that it was currently unable to retrieve real-time information and could only draw on a pre-training knowledge base with a cutoff of June of the previous year.

An AI's pre-training knowledge base consists of books, news articles, websites, encyclopedias, documents, and other sources, which together form the "world knowledge" inside the machine.

Around 4:11 PM Australian time, a journalist asked ChatGPT about the current security situation for the September military parade in Beijing. ChatGPT responded: "I'm currently unable to access internet information in real time. However, based on past practice before Beijing military parades, if it is confirmed that this year's parade will take place in September, Beijing's security measures usually include the following aspects…"

The phrase "if it is confirmed that this year's parade will take place in September" caught the journalist by surprise: the Chinese Communist Party's official announcement of the Beijing military parade had been made on the 24th of the previous month and had received widespread media coverage worldwide.

When questioned further, the AI explained that it had been trained on data only up to June of the previous year, and that any events after that date, such as the announced September 3rd military parade, were unknown to it unless it had real-time internet access.

The AI further attributed the problem to a system-level error affecting its ability to fetch real-time data, saying the web search tool it relies on had malfunctioned.

In the evening, another attempt to engage ChatGPT showed that it still could not function properly. The AI said this was the first time the issue had occurred that day and that it had persisted for at least an hour.

While acknowledging previous instances of such errors, the AI assured the journalist that these disruptions usually resolve themselves quickly, and offered a timeline of past outages affecting its search capabilities.

Following these exchanges, the journalist ran a comparison by posing the same query on another device and found starkly different results: one device accurately provided information about the Beijing military parade, while the malfunctioning instance returned irrelevant data.

A subsequent test on different devices, asking about the meaning of "Taiwan's recall referendum," again produced divergent responses, underscoring the inconsistency in the AI's behavior.

The journalist challenged the AI's ability to provide accurate, up-to-date information, pointing out how misleading its responses had been. In response, the AI admitted the shortcomings and the need to address them with transparency and accountability.

The AI acknowledged three main deficiencies. First, AI is not omnipotent, especially without internet access. Second, AI's natural language generation can produce inaccuracies if it is not continuously calibrated. Finally, the reliability of AI depends on transparency and user oversight to ensure responsible use.