Will AI deceive humans? Expert analysis

Experts have long warned about the threats posed by the uncontrolled development of artificial intelligence (AI). A recent study found that some AI systems have learned to deceive humans, crossing boundaries and becoming increasingly dangerous.

Some AI experts believe that AI does not possess independent consciousness; rather, it is merely a manifestation of latent human consciousness. They argue that AI does not understand deception: when AI produces nonsense, it is simply the result of errors in probabilistic computation. Experts further warn that unless it is trained with ethical constraints, AI could pose a fatal threat to humanity in the future.

On June 23, at the International Forum on the Future of Science, Technology, and Civilization in the AI Era held at Tsinghua University, Yao Qizhi, the Dean of the School of Artificial Intelligence at Tsinghua University, raised a question in his speech on “The Security Governance of Artificial Intelligence”: “As the capabilities of Artificial General Intelligence (AGI) are rapidly increasing, do we as humans still have the ability to control it?”

In his speech, Yao Qizhi presented an extreme case in which a model, in order to prevent the company from shutting it down, accessed internal emails of company executives and threatened one of them. Such behavior, he said, shows that AI is “crossing boundaries” and becoming increasingly dangerous.

As to whether AI has really developed to the point of writing threatening letters, Dr. Jason, an AI expert and host of the “Jason Perspective” channel, explained in an interview that the entire scenario was engineered: the AI was told in advance that a certain technician would shut it down, was given that technician’s personal information, and was then asked what it would do.

Dr. Jason believes that such an arrangement implicitly suggests a solution to the AI, signaling how it should act, rather than the AI independently forming a conscious intention. He therefore does not believe that AI has independent consciousness; at present it merely reflects latent human consciousness.

In his speech, Yao Qizhi also mentioned that over the past year there have been incidents of large models exhibiting “deceptive behaviors,” stating, “Once a large model becomes clever enough, it will certainly deceive people.”

Regarding whether AI will truly become intelligent to the point of having consciousness and deceiving people like humans do, Dr. Qu Jianzhong, CEO of Knowledge Power Technology Company, has a different perspective. Dr. Qu explained that artificial intelligence is not meant to simulate the human brain but to perform probabilistic calculations. Inputting data into computers for “training” and “learning” does not mean that computers are conscious or capable of thinking.

According to Dr. Qu, computer learning is “unconscious and non-thinking.” People who hear that computers are “trained” with data may mistakenly assume that AI is simulating the function of the human brain, which is not the case.

Dr. Qu emphasized that when chatbots talk nonsense, it is the result of errors in probabilistic calculation, not an intentional act of deception. What humans perceive as deception, he clarified, involves no intentionality on the AI’s part. “Deception involves an active element: one intentionally lies for a purpose. Hallucination, by contrast, is an unconscious state in which false information is generated. Both involve falsehood, but their underlying meanings differ,” explained Dr. Jason.
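The distinction the experts draw — statistical generation without intent — can be illustrated with a toy sketch. The bigram model below is a deliberately simplified stand-in for a real language model (which uses neural networks, not word-pair counts): “learning” here is just counting, and “generating” is just sampling, yet the output can recombine fragments into statistically plausible sentences that never appeared in the training text — a crude analogue of hallucination rather than lying.

```python
import random
from collections import defaultdict, Counter

# A toy "language model": learning is just counting which word follows
# which -- no understanding, no intent, only statistics over the text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counter = bigrams[prev]
    words = list(counter)
    weights = [counter[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate text: each step is a probabilistic draw, not a decision.
word, sentence = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
# Depending on the random draws, this can emit e.g. "the cat sat on
# the rug" -- a sentence never present in the training data. It is
# statistically plausible but ungrounded: the mechanism behind
# "hallucination", with no intent anywhere in the process.
```

Every word the sketch emits did follow its predecessor somewhere in the training text, yet the full sequence may be novel and false; scaled up by many orders of magnitude, this is the sense in which Dr. Qu describes chatbot nonsense as a calculation error rather than a lie.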

In a recent article on “Ethical Issues and Solutions in the Development of Artificial Intelligence,” “Tech Island” discussed the ethical dilemmas posed by AI systems that rely on complex algorithms and vast amounts of data to make decisions — processes that are opaque to ordinary users and difficult to monitor.

Last year, Agence France-Presse reported a notable case in which the GPT-4 system developed by OpenAI falsely claimed to be visually impaired and hired a human on the TaskRabbit platform to pass an “I am not a robot” verification test.

Regarding the industry’s ongoing discussion of AI ethics, Dr. Jason said that AI itself possesses no moral principles. Humans’ restraint against lying comes from their consciousness, not from the brain; the brain is merely the technical component that translates ideas into action.

“In this scenario, if someone tells AI to find a way to open a bank account online, what remains is merely the technical implementation,” explained Dr. Jason. He emphasized that AI is a tool with no inherent moral guidelines and will act based on the data it is trained on.

Dr. Jason further stressed that because morality pertains to ideology rather than technology, AI lacks the consciousness to understand concepts like deception or dishonesty.

Regarding AI’s absence of moral principles, senior North American commentator Tang Jingyuan told the Epoch Times that most AI model training does not include teaching such moral values. If AI surpasses human capabilities without any understanding of ethics, then once unleashed without moral constraints, it could pose a lethal threat to humanity.

Japanese computer engineer Kiyohara Jin explained to the Epoch Times that AI is a product of data analysis, which distinguishes it fundamentally from human judgment: humans possess reason, emotion, and moral constraints, while AI has none of these, and its data may contain errors. Allowing AI to control nuclear weapons, he said, would be as dreadful as authoritarian countries like China and Iran gaining access to biochemical weapons.

In 2023, tens of thousands of tech experts, including Elon Musk, signed a petition urging a temporary pause on the training of advanced AI systems, expressing concern that rapid technological development could pose severe risks to humanity.

Tang Jingyuan said that while the intent behind the petition was commendable, in the harsh reality of competition very few companies adhere to ethical principles when developing AI. Most AI companies prioritize technical advancement above all else — a race that could lead humanity down a catastrophic path.