Experts’ Concerns About the Artificial Intelligence Revolution

As the application scope of artificial intelligence (AI) continues to expand, AI has found its way into products ranging from toothbrushes to cars. Regulation and supervision, however, have lagged behind the pace of development, giving rise to a range of problems.

Businesses are turning to AI to enhance productivity and efficiency, believing that AI can perform tasks more accurately, quickly, and economically than humans. This shift has resulted in mass layoffs in Silicon Valley. However, the primary concern of a top AI expert interviewed for this article is not unemployment.

“What we are concerned about is… AI creating its own AI. Humans are not involved in this cycle. This is the fear; this is the beginning of superintelligent AI,” pointed out Ahmed Banafa, an engineering professor at San Jose State University, who is an expert in AI and cybersecurity.

Professor Banafa, ranked first in the field of AI for 2024 by the professional networking site LinkedIn, emphasized that the AI revolution is unfolding far more rapidly than previous technologies, with its pace and the number of AI companies doubling every six months.

He explained that we are currently in the phase of generative AI, in which applications are trained on input data and use programmed algorithms to form their own perspectives. The AI dialogue platform ChatGPT (Chat Generative Pre-trained Transformer), developed by OpenAI in San Francisco, is a prime example.

For instance, GPT-3.5 was trained on 500 billion data points, GPT-4 on 1 trillion, and Gemini, the new AI chatbot developed by Google, on 55 trillion. This data continuously trains the underlying algorithms, enabling the AI to form its own perspectives.

Professor Banafa said the next stage after generative AI will be superintelligent AI, in which AI will possess self-awareness and begin thinking independently. He believes this stage is still several years away.

“This is the moment we are heading into the era of superintelligent AI, where machines start to… have some emotions,” he said. “We are concerned about it. … Remember, AI has very powerful ways of thinking and connecting; they can access the internet.”

He specifically mentioned an experiment at Google in which researchers taught an AI five languages, only to discover later that it had also learned an unplanned sixth language on its own, apparently finding it appealing.

“This is the risk we see… the tipping point we see about superintelligent AI is when they start making decisions without our consent,” he emphasized.

Moreover, advancing technology gives malicious actors the opportunity to create “deepfake” videos using AI. Deepfakes are AI-generated videos that mimic real individuals and can interact as they would.

In one case in Hong Kong, a banker joined what he believed was a video conference with company colleagues; it was in fact a deepfake. The criminals behind it persuaded him to transfer the equivalent of 25 million USD to them.

Professor Banafa revealed that he has sent numerous letters to the White House expressing his concerns about AI.

“From all the letters I sent to the White House, I got a clear message that the US government will never stand in the way of technological advancement,” Professor Banafa lamented.

He pointed out that the US aims to lead the world in AI and stay ahead of many nations. However, he stressed that this leadership should be measured based on its impact on humans, society, and employment.

In his most recent exchange with the White House, the Office of Science and Technology Policy sent him the Blueprint for an AI Bill of Rights. The blueprint aims to develop policies and practices that protect citizens’ rights and promote democratic values in the building, deployment, and governance of automated systems.

Regarding regulation, he stressed that lawmakers do not need to understand AI or algorithms intricately but should grasp the impacts of AI on society, business, and technology.

In April of this year, the federal government announced the establishment of the Artificial Intelligence Safety and Security Board, a 22-member body led by the Secretary of Homeland Security that includes leaders from government, the private sector, academia, and civil rights organizations.

Professor Banafa emphasized the importance of these stakeholder groups coming together.

“It will be the voice of the people, ‘We care about privacy; we care about security; we care about our safety,’” he stated.

Members of the board include the CEOs of OpenAI, NVIDIA, AMD, Alphabet, Microsoft, Cisco, Amazon Web Services, Adobe, IBM, and Delta Air Lines, as well as prominent figures from other sectors, including Governor Wes Moore of Maryland, the president of The Leadership Conference on Civil and Human Rights, and the co-director of the Stanford Institute for Human-Centered Artificial Intelligence.

Professor Banafa explained that the board’s task will be to provide the White House with expert opinions and recommendations on regulation and legislation.

The article writer is Keegan Billings, a reporter for The Epoch Times based in the San Francisco Bay Area, covering Northern California news.