CCP Accelerates Development of Artificial Intelligence Weapons, Threatening Human Survival

A group of experts has warned that cutting-edge weapons powered by artificial intelligence (AI) are becoming a global security threat, especially in the hands of the Chinese Communist Party (CCP).

Experts believe that the CCP is eager to surpass the United States militarily and may overlook safeguards related to deadly AI technology, which is becoming increasingly dangerous. They caution that this technology could easily exacerbate the worst tendencies of the CCP regime and human nature.

Bradley Thayer, Senior Fellow at the Center for Security Policy, China strategy expert, and a writer for The Epoch Times, stated that the impact could be as significant as the nuclear revolution.

Alexander De Ridder, co-founder of the AI marketing company Ink, said that the development of AI-driven autonomous weapons is, unfortunately, progressing rapidly. These weapons are becoming more efficient and effective, he noted, but they cannot yet entirely replace humans.

Autonomous drones, tanks, ships, and submarines have already become a reality, alongside more peculiar designs such as the machine-gun-armed four-legged robot dogs developed in China. Even the humanoid robots of science-fiction horror films are in production, though they remain relatively clumsy in the real world. De Ridder said these robots' capabilities are advancing rapidly.

He believes that once these robots are practical and reliable enough for the market, China is likely to mass-produce them with its manufacturing prowess. Humanoid robots will flood the market, De Ridder said, and what happens next will depend on how programmers use them.

That points to military applications as well, which he called inevitable.

James Qiu, an AI expert, founder of the GIT Research Institute, and former chief technology officer of FileMaker, explained that AI systems are highly effective at processing images to identify objects, which makes AI robots excellent at targeting.

Jason Ma, an AI expert and data research lead at a Fortune 500 multinational corporation, said several nations are developing AI systems that can supply information and coordination for battlefield decisions, in effect serving as electronic generals. He declined to name his employer to avoid any impression that he was speaking on its behalf.

In a recent military exercise conducted by the Chinese People’s Liberation Army (PLA), AI was directly involved in command.

Jason Ma noted that the US military also has similar projects in development, emphasizing that it is a very active research topic.

He explained that the need is apparent. Decision-makers on the battlefield must consider vast amounts of data ranging from historical context and real-time satellite data to millisecond inputs from every camera, microphone, and sensor on the battlefield.

He stated that humans find it challenging to process such diverse data flows.

He elaborated on the complexities of making accurate decisions in increasingly complex warfare scenarios and emphasized the importance of rapidly integrating and synthesizing all information within seconds or even fractions of a second.

Experts unanimously agree that AI weapons are redefining warfare, with far-reaching consequences. Thayer argued that this technology is making the world increasingly unstable.

At the most basic level, AI-driven weapons targeting could make it easier to shoot down intercontinental ballistic missiles, detect and destroy submarines, and take down long-range bombers.

Thayer believes this could weaken the US nuclear triad, allowing adversaries to escalate below the nuclear threshold with impunity.

He stated, “AI will affect every component of the nuclear triad, and understanding these components that we developed during the Cold War and their significance to a stable nuclear deterrence relationship is absolutely essential.”

He added, “During the Cold War, it was generally understood that conventional warfare between nuclear powers was unwinnable, but AI is challenging this understanding by introducing the possibility of conventional conflicts between two nuclear states.”

He predicted that if the unrestrained development of AI-driven weapon systems continues, this instability will worsen.

He warned that while AI is significantly impacting the battlefield, it is not yet decisive.

He cautioned that if AI gained the ability to wage the equivalent of nuclear warfare without nuclear weapons, it would create a highly dangerous and deeply destabilizing situation, in which launching a first strike becomes more attractive than absorbing one.

In strategic terms, this concept is known as "damage limitation": one must strike first to avoid significant losses, which heightens instability in international politics.

The concern is not limited to killer robots or drones but also includes a variety of unconventional AI weapons, such as developing AI to exploit vulnerabilities in critical infrastructure like power grids or water systems.

Controlling the spread of such technology is incredibly challenging, because AI is ultimately software. Even the most advanced models can fit on an ordinary hard drive and run on a small server cluster. Lethal AI weapons such as killer drones, which can be shipped in bulk without attracting notice, are a growing proliferation concern.

Thayer pointed out, “The power for vertical and horizontal proliferation is enormous and easily achievable.”

De Ridder noted that China aims to be seen as a responsible player on the world stage.

However, other experts mentioned that this has not deterred the CCP from providing weapons or assisting less scrutinized regimes and organizations in weapon programs.

It would not be surprising if the CCP furnished AI weapons to terrorist organizations, bogging the US military down in endless asymmetric conflicts. The CCP could keep its distance by supplying components for proxies to assemble into drones, much as Chinese suppliers provide precursor chemicals that Mexican drug cartels use to produce, transport, and sell drugs.

For instance, the CCP has long assisted Iran in its weapon projects, and in turn, Iran has supplied weapons to terrorist organizations in the region.

Thayer highlighted the lack of repercussions Iran faced for these actions.

In the US and its allied nations, at least, it is widely held that keeping humans in control of critical decisions, especially those involving lethal force, is paramount to preventing unforeseeable destruction by AI weapons.

De Ridder emphasized, “Under no circumstance should any machine autonomously take away human lives.”

This principle is commonly known as “Human in the Loop.”

De Ridder reasoned, "Humans have a conscience; they can wake up in the morning feeling remorse, take responsibility for their actions, learn from their mistakes, and avoid repeating atrocities."

However, some experts said this humane principle is being eroded by the realities of AI-enabled combat.

For instance, in the Ukraine conflict, Ukrainian forces have had to give their drones a degree of autonomy to guide themselves to targets, because Russian jamming disrupted communications with human operators.

James Fanell, a former naval intelligence officer and China expert, noted that relinquishing human control when necessary is not a new concept.

He cited the example of the Aegis Combat System deployed on US missile cruisers and destroyers. The system can automatically detect and track airborne targets and launch missiles to intercept and destroy them. Missile launches are typically controlled by human operators, but the system can be switched to automatic mode when, for instance, there are too many targets for human operators to track; it then identifies and destroys targets autonomously.

In a major conflict in which hundreds or even thousands of drones are deployed simultaneously, they can share computational power to execute more complex autonomous tasks. Fanell stated, "Everything is possible; we are beyond the world of science fiction. It's a matter of whether there is a group of people willing to invest the time to study this very real technology."

Chuck de Caro, a former consultant to the Pentagon's Office of Net Assessment, recently advocated developing electromagnetic weapons that can render computer chips inoperable. In a column on Blaze, he suggested it may be possible to build energy weapons that disable specific kinds of chips.

He pointed out, “Obviously, without chips functioning normally, AI can’t work.”

Another possibility is developing an AI superweapon for the purpose of deterrence.

Fanell asked, "Is the US embarking on an AI version of the 'Manhattan Project'? Something that could have the same impact on the People's Republic of China and the Chinese Communist Party as the bombings of Hiroshima and Nagasaki, making them realize, 'Well, maybe we don't want to go there'? Isn't that mutual assured destruction? I don't know, but if I were an American leader, I would pursue it."

This could lead the world into a standoff reminiscent of the Cold War. It is not an ideal scenario, but it may be preferable to conceding military superiority to the CCP.

Jason Ma expressed, “Every country knows how dangerous it is, but no one can slow down because they are afraid of being left behind by their adversaries.”

De Ridder urged a less destructive way to use AI globally. He stated, "There are many ways to achieve your goals using AI without sending swarms of killer drones at the other side. Unless absolutely necessary, no one wants these conflicts to occur."

However, other experts argued that as long as the CCP sees a clear path to victory, they will not hesitate to provoke such conflicts.

Fanell asserted, "The Chinese will not be constrained by the rules we set; they will try to exploit AI and learn to use it better than we do."

Thayer said that relying on AI military advisers for decision-making could be particularly dangerous: they are highly appealing because they instill confidence by processing vast amounts of data and devising compelling battle plans, but they could conjure visions of victory where none existed before.

He remarked, “You can see how attractive this prospect is for decision-makers, especially for decision-makers as aggressive as the CCP, which might amplify their aggression.”

Fanell added, “To stop it, there’s only one way, and that’s to defeat it.”

In conclusion, while public pressure may limit the development and use of AI weapons, at least in the US, experts foresee a challenging landscape in which the CCP, with its different motivations and controls, may not accept such restrictions.