“Charlotte Mount: Ethical Concerns Arose Before AI Weapons Entered the Battlefield”

How long will it take for the United States to be ready to fight side by side with lethal autonomous attack drones? The answer is still uncertain, but it may not be long: technological advances have made the possibility very real, and they bring serious ethical risks with them.

Even before the war in Ukraine broke out in 2022, small drones had begun to change the face of warfare. With the rapid development of artificial intelligence, the latest attack and reconnaissance drones now feature enhanced autonomy and can complete more complex missions with minimal or even no operator intervention.

On October 10, California-based Anduril Industries unveiled Bolt, a series of portable vertical take-off and landing (VTOL) autonomous air vehicles (AAVs) with the potential to carry out a range of complex battlefield tasks. The baseline Bolt is designed for intelligence, surveillance, and reconnaissance (ISR) and for search-and-rescue missions. Bolt-M is an attack variant developed from the baseline model and intended to give ground forces precise lethal firepower. It autonomously handles tracking and targeting, leaving the operator four simple decisions: what to observe, what to follow, how to engage, and when to strike.

Designed for rapid deployment, Bolt-M is easy to operate and carry, and offers autonomous waypoint navigation, tracking, and engagement options. It provides more than 40 minutes of endurance and a control range of about 20 kilometers in support of ground operations. Bolt-M also carries a munition payload of up to 3 pounds, enabling destructive strikes against stationary or moving ground targets, including light vehicles, infantry, and trenches.

Today, FPV (first-person view) drone operators on the battlefield, whether in the Ukrainian armed forces or the U.S. Army, need specialized training to carry out missions and work under many operational limitations. What Bolt-M's artificial intelligence offers FPV operators is the ability to meet combat requirements without such complex training, while providing more information and functionality than existing drones.

Anduril Industries has signed a contract with the U.S. Navy to develop an autonomous attack drone under the U.S. Marine Corps' Organic Precision Fires-Light (OPF-L) program. The core technology is the artificial intelligence software of Anduril's Lattice platform: operators need only draw a boundary on the battlefield display and set a few rules, and the drone can carry out its combat mission autonomously.
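The article does not describe Lattice's actual interface, but the workflow it sketches, an operator marking out a bounded area and attaching a few rules that the vehicle then follows, can be illustrated in abstract terms. The sketch below is purely hypothetical: the class names, fields, and the point-in-polygon check are illustrative assumptions of mine, not Anduril's API. It shows only the generic idea of a geofenced mission specification with operator-set rules.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in metres on a local map grid (illustrative)

def point_in_polygon(pt: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: is a point inside the operator-drawn boundary?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

@dataclass
class MissionSpec:
    """Hypothetical mission spec: a drawn boundary plus a few operator-set rules."""
    boundary: List[Point]                   # polygon the operator drew on the map
    allowed_tasks: List[str]                # e.g. ["observe", "track"]
    require_operator_approval: bool = True  # lethal action must be confirmed by a human

    def waypoint_allowed(self, pt: Point) -> bool:
        """The vehicle may only plan waypoints inside the drawn boundary."""
        return point_in_polygon(pt, self.boundary)

if __name__ == "__main__":
    # The operator draws a 2 km x 2 km box and limits the mission to observe/track.
    mission = MissionSpec(
        boundary=[(0, 0), (2000, 0), (2000, 2000), (0, 2000)],
        allowed_tasks=["observe", "track"],
    )
    print(mission.waypoint_allowed((500, 750)))   # True: inside the boundary
    print(mission.waypoint_allowed((2500, 100)))  # False: outside the boundary
```

In this generic pattern, the operator's up-front choices (the boundary and the rule set) are the only inputs the autonomy is allowed to act on, which is the "draw a boundary, set some rules" idea the paragraph above describes.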

Bolt-M's autonomy comes from the Lattice platform, which fuses information from a wide range of sensors and source databases and can be integrated with many different drones. Lattice gives a drone as much autonomy as possible throughout the kill chain while keeping a human involved, so that decisions can be made faster and better.
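The idea of "as much autonomy as possible, with a human kept in the loop for decisions" corresponds to a familiar supervisory-control pattern. The sketch below is a generic illustration of that pattern under my own assumptions; the phase names and the approval callback are invented for the example and are not Lattice's design. The software may detect and track on its own, but any transition to a lethal action is gated on an explicit human decision.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional

class Phase(Enum):
    """Simplified phases of an engagement sequence (illustrative only)."""
    SEARCH = auto()
    TRACK = auto()
    AWAITING_APPROVAL = auto()
    ENGAGE = auto()
    ABORT = auto()

@dataclass
class Contact:
    contact_id: str
    classification: str   # what the perception stack believes it is seeing
    confidence: float     # 0.0 - 1.0

class SupervisedEngagement:
    """Autonomous search and track, but engagement is gated on a human decision."""

    def __init__(self, approve: Callable[[Contact], bool]):
        self.approve = approve            # callback standing in for the human operator
        self.phase = Phase.SEARCH
        self.contact: Optional[Contact] = None

    def on_detection(self, contact: Contact) -> None:
        # The software may begin tracking on its own.
        self.contact = contact
        self.phase = Phase.TRACK

    def request_engagement(self) -> Phase:
        # No lethal action without an explicit, affirmative human decision.
        if self.contact is None:
            return self.phase
        self.phase = Phase.AWAITING_APPROVAL
        self.phase = Phase.ENGAGE if self.approve(self.contact) else Phase.ABORT
        return self.phase

if __name__ == "__main__":
    # A stand-in for the operator console: here the "human" always declines.
    operator_declines = lambda contact: False
    engagement = SupervisedEngagement(approve=operator_declines)
    engagement.on_detection(Contact("c-001", "light vehicle", 0.87))
    print(engagement.request_engagement())  # Phase.ABORT: the human did not approve
```

The point of the pattern is where the gate sits: perception and tracking run autonomously, while the one decision the article identifies as ethically sensitive remains with a person.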

Once the artificial intelligence identifies a target, the operator can assign a target area to Bolt-M; even if the target is beyond visual range or moving, the system can track and target it accurately. The drone's onboard vision and guidance algorithms can maintain terminal guidance and deliver an effective attack even if the drone loses contact with the operator.

Bolt-M can also help its operators understand the evolving situation on the battlefield and can track, monitor, and attack targets on the operator's orders. For targets the computer may not recognize, such as Russia's "turtle tanks" or other vehicles fitted with improvised camouflage and protection, the system can still pass its observations to the operator to support decision-making. Crucially, these lethal drones can keep hold of their targets and autonomously carry out previously issued commands even if the link is disrupted and the operator loses control.

However, Bolt-M's autonomous attack capability pushes beyond the limits of the Pentagon's stated principles, which hold that robotic weapons should always involve a human in lethal decisions. The Pentagon's list of ethical principles for artificial intelligence stipulates that people must exercise appropriate judgment and remain responsible for the development, deployment, and use of AI weapons. Last year, the Pentagon sought to clarify what is and is not allowed, while leaving room to adjust the rules as circumstances change.

As drones become more effective on the battlefield, demand for autonomous attack drones is growing rapidly. For companies such as Anduril Industries, there are almost no technical obstacles to giving drones autonomous attack capability; the difficulty lies in striking a balance between ethical constraints and lethal autonomy. Industry will not settle the ethical questions itself. It will follow government policy, rules of engagement, management requirements, and user needs, and within those limits make its systems as capable as possible.

One of the crucial lessons of the Ukraine battlefield is that conditions can change very quickly. Different countries, allies and adversaries alike, hold different ethical standards for developing and using lethal autonomous weapons, and those standards are shaped largely by events on the battlefield. This uncertainty, or lack of consensus, can have serious consequences: while the Pentagon and other U.S. agencies emphasize AI ethics and insist on keeping a human in the loop for the use of lethal force, it is almost impossible to guarantee that adversaries will accept similar constraints. That presents unprecedented risks for the Pentagon, which is why so much effort is going into how the U.S. military, government, and industry use artificial intelligence, autonomy, and machine learning in current operations and in weapons development.

The U.S. Army has been working actively to put the next steps on artificial intelligence security into practice. It recently launched a 100-day artificial intelligence risk assessment plan to ensure that AI systems are developed, strengthened, and improved within ethical constraints. These efforts emphasize the importance of combining human and machine capabilities.

Not only does the Pentagon require a human in the loop for the use of lethal force, but the Army's own technology developers recognize that even advanced artificial intelligence cannot replicate crucial, more subjective human attributes such as ethics, intuition, consciousness, and emotion. These are only a few of the distinctly human qualities at the core of decision-making, yet they may prove crucial in combat. Purely technical systems that lack them not only pose ethical risks; they may also be unable to handle complex battlefield situations.