The Ethical Dilemma: Problems with Autonomous Weapons and Their Implications

By Henry V. Lyons, Jr. 10/29/2023


In 1984, the movie "The Terminator" became a hit on the premise that machines infused with artificial intelligence become sentient, view humans as the enemy, and begin the extermination of mankind. Pure science fiction, right? Or was it?

Today, with the emergence of advanced AI and the proliferation of autonomous drones, this science fiction story has all the ingredients needed to become science fact. Militaries around the world are already racing to weaponize AI. These lethal machines, colloquially referred to as "killer robots," are equipped with artificial intelligence systems that allow them to function independently, making decisions and executing actions without direct human intervention.

The introduction of weaponized AI into warfare has triggered a litany of ethical quandaries, provoking intense debate among governmental authorities, military strategists, and human rights advocates. The use of autonomous weaponry raises a multitude of questions, notably concerning accountability, transparency, and unforeseen consequences.

To fully understand the ramifications and moral quandaries this new technological surge brings, we must examine the arguments on both sides of the debate over autonomous weapons. Only by considering the complexities of the matter can we develop a meaningful dialogue on how our society should grapple with the challenges posed by weaponized AI.

The Risks and Dangers of Autonomous Weapons in Warfare


The development and deployment of autonomous weapons, often referred to as military robots or lethal autonomous systems, have elicited profound concerns about the risks AI poses in warfare. These advanced technologies could dramatically reshape modern warfare, presenting opportunities and challenges across both ethical and security dimensions.

One of the paramount concerns surrounding AI in warfare is the prospect of relinquishing human control over weapon systems. Autonomous weapons, by their very nature, are designed to function independently, rendering decisions and executing actions without direct human intervention. This absence of human oversight raises disconcerting questions about accountability and unforeseen consequences. This is also where the "Terminator" argument begins. Imagine intelligent, heavily armed machines roaming the battlefield untethered from human control, making split-second combat decisions on their own. As AI grows ever more capable, who is to say what decisions these machines will choose to make? How will they determine who is friend and who is foe, and are these systems safeguarded against hacking?

These are some of the questions posed by Maiara Folly in her article "'Killer robots': the danger of lethal autonomous weapons systems," published by Southern Voice: "In Latin American countries, the use of facial recognition technology has disproportionately threatened black people. Many are arrested for crimes they did not commit. It is often based on inaccurate images identifying them as the perpetrator of offences. Ultimately, lethal autonomous weapons systems making use of this technology to identify targets could kill innocent individuals. In addition, killer robots, like all technology, are vulnerable to hacking and could turn against their developers in the event of cyberattacks." (Folly, 2022) The potential inability of autonomous weapons to accurately distinguish between combatants and non-combatants lays the groundwork for indiscriminate attacks in conflict zones. Furthermore, the specter of these systems being hacked or manipulated by malicious actors looms large, with potentially catastrophic outcomes.

A further cause for concern is the specter of a new arms race triggered by the development and proliferation of autonomous weapons. As nations vie for a competitive edge by integrating AI into their military arsenals, a perilous cycle is set in motion in which each party strives to outmatch the others' weapon capabilities. This spiral raises the likelihood of conflict and instability on a global scale, casting a shadow over the prospects for international peace and security.

Consider the U.S. Air Force's idea to connect all U.S. combat forces in one AI-driven system, the Joint All-Domain Command-and-Control System (JADC2). This system would enable commanders to make faster and more accurate decisions by collecting data from thousands of sensors, then using AI algorithms to identify targets, select the best weapon for each target, and engage it. JADC2 was supposed to control only conventional weapons, but there are plans to link it with the Pentagon's nuclear command-control-and-communications systems (NC3). If you've seen "The Terminator," then you know this is how the computers took over. In the movie, an AI system named "Skynet" was put in control of the entire U.S. nuclear arsenal; it became self-aware and turned the very weapons it controlled against its creators. This would be life imitating art in a horrific way. But even if a system becoming sentient is too fictional to become reality, there are still real dangers involved in activating such a system.
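To make the stakes of such a design concrete, here is a minimal Python sketch of a sense-identify-select-engage pipeline of the kind described above, with a single flag controlling whether a human must approve each engagement. Everything in it (the Track record, the thresholds, the function names) is a hypothetical illustration, not JADC2's actual architecture, which is not publicly specified. Notice how small the difference is, in code, between human-in-the-loop and fully autonomous operation: one boolean is what the entire debate is about.

```python
# Hypothetical sketch: a sense -> identify -> select -> engage pipeline
# with an optional human-approval gate. All names and logic here are
# illustrative assumptions; JADC2's real architecture is not public.

from dataclasses import dataclass

@dataclass
class Track:
    """A fused sensor track representing a potential target."""
    track_id: str
    classification: str  # e.g. "hostile", "friendly", "unknown"
    confidence: float    # classifier confidence in [0.0, 1.0]

def identify_targets(tracks):
    """Keep only tracks the classifier has labeled hostile."""
    return [t for t in tracks if t.classification == "hostile"]

def select_weapon(track):
    """Placeholder weapon selection based on classifier confidence."""
    return "interceptor" if track.confidence > 0.9 else "jammer"

def engage(track, weapon, human_in_loop=True):
    """Engage a target, optionally gated on human confirmation.

    The human_in_loop flag is where the ethical debate lives:
    True keeps a person in the decision loop; False makes the
    engagement decision fully autonomous.
    """
    if human_in_loop:
        answer = input(f"Engage {track.track_id} with {weapon}? [y/N] ")
        if answer.strip().lower() != "y":
            return False  # human vetoed the engagement
    print(f"Engaging {track.track_id} with {weapon}")
    return True

# Example run over two synthetic sensor tracks.
if __name__ == "__main__":
    tracks = [
        Track("T-001", "hostile", 0.95),
        Track("T-002", "unknown", 0.40),
    ]
    for target in identify_targets(tracks):
        engage(target, select_weapon(target))
```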

Take this scenario from an article published in Foreign Policy in Focus, “Picture a time in the not-too-distant future when a crisis of some sort — say a U.S.-China military clash in the South China Sea or near Taiwan — prompts ever more intense fighting between opposing air and naval forces. Imagine then the JADC2 ordering the intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on U.S. facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.” (Klare, 2023) This is much more frightening because it is much more plausible.

On an international scale, concerted efforts are underway to mitigate the risks and dangers associated with autonomous weapons. Organizations such as the Campaign to Stop Killer Robots (www.stopkillerrobots.org) champion a pre-emptive ban on fully autonomous weapons systems. Their advocacy underscores the importance of retaining meaningful human control over decision-making in armed conflict, preserving the core tenets of humanity and morality in the face of advancing technology.

Ethical Considerations: The Moral Quandaries Surrounding Autonomous Weapons


One of the central ethical dilemmas revolves around human control of AI weaponry. The crux of the matter is whether decisions involving life and death should be entrusted solely to machines, bereft of human oversight, or whether a human should always remain in the decision-making loop, guaranteeing accountability and bearing the weight of responsibility for these consequential actions.

The deployment of autonomous weapons further complicates accountability in warfare. If an AI-driven weapon errs or inflicts harm, who bears ultimate responsibility? The programmer who designed the algorithm? The military personnel who put the system into action? Such multifaceted moral conundrums demand careful contemplation and robust legal frameworks to ensure that those culpable are held answerable.

Furthermore, there is the apprehension that autonomous weapons could be misused or hacked. Should these highly sophisticated systems fall into the wrong hands, or be manipulated by malicious actors, the repercussions could be nothing short of catastrophic.

As society grapples with these ethical dilemmas, it becomes imperative to foster meaningful dialogue and establish international norms and regulations governing the use of autonomous weapons. Striking a balance between the relentless march of technological progress and the enduring values of humanity will be essential to navigating this terrain responsibly.

References

Folly, M. (2022, May 18). ‘Killer robots’: the danger of lethal autonomous weapons systems. Southern Voice. https://southernvoice.org/killer-robots-the-danger-of-lethal-autonomous-weapons-systems/

Klare, M. (2023, July 14). The military dangers of AI are not hallucinations. Foreign Policy in Focus. https://fpif.org/the-military-dangers-of-ai-are-not-hallucinations/

Stop Killer Robots. (n.d.). Stop killer robots. https://www.stopkillerrobots.org/
