Should Lethal Autonomous Weapons be Banned?

Stephen Ragan
May 24, 2019
[Image: Russian automated weapon]

Technological developments are reshaping our lives. We are increasingly interconnected, analyzed, and evaluated. As technology progresses, we are learning how best to live with progress that lets each of us instantly curate an online paradise of passive consumption, full of feedback loops that increasingly isolate us.

Artificial Intelligence is becoming increasingly adept at understanding our tastes, values, and desires, sometimes better than our own conscious minds, by manipulating our first-order, impulsive thinking. Algorithms and machines are winning the battle over what we want and what we want to want.

Artificial Intelligence is also making progress on another, less abstract battlefield: lethal autonomous weapons, and the debate currently raging over the future of warfare and how it is conducted.

Our thinking about autonomous weapons is inextricably linked with the pop culture we have grown up with: think of a Terminator sprinting across Afghan plains, or drones striking with calculated precision. Our imaginations, through a combination of science fiction and technology journalism, have been stretched and shaped to inform an intuition about weapons development. I want to explore that intuition as Amazon shareholders vote on the future of the company’s facial recognition software and employees at Google stage walkouts to protest the use of their technology by the military.

But First, Just How Bad Might Lethal Autonomous Weapons Be?

Because lethal autonomous weapons do not yet exist, these considerations are speculative. By lethal autonomous weapons I mean weapons with the offensive capability of operating in a dynamic environment, picking and choosing targets without human control.

First let me introduce the philosopher and author Nick Bostrom’s “Vulnerable World Hypothesis.” His paper broadly looks at technological advancement as a process of pulling balls out of a giant urn. Each ball represents a possible idea, and the balls fall into three categories based on their utility. White balls are entirely beneficial. Gray balls offer mixed blessings, while a black ball is defined as a “technology that invariably or by default destroys the civilization that invents it.” It’s unclear what type of ball lethal autonomous weapons will be, but as Bostrom says, and he is way smarter than me:

“The hypothesis is that there is some level of technology at which civilization almost certainly gets destroyed unless quite extraordinary and historically unprecedented degrees of preventive policing and/or global governance are implemented.”

In his paper Bostrom draws an analogy to the nuclear arms race and highlights an accidental virtue: the difficulty of developing an atomic weapon. This difficulty has thankfully prevented its mass production[1] and reserved control to nation states. Bostrom goes on to speculate about a technological advancement in weapons development that makes weapons of mass destruction easily attainable. Something like drones, perhaps, which are easy to buy and program.

You see, part of artificial intelligence’s fast acceleration is the open nature of its development. While this openness has pushed the technology forward, it also makes it available to anyone: you can, for instance, go on Google and find a trained neural network that identifies human faces and estimates age and gender. That’s a little disconcerting.
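
To make the point concrete, here is a minimal sketch of how little code it takes to run a pre-trained face detector, assuming only the freely available OpenCV library and a placeholder image file of my own invention (the downloadable age and gender models mentioned above work in much the same way):

```python
# A minimal sketch: OpenCV ships with a pre-trained frontal-face
# detector, no training required. (pip install opencv-python;
# "crowd.jpg" is a placeholder for any photo of people.)
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("crowd.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one bounding box (x, y, w, h) per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} faces")
```

A dozen lines, no machine learning expertise required, and every piece freely downloadable: that is the accessibility problem in miniature.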

The ease of access relates to another Bostrom idea that he refers to as the “anarchy at the bottom.” This idea explores the possibility that a sufficiently large sample increases the likelihood of deviation from accepted normative behavior. With a large enough number of people, someone will be willing and able to program a drone to identify targets, take the next step of attaching a weapon to it, and now you better understand my concern.
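
The intuition can be put in one line of arithmetic: if some small fraction p of people would cross that line, the chance that at least one such person exists among n people is 1 - (1 - p)^n, which races toward certainty as n grows. A quick sketch, with a value of p I have simply assumed for illustration:

```python
# "Anarchy at the bottom," numerically: if one person in a million (p)
# would weaponize a hobby drone, how likely is it that at least one such
# person exists among n people? (p is an illustrative assumption.)
p = 1e-6
for n in (1_000, 1_000_000, 100_000_000):
    print(f"n = {n:>11,}: P(at least one) = {1 - (1 - p) ** n:.4f}")
```

At a thousand people the probability is negligible; at a hundred million it is effectively 1.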

The above example considers the potential of a rogue actor using developments in technology to wreak new forms of terror. Let’s consider, however, the existential threat, the farthest end of the speculative spectrum, which combines ideas of artificial general intelligence, machine learning, weapons development, and our own disregard for our environment and those who inhabit the planet with us.[2]

Add to this dystopian vision a rogue machine, recognizing its superiority[3] while observing the reckless abandon and destructive nature with which we treat the planet. The machine determines humans are a threat to the continued survival of the planet, decides, like the old commercial jingle, that “Anything you can do, I can do better,” and eliminates the human race like an Elon Musk nightmare. So we better hurry up and build those beautiful manufactured worlds.

Anyway, after that lengthy digression, what should be clear is that nobody knows what color ball lethal autonomous weapons will be.

What Is a Lethal Autonomous Weapon?

When speaking about lethal autonomous weapons I will use the Department of Defense’s definition: an autonomous weapons system used to target and deploy lethal force. These weapons would have the ability to operate in a dynamic environment, picking and choosing targets without human control.

This is an important specification: offensively capable weapons that would choose and engage targets without a human in the decision-making loop. This is different from other forms of automation that have been in use for many years. One example is the radar-guided defense systems used to protect ships. These systems have been in use since the 1970s and can identify and destroy incoming missiles and rockets per human specification.

Arguments to Ban Lethal Autonomous Weapons

A Categorical Ban

In his book Army of None, Paul Scharre meticulously outlines the outcomes of past treaties banning weapons in combat. Some similarities emerge among the effective ones. One such property is drawing clear and explicit lines that leave no room for exceptions. The usual example cited is the ban on blinding laser weapons, which has successfully been in place since 1998.[4]

However, clear and explicit provisions are not generally enough. A more important factor is the utility of a weapon in war. After all, there is no trial for the winners. The common counterexample is the attempt to restrict submarine warfare. In 1930 rules of submarine warfare were set forth in the Treaty of London, and at the outset of World War II, forty-nine nations were parties to the treaty. Germany, Italy, Japan, the UK, and the United States all willingly violated its provisions during the war.

Scharre argues that a ban holds best when there exists a significant risk of reprisal. Legally binding treaties are routinely violated because there are no consequences if a country wins a war. What does deter countries is the threat of retaliation; the example cited is mutual deterrence in the context of atomic weapons.

One difficulty in applying a complete ban on offensive artificial intelligence is ensuring transparency and compliance in the murky world of cyberspace. Another is that this type of technology has been used for defensive purposes for over forty years.

Thus, one of the major arguments in favor of building autonomous weapons is the fear that others might do so. This is a classic prisoner’s dilemma. It might be true that every nation, and the world as a whole, would benefit if no country developed lethal autonomous offensive capabilities. However, the first country to develop autonomous weapons might reap such great benefits that each country’s incentive is to develop them anyway.
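
A toy payoff matrix makes the dilemma explicit. The numbers below are purely illustrative assumptions of mine, chosen only to produce the prisoner’s dilemma structure; the point is that “develop” beats “restrain” no matter what the other side does, even though mutual restraint beats mutual development:

```python
# Toy arms-race payoffs (illustrative numbers, not real estimates):
# each entry is (payoff_A, payoff_B); higher is better for that country.
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual ban: best collective outcome
    ("restrain", "develop"):  (0, 4),  # A is caught out, B gains an edge
    ("develop",  "restrain"): (4, 0),  # A gains an edge
    ("develop",  "develop"):  (1, 1),  # arms race: worse than mutual restraint
}

# Whatever B does, A scores higher by developing: the dominant strategy.
for b_choice in ("restrain", "develop"):
    a_restrain = payoffs[("restrain", b_choice)][0]
    a_develop = payoffs[("develop", b_choice)][0]
    print(f"If B picks {b_choice!r}: A gets {a_restrain} restraining, "
          f"{a_develop} developing")
```

Both sides reasoning this way land on (develop, develop) and its payoff of (1, 1), even though (3, 3) was available: the arms race in four dictionary entries.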

Second-mover fears are heightened by machines’ superhuman computation and coordination abilities, already on display, that humans fundamentally cannot compete with. Even if the justification for development is purely defensive, the AI must still be developed, and then the moral line between offense and defense becomes harder to judge.

Lethal Autonomous Weapons Do Not Comply With the Laws of Armed Conflict

Another line of argument holds that fully autonomous weapons cannot comply with the laws of armed conflict. The degree of human control over weapons systems is an important consideration because it is humans who apply the law and are obliged to respect it. The laws of armed conflict govern the actions of humans during war, not machines, and obligations under the laws of war cannot be transferred to machines.

At the heart of the debate is the idea that highly automated systems must have “meaningful human control” in order to comply with international law. The U.S. Department of Defense Law of War Manual argues that the laws of war impose an obligation on humans to make specific determinations in the use of lethal force related to 1) distinction, 2) proportionality,[5] and 3) precautions in attack, determinations that cannot be left to machines. Thus, a human must have sufficient information regarding the target, the weapon used, and the context of the attack in order for lethal force to be lawful.

A question is then raised regarding the sufficient level of human involvement in determinations of lethal force and where culpability should lie,[6] and, on a more philosophical level, whether the removal of a human weakens us as moral agents.

Another thorny issue relates to the distinction requirement and the ability to distinguish between civilians and combatants. Machines have demonstrated biases in how their visual and audio recognition features operate,[7] which creates concerns about how, and whom, machines might target, with particular concern for marginalized groups.

Other Positions

Some view the development of autonomous weapons as inevitable. What if THEY get the weapon? That nebulous “they,” the villain of all our films, hearts, and minds. AI has the capacity for superhuman quantitative and cooperative abilities, making decisions far faster than humans. What if “they” develop these capabilities and we are caught with our pants down, so to speak? Smart people have thought about this scenario, and some argue that the continued development of fully autonomous weapons is justified in this context. What context? Defense. To combat weapons that make decisions at incomprehensible speeds, we will need machines to respond in kind. This is a compelling argument. But where to draw the line? The temptation to use such weapons for offensive purposes, glory and all that jazz, might be far harder to resist than an apple.

But:

Let’s also consider: if algorithms are better at making decisions, if they are more precise and decisive, immune to human emotion, and quicker thinkers, why shouldn’t they be used in warfare? Wouldn’t it even be more prudent to do so? Along with increased precision and better decision making, employing machines in the place of humans would reduce the risk of death.

The counter-argument is that fighting from a distance with machines lowers the number of human casualties for the side employing autonomous weapons, which reduces the cost of going to war, which in turn lowers the risk calculation and makes war a more attractive option.

Another argument for a ban relates to the military-industrial complex: constantly innovating for war means we are always on the brink of deploying some new technology and always preparing for the next iteration. But, of course, so are they. Better to be prepared than unprepared. The rules of the game are too deeply entrenched.

As should be clear by this point, there are a number of arguments for and against the development of lethal autonomous weapons. One recurring theme in the debate is the role of human dignity. The argument is that death by a machine, without human consideration, is inherently inhumane and violates our notion of human dignity.

The Role of Human Dignity

Much is said about the role of “human dignity” as a vital consideration in the use and deployment of autonomous weapons. Arguments based on notions of human dignity hold that it is essentially inhumane to delegate to a machine the identification of a human and the calculation to kill them.

The idea is that there is something intrinsic to each individual, imbued in us at birth, something immutable that demands equality as humans, the freedom to choose one’s life, and, in this case, to take it. This is the foundational principle behind the argument that a human must always be in the decision-making loop, providing “meaningful oversight” to comply with the requirements of distinction, discrimination, and proportionality under the laws of armed conflict.

What should we make of this term “human dignity,” a basic principle of human rights law, and how can a better understanding of it inform the debate on lethal autonomous weapons? After all, if you’re going to die, does it really matter whether the decision maker is a human or a robot?

“Human Dignity” After World War II

Human dignity is the abstract, ethereal notion at the foundation of human rights and international law in the post-World War II era. The period’s origin is marked by the Universal Declaration of Human Rights (1948) and the Grundgesetz (Basic Law) in Germany (1949). After the atrocities of World War II, nations sought to codify universal principles for the flourishing of humanity and to ensure the end of an era of global warfare.

The preamble of the Universal Declaration of Human Rights affirms a “recognition of inherent dignity” as essential to the pursuit and flourishing of humanity. The first article declares that “all human beings are born free and equal in dignity and rights.”

Similarly Article I of the Grundgesetz states:

(1) Human dignity is inviolable. To respect it and protect it is the duty of all state power.

(2) The German people therefore acknowledge inviolable and inalienable human rights as the basis of every community, of peace and of justice in the world.

There are some general characteristics of “human dignity,” represented in legal, secular, and sectarian writings, that form a shared understanding of what the term means. After a brief exploration of these commonalities I will explore how notions of “dignity” inform the debate on abortion, and how we can apply that conceptual understanding to lethal autonomous weapons.

First, the idea of “dignity” is rooted in egalitarianism: every person, by virtue of their birth, enters into the human family; we are all born equal, and no human life should be valued above or at the expense of another.

Second, dignity is inviolable, and this inherent inviolability is something we are all born with. Theologians argue the idea is rooted in our understanding of God and our relation to Him: we are all created in His image and by virtue of birth share in the collective idea of dignity.

Third, dignity is rooted in autonomy and freedom from coercion. European human rights law states that “dignity is the basis of freedom to choose one’s autonomously chosen goals.”[8]

The Debate on the Legality of Abortion

I want to turn to a contemporary example that might shed some light on determinations of life and death: the debate currently playing out in America over the morality of abortion. Human dignity is at the root of the pro-life argument that abortion should be illegal.

In America, abortion was deemed a constitutionally protected right by the Supreme Court in Roe v. Wade (1973). The case was decided on privacy grounds, reasoning that it is up to a woman to decide what to do with her body. In Planned Parenthood v. Casey (1992) the court expanded on the ruling, linking dignity to autonomy, saying:

“These matters, involving the most intimate and personal choices a person may make in a lifetime, choices central to personal dignity and autonomy, are central to the liberty protected by the Fourteenth Amendment. At the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and the mystery of human life. Beliefs about these matters could not define the attributes of personhood were they formed under the compulsion of the state.”

In recent years the debate has reemerged as the Supreme Court has grown increasingly conservative. In his campaign for president, Donald Trump made overturning Roe a pillar. Since his election he has appointed two conservative justices to the Supreme Court. One of the justices replaced was Anthony Kennedy, who had voted with the liberal wing on issues related to abortion.

In this debate there exists a tension between the right to life and the autonomy to decide what a life should consist of. On one hand, the interest of the mother in determining the health and well-being of her own life is weighed against the interests of the unborn child. The pro-life argument is that life begins in the womb and thus the fetus is endowed with human dignity and the rights and protections afforded any other citizen. In America, however, the right to autonomy has been held paramount.

Not all countries have followed the American course. In Germany the law is different: the Federal Constitutional Court, the highest court in the country, held that abortion violates the Constitution. In 1975 the court reasoned that “where human life exists, human dignity is present to it; it is not decisive that the bearer of this dignity himself be conscious of it and know personally how to preserve it.”[9]

In America this debate is not settled, and pro-life advocates are making a similar argument: that the right to life precedes the right to autonomy. In Alabama, the state legislature has directly challenged Roe by enacting a law that makes abortion illegal in the state and threatens doctors who perform abortions with lengthy prison sentences.

The Alabama legislators view themselves as champions of science and morality, arguing that “medical science has increasingly recognized the humanity of the unborn child” through studies that claim to show a fetus’s ability to feel pain long before the usual viability threshold of 24 weeks. The legislators go on to compare the number of abortions performed in the United States, 50 million, to genocides like the Holocaust and Stalin’s gulags.

Women’s rights activists, on the other hand, see this as an affront to hard-fought rights under a precedent that is nearly fifty years old. They consider it an attack on their autonomy and the regulation of their bodies. Women argue that laws criminalizing abortion violate their dignity and take from them decisions about their health, family needs, economic freedom, and bodily autonomy; such laws, they argue, reflect status-based controls over women’s lives.[10]

One defining characteristic has thus far won out: the dignity to decide, to make decisions about one’s own life. To consider decisions and alternatives free from “compulsion of the state” gives us agency in how and why we live our lives. This is a fundamental consideration in a life worth living.

What the idea of human dignity encapsulates are ideals of equality and autonomy that promote a free and just world. We want to reserve the determination of life and death, in the case of abortion, to humans. Making this decision ourselves forces difficult considerations, ensures that we don’t outsource our morality, and pushes us to keep improving our existing conceptions. And finally, our decisions must take into account that where we deny human dignity to others, we risk losing our own.

Conclusion

The case of lethal autonomous weapons is similar to the abortion debate: a dignity-derived right to life, here in the context of war. Pain, death, and the degradation of dignity are fundamental aspects of war. At the same time, war has the ability to exalt other defining characteristics of dignity, like bravery and composure under excruciating levels of stress.

On the question of abortion in America, the dignity-derived right of autonomy supersedes the right to life. In the case of lethal autonomous weapons, the dignity-derived right at stake is the right to life. In times of war this right is subjugated to the necessities of engagement, contingent on the parameters of proportionality and discrimination outlined under the laws of armed conflict. These parameters require humans to comply with certain standards that entrust decisions about life and death to humans. In essence, these actors define the meaning of human life, the mystery of existence and the universe, in order to make and live with decisions on the battlefield. Delegating this task to machines would strip us of our agency and rationality, the very capacities that elevate us over the species with whom we cohabitate, and blur these lines. It is because of our humanity, and the dignity imbued in it, that we must continue to make these difficult decisions ourselves.

Within the idea of dignity is that tension between the right to life and the freedom to choose what makes a good life. Without the freedom to determine the meaning of life, the very essence of life is devalued. This is the American formulation of dignity, which exalts freedom over the right to life, consistent with its jurisprudence. At the same time, the consideration of human life required under the laws of armed conflict is an expression of respect for the very nature of life, and it humanizes the ‘other.’ As in American jurisprudence, the right to life is not paramount; if it were, there would be no justification for war.

As outlined above, it’s far from a clear-cut case whether to ban the development and use of lethal autonomous weapons. There are strong arguments on both sides. America’s commitment to developing autonomous weapons is clearly outlined in its Summary of the 2018 Department of Defense Artificial Intelligence Strategy,[11] which explicitly cites Chinese and Russian investments in AI for military purposes as one rationale. At the same time, the DoD maintains a commitment to the existing guidelines in DoD Directive 3000.09, issued in 2012, which requires that “autonomous and semi-autonomous weapon systems be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Perhaps the most glaring rationale for developing these weapons is the fear of being caught out, and the potential consequences. It is easy to be idealistic from behind a computer screen and advocate against the development of lethal autonomous weapons. There are, however, the practical and unfortunately gruesome realities of war, made all the worse by the asymmetric access to weapons, personnel, and strategy inherent in any conflict. The evolution of weapons development seems intent on the continued incorporation of AI. Now the consideration is where we draw the line on the required level of human involvement.

Drawing a clear and comprehensive line on autonomous weapons is the first step toward a future in which war continues to operate under the auspices of international humanitarian law, itself a response to the darkest moments in human history. Whether that will be enough to stop rogue actors and nations is a question for time, and in times of conflict it’s best to be prepared for worst-case scenarios.

[1] Mass production does not quite capture what I mean, especially in light of the stockpiles, numbering in the thousands, that America and Russia have amassed over the years. What I mean is something more akin to regular availability to common people, like guns, for instance, or drones. Drones are probably the best example.

[2] We might be living in a period of mass extinction, as Elizabeth Kolbert’s book The Sixth Extinction argues, but as National Geographic soothingly and reassuringly extols, “this creates new opportunities for other species to thrive.”

[3] This is probably the deepest, darkest fear about artificial general intelligence (AGI).

[4] https://treaties.un.org/doc/Treaties/1995/10/19951013%2001-30%20AM/Ch_XXVI_02_ap.pdf

[5] Proportionality is a balancing determination weighing the anticipated military advantage of an attack against the expected incidental civilian casualties, which seems really blasé.

[6] Who is more culpable: the software engineers writing the code that tells a weapons system when and against whom to attack, the operators in the field who carry out the attack, or the commanders supervising them?

[7] Karen Hao, “This is how AI bias really happens-and why it’s so hard to fix,” MIT Technology Review, https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/, accessed 01/05/2019.

[8] Going meta demands an exploration of what an “autonomously chosen goal” could even mean, considering some modern thinkers’ claim that “free will” is an illusion, that our actions are rooted in some type of biological impulse or psychological manipulation, or even that all of life is “determined.” That is merely a tangential claim, however; the point is freedom from direct tyrannical coercion. It seems we prefer our coercion to come in the subtler forms of advertisement and attention-grabbing technology.

[9] BVerfGE 1 (1975) (Abortion I), 641 (citing Articles 2(2)(1) and 1(1)(2)).

[10] Reva Siegel, “The constitutionalization of abortion,” in Michel Rosenfeld and András Sajó (eds.), The Oxford Handbook of Comparative Constitutional Law (Oxford: Oxford University Press, 2012), 1057.

[11] https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF
