Thursday, July 15, 2010

Drone Ethics


There are at least three major ethical issues associated with autonomous weapons systems: (1) Could and should machines be enabled to make sensitive decisions about human life? (2) Does the United States have an ethical imperative to develop such autonomous systems? (3) What do we do if the (moral) cost of such development outweighs prospective benefits?

    At this point, machines can only follow the rubric of pre-programmed constraints. Given the current state of the technology, whether machines should be enabled to judge whom to kill and whom not to kill on their own, without a man in the loop, may come down to how much policymakers, or whoever approves the employment of such systems, trust the discriminatory intelligence of these machines. The discriminatory intelligence of these first-generation autonomous machines is only as good as the code that programmers establish as the basis for their functionality. For those concerned with the principles of proportionality and discrimination, which are staples of Just War Doctrine, it is imperative to ensure that these machines can distinguish between combatants and non-combatants with a 'high' degree of accuracy before being let loose in theater. The benchmark metric for determining 'how much discriminatory intelligence is enough' has yet to be defined in any official capacity.
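
    To make the point about code concrete, below is a deliberately crude sketch of what a pre-programmed engagement constraint could look like. The class, field names, rules, and threshold are invented for illustration and are not drawn from any fielded system; the point is that the machine's 'discriminatory intelligence' is exactly this brittle set of conditions that its programmers wrote down, and nothing more.

        from dataclasses import dataclass

        @dataclass
        class Track:
            """A sensed object, as the machine represents it (fields are illustrative only)."""
            is_armed: bool            # weapon signature reported by some sensor
            in_designated_zone: bool  # inside a pre-approved engagement area
            confidence: float         # sensor confidence that the object is a combatant, 0.0 to 1.0

        def may_engage(track: Track, confidence_threshold: float = 0.95) -> bool:
            """Return True only if every pre-programmed constraint is satisfied.

            The machine is not 'judging' anything: it is checking the conditions
            its programmers chose, with whatever gaps or errors those contain.
            """
            if not track.in_designated_zone:
                return False
            if not track.is_armed:
                return False
            if track.confidence < confidence_threshold:
                return False
            return True

        # Example: a high-confidence armed track outside the approved zone is refused.
        print(may_engage(Track(is_armed=True, in_designated_zone=False, confidence=0.99)))  # False

    Whether any such rule set can ever satisfy the principles of discrimination and proportionality is precisely the open question raised above.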

    As an advocate of Just War Doctrine, whose principles are also a pillar of customary international law, the United States has an ethical obligation to comply with the principles of discrimination and proportionality to the greatest extent possible. Warfighters must discriminate between combatants and non-combatants. Loss of civilian life must be minimized. There is an implicit responsibility to employ whatever means maximize compliance with these principles, whether that means appropriately educated human beings or autonomous machines.

    Implicitly, there is also a responsibility to minimize the adverse effects of war on warfighters. By adverse effects I mean the full gamut, from stress and strain to wounds and death. So long as they can satisfy the principles of discrimination and proportionality, if machines can save lives then there is an ethical obligation to employ them. To an extent, robots are already employed in war for the purpose of saving human life; the PackBot from iRobot, for example, goes on missions to locate and detonate Improvised Explosive Devices. In this capacity, robots have saved countless lives. Still more lives could be saved if robots were discriminating enough to assist soldiers in gunfights.

    From a big-picture perspective, in terms of minimizing casualties, there is an imperative to end wars as soon as possible. Wartime policymakers therefore have a responsibility to assess whether, and to what extent, robotics can facilitate victory. On the military planning side, it is necessary to incorporate such technologies in a way that helps, rather than hinders, operational effectiveness.

    Also from a big-picture perspective, it is important to consider whether, and to what extent, the development and incorporation of robotics into war affects the sheer quantity of war. For instance, if state or non-state actors can cost-effectively utilize semi-autonomous or autonomous robotics for wartime operations, then wars may become more plentiful than before. This concern applies especially to developed countries that might otherwise go to war more often but are dissuaded from doing so, at least in part, by adverse public reaction to risking human lives. Further, robotics could be used by enemies of a state, or by disenfranchised groups, to stage and carry out remote terrorist attacks, a type of terrorism that would proceed without the element of human sacrifice. These are some of the risks associated with autonomous robotics technology. The United States, among others, has a moral responsibility to assess and mitigate the potential threats posed by such technology.

    5 comments:

    1. We need to be careful of statements about "machines implicitly embodying moral elements". This seems to hinge on your thought about machines possessing "decisional capabilities". One problem hangs on the meaning of the word "decision"; get this wrong and all sorts of false inferences follow.

      Let us be clear that a "decision" in computer terms can be as simple as the conditional "if x, then y" e.g. if sensor 1 activated, then open fire. It is a bit like saying a mine makes a decision: if pressure greater than p, then explode (of course this is done mechanically). Link a lot of these "decisions" together in a program and we start talking about a "reasoning machine".

      You could make an argument that human decision making is like that of a thermostat (as some in AI have done) but I don't think that is your intention here.

      IF the conditionals are loaded with moral statements, THEN we might call this a "moral machine". IF we cleverly load the conditionals in the same program with statements about pet grooming, THEN it becomes a "pet grooming advisory machine".
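
      To make that concrete, here is a toy sketch (the conditions, names, and outputs are invented purely for illustration). The skeleton is identical in both functions; only the content the programmer loads into the conditionals changes:

          def moral_machine(target_is_civilian, collateral_estimate):
              """Conditionals loaded with 'moral' statements."""
              if target_is_civilian:
                  return "do not fire"
              if collateral_estimate > 0:
                  return "do not fire"
              return "fire"

          def pet_grooming_machine(coat_is_matted, species):
              """The same skeleton loaded with pet-grooming statements."""
              if coat_is_matted:
                  return "recommend a de-matting brush"
              if species == "cat":
                  return "recommend a gentle shampoo"
              return "recommend standard grooming"

      In neither case is the machine doing anything beyond stepping through the conditions it was given; calling one of them "moral" describes the content we loaded in, not the machine.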

      A lot of the discussion around moral agents and machine intelligence boils down to such simple anthropomorphic confusions at the foundations of computer science. When we unravel these, the systems are laid bare for what they are - dumb machines following instructions (and I am aware of the anthro-error of "dumb").

      I don't even want to get started on the idea of "learning moral decisions" or I will be here all day. Suffice it to say for now that, having worked in machine learning for many years, I can tell you that the types of statistical pattern recognition and mathematical models we use do not resemble the type of moral learning you are referring to. (It is arguable that they model neural synaptic learning.)
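
      For example, a great deal of what gets called "learning" in such systems amounts to fitting numbers to labelled examples, along the lines of the toy perceptron below (the data and features are invented):

          # A toy perceptron: 'learning' here is just nudging weights toward fewer errors.
          examples = [
              ([1.0, 0.2], 1),    # (feature vector, label) - invented data
              ([0.9, 0.4], 1),
              ([0.1, 0.8], -1),
              ([0.2, 0.9], -1),
          ]

          weights = [0.0, 0.0]
          bias = 0.0
          learning_rate = 0.1

          for _ in range(20):                      # a few passes over the data
              for features, label in examples:
                  activation = sum(w * x for w, x in zip(weights, features)) + bias
                  prediction = 1 if activation >= 0 else -1
                  if prediction != label:          # misclassified: adjust the numbers
                      weights = [w + learning_rate * label * x for w, x in zip(weights, features)]
                      bias += learning_rate * label

          print(weights, bias)  # a separating boundary - nothing resembling moral insight

      There is nothing in that weight update that could sensibly be described as acquiring a conscience.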

      I recommend that when you and others read papers about the artificial conscience of armed robots, you search for and filter out the anthropomorphic terms like "guilt", "anger" and "humanity" etc. and look at what is actually being done.

      Yes of course we should build safety critical software and mechanisms into autonomous killing machines but let us not cloud the issues with confusing analogies to human reasoning and morality. Let us use human ethical principles in the design of the machines.

      Anyway - an interesting blog here - keep up the good work.

      best wishes,
      Noel Sharkey

    2. Hi Noel! Thanks for tuning in!

      You misread the context of the sentence in question, which actually reads: "machines performing activities that implicitly embody moral elements". This does not say anything about the decisional aspects of the machine following processes that could be compared to human moral reasoning processes. Rather it says that machines are now, and may well be more so in the future, engaging in tasks that are patently moral in nature, such as killing. Since I tend to write in double entendre, I should make this distinction clearer.

      The later sentence that reads "If human beings were not the only things that could learn, decide, and coherently articulate such complex cognitive processes" is more in tune with your commentary. Despite your comments, I do wonder if an autopoietic system, which can learn, derive its own rules, and execute those rules, might be considered a moral machine. It goes beyond conditional programming. The ability to have self-given rules is, to me, a critical component in the creation of 'artificial moral agents' proper. Your thoughts on autopoiesis?

      V.

    3. I've been reading about nanotechnology and find it fascinating. There are many ways machines and nano-neurology work very well together. Not everything about it is bad all the time, if it is bad at all.

      In my humble opinion: don't bomb us, and we won't bomb you. Sounds fair to me. You want to include 'morality' in the picture? Then tell me why. Why did 2996 Americans die, why were others injured, and why do the people who ran to save them suffer from breathing problems? WHY? Because we are a Christian nation, and they thought they could attack us without consequence. Wrong. We changed presidents. They should have known better. Never mess with Texas or the USA!

      (Just a little rant. I had friends who died that dastardly day. I don't know very many people who didn't.)

      Thanks for the 'friend' invitation on Facebook. I'll accept it if you still want me to. Have a blessed day.

    4. I'm sorry that you lost friends on 9/11. I'm glad to hear about your fascination with the future. There are many ways that humans and machines work well together. Rather than leaving 'bad' as a relative term, it is important to define what it means; that is the point of this analysis of the ethics surrounding autonomous machines.

      Your opinion as regards bombing (the eye-for-an-eye mentality) is overly simplistic.

      People care about 'morality' because people want to do the 'right' thing. In our society, what the 'right' thing is must be clearly defined, aside from the fog of the emotions of the disenfranchised.

      9/11 happened because men, motivated by opinions and predisposed to violence, made it happen. Whether or not they thought they could do so without consequence is speculation. That would have been stupid of them, because there are always consequences.

      Anyway, I hope you are enjoying the blog! Take care and thanks for tuning in!

    5. "Then tell me why. Why did 2996 Americans die, why were others injured, and why do the people who ran to save them suffer from breathing problems?"

      I'd like to 'correct' something that's overlooked regarding casualties: not all who died were American, nor were they all Christian.
