Wednesday, July 14, 2010

Drone Basics

An autonomous weapon system is an intelligent machine that can decide on its own whether or not to employ lethal force. The intelligence of the machine is a byproduct of its sensor technologies. For instance, a machine equipped with specialized acoustic sensors can detect gunfire; detecting gunfire and computing an appropriate response to it is an example of machine intelligence. If the machine is also equipped with a gun of its own, with the range necessary to return fire, then it has the capability to fight fire with fire. The autonomy of the machine is a function of whether it can act on its own, without human input.

South Korea, for example, has recently deployed a sentry robot along the Demilitarized Zone that separates North and South Korea. The robot's job is to detect and kill intruders. It is equipped with undisclosed surveillance technology that enables it to detect and track targets, and it is armed with the ammunition necessary to fire upon them. This is an autonomous machine: it functions without human input. The exact discriminatory intelligence of the machine is unknown, and partially irrelevant, since nobody is supposed to be in the Zone - trespassing there is punishable by death.

Looking forward, however, discriminatory intelligence is necessary for deploying these machines in theater writ large, because discriminatory intelligence is what enables a machine to tell the difference between friend and foe. If a machine cannot accurately make that distinction, innocents can die. Countries interested in respecting the discrimination and proportionality requirements of customary international law will therefore treat discriminatory intelligence as an important gauge of the general field-deployability of weaponized autonomous robots.
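To make the detect-then-respond loop concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption - the event fields, the decibel threshold, and the function names are invented for this post, not taken from any real sentry system:

```python
from dataclasses import dataclass

# Hypothetical sketch of the sense -> classify -> respond loop described above.
# All field names and thresholds are illustrative assumptions, not a real system.

@dataclass
class AcousticEvent:
    peak_db: float          # sound pressure level of the event
    muzzle_signature: bool  # whether the sensor matched a muzzle-blast profile
    bearing_deg: float      # direction the sound arrived from

GUNFIRE_DB_THRESHOLD = 120.0  # assumed detection threshold

def is_gunfire(event: AcousticEvent) -> bool:
    """Classify an acoustic event as gunfire (the 'intelligence' step)."""
    return event.muzzle_signature and event.peak_db >= GUNFIRE_DB_THRESHOLD

def respond(event: AcousticEvent, target_in_range: bool) -> str:
    """Compute a response without human input (the 'autonomy' step)."""
    if not is_gunfire(event):
        return "ignore"
    if target_in_range:
        return f"return_fire_bearing_{event.bearing_deg:.0f}"
    return "report_only"

shot = AcousticEvent(peak_db=135.0, muzzle_signature=True, bearing_deg=270.0)
print(respond(shot, target_in_range=True))  # -> return_fire_bearing_270
```

Note that nothing in this sketch distinguishes friend from foe: any loud muzzle-blast triggers a response, which is exactly the discrimination gap the paragraph above is pointing at.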

Ronald Arkin proposes that first-generation autonomous combat systems ought to possess limited autonomy. The technical range of behavior should be bounded by formal rule sets, such as the Laws of War ("LOW") and Rules of Engagement ("ROE"), formalized as computer code that regulates the behavior of the system in the battlespace. In fact, Arkin's work arose in response to demand for autonomous weapon systems that can effectively distinguish between combatants and non-combatants. The decisional characteristics of these machines are such that they cannot execute lethal force unless the situation at hand meets strict criteria (e.g., if shot at, then return fire). But even within these simple criteria, there is room to wonder how much discrimination satisfies ethical sensitivities while preserving the flexibility necessary to be field deployable for spontaneous missions. Strict adherence to pre-programmed constraints may, for instance, handicap operational effectiveness in first-responder situations. Arkin's tight systemic constraints, which depend largely on exhaustive definition and predictive classification of situational constituents, might be fine for isolated situations where all the environmental variables are more or less known, but the same machine architecture could be impoverishing in quick-response circumstances featuring innumerable unknowns. To embody the appropriate degree of readiness, these military machines must therefore be adaptive: they must not only operate according to pre-programmed constraints, but also learn on the fly and incorporate that learning in real time.
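The constraint-bounded architecture described above can be sketched as a gate that withholds lethal force unless every encoded rule passes. This is only a toy illustration of the general idea - the predicates, their names, and the threshold are my assumptions, not Arkin's actual formalism:

```python
# Hypothetical sketch of constraint-bounded autonomy: lethal force is
# permitted only if ALL encoded LOW/ROE predicates hold. The rules and
# their names are illustrative assumptions, not Arkin's real rule set.

from typing import Callable, Dict, List

Situation = Dict[str, float]  # flat snapshot of what the system believes

# Each rule returns True when the situation satisfies that constraint.
RULES: List[Callable[[Situation], bool]] = [
    lambda s: bool(s.get("under_fire", False)),          # ROE: only return fire
    lambda s: bool(s.get("target_is_combatant", False)), # LOW: discrimination
    lambda s: s.get("collateral_risk", 1.0) < 0.1,       # LOW: proportionality
]

def lethal_force_permitted(situation: Situation) -> bool:
    """Engage only if every constraint passes; default is to withhold fire."""
    return all(rule(situation) for rule in RULES)

print(lethal_force_permitted(
    {"under_fire": True, "target_is_combatant": True, "collateral_risk": 0.0}
))  # -> True
print(lethal_force_permitted({"under_fire": True}))  # -> False
```

The brittleness discussed above is visible here: the gate can only evaluate variables someone anticipated and encoded ahead of time, so a situation outside that vocabulary defaults to inaction rather than adaptive judgment.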
