Sunday, July 18, 2010

Culture Clashes: Drone Technology


To discuss the social issues associated with autonomous weapons, I turn to a treatment of them in terms of strategic, or military, culture. Strategic culture is defined as “shared beliefs, assumptions, and modes of behavior, derived from common experiences and accepted narratives (both oral and written) that shape collective identity and relationships to other groups and which determine appropriate ends and means for achieving security objectives” (Kartchner, 2009). In other words, strategic culture concerns the identity, values, norms, and perceptions that affect the functioning of a given security apparatus.

American innovation has been responsible for breakthroughs in abstract communication media such as the internet and allied social networking technology. These communication technologies are a part of the status quo American identity. These technologies ‘close the gap’ of communication by enabling virtual correspondence with many people, anytime. Ironically, heavy use of these remote communication technologies can produce a disconnection in personal communication: more time spent calling, texting, Facebooking, or e-mailing often means less actual face time. This component of American identity is reflected in military technology. For instance, today, at least one-third of all aerial, sea, and land-based military vehicles are unmanned, albeit still controlled from a remote location by a human operator. So while technology enables the military to ‘close the gap’ – to be anywhere in the world in a moment – this same technology can also cultivate a sense of disconnectedness from the reality on the ground. The cultural identity of America, with its emphasis on remote command, control, and communications, feeds the development of autonomous weapons technologies. Likewise, the utilization of such technologies reinforces this component of sociocultural identity.

America values a specific type of instrumental rationality, one based upon cost-benefit analysis. Not everybody formulates rationality in this same way, nor is it strictly American. In the United States, the common way of evaluating war is on this scale of cost-benefit analysis. There is also a strong sense of ‘duty’. Insofar as the US is committed to compliance with the Geneva Conventions and the principles of Just War Doctrine, military culture is concerned with proportionality (i.e. the rewards must outweigh the risks) and discrimination (i.e. between combatants and non-combatants). The verdict on whether or not an event or mode of action fits within this framework of valuation often determines the extent of socioeconomic resources committed to the cause. Turning to the US development and utilization of autonomous weapons, the extent of development and utilization presumably depends upon whether or not autonomous technology can demonstrate a reliable level of discriminatory intelligence, enough to satisfice concerns over proportionality and discrimination. This is the materialistic level of evaluation, which is supplemented with concerns about whether or not it is ‘worth it’ to advance autonomous systems for warfighting purposes. Such worthiness is a function of risk versus reward.

Risk perception also affects the disposition to use or not use autonomous weapons. One who sees a threat around every corner is more likely to justify an aggressive posture than one who feels a relative sense of safety. In the United States, the gravity of the ‘terrorist threat’ motivates the magnitude of compromise people are willing to make for the sake of security. When war is less readily perceived, support for risky ventures wanes. This dynamic can affect autonomous weapons in two ways: first, autonomous weapons could be seen as too risky because of concerns about discrimination, proportionality, and the threat of proliferation of such weapons systems at the hands of adversaries. On the other hand, autonomous weapons could be seen as limiting the risk to human life by virtue of their characteristic ‘remote control’ function. Opponents will use the former argument, and advocates will use the latter. To flip the script, the actual use of autonomous weapons affects risk perception in a paradoxical way: friendly human life is valuable to the extent that it is desirable to minimize risk to human life through employment of autonomous systems, and enemy life is expendable to the extent that robotic technology can execute the kill.

Whether or not it is acceptable and ‘normal’ to develop autonomous weapons is a judgment created out of a combination of identity, values, and risk perception. The ultimate establishment of such norms is reflected in public policy, such as arms control. Resonance or dissonance with established norms is predictable according to an analysis of the strategic culture of the entity in question. For instance, whether China will go slow on the development of autonomous weapons or push full speed ahead depends upon its sociocultural environment. Likewise, once recommended rules and regulations are formalized in public policy, compliance or non-compliance will affect, and be affected by, culture.

Hokie-Pokie

Groups and individuals concerned with international law and accountability for mistakes and war crimes fall on both sides of the fence regarding deployment of autonomous weapons technology. The ability to discriminate combatants from non-combatants, civilian objects from military objects, and act with force only upon military objects is necessary to fulfill the discriminatory requirements of the Geneva Conventions, as well as to meet the grade with respect to the principle of discrimination à la Just War Doctrine. The Pentagon claims that the use of unmanned and semi-autonomous vehicles has already proved to reduce military and civilian casualties. Advocates of unmanned systems argue that autonomous machines observe international law more reliably than men because the former do not have a choice, nor are machines subject to distortions in decision making due to emotional incoherence or the 'fog of war'. Also, the laser-guided precision of autonomous weapons is said to be 'unparalleled' when compared to contemporary weapons such as crude cruise missiles. According to an American source, US airstrikes in Pakistan using Predator aircraft through September 2009 resulted in 979 total casualties, 9.6% of which were identified as civilians. According to a Pakistani source, of the 60 cross-border drone strikes carried out between January 2006 and April 2009, only 10 were able to hit high-value targets. These 10 strikes on the mark resulted in the elimination of 14 terrorists and 687 civilians. The other 50 strikes missed their mark due to reported intelligence failures and resulted in the loss of 537 civilian lives. These figures, which paint very different pictures, both concern non-autonomous drone strikes, with humans in the loop at all times. The wide divergence in reports gives reason to pause and consider the accountability factor.
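
To get a feel for just how wide that divergence is, here is a back-of-the-envelope calculation using only the figures cited above, taken at face value; the assumption that the Pakistani source's two civilian tallies (687 and 537) should be added together is mine, made purely for illustration.

```python
# Rough comparison of the two casualty reports cited above. All figures are
# taken at face value; treating the Pakistani source's civilian tallies
# (687 + 537) as additive is an assumption made only to illustrate the gap.

# American source: Predator strikes in Pakistan through September 2009
us_total = 979
us_civilian_share = 0.096
us_civilians = us_total * us_civilian_share

# Pakistani source: 60 cross-border strikes, January 2006 - April 2009
pk_terrorists = 14
pk_civilians = 687 + 537
pk_total = pk_terrorists + pk_civilians

print(f"American source:  ~{us_civilians:.0f} of {us_total} casualties "
      f"reported as civilian ({us_civilian_share:.1%})")
print(f"Pakistani source: {pk_civilians} of {pk_total} casualties "
      f"reported as civilian ({pk_civilians / pk_total:.1%})")
```

On these numbers, the implied civilian share runs from under ten percent to nearly the entire casualty count, which is exactly the kind of gap that makes accountability hard to pin down.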

Regardless of which report of civilian casualties is correct, and regardless of what definition was employed to define ‘civilian’ writ large, it is ambiguous which of the five-plus personnel manning a drone is responsible for mistakes that result in undue death. In war, collateral damage is inevitable, but there is an ethical and legal obligation to minimize loss of non-combatant life. Similarly, if, or when, the step is taken to employ fully autonomous weapons, the axis of accountability ought to be clear.

In his software architecture for autonomous systems, Arkin addresses the accountability issue through the design and integration of 'ethical governor' and 'responsibility adviser' modules. The former constrains the functioning of the system by encoding the Laws of War and Rules of Engagement as rule sets that must be satisfied by the robot prior to the execution of force. The latter establishes a formal locus of responsibility for the use of any lethal force by an autonomous robot. According to Arkin, "this involves multiple aspects of assignment: from responsibility for the design and implementation of the system, to the authoring of the LOW and ROE constraints in both traditional and machine-readable formats, to the tasking of the robot by an operator for a lethal mission, and for the possible use of operator overrides" (Arkin, 2009). In this case, accountability is clear and human operators remain ‘in the loop’ as supervisors who serve in a fail-safe capacity in the event of system malfunctions.
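
To make the idea more concrete, here is a minimal sketch of how governor and adviser modules might fit together in code. The rules, thresholds, class names, and record fields are my own illustrative assumptions, not Arkin's actual implementation.

```python
# Illustrative sketch only: a toy 'ethical governor' plus 'responsibility
# adviser', loosely inspired by Arkin's description. The rules, fields, and
# thresholds below are hypothetical placeholders, not his implementation.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Situation:
    combatant_confidence: float  # discrimination estimate from sensors
    taking_fire: bool
    expected_collateral: int     # estimated non-combatant harm

@dataclass
class ResponsibilityRecord:
    system_designer: str
    roe_author: str
    tasking_operator: str
    overrides: List[str] = field(default_factory=list)

# Encoded constraints standing in for machine-readable LOW/ROE rule sets
RULES: List[Callable[[Situation], bool]] = [
    lambda s: s.taking_fire,                  # ROE: return fire only
    lambda s: s.combatant_confidence > 0.95,  # discrimination requirement
    lambda s: s.expected_collateral == 0,     # crude proportionality proxy
]

def ethical_governor(situation: Situation) -> bool:
    """Permit lethal force only if every encoded constraint is satisfied."""
    return all(rule(situation) for rule in RULES)

def engage(situation: Situation, record: ResponsibilityRecord) -> str:
    """The responsibility record keeps the accountability chain explicit."""
    if ethical_governor(situation):
        return f"force authorized (tasked by {record.tasking_operator})"
    return "force withheld: one or more constraints not satisfied"

record = ResponsibilityRecord("design team", "ROE authority", "operator X")
print(engage(Situation(0.99, True, 0), record))
print(engage(Situation(0.60, True, 0), record))
```

The design point is simply that the permission to fire and the record of who authorized what are produced by separate, inspectable components.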

Despite the need to establish a clear axis of accountability, and despite so many military professionals saying that “humans will always be in the loop”, it is ambiguous what constitutes ‘the loop’. Moreover, there is no official commitment precluding fully autonomous weapons, so until there is, all the ‘in the loop’ talk is hearsay. To boot, in 2005, the Joint Forces Command drew up a report entitled “Unmanned Effects: Taking the Human Out of the Loop”, and in 2007, the U.S. Army put out a Solicitation for Proposals for a system that could carry out “fully autonomous engagement without human intervention”. Clearly, the military is interested in at least researching autonomous weapons. It would be worthwhile if there were no less than equal interest in addressing the legal and accountability concerns.

Thursday, July 15, 2010

Drone Ethics


There are at least three major ethical issues associated with autonomous weapons systems: (1) Could and should machines be enabled to make sensitive decisions about human life? (2) Does the United States have an ethical imperative to develop such autonomous systems? (3) What do we do if the (moral) cost of such development outweighs prospective benefits?

At this point, machines can only follow the rubric of pre-programmed constraints. Relative to state of the shelf technology, whether or not machines should be enabled to judge who to kill and who not to kill on their own, without a man in the loop, may be a matter of how well policymakers, or whoever is approving the employment of such systems, trust the discriminatory intelligence of these machines. The discriminatory intelligence of these first-generation autonomous machines is only as good as the code that the programmers establish as the basis for the functionality of the machines. For those interested in the principles of proportionality and discrimination, which are staples of Just War Doctrine, it is imperative to ensure that these machines can distinguish between combatants and non-combatants with a ‘high’ degree of accuracy before being let loose in theater. The benchmark metric for determining ‘how much discriminatory intelligence is enough’ has yet to be defined in any official capacity.
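
Since no official benchmark exists, any example of one is necessarily speculative; the sketch below, with made-up data and an arbitrary 99% bar, simply shows the shape such a metric could take: score the machine's combatant/non-combatant calls against labeled encounters and compare the result to a required threshold.

```python
# Hypothetical sketch of a discrimination benchmark. The 99% threshold and
# the toy data are arbitrary assumptions; no official metric exists.

def discrimination_score(predictions, ground_truth):
    """Fraction of encounters where the machine's combatant/non-combatant
    call matched the ground-truth label."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

def meets_benchmark(predictions, ground_truth, required=0.99):
    return discrimination_score(predictions, ground_truth) >= required

# Toy encounters: 1 = combatant, 0 = non-combatant
machine_calls = [1, 0, 0, 1, 0, 0, 1, 0]
actual_labels = [1, 0, 0, 1, 0, 1, 1, 0]
print(discrimination_score(machine_calls, actual_labels))  # 0.875
print(meets_benchmark(machine_calls, actual_labels))       # False
```

In practice, a raw accuracy number like this would be far too crude; misidentifying a non-combatant as a combatant would presumably have to be weighted much more heavily than the reverse.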

As an advocate of Just War Doctrine, which is a pillar of customary international law, the United States has an ethical obligation to maximally comply with the principles of discrimination and proportionality. Warfighters must discriminate between combatants and non-combatants. Loss of civilian life must be minimized. There is an implicit responsibility to employ whatever means maximize compliance with these principles, whether that means appropriately educated human beings or autonomous machines.

Implicitly, there is also a responsibility to minimize the adverse effects of war on warfighters. By adverse effects is meant the full gamut, from stress and strain to wounds and death. So long as they can satisfy the principles of discrimination and proportionality, if machines can save lives then there is an ethical obligation for machines to be so employed. To an extent, robots have already been employed in war for the purpose of saving human life, such as the PackBot from iRobot, which goes on missions with the objective of locating and detonating Improvised Explosive Devices. In this capacity, robots have saved countless lives. More lives could be saved if robots were discriminating enough to help soldiers in gunfights.

From a big-picture perspective, in terms of ensuring minimum casualties, there is an imperative to end wars as soon as possible. Therefore, wartime policymakers have a responsibility to assess whether or not, and to what extent, robotics can facilitate victory. From the military planning side, it is necessary to incorporate such technologies in a way that helps, rather than hinders, operational effectiveness.

Also from a big-picture perspective, it is important to consider whether or not, and to what extent, the development and incorporation of robotics into war affects the sheer quantity of war. For instance, if state or non-state actors can cost-effectively utilize semi-autonomous or autonomous robotics for wartime operations, then wars may become more plentiful than before. This applies especially to developed countries, which might go to war more often were they not dissuaded, at least in part, by adverse public reaction to risking human lives. Further, robotics could be used by enemies of a state, or disenfranchised groups, to stage and carry out remote terrorist attacks, a type of terrorism that would proceed without the element of human sacrifice. These are some of the risks associated with autonomous robotics technology. The United States, among others, has a moral responsibility to assess and mitigate the potential threats posed by such technology.

Wednesday, July 14, 2010

Drone Basics

An autonomous weapon system is an intelligent machine that can decide on its own whether or not to employ lethal force. The intelligence of the machine is a byproduct of its sensor technologies. For instance, a machine equipped with special acoustic sensors can detect gunfire. The detection of gunfire and the computation of appropriate behavior in response to gunfire is an example of machine intelligence. If the robot is equipped with a gun of its own, which has the range necessary to return fire, then that machine has the capability to fight fire with fire. The autonomy of the machine is a function of whether or not it can act on its own, without human input. For example, South Korea has recently deployed a sentry robot along the Demilitarized Zone that separates North and South Korea. The job of this robot is to detect and kill intruders. It is equipped with undisclosed surveillance technology that enables it to detect and track targets, and armed with the ammunition necessary to fire upon them. This is an autonomous machine, which functions without human input. The exact discriminatory intelligence of the machine is unknown and partially irrelevant, considering that nobody is supposed to be in the Zone; it is a case where trespassing is punishable by death. However, looking forward, discriminatory intelligence is necessary for the use of such machines in theater writ large, for it enables them to tell the difference between friend and foe. If a machine cannot accurately tell the difference between friend and foe, then innocents can die. Therefore, countries interested in respecting the discrimination and proportionality elements of customary international law will look to discriminatory intelligence as an important gauge of the general field-deployability of weaponized autonomous robotics.
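
To make the sense-decide-act idea concrete, here is a minimal sketch of such a loop, assuming a hypothetical acoustic gunfire sensor and an arbitrary confidence threshold; it is not a description of the Korean sentry or any fielded system.

```python
# Minimal, purely illustrative sense-decide-act loop for an autonomous
# sentry. The sensor interface, threshold, and behaviors are assumptions
# made for the sake of example, not a description of any fielded system.
import random

def read_acoustic_sensor():
    """Stand-in for an acoustic gunfire-detection sensor."""
    return {
        "gunfire_detected": random.random() < 0.3,
        "bearing_degrees": random.uniform(0, 360),
        "combatant_confidence": random.random(),  # discrimination estimate
    }

def decide(reading, min_confidence=0.95):
    """Choose a behavior without human input (the 'autonomy' step)."""
    if not reading["gunfire_detected"]:
        return "continue patrol"
    if reading["combatant_confidence"] >= min_confidence:
        return f"return fire toward bearing {reading['bearing_degrees']:.0f}"
    return "track and alert"  # insufficient discriminatory confidence

for _ in range(5):
    print(decide(read_acoustic_sensor()))
```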

Ronald Arkin proposes that first-generation autonomous combat systems ought to possess limited autonomy. The technical range of behavior should be bound by formal rule sets, such as the Laws of War ("LOW") and Rules of Engagement ("ROE"). These rule sets are formalized as computer code to regulate the behavior of the system in the battlespace. In fact, Arkin's work has arisen in response to a demand for autonomous weapons systems that can effectively distinguish between combatants and non-combatants. The decisional characteristics of these machines are such that they cannot execute lethal force unless the situation at hand meets strict criteria (e.g. if shot at, then return fire). But even within these simple criteria, there is room to wonder how much discrimination satisfices ethical sensitivities while embodying the flexibility necessary to be field-deployable for spontaneous missions. For instance, strict adherence to pre-programmed constraints may handicap operational effectiveness in first-responder situations. The Arkin type of tight systemic constraints, which largely depend upon exhaustive definition and predictive classification of situational constituents, might be fine for isolated situations, where all the environmental variables are more or less known, but this same machine architecture could be impoverishing in quick-response circumstances featuring innumerable unknowns. Therefore, in order to embody the appropriate degree of readiness, such military machines have to be adaptive: they must be able not only to operate according to pre-programmed constraints, but also to learn on the fly and incorporate that learning in real time.
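
The contrast between a fixed, pre-programmed constraint and a parameter that adapts in the field can be sketched in a few lines. The update rule and numbers below are illustrative assumptions, not a proposal for how a fielded system should actually learn.

```python
# Sketch of the contrast drawn above: a hard-coded constraint versus a
# parameter nudged on the fly by after-action feedback. The update rule
# and numbers are illustrative assumptions, not a fielded design.

FIXED_CONFIDENCE_THRESHOLD = 0.95  # pre-programmed; never changes in the field

class AdaptiveDiscriminator:
    """Maintains a running threshold adjusted by after-action feedback."""

    def __init__(self, threshold=0.95, step=0.02):
        self.threshold = threshold
        self.step = step

    def permits_engagement(self, combatant_confidence):
        return combatant_confidence >= self.threshold

    def incorporate_feedback(self, was_misidentification):
        # Raise the bar after a misidentification; relax it slightly otherwise.
        delta = self.step if was_misidentification else -0.005
        self.threshold = min(0.999, max(0.5, self.threshold + delta))

print(0.96 >= FIXED_CONFIDENCE_THRESHOLD)          # fixed rule: always True here
d = AdaptiveDiscriminator()
print(d.permits_engagement(0.96))                  # True under initial threshold
d.incorporate_feedback(was_misidentification=True)
print(d.permits_engagement(0.96))                  # False after the bar is raised
```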

Tuesday, July 13, 2010

Autonomous Weapons



Military robotics technology is growing in intelligence, capability, and autonomy. The sensing technologies, which are central to robotic intelligence, are increasing in sophistication. State of the shelf sensors can acquire and process information in the visual, auditory, tactile, and gustatory dimensions. State of the art sensors can even pick up cues of human intention. Wide-area surveillance sensors like the Gorgon Stare can film an area with a four-kilometer radius during day and night operations from twelve different visual angles. Acoustic sensors can detect, localize, and prepare countermeasures to unfriendly gunfire. Chemical and biological warfare multi-sensing technologies can provide information on carbon monoxide, hydrogen sulfide, volatile organic compounds, oxygen, lower explosive limit, carbon dioxide, and air particulates, including airborne dust, smoke, mist, haze, and fumes. The U.S. Department of Homeland Security’s MALINTENT sensor program features a series of sensors that read body temperature, heart rate, and respiration, among other non-verbal cues, in order to detect signals of aggression or distrust. New technologies are emerging at a rapid pace.
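
As a toy illustration of how readings from such disparate sensors might be pooled before any decision is made, here is a sketch with invented sensor names, fields, and weights; real wide-area and chemical/biological sensing pipelines are of course far more involved.

```python
# Toy multi-sensor fusion sketch. The sensor names, fields, and weights are
# invented for illustration; real sensing pipelines are far more complex.
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str          # e.g. "acoustic", "wide_area_video", "chem_bio"
    observation: str     # human-readable summary of what was sensed
    threat_score: float  # normalized 0..1 estimate from that sensor

def fuse(readings, weights=None):
    """Combine per-sensor threat scores into one weighted estimate."""
    weights = weights or {}
    total = norm = 0.0
    for r in readings:
        w = weights.get(r.source, 1.0)
        total += w * r.threat_score
        norm += w
    return total / norm if norm else 0.0

readings = [
    SensorReading("acoustic", "possible gunfire, bearing 220", 0.8),
    SensorReading("wide_area_video", "two vehicles converging", 0.6),
    SensorReading("chem_bio", "no agents detected", 0.0),
]
estimate = fuse(readings, weights={"acoustic": 2.0})
print(f"fused threat estimate: {estimate:.2f}")  # 0.55
```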

Today, most such robotic systems relay the information acquired to human operators, who then decide a course of action, but, as with the UK Taranis stealth jet, there is a desire to develop autonomy to the extent that machines can automatically decide what constitutes a target and, subsequently, what line of action should be taken against it. The evolution of robotic intelligence, capability, and autonomy promises a future where machines can learn, derive rules, and execute orders with minimal, if any, human input. Such technology may be a godsend in removing warfighters from dull and dangerous jobs, and it also may be a beast of burden in its ethical, legal, and social implications.

Ethically, there must be a balance between acting on a duty to provide security and ensuring that the relative rewards outweigh the risks. Some interpretations of international law sanction and encourage the employment of lethal robots, while other interpretations vehemently oppose machines capable of autonomous force. On the one hand, these machines can be trusted to make decisions that are not influenced by the stresses and strains of war. This could result in a reduction of non-combatant casualties. On the other hand, it is questionable whether or not machines should be enabled to kill without authorization from a human operator. It is also questionable whether or not machines can embody the discriminatory intelligence necessary to prevent, or at least minimize, civilian casualties.

The development of autonomous machines deeply impacts the identity, values, beliefs, and perceptions of societies. Also, the inverse is true in that the identity, values, beliefs, and perceptions of a given society influence whether or not, and to what extent, autonomy in lethal machines is seen as a legitimate, or even necessary, instrument of state power. For instance, although the ability of autonomous robotics to reliably discriminate between combatant and non-combatant is important for US policymakers, not all actors will employ this same instrumental rationality. Different cultures will judge the costs and benefits of the development of these systems differently. Rogue regimes or non-state actors may indiscriminately develop and utilize autonomous weapons systems.

This paper discusses the conceptual, technical, political, social, ethical, legal, and military dimensions of autonomous weapons. It discusses the state of the art, the state of the shelf, and a probable future wherein machines can learn to derive and execute their own rule sets. It recommends typical analyses that could be conducted for the purposes of current policy, as well as strategic assessments that could be conducted for the purposes of preventing strategic surprise. Although far from an exhaustive account of autonomous weapons, this paper makes a contribution to the available literature.

Neuromarketing & Mass Media Manipulation

Companies such as NeuroFocus and Brain Fingerprinting Laboratories, Inc. have developed proprietary methodologies for analyzing the extent to which specific media 'impress' the human brain. These companies, amid a few others, can tell the extent to which you remember advertisements or other media. This knowledge can be used to fine-tune an intended effect such as emotional engagement and/or purchase intent. NeuroFocus boasts clientele including The Weather Channel, Blue Cross/Blue Shield, CBS, Scottrade, and Microsoft.

    To my knowledge, these neuromarketing tools are only being used for commercial endeavors, but the potential applications for political and/or military purposes are there, loud and clear. There is nothing stopping candidates for public office from using neuromarketing to gauge the effectiveness of their campaigns - from the signage to the symbols to the speeches to the slogans. In essence, politics is, always has been, and always will be a fame game. Impressions made play a major role.

The same goes for military campaigns, such as the War on Terror. Major portions of overseas contingency operations (read: stabilization and reconstruction operations) center upon convincing the current and potential opposition that fighting is not worth its weight in blood; convincing would-be problem makers that terrorism is not in their best interest, that the West is not an enemy, et cetera. From the get-go, the War on Terror is a global psychological operation. Global psychological operation is a term that sums up the backbone of every modern military effort, from nuclear deterrence to counterterrorism to counterintelligence.

As political and military competition heats up, players start to look for new ways to win favor. In this vein, I expect that some derivative of 'neuromarketing' will become a standard tool for strategic communications analysis. Ultimately, the question of how to win hearts and minds is a question of how to manipulate neural connections to net effect. "If we do x, then in the brain this happens."

What are the ethical, legal, and social issues associated with using neurotechnology to manipulate public opinion in such an unprecedented fashion? Ethically, those who are in the business of crafting messages have a duty to use the most effective means available, and the cost of acquiring such capabilities will always be worth the benefit. Legally, there is nothing stopping anybody from using neurotechnologies to measure and manipulate the effectiveness of multimedia. With the intelligence and capability necessary to more or less directly control the extent to which media impress the brain, mass communications become an exercise in deliberate social engineering - on the level of the neurons of the brain.

Political and military adversaries of the US would be wise to invest in such technologies. I am sure the Russians are already well along the way. (Those sneaky Ruskies!) Consider an Al Qaida or other terrorist organization that utilizes neuromarketing for its recruitment campaigns, or a Hugo Chavez on fame-whore steroids who is hell-bent on swaying favorable public opinion away from the US for his own dictatorial gain. Defense-wise, is there anything you or the US writ large can do about this, aside from tuning out of media and/or turning into a human ostrich, with the old head in the sand?

Back in '98, Lt. Timothy Thomas (U.S. Army) wrote a paper called "The Mind Has No Firewall". Although this was mostly a response to reports concerning Russian psychotronic weapons technologies, the basic concept still carries weight today. The mind is a wide-open system for which we have developed little to no defense. The responses to this are to cower, complain, or play the game.

Monday, July 12, 2010

The UFC & Military Neurotechnologies

Via DARPA, the military funded and brought us the internet (its precursor was called the ARPANET). The same can be said for the bulk of our favorite, now common, technologies, including the widespread availability of high-resolution video (think Google Earth), vehicle intelligence (Lexus' advanced parking guidance system), radar, and global positioning systems (GPS). It is an official aim of the research branch of the Department of Defense to fund and fuel crossover technologies -- technologies that are dual-use in nature -- they can enhance the military as well as bootstrap society writ large. I'm talking about dual-use brain-based drugs, diagnostics, and devices.

I have a close friend who fights in the UFC, so I'm going to take that field as an example. As a high-intensity contact sport, the UFC (and other avenues for hand-to-hand combat and Mixed Martial Arts) is especially compatible with technologies designed for warfighters. Each of the four domains (Training Effectiveness, Optimizing Decision Making, Sustaining Soldier Performance, and Improving Cognitive and Behavioral Performance) from the book Opportunities in Neuroscience for Future Army Applications easily applies to the man in the ring.
    • Training Effectiveness: evaluating the efficiency of training regimes; gauging individual capability and response to training; monitoring and predicting changes in individual performance efficiency; fighter selection and assessment; augmented reality based training regimes
    • Optimizing Decision Making: emotional reactivity; recognition-primed decisions; reinforcing and accelerating combat learning using physiological and neural feedback; optimizing target discrimination (i.e. identifying and capitalizing on weaknesses in the environment)
    • Sustaining Soldier Performance: measures to counter performance degradation (i.e. fatigue); pharmaceutical countermeasures to neurophysiological stressors (anxiety, depression, brain injury)
    • Improving Cognitive and Behavioral Performance: field-deployable biomarkers of neural state; guarding against 'hours of boredom and moments of terror' (keeping it cool at all times)
These are domains that all athletes have a stake in, especially those athletes who need to mitigate the side effects of intense exertion; those who need the fastest reaction times possible; those who can't take breaks and have to be in peak performance mode 24/7; those who have to be able to identify and exploit weaknesses, etc. Athletic trainers can use this stuff to bootstrap their own businesses, and recruiters can use assessment methodologies to detect, assess, and measure potential in terms of raw talent (possession of the right neural networks) and learning ability (plasticity).

What if you were wired such that when you started to get tired, the neuro-nanobot in your brain turned 'on' the dopamine switch and gave you a personalized burst of newfound energy? This type of 'on/off' dimming-switch neuropharmaceutical is state of the art, and soon to be state of the shelf. What if you could supplement standard (body) sparring with virtual sparring with neural feedback, which would allow you to identify the zone and train your brain to respond faster to threats? These are real technologies with real crossover potential. My assessment: sports in 2020 are a whole other animal.

Are there ethical, legal, and social issues associated with the use of such? Sure! Ethically, do athletes have a duty to do all they can to be peak performers (a deontological argument)? Does the cost of such technologies (cost in terms of adverse side effects) outweigh the prospective benefits? Increasingly, the answer to the former is yes and to the latter is no. Legally, what will the 'laws' of sports performance have to say about the use of such technologies? Is it considered 'doping' to install a dimming-switch neuropharmaceutical neuro-nanobot in the old noggin? Is it legal to accelerate learning capability using mind-machine interfaces? Can recruiters and team owners utilize neurophysiological diagnostics to measure and monitor performance -- to reward the excellent and dismiss the unfit on neural grounds? And what are the social repercussions in terms of the blow-back effect of neurotechnologies on kids, fans, and those who generally imitate sports stars?

The cat is out of the bag, for sure. There's no stopping emergence or social blow-back. The adventure is in exactly how sports and society react to Pandora's Box, and the opportunity is there for those 'first adopters' (athletes, trainers, recruiters, agents, and legal eagles) who notice the cutting edge and capitalize on it before the rest respond.