Swarms Are Hell: Warfare as an Anti-Transhuman Choice

Woody Evans
Assistant Professor and Librarian
Zayed University, Dubai, UAE

Journal of Evolution and Technology - Vol. 23 Issue 1 - December 2013 - pgs 56-60

Abstract

The use of advanced technologies, even of so-called transhuman technology, does not make militaries transhuman. Transhumanism includes dimensions of ethics that are themselves in direct conflict with many transhuman capabilities of soldiers in warfare. The use of advanced weapons of mass destruction represents an anti-humanism that undermines the modern, open, and high-tech nation state.

1. Transhuman warriors?

Warfare is becoming transhuman, in some senses. The dust-up that
outgoing Secretary of Defense Leon Panetta caused by proposing the
“Distinguished Warfare Medal” for drone operators in February 2013 was
telling. Among those who supported the proposal was P.W. Singer, who
concluded that “a Predator pilot carrying out an important mission, such as the
2006 operation that found the leader of al-Qaeda in Iraq, or a cyber-warrior
taking down a key enemy network” should receive a medal for outstanding service
(2013). Subsequently, however, incoming Secretary of Defense Chuck Hagel canceled
the plan. This controversy reminds us that there are sincere differences of
opinion about the role, value, and honor of the jacked-in technicians working
in virtual and augmented realities to advance geopolitical agendas with
military might.

Arguments exist for accelerating the marriage of technology and
warfare under the banner of transhumanism: “If transhumanism can become an
important node in the semantic web of military terms, it might shine light into
the shadows cast by the grim uses of the technologies associated with it”
(Evans 2007). Part of the argument is that transhumanism is a force for
morality in the military, and I will revisit this below; but part of the
argument is that “transhuman technology” can bring to bear such overwhelming
force that military (and, later, cultural and political) superiority is
assured. This sounds akin to the promotion of a unipolar global order – the chief idea of the neoconservatives behind the Project
for the New American Century (Vaisse 2010; Project for the New American Century
1997). We can advance evidence for the horrific failure of these neoconservatives’
strategy and tactics in attempting to accomplish their goals (look for example
to the fracturing of Iraq since March 2003), but that is not the same thing as
critiquing their position about the form that the global order should take. In
any case, transhumanism has been put forward as a means to global order (Evans
2013), and transhuman technologies “raise the prospect of a new dimension to
human security: the protection of human identity and dignity in a posthuman
world” (McIntosh 2010).

Tony Tether, head of the Defense Advanced Research Projects Agency from 2001 to 2009, opened a DARPAtech conference with a speech that included an invitation to imagine a transhuman military:

Imagine 25 years from now where old guys like me put on a pair of glasses or a helmet and open our eyes. Somewhere there will be a robot that will open its eyes and we will be able to see what the robot sees. We will be able to remotely look down on a cave and think to ourselves, “Let's go down there and kick some butt.” And the robots will respond, controlled by our thoughts. It’s coming. Imagine a warrior – with the intellect of a human and the immortality of a machine – controlled by our thoughts. (Tether 2002)

Tether here blends the longstanding transhumanist concerns with
artificial intelligence, virtual and augmented realities, and cognitive and
physical enhancements with a sort of mid-life crisis pining for “kicking ass
and taking names.” Here we have a visionary call from the leader of the most
influential defense and security visioning house in the world for a kind of
transhumanist version of a retiree’s red muscle car, but blended with a
dissociated bloodthirstiness. That particular vision has since had 11 years to
advance into reality.

There are many military projects that take a cue from transhumanist concerns. From “Combat Zones that See” (total surveillance tracking every element and person), to the advanced military robots produced by Boston Dynamics (Petman, BigDog, and others; the company was acquired by Google in late 2013), to
the REPAIR program (using implanted circuitry in the brain for direct control of
how a warrior processes information in the battlefield), warfare has been set
on a trajectory toward the posthuman.

But transhumanism is not just about the appropriation of advanced
technologies for transforming the human individual into a posthuman; nor is it
only about taking strategic steps to funnel societies (or the world as a whole)
into a technological singularity. “Transhumanism has roots in rational humanism,” writes Nick
Bostrom; transhumanism cannot be separated from its roots in humanism, and
indeed “encompasses many principles of modern humanism” (Bostrom 2005;
Humanity+ 2009). Some of these principles are morality, personal choice,
individual liberty, general well-being, reduction of existential risks, and the
reduction of involuntary suffering in general (Humanity+ 2009).

One undeniable emphasis of transhumanism is the notion that
individual persons (whether human or non-human, just in case you favor the idea
of “uplift”) should have the right to modify themselves with technology. The
Institute for Ethics and Emerging Technologies posits an orientation toward
liberty for technoprogressives: “[we] want to see all sentient beings protected
in their rights for self-augmentation, enhancement, or modification, and we
want everyone to have fair and equal access to such treatments” (IEET n.d.). It
does not follow, however, that corporations, organizations, governments, or militaries
have the right to modify themselves by modifying their customers and clients,
participants, citizens, or personnel. The principles mentioned above are opposed
to the imposition of modifications on individuals without consent. Yet in the
case of both soldiers and their victims, militaries need no consent.
Furthermore, and closer to the core of the issue: the use of
advanced technologies that in any way erodes or inhibits morality, personal
choice, individual liberty, or the general well-being of non-combatants is
anti-transhumanist. As Spezio’s discussion of the attempt to “Vulcanize”
soldiers (make them emotionless for increased battlefield efficiency) makes
clear, a twisting of the transhuman vision of modified humans might be used to
horrifying effect: “DARPA’s vision of the trans / posthuman ‘war fighter’... runs
counter to the most prevalent transhumanist aspirations with regard to
enhancing the human condition” (2011). Indeed, to the extent that transhuman
warfare infringes on the personal freedoms or well-being of any non-combatant,
we must say that it (and warfare generally) is also anti-humanist.

Transhumanism is an ethical enterprise. Transhumanism has little
place in warfare. In appropriating some of its aspects for warfare, militaries
necessarily shift away from humanism. Wasn’t the United States military, for one, founded to protect and defend a humanist experiment?

Consider two types of advanced weaponry that undermine the ideals
of an open, liberal democratic (humanist) nation state:

1. Robots and drones invite human participants to enter into a
shared cognitive space, or a perception of space that replaces or augments
normal sensory function (see Morales and Cummings 2008, a survey of pilots’ attitudes toward unmanned aerial vehicles), and at the same time make killing easier and its legal footing looser. Actor-Network Theory might understand this as
lending social agency to machines that exist only to kill – an anti-social social agency? But also consider the “push-button” nature of extrajudicial
killings of American citizens – the murder
of Abdulrahman Anwar al-Awlaki, in October 2011, for example – and of foreign civilians (the dozens killed in
the village of Wech Baghtu, Afghanistan, November 2008, for example), and the
general permissiveness of warfare that drones engender. Singer sums it up well:
“America’s founding fathers may not have been able to imagine robotic drones,
but … the Constitution did not leave war, no matter how it is waged, to the
executive branch alone” (2012). As military robots advance to organize into
self-directed swarms, they could, after being programmed with “simple rules”
(complex algorithms, really), “spread around in an almost random search [and]
broadcast to the group [of robots] any enemy targets they find. Swarms would
then form to attack the targets” (Singer 2009); the sketch at the end of this item illustrates how such “simple rules” might play out in miniature. In other words, human agency in decisions about whom to kill and how to do it is transferred to machines; and this might be okay if the twenty-first century battlefield did indeed have “fewer humans and more robots,” as Singer says.

But only the hegemons will have the robots. Our swarms will target
human beings in hot countries, and those humans will lack the potential benefit
of the mercy or caution or good sense that a human warrior might show to a
mother protecting her children. Robots and drones do not promote the personal
choice, or individual liberty, of warriors or their victims. Since the activity
of swarms is emergent, it “is not and cannot be centrally controlled” by human
agents – though there is some slim hope
that a form of ethics could spontaneously emerge from the swarms themselves
(Coeckelbergh 2011). General well-being, the reduction of existential risks, or
the reduction of involuntary suffering of civilians and non-combatants,
however, is not (cannot be) a concern in the programming of robots meant for war.
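To make the “simple rules” idea concrete, here is a minimal, purely illustrative Python sketch of the swarming behavior Singer describes: agents wander at random, broadcast any target they sense, and converge to attack whatever is broadcast. Every name and parameter here (Agent, SENSE_RANGE, KILL_RANGE, and so on) is a hypothetical stand-in invented for illustration, not a description of any real system.

```python
# Hypothetical toy model of the "simple rules" swarming Singer (2009)
# describes. Agents wander at random, broadcast any target they sense,
# and converge on broadcast targets. All names and parameters are
# invented for illustration; this describes no real system.
import math
import random

SENSE_RANGE = 5.0   # assumed detection radius
KILL_RANGE = 1.0    # assumed distance at which a target is "attacked"
WORLD = 50.0        # side length of the square world

class Agent:
    def __init__(self):
        self.x = random.uniform(0, WORLD)
        self.y = random.uniform(0, WORLD)

    def dist(self, p):
        return math.hypot(self.x - p[0], self.y - p[1])

    def sense(self, targets):
        # Rule 1: broadcast any target within sensing range.
        return [t for t in targets if self.dist(t) <= SENSE_RANGE]

    def step(self, broadcasts):
        if broadcasts:
            # Rule 2: steer one unit toward the nearest broadcast target.
            gx, gy = min(broadcasts, key=self.dist)
            d = self.dist((gx, gy)) or 1e-9
            self.x += (gx - self.x) / d
            self.y += (gy - self.y) / d
        else:
            # Rule 3: otherwise, the "almost random search."
            self.x = min(max(self.x + random.uniform(-1, 1), 0), WORLD)
            self.y = min(max(self.y + random.uniform(-1, 1), 0), WORLD)

def run(n_agents=30, n_targets=3, steps=2000):
    agents = [Agent() for _ in range(n_agents)]
    targets = [(random.uniform(0, WORLD), random.uniform(0, WORLD))
               for _ in range(n_targets)]
    for step in range(steps):
        # The broadcast pool is shared by every agent; note that no
        # central controller appears anywhere in this loop.
        broadcasts = [t for a in agents for t in a.sense(targets)]
        for a in agents:
            a.step(broadcasts)
        # A target is "attacked" once any agent closes within kill range.
        for t in list(targets):
            if any(a.dist(t) <= KILL_RANGE for a in agents):
                targets.remove(t)
                print(f"step {step}: swarm converged on ({t[0]:.1f}, {t[1]:.1f})")
        if not targets:
            break

if __name__ == "__main__":
    run()
```

Even in this toy, which target is attacked first emerges from random motion and local sensing rather than from any human decision; scaled up, that is precisely the transfer of lethal agency, absent central control, that is at issue here.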
2. Weapons of mass destruction are artifacts of the
industrialization of war (and of almost everything) that took place in the early
twentieth century (see extensive discussion in Carew 2010). They are used to
kill efficiently, on a mass scale, and indiscriminately. Civilians and
non-combatants die in large numbers when nuclear or chemical weapons are employed.
Although Americans led in inventing nuclear weaponry, and some American
scientists led in its promotion as a way to balance power and maintain peace
(see Finkbeiner’s discussion of Edward Teller, 2007, 14–19), the United States has since become a leader in
attempting to create “a world without nuclear weapons” (Office of the Press
Secretary 2013). What a horrific irony, then, that the US aided Saddam Hussein
in his use of chemical weapons in the 1980s (Harris and Aid 2013), and that America and
her allies used depleted uranium weaponry on civilians in Kosovo and Iraq
(United Nations 2001). The debate about deterrence aside, weapons of mass
destruction cannot be used to promote personal choice, individual liberty,
general well-being, reduction of existential risks, or the reduction of involuntary
suffering of the civilians and non-combatants who are killed by them.

Whether the technology is “transhuman” in its stylings or tone (as
with robotics, augmented reality, and drones), or just plain technologically
advanced (as with guidance systems that can carry radioactive munitions into
civilian areas), warfare ends life. “With large nuclear arsenals on
hair-trigger alert, there is inevitably a significant risk of accidental war.
The same can happen with nanotechnology: it may be pressed into serving
military objectives in a way that carries unavoidable risks of serious
accidents” (Bostrom 2002). Warfare, whether sometimes necessary or not, is by
its nature anti-life – and, with its
primary focus on killing people, it is also anti-human. Warfare is anti-humanist. It is anti-transhumanist.

We have seen that to discuss the role of transhumanism in military
conflict in mere terms of technological advancement is tragically
one-dimensional. “Militarized transhumanism” is a folly or worse, unless the
humanism at the core of transhumanism informs the ethics of future warfare.

We as a species cannot afford to pursue some version of a
posthuman warrior class without continuing the greater project of the marriage
of ethics, diplomacy, and politics to war, until warfare itself is obviated.
References

Bostrom, N. 2002. Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology 9(1). http://www.jetpress.org/volume9/risks.html (accessed December 20, 2013).

Bostrom, N. 2005. A history of transhumanist thought. Journal of Evolution and Technology 14(1). http://www.jetpress.org/volume14/bostrom.html (accessed December 20, 2013).

Carew, M. G. 2010. Becoming the arsenal: The American industrial mobilization for World War II, 1938-1942. Lanham, Maryland: University Press of America.

Coeckelbergh, M. 2011. From killer machines to doctrines and swarms, or why ethics of military robotics is not (necessarily) about robots. Philosophy and Technology 24: 269–278.

Evans, W. 2007. Singularity warfare: A bibliometric survey of militarized transhumanism. Journal of Evolution and Technology 16(1). http://www.jetpress.org/v16/evans.html (accessed December 20, 2013).

Evans, W. 2013. Singularity terrorism: Military meta-strategy in response to terror and technology. Journal of Evolution and Technology 23(1). http://jetpress.org/v23/evans.htm (accessed December 20, 2013).

Finkbeiner, A. 2007. The Jasons: The secret history of science’s postwar elite. New York: Penguin.

Harris, S. and Aid, M. 2013. Exclusive: CIA files prove America helped Saddam as he gassed Iran. Foreign Policy, August 26. http://www.foreignpolicy.com/articles/2013/08/25/secret_cia_files_prove_america_helped_saddam_as_he_gassed_iran (accessed September 5, 2013).

Humanity+. 2009. Transhumanist declaration. http://humanityplus.org/philosophy/transhumanist-declaration/ (accessed September 11, 2013).

IEET. n.d. IEET’s purpose. http://ieet.org/index.php/IEET/purpose (accessed September 11, 2013).

McIntosh, D. 2010. The transhuman security dilemma. Journal of Evolution and Technology 21(2). http://jetpress.org/v21/mcintosh.htm (accessed December 20, 2013).

Morales, D. and Cummings, M. L. 2008. UAVs as tactical wingmen: Control methods and pilots’ perceptions. Massachusetts Institute of Technology. http://web.mit.edu/aeroastro/labs/halab/papers/UAV_wingmen_AUVSIdraft.pdf (online draft; accessed September 12, 2013).

Office of the Press Secretary. 2013. Fact sheet: Nuclear weapons employment strategy of the United States. June 19. http://www.whitehouse.gov/the-press-office/2013/06/19/fact-sheet-nuclear-weapons-employment-strategy-united-states (accessed September 11, 2013).

Project for the New American Century. 1997. Statement of principles. June 3. http://www.newamericancentury.org/statementofprinciples.htm (accessed September 12, 2013).

Singer, P. W. 2009. Wired for war: Robots and military doctrine. Joint Force Quarterly 52(1).

Singer, P. W. 2012. Do drones undermine democracy? The New York Times, January 21.

Singer, P. W. 2013. A military medal for drone strikes? Makes sense. Washington Post, February 15.

Spezio, M. L. 2011. Human or Vulcan? Theological consideration of emotion control enhancement. In Transhumanism and transcendence: Christian hope in an age of technological enhancement, ed. Ronald Cole-Turner, 145–162. Washington, DC: Georgetown University Press.

Tether, T. 2002. DARPAtech 2002 welcoming speech. http://archive.darpa.mil/DARPATech2002/presentations/diro_pdf/speeches/TETHER.pdf (accessed September 9, 2013).

United Nations. 2001. Observance of environmental norms in the drafting and implementation of agreements on disarmament and arms control: Report of the Secretary-General. http://www.un.org/documents/ga/docs/56/a56165.pdf (accessed September 9, 2013).

Vaisse, J. 2010. Why neoconservatism still matters. Foreign Policy at Brookings Institution. Policy Paper 20.