The Transhuman Security Dilemma

Daniel McIntosh

Journal of Evolution and Technology – Vol. 21, Issue 2 – December 2010 – pgs. 32-48

Abstract

Developments in genetics, cyber-technology,
nanotechnology, biotechnology, artificial intelligence, and other areas hold
the promise – and the peril – of redefining what it means to be human. In
addition to hedonism or a desire for self-improvement, the possibilities of
these technologies, plus the rational concern about falling behind a potential
adversary, produce a classic security dilemma. These competitive pressures among
states, firms, and emerging “super-empowered individuals” encourage the
development and dissemination of these technologies, and already the possibilities
are being explored by actors in conflict. This security dilemma, plus the
nature of the technologies themselves, makes it virtually certain that attempts
at regulation will fail. Instead, we should expect “arms races” of quantity and
quality of improvements, complicated by differing conceptions of what
improvement means. This paper explores these pressures and outcomes, as well as
general consequences of the potential modification of “human nature” for global
and human security. It finds that whatever forms or enhancements we possess, in
a transhuman or posthuman future, politics will not be transcended. Critical
problems of security will continue to challenge us and our descendants.

Introduction

Homo sapiens, the first truly free species, is about
to decommission natural selection, the force that made us… Soon we must look deep within
ourselves and decide what we wish to become.1

There is a set of emerging technologies which, singly
and synergistically, have the potential to overshadow nuclear power in their
effects on the international system. Nano-Bio-Info-Cogno (NBIC) technologies
have progressed to the point that they raise the prospect of the evolution-through-design
of human beings – as individuals, as societies, and as a species. By
challenging our most basic assumptions regarding what it means to be a human in
society, NBIC technologies may well render much of contemporary sociology,
political theory, and economics obsolete. They raise the immediate possibility
of a transhuman era, with transhuman or even posthuman politics. By altering
what have been assumed as defining characteristics of humanity – including
individuality, empathy, mortality, physicality, and levels of intelligence – they
change the context of politics. It is safe to assume that transition through a
transhuman era will not be smooth. It will not affect all persons at once, or
to the same degree. It will also be shaped by current structures, conflicts,
and notions of what improvement actually means. It will take place from within
a system of competitive states, firms, nongovernmental organizations, and
“superempowered individuals,” each with an interest in the application of NBIC
technologies for relative advantage. Although the designs will not be random,
there will still be the interaction of types within a competitive environment
that leads to evolution – and evolution by its nature leads to unexpected and
contingent outcomes. The security implications are enormous, up to and including
the possible extinction of the human species.
1. Technologies of directed evolution

NBIC technologies are in fact a constellation of four
converging technologies. Nanotechnology involves structures on the scale of 10^-9
meters. It is the construction and manipulation of objects on the scale of a
single molecule. Biotechnology refers to the modification and use of organisms,
or parts or products thereof, to achieve ends. Information technology refers to
the integrated systems of computer hardware, software, and networking. The cognitive
sciences and their applications refer to the study of intelligence and
intelligent systems, both cybernetic and biological. The convergence of these
fields comes from the fact that at the nanometer scale living and
nonliving systems become indistinguishable. The body (including the
brain, and whatever we call “mind”) can be restructured.

Human genetic engineering, the most commonly
recognized of these technologies, may either modify somatic (body) cells or
germ cells (gametes, zygotes, early embryos). Somatic modifications, sometimes
known as gene transfer or “gene therapy,” never result in a heritable trait.
Germ modification, or germline manipulation, affects future generations (Adams
2004, 16-17). While germline manipulation has taken place on animals for around
twenty years, there are as yet no confirmed cases of human experiments. Some
experts have suggested that, for legal and regulatory reasons, it will be at least fifteen
more years before human tests will be conducted (Adams 2004, 19-20); if these considerations
were ignored, germline manipulation could be under way today. Already, cultural
differences exist in the regulation of stem cell research. As of 2004, a survey
of thirty countries found that no two shared a common regulatory regime. Instead,
“policymakers must accept the reality of international ‘dissensus’” (Pattinson
and Caulfield 2004). Moreover, history indicates that even when there is a
consensus on the limits of human testing, it may be deficient or ignored in
practice.

The addition of nano-scale machines adds new
possibilities. One is implantation of medical devices that will produce as well
as dispense drugs inside the host, including the brain. Another is the
implantation of supercomputers the size of a cell, monitoring for and
preventing disease before it could be noticed by the host (Canton 2005).
Cybernetic breakthroughs point to continued blurring of “common sense”
distinctions between the mechanical and the biological.

NBIC technologies force one to reconsider what
it is to be human. Political science and practical politics are grounded in assumptions about the nature of
humans as individuals and in groups. Within the field of international
relations, for example, theoretical and policy disagreements – associated with
such viewpoints as political realism, neorealism, neoclassical realism,
liberalism, neoliberalism, Marxism, neomarxism, institutionalism, feminism, and
various forms of constructivism – rest in large part on differing assumptions
about human nature. Yet in one regard they are all alike: the assumption that
there is a single “human nature,” fixed in time and universal in scope. The
divergence of humanity into new and different forms and capabilities renders
that assumption obsolete. Or, as Jurgen Habermas (2003, 14) observed, “the
breadth of biotechnological interventions raises moral questions that are not
simply difficult in the familiar sense but are of an altogether different
kind. The answers touch on the ethical self-understanding of humanity as a
whole.” Consider some of the more speculative possibilities for directed evolution:

· The mind, once understood, could be loaded and run on different hardware. Bodies would be understood to be temporary. Death would not be permanent so long as one maintained a “backup.”

· Movement from “meat” to electronics opens the possibility of increasing the speed of thought. Electrical impulses among neurons would be replaced by nanosecond-speed electronics.

· Transhumans could inhabit harsh environments, including outer space, without cumbersome life-support systems.

· Knowledge could be downloaded at computer speeds, and integrated instantly into memory.

· High-bandwidth communications could lead to mental networking, or a hive mind (or competing hive minds).

Legal regimes are notoriously slow to cope with, let
alone anticipate, the potentials of new technologies. Politicians are, if
anything, even less likely to do so.

2. The politics of the transhuman

What will be the goals of human intelligences if,
“freed from the limits of the human meat-machine,” these “humans can change and
improve their own hardware” (Robinett, 2002)? How will the goals themselves
change as a result of the process and prior choices? An imaginary team of
proto-humans, if tasked with designing their next evolutionary step,
might have focused on doing better the things they already knew how to do, with
the capabilities they already had. Their goal might have been to be larger, or
stronger, or better able to climb. Would they have imagined the range of
possibilities opened by intelligence, language, and technology? When
considering the prospect of a radical breakthrough in capabilities, we
proto-posthumans may be in a similar situation. The most important developments
will literally be the things we cannot imagine.

Among specialists, a debate over biopolitics is
already under way (for examples, see Fukuyama 2002, Hughes 2004, Bailey 2007,
Darnovsky 2010), but these have tended to focus on the path to encourage or
regulate change. More advanced NBIC technologies make that debate all the more important,
and all the more immune to compromise. For the most part, these debates have
occurred within nations, and while the UN and EU have encouraged the
consideration of common standards for experimentation, these standards do not
exist. Many states continue to consign the issues of experimentation and
transition to a regulatory void.

Perhaps because there is so little to be certain of, discussion
of the actual politics of transhumans and posthumans – as well as those who would
attempt to use them for various ends – has tended to be simplistic. It also
reflects the participants’ own assumptions of what it means – or can mean – to
be human.

On one side, there is a movement to ensure the
widespread distribution of these technologies for the good of all people. Sometimes
described as “radical technophilia,” transhumanism as an ideology lies at the
intersection of “popular culture, genetics, cybertechnology, nanotechnology,
biotechnology and other advanced technologies, bio-ethics, science speculation,
science fiction, mythology, the New Age Movement, cults, commerce and
globalization” (Bendle 2002, 45). Its proponents
have described themselves as a “loosely defined movement that has developed
gradually over the past two decades” which “can be viewed as an outgrowth of
secular humanism and the Enlightenment” (Bostrom 2005b, 202; Dacey 2004), as
well as “a burgeoning lifestyle choice and cultural phenomenon” (Dvorsky 2004).

Transhumanism is at its core an “intellectual and
cultural movement that affirms the possibility and desirability of
fundamentally improving the human condition through applied reason, especially
by developing and making widely available technologies to eliminate aging and
to greatly enhance human intellectual, physical, and psychological capacities.”
It also seeks to study “the potential
gains and problems associated with such technologies” (WTA 2008a).
Transhumanists expect technological innovations to result in the emergence of
several varieties of “posthumans,” defined as “future beings whose basic
capacities so radically exceed those of present humans as to be no longer
unambiguously human by our current standards” (WTA 2008b). Given the range of
transhuman technologies, from the genetic to the cybernetic, there is no reason
to assume posthumans would be a single species, rather than a set of them (Agar
2007, 13). In a very real sense we cannot know what the posthuman will be, any more
than a proto-human could have imagined modern man, but we should assume a wide
range of variation.

In its faith in reason and technology, transhumanism began
within the general liberal tradition, but over time the model of technological
change held by transhumanists has grown more complex. The “extropians” of the 1980s called
for an immediate and unrestricted application of all technologies, limited only
by the reason and conscience of individual adopters. Today, even among those
who promote their use, there is an emerging understanding of a potential threat
from NBIC technologies. Nevertheless, the vast majority of transhumanists hold
an essentially “progressive” view of technological change, despite the fact
that there are good reasons to have a pessimistic view of “progress” (Verdoux
2009).

One observation that suggests the scope of potential
threat is the Fermi paradox, named for Enrico Fermi, who is credited with first
articulating it. Science starts from the assumption that our species and our
world do not have a privileged position in the universe. Given the age of the
universe, the probability of life-bearing star systems far older than our own,
the potential for technology to accomplish more and more, and the adaptive
quality of intelligence, we should expect to see evidence of older, more
technologically-advanced civilizations. Fermi’s paradox is this: where are
they?
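The intuition behind the paradox can be made concrete with a toy, Drake-style expected-value sketch (in Python). Every parameter below is an invented placeholder rather than a measurement; the point is that almost any non-negligible inputs yield an expectation well above zero.

# A toy expected-value sketch of Fermi's question. Every parameter is a
# hypothetical placeholder chosen for illustration, not a measurement.
n_stars      = 4e11   # rough star count of the Milky Way
f_life       = 1e-4   # assumed fraction of stars that ever host life
f_tech       = 0.1    # assumed fraction of those that develop technology
avg_lifetime = 1e7    # assumed mean detectable lifetime of a civilization, years
galaxy_age   = 1e10   # approximate age of the galaxy, years

# Expected number of civilizations detectable right now, assuming their
# appearances are spread uniformly over the galaxy's history.
expected_now = n_stars * f_life * f_tech * (avg_lifetime / galaxy_age)
print(f"Expected detectable civilizations: {expected_now:,.0f}")  # ~4,000

Even cutting each of the guessed fractions by an order of magnitude leaves an expectation of order one, which is why the silence demands explanation.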
Several possible solutions have been offered to Fermi’s paradox, but none so far is universally accepted, let alone emotionally
satisfying. Many of the “solutions” suggest that there is some natural function
that prevents the development of advanced technological civilizations past the
point where we are today. (With available instruments, a civilization at a
technological level similar to our own could already be detected at the range
of nearby stars.) More ominously, perhaps advanced technology does not, in
fact, have survival value. Perhaps there is a natural developmental gap between
a species’ ability to eradicate (or cripple) itself and the development of
mechanisms to prevent that from happening. In a universe of existential threats
to living worlds, the most dangerous may be the ones we create – or will create
– for ourselves. This insight has encouraged the establishment and growth of such
organizations as Humanity+ (formerly the World Transhumanist Association) and
the Lifeboat Foundation, which engage in projects to anticipate and reduce the
potential for catastrophic or extinction events.

Yet even in their discussions of existential threats
and the Fermi paradox, there remains relatively little discussion by
transhumanists of the political, social, and economic factors that would continue
to promote the adoption of extinction technologies. The focus is generally on
the risks of technology out of control, or of human error. It is as if, blinded
by their liberal faith in reason and the improvement of mankind, transhumanists
find it hard to imagine that actions that are rational for an individual or a state
might lead to catastrophic outcomes for the world.

Pressures for the adoption of these technologies are
analogous to the “security dilemma” discussed by political scientists and game
theorists. The term “security dilemma” was coined for the argument that in
arming for self-defense a state or other autonomous political actor might in
fact decrease its security (Herz 1950). If such armaments are perceived as
threatening by others, they are prompted to arm in response. This, in turn,
leads the original actor to maintain its security by increasing its own arms.
The eventual result of this can be a spiral of rivalry and mistrust (Jervis
1978). Logically, the severity of a security dilemma is related to the ability
to distinguish offensive from defensive capabilities, as well as to the
perception of the degree of vulnerability to marginal change, the so-called
“offense-defense” balance (Glaser 1997).
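The logic can be seen in a minimal two-player game, sketched below in Python. The payoff numbers are invented for illustration; what matters is their ordering, which makes arming the dominant strategy even though both sides prefer mutual restraint.

# A minimal sketch of the security dilemma as a two-player game.
# Payoffs are illustrative only (higher = better for that state).
payoff = {  # (my choice, rival's choice) -> my payoff
    ("refrain", "refrain"): 3,   # mutual restraint: secure and cheap
    ("arm",     "refrain"): 4,   # unilateral advantage
    ("refrain", "arm"):     1,   # exposed to a threatening rival
    ("arm",     "arm"):     2,   # costly spiral of rivalry and mistrust
}

for theirs in ("refrain", "arm"):
    best = max(("refrain", "arm"), key=lambda mine: payoff[(mine, theirs)])
    print(f"If the rival chooses {theirs!r}, the best reply is {best!r}")
# Arming is the best reply either way, so both sides arm and receive 2 --
# worse for each than the mutual restraint both would prefer.

Substitute “enhance” for “arm” and the same ordering drives the adoption of NBIC technologies.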
How does this apply to the politics of transition? In history, technological change has rarely been smooth or rational. The standard
model of such change consists of three stages: invention, innovation, and
diffusion. Invention is the idea and the demonstration of its feasibility.
Innovation is the process by which the invention is brought into use. Diffusion
is the spread of the innovation into general use. The process resembles an
S-curve, where the cost for early adopters limits diffusion, the costs drop
while diffusion increases, and diffusion stabilizes when economies of scale
maximize and innovation slows (Schumpeter 2005; Girifalco 1991). Early adopters
may have significant advantages, but they have to pay more for them, and late
adopters can come to undermine those early gains.
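A standard way to make the S-curve concrete is a logistic adoption model, sketched below; the parameters are arbitrary illustrations, not estimates for any particular technology.

# A minimal logistic sketch of the diffusion S-curve (invention ->
# innovation -> diffusion). All parameters are illustrative.
import math

def adoption(t, ceiling=1.0, rate=0.8, midpoint=10.0):
    """Fraction of potential adopters using the technology at time t."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 4):
    print(f"t={t:2d}  adoption={adoption(t):6.1%}")
# The share rises slowly while early adoption is costly, steepens in the
# middle (the disruptive phase), then flattens as the market saturates.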
Historically, the period of maximum diffusion has been the most disruptive for social and political structures predicated on the old
technologies. The nation-state, for example, was challenged by the development
of nuclear weapons and long-range delivery systems. In principle, these
technologies undermined one of the primary justifications for the existence of
the state, the ability to protect its population from attack. Some
intellectuals believed the situation mandated the development of a world
authority to replace the state, or at least its defense function. This prospect
of “one world or none” was undermined by the politics of the Cold War, as well
as by the limited distribution of states with stockpiles of nuclear weapons. The
situation was also seen as more manageable over time as the superpowers
(especially after the Cuban Missile Crisis) recognized the reality of mutual
deterrence, developed regimes and routines to control their own forces, and
slowed the proliferation of nuclear weapons technology. Today, as Gray (2010)
has observed, there has been a transition to a “second nuclear age” in which
the fear of vertical proliferation (within the great powers) has been
superseded by concern for “horizontal proliferation” to smaller powers and
non-state actors.

Given this general pattern, there is even more reason
to be concerned with biotechnology, “among the most radical innovation clusters
ever introduced” (Adams 2004, 4), and more so the constellation of mutually-reinforcing
changes under the rubric of NBIC. Even more than nuclear technology, these new
multi-use technologies can be expected to move beyond the control of
governments, to smaller groups and to individuals.

If, as seems likely, biotechnology takes the same path
as computer technology did a generation ago, a limited set of complex centers
will be replaced by hobbyists and home genetics labs, and the hackers and
computer virus-writers of today will be joined by genome hackers designing and
unleashing biological viruses and nanites. In a nightmare scenario,
self-replicating nanomachines might escape confinement, consuming resources and
doubling each generation until they consume the planet. But the “gray goo”
scenario, as it is sometimes called, is not the only – or anything like the
most probable – outcome.
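The arithmetic behind the fear is ordinary exponential doubling. The quantities in the sketch below are rough, invented orders of magnitude; the point is how few doublings separate a microscopic seed from a planetary quantity of matter.

# Back-of-envelope arithmetic for the "gray goo" scenario: unchecked
# exponential doubling. All quantities are rough illustrative guesses.
import math

replicator_mass = 1e-15   # kg, a hypothetical cell-sized nanomachine
earth_biomass   = 1e15    # kg, rough order of magnitude of the biosphere
doubling_time_s = 1000    # seconds per generation (pure assumption)

doublings = math.log2(earth_biomass / replicator_mass)
print(f"Doublings to consume the biosphere: {doublings:.0f}")           # ~100
print(f"Elapsed time: {doublings * doubling_time_s / 3600:.0f} hours")  # ~28
# At these invented rates the biosphere is consumed in about a day; the
# lesson is how little exponential growth cares about the starting seed.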
More generally, there is nothing to limit the proliferation of NBIC technologies only to users who are able and willing to
use them without harm to the innocent. Even if it were possible, there is no
agreement on what constitutes “harm” or “improvement.”

Change brings unanticipated consequences. If parents or governments take the role of determining exactly what constitutes a “normal” or “acceptable” child, the desires of the parents (if any can be identified) and the state come into conflict. There is a potential for machine intelligences (or previously typical human intelligences stored – or replicated – on a machine) to conflict with the “meat” intelligences that originally created them. There is also the fear that the distinctions among humans and (various kinds of) posthumans will lead to conflict amongst the differently abled. Some, such as George Annas, Lori Andrews, and Rosario Isasi, go so far as to describe the modification of human genetics as a “crime against humanity,” given that:

The new species, or “posthuman,” will
likely view the old “normal” humans as inferior, even savages, and fit for
slavery or slaughter. The normals, on the other hand, may see the posthumans as
a threat and if they can, may engage in a preemptive strike by killing the posthumans
before they themselves are killed or enslaved by them. It is ultimately this
predictable potential for genocide that makes species-altering experiments
potential weapons of mass destruction, and makes the unaccountable genetic
engineer a potential bioterrorist. (Cited by Bostrom 2005b, 206)

From this perspective, in a world where designing children has been
perfected, the very foundations of democracy could crumble. The posthuman
individual would likely be stronger, smarter, and more attractive. If genetic
enhancements of intelligence or strength remain prohibitively expensive to all
but the wealthy, however, does government then step in and, practicing a
beneficent eugenics, guarantee improvements to all?

To be sure, a world of powerful and weak, rich and
poor, privileged and exploited is nothing new. What are new are the injustices
of race and class that could be engineered into the genome itself. Even if
these technologies are not abused, they are likely to raise suspicions, promote
political and social differentiation, and exaggerate problems that already
exist. But most of all, there are the consequences we cannot anticipate at all.
This is, at base, the root of much of the fear among the bioconservatives.
Oddly enough, it is also in keeping with much of the radical critique of
neoliberal globalization.

Whatever the concerns, however, there will be technological
diffusion. The conflicts and structure of present systems, plus the technical
difficulty of verifying and enforcing a global regime to control these
technologies, make it likely that their proliferation – much like the proliferation
of WMD and computer viruses – could at best be managed and endured.

3. Pressures to adopt transhuman technologies

Even as critics point to the potential for harm from
NBIC technologies, they have their own blind spots. In particular, they tend to
deemphasize the competitive and hedonic pressures encouraging the adoption of
these products.

Neither transhumanism nor its critics have yet had
any substantial impact on open-source military literature or planning (Evans
2007). The idea of human enhancement in the service of the state, however, has
become a subject for research and speculation. DARPA has engaged in a program
for “Metabolic Dominance” which would “enable superior physical and
physiological performance of the warfighter by controlling energy metabolism on
demand" (cited by Auer, 2004, 1). There is also a Metabolic Engineering
Program, which “seeks to develop the technological basis for controlling the
metabolic demands on cells, tissues, and organisms,” beginning with blood and
blood products (Goldblatt 2002, 337). Peak performance is encouraged by devices
to control body temperature, “nutriceutical” foods and “first strike rations,”
and “tweaking” mitochondria to increase energy and reduce fatigue. An Augmented
Cognition program has aimed to extend the ability to manage information, while
the Continuous Assisted Performance (CAP) program has as its goal “to
discover pharmacological and training approaches that will lead to an extension
of the individual warfighter’s cognitive capability by at least 96 hours and
potentially by more than 168 hours without sleep” (Goldblatt 2002, 339-340).
The soldier, in this vision, will be more focused, smarter, and have a better
memory. He or she would be stronger, fast-healing, and capable of functioning
for days at a time without food or sleep (Auer 2004, 1). War, and the threat of
war, have already accelerated human evolution (Bigelow 1970). But now it can be by design:

Today DARPA is in the business of creating
better soldiers – not just by equipping them with better gear, but by improving
the humans themselves. “Soldiers having no physical, physiological, or
cognitive limitations will be key to survival and operational dominance in the
future,” Goldblatt once told a gathering of prospective researchers. Until
mid-2003 he was head of the Defense Sciences Office (DSO), a DARPA branch that
focuses on human biology. “Imagine if soldiers could communicate by thought
alone,” he went on. “And contemplate a world in which learning is as easy as
eating, and the replacement of damaged body parts as convenient as a fast-food
drive-thru. As impossible as these visions sound ... we are talking about
science action, not science fiction.” (Garreau 2005)

At present, the technology is not sufficiently
developed to apply to battlefields, but the potential is there. Will
individuals consent to this kind of augmentation? Competitive pressure may
leave them with no practical alternative. For others, the choice could be
perceived as liberating. At one time the marketing slogan of the U.S. Army was
“be all that you can be.” In the future it may become “be more than you could be.”

From the perspective of the military, once one starts
down this path there are few logical places to stop. Surveys of research, for
example, find that the typical human “clearly shows inhibitions against killing
which are part of our biological heritage. The command ‘thou shalt not kill’ is, so to speak, based on a biological filter of norms”
(Eibl-Eibesfeldt 1977, 139). This is inconvenient, to say the least, for
armies.

Upon the biological filter of norms which
inhibits killing, is superimposed a cultural filter of norms commanding killing
of the enemy. The biological filter of norms is not eradicated by this process
of self-indoctrination; since it is still there, it leads to a conflict of
norms which is felt as guilt, particularly when the encounter with the enemy
becomes a face-to-face one. (Eibl-Eibesfeldt 1977, 139)

Would it not make sense, for both the good of the
state and the psychological well-being of the soldier, to mute the biological
imperative not to kill? What happens next?
It is possible the treatment may not be reversible. In that case,
releasing “enhanced” ex-soldiers into the general population could put that
population at risk. If the alterations are heritable, it would mean that there
would be children born without the inhibition against killing. The logical
response would be never to release the soldiers, and/or to see to it they are
incapable of reproducing. On the other hand, if the moral reprogramming can be
reversed, a soldier may have to deal with the memory of what he or she was willing
to do.

This is speculation. By design, DARPA is in the
business of exploring far-out ideas that often don’t pan out (Weinberger 2006).
At the same time, this is the agency that laid the foundations for the modern
internet. Even if its original goals are not met, whatever is found is likely
to have significant effects, and some of these effects may be far different from
what program managers intend. It is useful to remember the connections between
classified research on LSD and other agents as “truth drugs” and the spread
of these chemicals into more general use. First adopters and test subjects may
find that new technologies meet their needs, even if those technologies fail to
meet the requirements of the researchers (Lee 1994).

Besides the military and hedonic motives, a final
driver in the adoption of NBIC and enhancement technologies will be economic.
“The incentives that drive private-sector innovation” are, in the words of one
observer, “real-time, unforgiving, and essentially Darwinian – survival of the
smartest.” Popular demand, and the
profit to be made in meeting that demand, may establish enhancement as a
“right,” at least for those with the wealth to get it, and “human nature being
what it is, improvement and enhancement become a product offering in the global
marketplace” (Canton 2005). In fact, this has already begun. Consider the
expansion of the pharmaceutical industry as it has defined new illnesses and
promoted “improvements” in the human condition. For several years this industry
has been among the most profitable.

Thus, there are at least two reasons to expect the
adoption of transhuman and NBIC technologies. First, the transcendence of past
limitations feels good. Second, these technologies will provide a comparative
advantage for those who adopt them.

Fox (2007) envisions laissez-faire
competition between families to accrue competitive advantage in the “new
eugenics.” Others predict that an “unregulated market would naturally create
disincentives for parents to have gen-natural children due to the competitive
disadvantage they would face and the medical costs of children with inherited
diseases. The gap between the haves and have-nots, Caucasians and minorities,
males and females, and able and disabled would widen” (Adams 2004, 62-63). Any
attempt to close these gaps through action by the state would lead to “an
unprecedented expansion of the welfare state as it sought to ensure a baseline
for genetically healthy humans.” In a democracy, “norm creep” (Adams 2004, 63) would
be a natural result as advantaged parents sought to exceed whatever floor the government
set, which in turn could lead to countervailing political pressure for the
government to raise the floor and guarantee a competitive “head start” to all
the recipients of its enhancement-welfare program.

At an international level, a competition to provide
enhancements could take a form similar to tax havens and weak regulatory zones
found today. For military and economic reasons, the danger that could come from
being left behind would prompt actors to match or exceed the programs of
potential rivals. The fear of being caught by surprise can be a powerful
motivation for research and development. A security dilemma is the logical
result.

Perhaps a “clash of genomes” will emerge as the tendency
for national styles in military technologies and strategies is reflected in the
choices of enhancements and techniques. Today, different countries have
distinct styles in their design of weapons (Cohen 2010, 143). Just as a
dictator would not design the same kind of “improvements” in his people as
those people would choose for themselves, a fundamentalist society would not encourage
the same “enhancements” as a liberal one.

More generally, different cultures emphasize varying
elements of our common humanity as being praiseworthy. They have different
notions of what it means to be human, and the responsibilities, if any, that
one human owes another. We should expect that “different cultures will define
human performance based on their social and political values” (Roco 2007, 78). Those
groups will have a capability to remake themselves to reflect those values. The
simplest of sorting techniques – to choose the sex of a baby – when coupled
with local cultures and state policies, has already altered prior demographic
balances, and with them the dimensions of future international and internal
conflict (Hudson and Den Boer, 2005). What would be the consequences of more
advanced techniques of directed evolution for global politics?
4. Security and conflict in a transhuman world

Even without modifying our basic humanity, current
technologies have altered the relationship between individuals and states.
Globalization “gives more power to individuals to influence both markets and
nation-states than at any other time in history” (Friedman 2002). In the words
of author William Gibson (1999), “the future is already here – it is just
unevenly distributed.” Sometimes the distribution is not to the benefit of states.
Even when the methods of empowerment are external,
like the World Wide Web or the system of transcontinental air travel, great
powers like the U.S. have already found themselves pushed to mirror the capabilities
and approach of superempowered individuals. While today’s superempowered
individuals are such by virtue of wealth or networks or personal skills, the
superempowered individual of tomorrow may be transhuman or posthuman. Technologies
will amplify the power of individuals to the point that a single person could
conceive of taking on the world – and winning.

Again, what we would have is a classic security
dilemma: each group or individual that could modify itself, uncertain of the
intention of other groups to take advantage of NBIC technologies for unilateral
gain, would have reason to act as if the worst might happen. Each might see
some value in selecting for greater cooperation, or greater empathy, or reduced
aggressiveness. But unless everyone can be trusted to make such modifications,
those who choose another path would have a competitive advantage. In a world of
sheep, the wolves rule. The wolves who already exist are unlikely to volunteer
to join the sheep.

Some may take it upon themselves to “tame” humanity.
But this effort to tame the human animal, besides being morally repugnant,
would fail to achieve its goal. The “zookeepers,” however enhanced, will remain
imperfect, and will likely be in competition with one another. Who or what will
restrain the elites? The situation is analogous to the competition among
sovereign states today, only far less orderly.

5. Security after a singularity

Some optimistic transhumanists take solace in the
prospect that new kinds of actors, far beyond human, will emerge to save the
day. Others see it as essential. The possibilities inherent in NBIC
technologies have led transhumanist philosopher Nick Bostrom (2007b) to conclude
there are four general futures for humanity: extinction, recurrent collapse,
plateau, and posthumanity. The rate and direction of change points to what John
von Neumann speculated would be “some essential singularity in the history of
the race beyond which human affairs, as we know them, could not continue” (cited
Bostrom 2007b). Today, futurists refer to “the singularity” as the “conjecture
that there will be a point in the future when the rate of technological
development becomes so rapid that the progress-curve becomes nearly vertical.”

Within a very brief time (months, days, or
even just hours), the world might be transformed almost beyond recognition.
This hypothetical point is referred to as the singularity. The most likely
cause of a singularity would be the creation of some form of rapidly
self-enhancing greater-than-human intelligence. (WTA 2008c)

This prospect has sometimes been referred to as “the
rapture of the nerds.” In 1965, statistician I. J. Good argued:

Let an ultraintelligent machine be defined
as a machine that can far surpass all the intellectual activities of any man
however clever. Since the design of machines is one of these intellectual
activities, an ultraintelligent machine could design even better machines;
there would then unquestionably be an “intelligence explosion,” and the
intelligence of man would be left far behind. Thus the first ultraintelligent
machine is the last invention that man need ever make… (Cited by Bostrom 2007)

Good expected this machine to be built before the end
of the twentieth century. Needless to say, ultraintelligence has proven more
difficult than he imagined. Yet progress, however slow, has been made. Author
and mathematician Vernor Vinge estimated in 1993 (cited by Bostrom 2007) that
“[w]ithin thirty years, we will have the technological means to create superhuman
intelligence. Shortly thereafter, the human era will be ended.”
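Good’s argument can be caricatured in a few lines of code. In the toy model below the growth constant is arbitrary; what matters is the shape of the output, flat for many generations and then nearly vertical, which is the “progress-curve” the futurists describe.

# A toy model of Good's "intelligence explosion": each generation of
# machine designs its successor, and design skill scales with current
# intelligence. The constant k is an arbitrary illustration.
intelligence = 1.0   # 1.0 = baseline human-equivalent design ability
k = 0.1              # assumed payoff of applying intelligence to redesign

for generation in range(1, 16):
    intelligence += k * intelligence ** 2   # smarter designers improve faster
    print(f"generation {generation:2d}: intelligence = {intelligence:10.1f}")
# The curve is nearly flat for ten generations, then turns nearly
# vertical: growth compounding on its own products.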
The world after a singularity event, if it were to occur, would almost certainly “be geo-politically destabilized” (Evans 2007,
162). With innovation building on innovation, first-adopters would have a
significant advantage over others – so long as they could maintain some
semblance of control over their machines and progeny. If they cannot maintain that control, the new
actors will accrue the advantage.

What would a post-singularity security competition
look like? One way to look at it is as a logical extension of the evolving
“generations” of war. In this analysis, as popularized by a rising generation
of strategic analysts, first-generation war involved line-and-column tactics
between soldiers of the state. The second generation applied machines and
indirect fire, third-generation war involved industrialized mass armies, and
the fourth generation involves political-economic struggles among networks. If past
war has centered on an enemy's physical strength, and fourth-generation war on
his moral strength, a fifth generation of war might focus on breaking his intellectual
strength. It would require even more deception, and out-thinking of an opponent,
than has been seen before. It would be most successful, in fact, if the target
did not even realize it was taking place.

Third generation war, as seen in World War II, relied
on industrial and political mobilization. In fourth generation war, the sort
that involved Mao or Ho, political mobilization was critical. In fifth
generation war, if political mobilization tips off your enemy, it is worse than
useless. In the fifth generation, (1) “the people do not have to want to be on
the fighter’s side,” (2) “the forces the fighter is using do not have to want
to be on the fighter’s side,” and (3) “your enemy must not feel that he is not
on your side” (Abbott 2005). It would be a kind of struggle that in many ways
transcends our normal conceptions of conflict.
In a post-singularity, fifth-generation world, there would always be the
possibility that the economic collapse or natural disaster was not the result
of chance, but of design. There would always be the possibility that internal
social changes are being manipulated by an adversary who can plan several moves
ahead, using your own systems against you. The systems themselves, in the form
of intelligences more advanced than we can match, could be the enemy. Or it might
be nothing more than paranoid fantasies. The greatest problem that individuals
and authorities might have to deal with may be that one will never be sure that
war is not already under way. Just as some intelligence analysts cited the rule
that “nothing is found that is successfully hidden” – leading to reports of
missile gaps and Iraqi WMD – a successful fifth generation war would be one that an
opponent never even realized he had lost.

Is it the end of politics if some or all actors are
not human? Is it the end of “international” politics when “nations” make
and remake themselves? In theory, transhuman agents may be less of a problem
than they first appear to be. Humans are already unequal in many respects. In
fact, a liberal society does not require identical power or other attributes, only
equality before the law. One proponent of “liberation biology” argues that political equality has never rested on the
facts of human biology. In prior centuries, when humans were all
"naturals," tyranny, slavery, and purdah were common social and
political arrangements. In fact, political liberalism is already the answer to The crowning achievement of the
Enlightenment is the principle of tolerance, of putting up with people who look
differently, talk differently, worship differently, and live differently than
we do. In the future, our descendants may not all be natural homo sapiens, but
they will still be moral beings who can be held accountable for their
actions. There is no reason to think that the same liberal political and moral
principles that apply to diverse human beings today wouldn't apply to relations
among future humans and posthumans. (Bailey 2004)

Bostrom, the philosophical dean of contemporary
transhumanism, emphasizes that enhanced humans would retain their moral agency
(2005a). This parallels his belief that individuals “should have broad
discretion over which of these technologies to apply to themselves” (Bostrom
2005b, 202). But even if this were to be true within a liberal community, the
world is not yet such a community. Although “NBIC enhancements in human
performance will take us closer to abilities reserved for gods in most of our
traditional stories” (Gorman 2005), the gods of myth were not without conflict,
and often it was the humans who paid the price.

Perhaps it will be possible to establish superordinate
goals that promote the development and diffusion of transhuman and NBIC
technologies without the threat of a common enemy or the grasping for temporary
advantage. Perhaps exploration, or a threat of catastrophic climate change,
will encourage the development of a just and sustainable global civilization.
Perhaps the benefits of local nanotechnology can be spread far enough, fast
enough, to make competition over resources a waste of effort. Perhaps we will
evolve past the point of violence and zero-sum games.

Perhaps, but not likely. Just as we now carry within us
the traces of our animal ancestors, posthumans – whether biological, or
electronic, or some mix of the two – will carry traces of us within them. The
international system as it is will shape the ways in which NBIC technologies
are developed and applied, even as they will reshape that system. There is no
reason to assume an end to politics, or to concerns with security. Instead, the
threats will be more subtle, strategies more complex, and outcomes less
definitive.
6. Human security and the posthuman future

Human security is, at its core, the shift in
perspective from the state to the individual as the proper subject to secure. The
United Nations Development Program’s Human
Development Report (1994) subdivided human security into seven threat
areas: economic security, food security, health security, environmental
security, personal security, community security, and political security.
Advances in NBIC technologies will have a substantial impact in each of these
areas. In addition, these technologies raise the prospect of a new dimension to
human security: the protection of human identity and dignity in a posthuman
world.

In the short run, economic security is threatened by
the continued advance of NBIC technologies. Inequities of class and region and
race may well be made worse by the uneven distribution of these tools. Since
the powerful and wealthy will have first access, competitive and hedonistic
pressures can be expected to increase the gap between haves and have-nots.

In the long run, a wider distribution of these
technologies raises the prospect of mass political action to “raise the floor”
of human potential, and this could also be encouraged by competition among states,
firms, and other groups who see in the improvement of their “human capital” the
potential for enhanced power. One risk is that elites may wish to maintain
their position by keeping the best enhancements for themselves. Another is that
competition among groups will encourage the powerful to impose a “tracking” of
persons into differentiated and over-specialized “species” within groups. Physical
and social division of labor would be matched by, and reinforced by, a genetic
division of labor.

Food security may well be improved by breakthroughs in
NBIC technologies. Not only will the application of genetic engineering and
nanotechnology raise crop yields, but rations developed for the battlefield will
have applications after natural disasters and for communities suffering from
malnutrition. Modifications of the human body, based on “supersoldier”
technology, may, if extended to others, permit individuals to do more with less.

Health security is transformed by the potential of
NBIC technologies. Rather than aiming for “wellness,” the new goal could be to
be “better than well.” This would be a constantly rising standard for
achievement, and given the differences among cultures it might not be the same
standard for all. Arguments about how much, and what kinds, of enhancement are
enough – and what kinds are a “right” of personhood – will further complicate
efforts to provide “adequate” health care for all.

Environmental security is threatened directly by the
ability of NBIC technologies, especially those involving self-replication, to
flood the environment with nanomachines and new organisms. While the “gray goo”
scenario is unlikely, it is likely that the next generation will produce viruses
and bio-machines that will infect humans and the environment in which they
operate, much as computer viruses do today.

Widespread application of NBIC technologies will have
a direct effect on community security, but the direction of the effect depends
on the choices made. Widespread modification of humans by humans would lead to
a blurring of traditional lines of ethnicity, both for good and for ill. On the
other hand, the ability of cultures to recreate themselves to achieve their
local conception of “better,” coupled with competitive pressures and the
potential for speciation, open new horizons for conflict among groups who might
not recognize one another as human.

Personal and political security are at risk from the
potential for abuse of NBIC technologies by states and other organizations. Left
in the control of elites, these technologies could be used to create dystopian
societies to rival anything in science fiction. In fact, the entire notion of
personhood could be at risk. If these technologies can be distributed and
regulated for the good of the community, however, they hold the promise of making
a far better world. The diversity of humanity could be recognized as a value to
be protected, even as people learn to see beyond external forms to the humanity
within. Yet even under the best regulatory regime, the pressures of competition
and the desire of each individual to improve set up the potential for a dilemma
that affects not merely the interaction and security of states, but the lives
and liberty of each person. This is not a danger that can be edited out of the
human genome, for it is inherent in the nature of competitive interaction,
coupled with the expected comparative advantage of those who choose to take
advantage of the new technologies.

Here, then, is the conundrum: in our attempt to remake
ourselves, we will not entirely leave our old selves behind, any more than we
have escaped our animal past, nor will we escape the pressures inherent in
social and political systems. The balance of factors argues that change is
coming. Evolution never ends, even when it is to some extent self-directed. Yet
like so many other technologies, the tools of evolution-by-design will not
solve the most basic problems of human or global security. There are things we
cannot or will not leave behind. Trapped by the dilemmas inherent in security
and economic competition, political and security issues will continue to
challenge our descendants, no matter what forms or enhancements they possess.

Appendix: A sample of applications of NBIC technology
for warfighting, with projections for when each will be achieved.2

2010: Virtual-reality battlefields and war-gaming simulations are sufficiently realistic to revolutionize combat training.

2015: Human biochemistry will be modified to allow soldiers and pilots greater endurance for sleep deprivation, enhanced survivability from injury, and enhanced physical and psychological performance.

2020: Uninhabited combat vehicles. Adaptable smart materials. Microfabricated sensors to warn of chemical, biological, radiological, or explosive devices. Effective measures against biological, chemical, radiological, and nuclear attack.

2025: The human body will be more durable, more energetic, easier to repair, and more resistant to stress, biological weapons, and aging.

2035: Nano-enabled sensors will be implanted inside the body to monitor health.

2045: Warfighters will be able to control weapons and combat vehicles through thought, perhaps including the ability to react before the thought is fully formed.

Notes

1. E. O. Wilson, Consilience: The Unity of Knowledge (1998).

2. These
estimates are based on the median of the judgments of participants and authors
in the first three National Science Foundation conferences on NBIC
technologies. They are extracted from a list of 76 applications found in
Appendix 1 of Bainbridge and Roco (2005).

References

Abbott, Dan. 2005. Dreaming the fifth generation war. Tdaxp (http://tdaxp.blogspirit.com/archive/2005/07/20/dreaming-5th-generation-war.html), 20 July 2005, accessed 9 March 2008.

Agar, Nicholas. 2007. Whereto transhumanism? The literature reaches a critical mass. Hastings Center Report 37, 3 (May-June): 12-17.

Anderson, Brian C. 2002. Identity crisis. National Review 54, 10 (3 June): 41-45.

Auer, Catherine. 2004. Super snacks for super soldiers. Bulletin of the Atomic Scientists 60, 3 (May-June): 8.

Bailey, Ronald. 2004. Transhumanism: The most dangerous idea? Reason Online (http://www.reason.com/news/show/34867.html), 24 August, accessed 26 February 2008.

Bainbridge, William Sims, and Mihail C. Roco, ed. 2005. Managing nano-bio-info-cogno innovations: Converging technologies in society.

Baylis, John, James J. Wirtz, and Colin S. Gray, ed. 2010. Strategy in the contemporary world.

Bendle, Mervyn F. 2002. Teleportation, cyborgs, and posthuman ideology. Social Semiotics 12, 1 (April): 45-62.

Bigelow, R. S. 1970. The dawn warriors: Man’s evolution toward peace.

Bostrom, Nick. 2005a. The transhumanist dream (letter to the editor). Foreign Policy (January): 4.

Bostrom, Nick. 2005b. In defense of posthuman dignity. Bioethics 19, 3 (June): 202-214.

Canton, James. 2005. NBIC convergent technologies and the innovation economy: Challenges and opportunities for the 21st century. In Bainbridge and Roco 2005.

Cebrowski, Arthur K., and Thomas P. M. Barnett. 2003. The American way of war. Proceedings (January): 42-43.

Cohen, Eliot. 2010. Technology and warfare. In Baylis, Wirtz, and Gray 2010: 141-60.

Dacey. 2004.

Darnovsky, Marcy. 2010. “Moral questions of an altogether different kind”: Progressive politics in the biotech age. Harvard Law and Policy Review 4 (Winter): 99-119.

Dvorsky, George. 2004. Better living through transhumanism. Humanist 64, 3 (May): 7-10.

Eibl-Eibesfeldt, Irenäus. 1977. Evolution of destructive aggression. Aggressive Behavior 3, 2: 127-44.

Elliott, Carl. 2003. Humanity 2.0. Wilson Quarterly 27, 4 (Autumn): 13-21.

Evans, Woody. 2007. Singularity warfare: A bibliometric survey of militarized transhumanism. Journal of Evolution and Technology 16, 1 (June): 161-65. http://www.jetpress.org/v16/evans.html

Fox, Dov. 2007. Silver spoons and golden genes: Genetic engineering and the egalitarian ethos. American Journal of Law and Medicine 33: 567-622.

Fransman, Martin. 1994. Biotechnology: Generation, diffusion and policy. In Technology and innovation in the international economy, ed. Charles Cooper, 41-147.

Friedman, Thomas L. 2002. Excerpt from the author’s Longitudes and attitudes. (http://www.thomaslfriedman.com/longitudesprologue.htm), accessed 9 March 2008.

Fukuyama, Francis. 2002. Our posthuman future: Consequences of the biotechnology revolution.

Fukuyama, Francis. 2004. Transhumanism. Foreign Policy 144 (September): 42-43.

Garreau, Joel. 2005. Perfecting the human. Fortune 151, 11 (30 May): 101-108.

Gibson, William. 1999. The science of science fiction. Talk of the Nation radio program (30 November).

Girifalco, Louis A. 1991. Dynamics of technological change.

Goldblatt, Michael. 2002. DARPA’s programs in enhancing human performance. In Roco and Bainbridge 2002: 297-300.

Habermas, Jurgen. 2003. The future of human nature. Trans. Hella Beister and William Rehg.

Hudson, Valerie M., and Andrea M. Den Boer. 2005. Bare branches: The security implications of Asia’s surplus male population.

Jervis, Robert. 1978. Cooperation under the security dilemma. World Politics 30, 2 (January): 167-214.

Kurzweil, Raymond. 1999. The age of spiritual machines: When computers exceed human intelligence.

Lee, Martin A., and Bruce Shlain. 1985. Acid dreams: The complete social history of LSD.

McKibben, Bill. 2003. Enough: Staying human in an engineered age.

Michelson, Evan S. 2005. Measuring the merger: Examining the onset of emerging technologies. In Bainbridge and Roco 2005.

Pattinson, Shaun D., and Timothy Caulfield. 2004. Variations and voids: The regulation of human cloning around the world. BMC Medical Ethics 5:9.

President’s Council on Bioethics. 2003. Beyond therapy: Biotechnology and the pursuit of happiness. Available at www.bioethics.gov/reports.

Robb, John. 2007. Brave new war: The next stage of terrorism and the end of globalization.

Robinett, Warren. 2002. The consequences of fully understanding the brain. In Roco and Bainbridge 2002.

Roco, Mihail C., and William Sims Bainbridge, ed. 2002. Converging technologies for improving human performance. Arlington, VA: NSF.

Schumpeter, Joseph A. 2006. Capitalism, socialism, and democracy.

Sententia, Wrye. 2005. Cognitive enhancement and the neuroethics of memory drugs. In Bainbridge and Roco 2005.

United Nations Development Program. 1994. Human development report.

Verdoux, Philippe. 2009. Transhumanism, progress, and the future. Journal of Evolution and Technology 20, 2 (December): 49-69. http://jetpress.org/v20/verdoux.htm

Waltz, Kenneth N. 1979. Theory of international politics.

World Transhumanist Association. 2008a. Transhumanist FAQ. http://www.transhumanism.org/index.php/WTA/faq21/46/, accessed 16 February 2008.

World Transhumanist Association. 2008b. Transhumanist FAQ 1.2. http://www.transhumanism.org/index.php/WTA/faq21/56/, accessed 16 February 2008.

World Transhumanist Association. 2008c. Transhumanist FAQ 2.7. http://www.transhumanism.org/index.php/WTA/faq21/64/, accessed 16 February 2008.