Book review: Mark Coeckelbergh’s Human Being @ Risk: Enhancement, Technology, and the Evaluation of Vulnerability Transformations

Russell Blackford
School of Humanities and Social Science, University of Newcastle, NSW
Editor-in-Chief, Journal of Evolution and Technology

Journal of Evolution and Technology – Vol. 23 Issue 1 – December 2013 – pgs 65-68

Many of this journal’s readers will be aware of Mark Coeckelbergh’s longstanding
interest in the ethics and politics of enhancement technologies, not least
through his contribution (2011) to our recent Minds and Machines special issue,
co-edited by Linda MacDonald Glenn and myself.

In
his new book from Springer, Coeckelbergh focuses on the political significance
of emerging technologies that carry the promise of transforming human
capacities, and particularly our vulnerability to such evils as disease, age-related
decline, and death. He is, however, not engaged in straightforward advocacy for
a transhumanist position – welcoming
dramatic improvements to human capacities – or, if it comes to that, for a
technoconservative one that rejects alterations to our “given” limits. Instead,
he focuses on the concept of vulnerability and on the question of why we might
wish to alter the vulnerabilities that we’ve inherited through the blind
process of biological evolution.

Coeckelbergh
argues that vulnerability (of one kind or another) is an inherent aspect of
human life that could not be eliminated by any conceivable transformation of
our capacities. It is, he suggests, not a question of using technological means
to eliminate human vulnerabilities, or even reduce the balance of them, since
any technological alterations of ourselves or our circumstances inevitably
bring new problems and make us vulnerable in new ways. Rather, our choices must
be more subtle. According to Coeckelbergh’s approach, we have a limited ability
to employ technology so as to change the ways
in which we are vulnerable to circumstances and events. Although the new
vulnerabilities that emerge when technology alters our circumstances are not
entirely predictable, we have some ability to choose which existing vulnerabilities
we might strive to eliminate or reduce, and which new kinds of vulnerability we
are thereby likely to take on.

This does,
of course, raise difficult questions about how to measure overall vulnerability.
If there is an objective metric for this, then it seems as if overall
vulnerability should be the kind of thing that we can, in fact, reduce through
planning and action. And if that were
so, why would we not want to take measures to reduce our overall vulnerability?
After all, we and our predecessors have gone to great lengths to prevent, treat,
and cure diseases, to lower the probability of accidents (for example, on
the road or from industrial machinery), and to search for alternatives to war. All
of these look like steps aimed at reducing various risks to human life and
health, and correlatively at making us less vulnerable to certain kinds of
harm-causing events. If such actions are futile in reducing overall human vulnerability,
why bother engaging in them at all? Doesn’t our conduct, in that event, suggest
that we’re deluded and that we’re wasting our energies?

Such a
radically pessimistic conclusion sounds like too much – too shocking, too demoralizing, too absurd and contrary to received
wisdom – for us to accept.

Yet, Coeckelbergh
develops an impressive case that all our technological and social measures
create new sources of vulnerability and that there is no uncontroversial way of
measuring vulnerabilities against each other. There are, as the title of the
book acknowledges, vulnerability transformations, but are there unambiguous
vulnerability reductions?

Part of
the problem, as is brought out frequently in Human Being @ Risk, is that our desires and expectations change as our
powers change. Even if we obtain greater control over the natural world, we
become vulnerable to disappointment if something, perhaps something unforeseen,
goes wrong in the way we exert that control. At the same time, there seems to
be no limit to how our desires can shift. As our current desires and needs are
met, we may wish for “higher” kinds of pleasure or perhaps for competitive
success in the new environments created by our technologies and social systems.
As our environments and our own powers are transformed, we can find ourselves
with new ways to fail in competition, and thus with new avenues to
disappointment, frustration, and shame – not to mention new reasons to feel anxious about our efforts and our future
prospects. Moreover, emergent technologies and systems tend to bring their
own dangers to life and health. Taking all this into account, it can be problematic
to claim that we are ever, all things considered, better off (as individuals,
as societies, or as a species) than we were.

Though
Coeckelbergh does not make the point in quite this way, our preferences adapt
so that we tend to think of ourselves as fortunate when we compare ourselves
with people from earlier generations. Some of the things that our predecessors
had to put up with would now dismay, daunt, and appall us, though they were,
perhaps, not important sources of anxiety at the time. Consider how you would
cope with the various unsavory sanitation arrangements that even the aristocracy
of Europe dealt with through the Middle Ages and deep into the modern era. Or
rather, don’t even try to imagine the experience in detail. Moreover, changing
human preferences give us new sources of anxiety that were not even on our
ancestors’ radar. In the end, it may be undecidable whether we are better off overall.

At times, Coeckelbergh
appears to overreach and suggest that our vulnerability might even be increased
by the “vulnerability strategies” that we adopt: “soon we discover new vulnerabilities,
or, rather, we experience that our vulnerability has not diminished but is
merely modified, transformed – if not
increased” (81).

Can this
be correct? Well, perhaps we could become more vulnerable, on balance, to some
specific kind of harmful event or phenomenon (perhaps a particular disease?). We
might even find out that we are worse off, overall, in our level of
vulnerability when judged by our own
standards; if that can be the case, however, why couldn’t we sometimes be better
off when judged by those same standards?

If we
think, therefore, that vulnerability can be measured objectively to the extent
of intelligibly saying that it has increased in an overall sense, then why
can’t we take steps to reduce it
overall? Coeckelbergh would seemingly do better to stick to his more usual
position that we cannot uncontroversially measure net gain or loss in the
extent to which we are vulnerable, since we are vulnerable to such a vast range
of harms, offenses, frustrated desires, and so on, with no clear procedure for weighing
them against each other. Thus, two reasonable people might disagree about
whether some innovation has reduced overall human vulnerability (or the
vulnerability of a specific individual or group affected by the innovation) or
increased it. There will be no objectively binding answer as to who is correct,
since two reasonable people can place different weights on different kinds of
harms, frustrations, disappointments, etc., as well as differing in such things
as aversion to risk. If that’s so, whether or not we consider ourselves better
off might be a matter, in part, of which vulnerabilities we prefer to put up
with if needed. There is no metric that could be employed by a neutral
observer.

Even this
conclusion may seem a difficult one to accept when we think of
objectively measurable increases in, say, human life expectancy. Still, it is
unclear how we could measure increasing or decreasing vulnerability, taken
overall, without assuming at least some values that might be rejected or contested
by an outside observer who knows the same facts about the situation but works with
a different value system.

In that
case, Coeckelbergh seems to be correct that we can measure our vulnerability to
harms, frustrations, disappointments, and the rest only by using standards that
are informed by our actual, current preferences. If so, transhumanists,
technoconservatives, and others who care deeply about the human future need to
have rich conversations about the likely effects of technological and social
innovations, and about what we really want from them – assuming, of course, that there is enough common ground to justify
using such a slippery word as “we.”

This approach may itself seem frustrating: it would be
nice if we (that word again, and not for the last time!) could see clearly what
really amounts to an improvement of the human condition and what amounts to the
opposite. It would be nice if we could measure this in some way that could be
applied objectively: that is, in such a way that the same results would be
reached by all rational and sufficiently well-informed beings, irrespective of
their particular desires or values. But this seems to be impossible.

It may be possible to imagine situations so bad that
they contain nothing that any being remotely like any of us could possibly find
attractive or redeeming (compare Harris 2010, 38–41). However, we’re
faced with more subtle questions than that. Once we start thinking of desirable
kinds of future societies there seems to be enormous scope for differing and
deeply contestable ways to measure their goodness or badness. This more general
insight subsumes Coeckelbergh’s point about trying to measure the overall extent
to which future citizens would experience vulnerability or be at risk.

Thus, Coeckelbergh drives home an important point for
our debates about the human future. I personally found it unhelpful at times
when his argument relied heavily on concepts borrowed from continental
philosophers who’ve been concerned with more general issues about the human
condition. For anyone who is not well-grounded in this body of literature, some
of the discussion will seem more like a barrier than an aid to understanding. Further,
the essential argument that I’ve sketched, and which Coeckelbergh elaborates in
much more detail, just doesn’t seem to need this scaffolding.

Nonetheless, many readers probably will, in fact, find
the comparisons with Heidegger, Sartre, Levinas, and so on helpful as they
orient themselves to the direction of Coeckelbergh’s argument in Human Being @ Risk. Where you start from
and what you find familiar will influence how you experience this aspect of the
book. I can only suggest to readers who find it alienating that they show
patience. The core arguments do not depend on any particular familiarity with
Heidegger (who is most often referred to and discussed) or other thinkers from
the continental tradition of philosophy.

Be warned:
Coeckelbergh is not concerned to identify the likely consequences of particular
technologies, although he makes clear that he is thinking mainly of information
technology and genetic technology in their various forms and guises. His book is
not, for example, an analysis of the utilitarian implications of performance
enhancing drugs, reproductive cloning, mind uploading, or advanced developments
in the science of machine intelligence. Coeckelbergh’s task is to ask deeper
questions about what sorts of consequences we want (or perhaps, in some sense,
should want) in the first place. Along the way, there are some fine moments in
his book, such as a thoughtful discussion of the internal politics developing
in cyberspace and a perceptive reading of Michel Houellebecq’s transhumanist
(or is it really?) novel, The Possibility
of an Island (2005).

Human Being @ Risk identifies important choices that
we must debate as we imagine and (to a limited extent) plan the future of
humanity. It raises issues that are fundamental to ongoing thinking about how
to better the human condition. On that basis, I hope to see widespread
discussion of its central arguments and what their implications might be for
the human future. There is much here to discuss.

Note

Unless otherwise specified, all page references in the text are to Coeckelbergh (2013).

References

Coeckelbergh, Mark. 2011. Vulnerable cyborgs: Learning to live with our dragons. Journal of Evolution and Technology 22(1): 1–9.

Coeckelbergh, Mark. 2013. Human being @ risk: Enhancement, technology, and the evaluation of vulnerability transformations. Dordrecht: Springer.

Harris, Sam. 2010. The moral landscape: How science can determine human values. New York: Free Press.