Transhumanism, Progress and the Future

Philippe Verdoux
philippe.verdoux@gmail.com

Journal of Evolution and Technology – Vol. 20 Issue 2 – December 2009 – pgs 49-69
http://jetpress.org/v20/verdoux.htm

Abstract

This paper argues
that one can advocate a moral imperative to pursue enhancement technologies
while at the same time rejecting the historical reality of progress and holding
a pessimistic view of the future. The first half of the paper puts forth
several arguments for why progress is illusory and why one has good reason to
be pessimistic about the future of humanity (and posthumanity). The second half
then argues that this is entirely consistent with also championing the futurological vision of transhumanism. The
claim is that, relative to the alternatives proposed, this vision actually
offers the safest route into the future,
even if it also entails an increase in the probability of self-annihilation.

1. Transhumanism and progress

Transhumanism is a recent
philosophical and cultural movement that has both descriptive and normative
components: (1) the descriptive claim is that current and anticipated future
technologies will make it possible to
radically alter both our world and persons, not just by “enhancing” the
capacities that we already have but also by adding entirely new capacities not
previously had.1 (2) The normative claim is that we ought to do what we can to foment and
accelerate the creation of such “enhancement” technologies, thereby converting
the possibility of a “posthuman” future into an actuality. A primary focus of the
present paper is the notion of progress
that one often finds wrapped around the theoretical and programmatic core of
transhumanist philosophy.2 For example, the World Transhumanist
Association (WTA), founded by Nick Bostrom and David Pearce in 1998, lists
“technological progress” as one of four fundamental conditions necessary for
realizing the transhumanist project (Bostrom 2005a); and the extropian Max More
specifies “perpetual progress” as one of seven basic “Principles of Extropy”
(More 1998). Similarly, the singularitarian Ray Kurzweil3 situates
the idea of progress center-stage in his theory of cosmic history, which
identifies exactly six historical epochs through which the universe develops in
both a linear and exponential fashion (i.e., through a fixed sequence of stages
according to the “law of accelerating returns”). As these examples suggest, the
transhumanist literature is saturated with talk of a kind of technology-driven
progress, one ultimately leading to a posthuman future populated by
superintelligent AI systems and biotechnological hybrids.4 Focusing on the progressionism championed by most
transhumanists,5 this paper puts forth a mosaic of arguments in
support of a peculiarly anti-progressionist and pessimistic version of
transhumanism. This variant is based on two distinct theses: first, it argues
that the progressionist conception of history as “a record of improvement in
the conditions of human life” (Mazlish and Marx 1998) is highly problematic,
both empirically and methodologically. On the one hand, not only does the
evidence here reviewed – both futurological and anthropological –
not provide epistemic support for progressionism
(according to these data, history actually appears to be regressive in
many respects) but, on the other, the historiographic method often employed by
transhumanists in characterizing history is flawed and tendentious. The
literature on transhumanism, in contrast, often manifests a strong proclivity
for discussing and thinking about technological progress in a highly uncritical
manner. For one, there has been no attempt (that I know about) to provide a
constitutive analysis of the concept
(What exactly does progress mean?),
and in addition no transhumanist has yet offered a robust empirical argument
for the historical reality of progress.6 As Robert Nisbet observes
in his History of the Idea of Progress,
the existence of absolute progress was assumed as an “axiom or dogma” by most
progressionist theorists during and after the Enlightenment – that is, “the idea was as self-evident as
anything in Euclid” (Nisbet 1994, 7). Given transhumanism’s intellectual
continuity with this Enlightenment tradition (Bostrom 2005b), it is thus no
surprise to find that most transhumanists today similarly accept progress as a
“central dogma” of their technocentric worldviews. For empirical and
methodological reasons, I argue that this is a serious problem. But one need not champion
the triumphant “march of progress” conception of history to endorse the core descriptive and normative claims of
transhumanism. This leads to the second thesis of the present paper: despite
the failure of technology to bring about absolute progress throughout human
history, the “futurological program” of transhumanism still provides,
comparatively speaking, the best road map for how we humans ought to
navigate the future. The idea here is that, relative to the alternative maps,
programs and prescriptions that have been proposed by futurist policy makers,
including broad relinquishment, the steady-as-she-goes option (Walker 2009) and
the comprehensive relinquishment route of anarcho-primitivism, the
transhumanist imperative to both world-engineer and person-engineer actually
offers the safest route. Thus, my position makes explicit that one can
be an antirealist about progress, adopt a pessimistic view of technology and
its (negative) influence on the common existential plight of Earth-originating
life, and still endorse transhumanism
(that is, as defined above). This position is, in fact,
suggested by the work of several notable transhumanists, such as Mark Walker
(2009) and Nick Bostrom, who recently suggested that transhumanists eschew
“progress” for a more axiologically neutral term like “technological
development.” Why? Because “it
is far from a conceptual truth that expansion of technological
capabilities makes things go better. [And] even if empirically we find that
such an association has held in the past (no doubt with many big exceptions),
we should not uncritically assume that the association will always continue to
hold” (Bostrom 2009; cf. Bostrom 2005a).7 I attempt to formalize the resultant version of
transhumanism in this paper – a pessimistic and anti-progressionist position
that I call (for lack of a better term) rational
capitulationism.8

2. Three anti-progressionist arguments

The aim of this section is
to convince the reader that progressionism is highly problematic: not only do
the futurological and anthropological data strongly suggest that progress is not
an historically real phenomenon, but the progressionist conception of
technology as triumphantly solving the many problems impeding human well-being
is typically based on a flawed and tendentious historiographic method. I
conclude that transhumanism ought to eviscerate the notion of absolute progress
from its philosophical body. The result is a more robust philosophy of
technology and orientation towards the future, one that maintains the moral imperative to person-engineer using the advanced technologies of the genetics, nanotechnology and robotics (GNR) revolution while recognizing that our worsening existential plight is
primarily the result of our technological activities. I first enunciate a
futurological argument, then an historical one, and finally close this section
with a review of the pertinent data from anthropology.

Futurological argument: Bostrom defines “existential risk” (a
term of his coinage) as “one where an adverse outcome would either annihilate
Earth-originating intelligent life or permanently and drastically curtail its
potential” (Bostrom 2002). With respect to the numerical dimension, Bostrom enumerates a
total of 23 categories of mostly technogenic existential risks that have
emerged or are expected to emerge within the next few decades, as the GNR
revolution unfolds. Such categories include the misuse of nanotechnology
(either through error or terror), “unfriendly” AI systems,
genetically engineered pathogens capable of wiping out Homo sapiens, and four “catch-all” categories labeled “something
unforeseen.” Now, (i) if Bostrom is correct in counting the categories of risks
today as 23 and those prior to 1945 as 1 or 2, and (ii) if we take the relevant
increment of time to be 100 years (between 1945 and 2045, when Kurzweil
predicts the GNR revolution will culminate),11 then the following
proposition follows: in only 100 years,
human technological activity will have resulted in a 12- to 23-fold increase in
the number of existential risk categories.12
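The arithmetic is straightforward; taking Bostrom’s counts at face value, the fold increase is simply the ratio of the post-1945 to pre-1945 category counts:

\[ \frac{23}{2} \approx 12 \qquad \text{and} \qquad \frac{23}{1} = 23. \]

This is quite an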
extraordinary fact, even if technology has succeeded in simultaneously
mitigating smaller-scale risks (Bostrom and Cirkovic 2008, 27). The reality is
that there are more ways for Earth-originating intelligent life to terminate
today – that is, for our species to self-immolate – than there were for any
living species to go extinct in the past 3.5 billion years.

That said, consider the second dimension of
existential risk assessment, namely that of probability. This particular
dimension poses special problems, since probability estimates of existential
risk scenarios are necessarily subjective
in nature. (It is of course true “by definition” that no existential risk has
yet occurred, or else we wouldn’t be here to worry about them.) Nonetheless,
one can reasonably assume that as the number
of existential risks increases, so will the likelihood
of a risk actualization event, although the relation here is one of contingency
rather than necessity.13 If this assumption is sound, it follows
that the probability of an existential risk happening has, along with the
numerical growth of existential risk types and tokens, also significantly increased within the centennial increment
specified above. Again, this is quite an extraordinary fact – one worth taking
seriously when thinking about technology and progress.
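The underlying assumption can be given a simple probabilistic gloss. On the admittedly idealized assumption that the n risk categories are roughly independent, each carrying some small probability p_i of actualization over a given period, the probability that at least one is actualized is

\[ P(\text{at least one actualization}) = 1 - \prod_{i=1}^{n} (1 - p_i), \]

a quantity that is non-decreasing in n: each new category with p_i > 0 pushes it upward, though by how much depends contingently on the magnitudes of the p_i – hence contingency rather than necessity.

A more robust argument for the very same conclusion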
can be formulated by taking into account the probability estimates of several
authorities on the matter. As I attempt to show, these data suggest a
nontrivial rising trend in the
probability that an existential risk will be actualized in the near future –
especially as increasingly powerful GNR technologies, exhibiting the
characteristic-of-technical-artifacts property of dual usability, are developed. To begin, as Bostrom notes, the probability of an asteroid or comet impact (the only noteworthy pre-1945, non-anthropogenic existential risk) is negligible.

The trend that emerges from the above analysis is
enough to make the futurologist – primed by Kurzweil’s singularitarian extrapolations
of observed historical trends – wonder about the possibility of what might be
called an existential risk singularity:

Premise 1. The gravest existential risks facing present and future (post)humanity derive most significantly from technologies of the GNR revolution.

Premise 2. The development of GNR technologies is accelerating at an exponential rate (according to transhumanists).16

Conclusion. The gravest existential risks facing present and future (post)humanity are also growing at an exponential rate.
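To make the conclusion vivid, one can sketch a toy model (my own gloss, not a formula found in the transhumanist literature): if risk categories are by-products of GNR capability (Premise 1), and capability doubles every \tau years (Premise 2), then the number of risk categories R(t) inherits the same doubling behavior,

\[ R(t) \approx R_0 \cdot 2^{(t - t_0)/\tau}, \]

where R_0 is the category count at some baseline year t_0. Nothing hangs on the particular constants; the point is only that exponential inputs yield exponential risk outputs.

This is, of course, not to ignore the fact that the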
development of defense technologies will most likely accelerate, too, but it is to point out the fact, often
overlooked in the transhumanist literature, that new existential risk types and
tokens will likely be created at an extraordinary rate. After all, the GNR
revolution is, so to speak, the parent of such risk progeny. Consider
Kurzweil’s own observation that the benefits
of the GNR revolution are undergoing “an exponential expansion” (Kurzweil 2005,
396) – so why not the dangers as well?
Both benefits and dangers are indeed “deeply intertwined” (as Kurzweil puts
it), since both have their origin in the dual
usability of these neoteric artifacts. If the potential of one use
increases, then so does the other: benefit and danger are twin siblings, growing up together in a family of “promise and
peril.”

Historical argument: I should begin by pointing out that my
intended focus is less on history per se
and more on (what I claim are) a few bad habits that transhumanists get into
when characterizing history in progressionistic terms. In other words, I make a
more historiographic than historical point. Integral to this point
is the articulation of an “error theory” that has as its explanandum the
“progressionist illusion” that history is
in fact a record of improvement. The explanation that I give points to the
manner in which progressionists in general, and transhumanists in particular, present the history of technological
development, which I show is tendentiously asymmetrical in its focus on
technology’s problem-solving capabilities (although this is generally an
unconscious bias). My first claim is that, using a kind of medical
metaphor, progressionists very often focus exclusively on the treatment of problems impeding the
acquisition of human well-being rather than on their etiology.17 By fixating on only one half of the story –
that of treating or solving the well-being-impeding problems
of history – a pattern of technology-driven progress does indeed emerge from
the historical mist. In other words, from this treatment-oriented historiography the past takes the form of a
series of problem-solving episodes in which unsolved
but technologically solvable problems
are given increasingly sophisticated technological solutions. And the faster
the wheels of innovation turn, the more progressive history appears, since
progress is intuitively measured in terms of the number of problems solved in a
given increment of time.
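This intuitive measure can be stated explicitly – a formalization of my own, offered purely for expository purposes: if S(\Delta t) is the number of well-being-impeding problems solved in an increment \Delta t, then the treatment-oriented historiography in effect measures

\[ \text{progress}(\Delta t) \approx \frac{S(\Delta t)}{\Delta t}. \]

The symmetrical historiography defended below replaces the numerator with the net quantity S(\Delta t) - G(\Delta t), where G(\Delta t) counts the problems that technology generates in the same increment; it is this difference, not the raw solution rate, that must trend upward for history to count as progressive.

But there is a second and equally important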
constellation of questions to be asked, namely: What about the causal origin of these problems? and What enables or requires the wheels of
innovation to turn at all? As I discuss below, it is well-known today that
the technological minimalism of hunter-gatherer peoples was more than
sufficient to secure a remarkably healthy and far more leisurely existence than
most moderns live (Gowdy 1997; Cohen 1989). And, in addition, we know that the
advanced artifacts of modernity are, outside the larger technological systems
to which they belong, of virtually no practical value – this is arguably why
the wheel was not employed for transportation in some premodern societies.

Thus, when one’s historiographic approach focuses on,
or at minimum includes, this aspect
of history, the diachronic phenomenon of technological development appears
quite different than it otherwise does on the progressionist reading. Rather
than a triumphant series of overcoming
the problems that impede human well-being, the history of technology presents
itself as a protracted succession of problem-generating episodes in which previously non-existent or less serious
problems are either newly introduced or reintroduced/exacerbated
by technology (respectively). The first might be described as the creation of
problems ex nihilo (completely novel
problem creation) and the latter as the creation of problems de novo (a novel form of an old problem
emerges). Furthermore, the creation of such problems and their
corresponding spaces of innovation is crucially important for the growth of
technology, since it makes possible the
formulation of new technological solutions.18 Consider the
existential risks mentioned above: virtually all of these are, as already
noted, technogenic in nature. And as these risks are countered and neutralized
with increasingly sophisticated “defensive” technologies, there is no doubt
that the production of such risk-mitigating apparatuses will be hailed by progressionistic
transhumanists as constituting significant strides forward in the inexorable
“march of progress.” Kurzweil, for example, discusses the possibility of
creating an army of “blue-goo” police nanobots to obviate the eschatological
scenario of “grey-goo” (Kurzweil 2005, 416; Drexler 1987). But what does this
possibility really amount to? In the end, all we have achieved is approximately the same level of security
that we had prior to the existence of this brand new nanotechnological risk.
The difference between now and then pertains only to how such security is achieved: today, of course, we don’t need an
expensive, complex, energy consuming, and so on, apparatus of highly
sophisticated nanotechnology to protect us from ecophagy because the grey-goo problem doesn’t yet exist.
It is, therefore, only when one focuses exclusively on the solution to this problem that it can possibly appear to instantiate
genuine progress. Indeed, neutralizing the grey-goo threat is rather like
taking a giant leap forward only after
taking a huge leap backwards. We come to occupy roughly the same position that
we did before – except with more anxieties. This is a deceptively subtle point,
I believe, and one that transhumanists often neglect in their writings on
technology and progress.19 In addition to the dual usability of technical
artifacts as an origin of new spaces of innovation, another important source of
problems pertains to the unintended
consequences of our increasingly powerful technological creations. The
burning of fossil fuels and its many negative externalities is a good example:
one of the initial arguments for the
adoption of the gas-powered automobile rather than the electric or steam car
was that the former would actually reduce
pollution (e.g., by getting rid of horse manure in city streets). One is
similarly reminded of the use of lead in gasoline as an anti-knock agent, or of
DDT as a pesticide in the mid-twentieth century. Thus, if history is any
indication, the most worrisome of all the
existential risk categories that Bostrom (2002) identifies is probably the
non-specific class of “something unforeseen.” In this sense, while
strategies like differential development might offer some protection against
anticipatable risks like the grey-goo scenario, there will no doubt emerge a
vast panoply of negative externalities that, as Winner puts it, will be “not not intended” – that is, risks that will
have absolutely nothing “in the original plan that aimed at preventing them”
(Winner 1977, 97). The paramount but unanswerable question thus becomes: What
will be the “global warming” of nanotechnology? What will be the
“eutrophication” of Strong AI? What possible thalidomide-like effects will
“strategies for engineered negligible senescence” (SENS) (de Grey et al. 2002)
inflict on human users? Such questions point not merely to “known unknowns” but
“known unknowables,” and with potentially eschatological consequences. The third important problem source mentioned above is
the post-invention manufacture of novel problems. This occurs when a
technologist invents a device without any particular problem in mind.20
Thus, to make the entity marketable,
the technologist sets out to manufacture a “need” that did not previously exist
or was not previously recognized as such. This is a case of, as the saying
goes, “a solution looking for a problem,” and it is commonplace in consumerist
societies: companies sell products by persuading consumers that the item being
sold is in some way necessary, when in fact it is entirely superfluous. Again,
this may give the impression of
progress, as more and more “problems” are given technological solutions, but in
reality it amounts to nothing more than leaping back and forth, back and forth,
creating and then solving. Finally, the fourth source considered here occurs
when new technologies “[require] further inventions to make them completely
effective” (Kranzberg 1986, 548). As Winner notes, “one must provide not only
the means but also the entire set of
means to the means” (Winner 1977, 101). Thomas Edison’s invention of the
incandescent light bulb is exemplary, since it required the construction of an
entirely new and highly elaborate electrical infrastructure to enable its
functionality. In these ways, then, technology not only magnificently
solves but also powerfully generates problems, and by introducing
new spaces of innovation for technologists to explore it creates the illusion
that progress is being made. In fact, a significant proportion of all the
technological solutions that Homo faber (“man
the maker”) has devised – clever and sophisticated as they may be – actually
target problems that are in some nontrivial way technogenic.21 Thus,
in assessing whether or not history is
a record of improvement, we must take care to consider not just the treatment
of problems impeding the well-being of Earth-originating life but their
etiology as well. This more symmetrical historiography yields a rather less
optimistic and less progressive picture of history than that painted by many
transhumanists. To be sure, then, change
has occurred. But change is not sufficient for progress – there must also be directional movement towards a valued
goal (Verdoux 2009a; Ruse 1996, 19-20).

Anthropological argument: The thesis of this subsection is that the
anthropological data does not epistemically support the progressionist
conception of history. I proceed with two arguments: first, that transhumanists
often commit the fallacy of hasty
generalization when arguing for
the progressionist position; and second, that transhumanists often commit the straw man fallacy when arguing against alternative philosophies that
advocate some form of relinquishment, such as neo-Luddism (broad
relinquishment), the steady-as-she-goes option (which permits world-engineering
but relinquishes person-engineering) and anarcho-primitivism (comprehensive
relinquishment).22 The first part of the thesis is general and
applies to transhumanism insofar as it assumes a progressionist posture, while
the second part specifically targets Kurzweil’s arguments against neo-Luddism
and anarcho-primitivism. I focus on Kurzweil because his objections seem to
typify those of many other progressionistic transhumanists. Now, to be sure that I am not knocking down a straw man, let us begin with a look at the
following passages from Kurzweil’s The
Singularity is Near (2005), which evince both fallacies mentioned above:

This romancing of software from years or
decades ago is comparable to people’s idyllic view of life hundreds of years
ago, when people were “unencumbered” by the frustrations of working with
machines. Life was unfettered, perhaps, but it was short, labor-intensive,
poverty-filled, and disease and disaster prone. (Kurzweil 2005, 436.)

Technology has [brought] benefits such as
longer and healthier lifespans, freedom from physical and mental drudgery, and
many novel creative possibilities […]. Substantial portions of our species have
already experienced alleviation of the poverty, disease, hard labor, and
misfortune that have characterized much of human history. Many of us now have
the opportunity to gain satisfaction and meaning from our work, rather than
merely toiling to survive. (Kurzweil 2005, 396.)

Imagine describing the dangers (atomic and
hydrogen bombs for one thing) that exist today to people who lived a couple of
hundred years ago. They would think it mad to take such risks. But how many
people in 2005 would really want to go back to the short, brutish,
disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the
human race struggled through a couple of centuries ago? (Kurzweil 2005, 408.)

Technological advances, such as antibiotics
and improved sanitation, have freed us from the prevalence of […] plagues
[etc.]. (Kurzweil 2005, 409.)

Let us begin with the second logical error mentioned
above: the straw man fallacy. In such
passages, many more of which could be adduced, Kurzweil decries the
romanticization of life “a couple of centuries ago” or “hundreds of years ago”
as idyllic and unencumbered. But who exactly romanticizes these periods? Who
expresses nostalgia for the Late Middle Ages and Early Modern Period? The
answer is, of course, that no one does!
On the one hand, the neo-Luddites champion a reform
policy that entails dismantling or imposing moratoria on certain classes of
technologies deemed too socially, psychologically, environmentally destructive
or existentially risky. The psychologist Chellis Glendinning, for instance,
argues that we ought to jettison television from the societal ship, since it
“functions as a centralized mind-controlling force, disrupts community life,
and poisons the environment” (Glendinning 1990). Similarly, Bill Joy advocates
the imposition of moratoria on the development of nanotechnology, given the
unprecedented risks it is expected to introduce (Joy 2000). However, no
neo-Luddite advocates returning to an
“idyllic” past lost by technology. Instead, exponents of this position are
explicit in envisaging a thoroughly technological
future, one in which humans have broadly relinquished the “bad” technologies
(such as television and nanotechnology) while actively developing the “good”
kinds of technology (one thinks of such entities as vertical farming, Lily pad
Cities and the Fab Tree Hab – “an edible prefab home for humanity”).23
It is in fact a common misperception of neo-Luddism that it universally rejects
technology. One needs to look no further than Joy’s oft-cited article in Wired magazine24 or
Glendinning’s 1990 manifesto, which postulates as the first principle of
neo-Luddite philosophy that “neo-Luddites
are not anti-technology” (emphasis in original). Nor, for that matter, is the more ideologically
radical philosophy of anarcho-primitivism strictly anti-technology. As Ted
Kaczynski writes in his 1995 manifesto, the primitivist perspective only sees
large-scale “organization-dependent” technologies as deleterious. “Small-scale”
technologies, or those artifacts “[useable] by small-scale communities without
outside assistance,” are actually seen as beneficial and socially desirable (Kaczynski
1995, 208). Thus, it is only when technological systems transmogrify into the
massive megatechnics of the industrialized West that they begin to truncate the
freedom of human autonomy (or so Kaczynski claims). One finds important echoes
of this idea in the theory of “normative determinism” (Bimber 1994; Ellul 1964)
and Winner’s notion of “reverse adaptation.” According to these positions, the
norms and standards of technology are, in advanced technological societies,
problematically universalized in all or most other domains of human thought,
experience and activity – domains in which the application of such standards
would normally be seen as inappropriate (Winner 1977, 238-251).25 Even more important for the present paper is the
futurological vision of anarcho-primitivism, which does not prescribe recreating the “short, labor-intensive,
poverty-filled, and disease and disaster prone” (to quote Kurzweil) conditions of seventeenth-century Europe.

Even more striking is the consistent failure of
transhumanists to accurately depict what life was like for our distant Homo ancestors. This leads to the first
logical error mentioned above: the fallacy of a hasty generalization. To be sure, a full explication of this
fallacy would require an encyclopedia-length paper reviewing an oceanic mass of
anthropological data. My goal below is modest: to merely sketch out the
relevant paradigms currently established within contemporary anthropology (as
enunciated in standard textbooks on the subject; see Ember et al. 2005). Thus,
let us begin with the following statement from Mark Cohen’s seminal Health and the Rise of Civilization
(1989): “Some of our sense of progress comes from comparing ourselves not to
primitives but to urban European populations of the fourteenth to eighteenth
centuries. We measure the progress that has occurred since then and extrapolate
the trend back into history” (Cohen 1989, 141). This is, of course, precisely
what Kurzweil does in the above quoted passages – that is, he induces to a
general proposition about human history based on a highly impoverished
selection of data extracted from a single period of history. Cohen continues: “A good case can be made that urban
European populations of that period may have been among the nutritionally most
impoverished, the most disease-ridden, and the shortest-lived populations in
human history” (Cohen 1989, 141). Using (i) paleopathological studies, (ii)
epidemiological extrapolations based on the same uniformitarian principles that
underlie other scientific disciplines, and (iii) ethnographic studies of
contemporary hunter-gatherer groups, Cohen argues that the Neolithic
revolution, marked by the domestication of plants and animals and the adoption
of sedentarism, was followed by an appreciable decline in human health and
well-being. Quite incredibly, this decline persisted more-or-less until the
mid-twentieth century, at which point human health finally improved. But Cohen
is quick to note that the observed amelioration in health was largely limited
to the relatively affluent citizens of
the industrialized West.27 Indeed, contrary to what most
progressionists uncritically assume, “until the nineteenth or even twentieth centuries,
the improvement in overall life expectancy appears to have been fairly small”
(Cohen 1989, 140). Thus, there exists no
consistent correlation between the development of technology and the
improvement of human well-being throughout history. The sociologist Ruut Veenhoven (2005) notes a similar
decline in life-quality throughout most of human history, arguing that the
start of “the agrarian phase marked a historic dip in human quality-of-life”
(Veenhoven 2005).28 Along these same lines, the Harvard psychologist
Gregg Jacobs argues that modern society – with its phenomena of social atomism,
consumerism and information overload – is largely responsible for the growing
statistical prevalence of such psychopathologies as depression and anxiety. In
Jacobs’ words, “The root cause of modern
stress is the discrepancy between [the] modern world and ancestral world”
in which we evolved (Jacobs 2003, 60; emphasis in original). This observation –
which one also finds explicitly discussed in Bostrom and Sandberg 2008 – leads
Jacobs to declare “that progress has come at a great cost, for by creating
maladaptive negative emotions and inhibiting positive ones, we have disrupted
nature’s balance” (Jacobs 2003, 60; Verdoux 2009b). (Note that Bostrom and
Sandberg’s paper discusses ways to fix this organism-environment mismatch using
technological interventions.) Cohen similarly concludes with the assertion
that:

These data clearly imply that we need to
rethink both scholarly and popular images of human progress and cultural
evolution. We have built our images of human history too exclusively from the
experiences of privileged classes and populations, and we have assumed too
close a fit between technological advances and progress for individual lives.
[…] In popular terms, I think that we must substantially revise our traditional
sense that civilization represents progress in human well-being – or at least
that it did so for most people for most of history prior to the twentieth
century. The comparative data simply do not support that image. (Cohen 1989,
140, 141.)

For these reasons, then, I argue that transhumanists
ought to relax their progressionist posture. Based on the empirical data
considered by the theorists above, it turns out that the most “solitary, poor,
nasty, brutish, and short” periods of human existence have actually resulted
from civilization itself, rather than from lack of technology. The
anti-progressionist position here advocated thus concurs with the historian
George Basalla in asserting that “a workable theory of technological evolution
requires there be no technological progress in the traditional sense of the
term but accepts the possibility of limited progress toward a carefully
selected goal within a restricted framework” (Basalla 1988, 218). Indeed, in
order to bring about a posthuman state, transhumanism does require comparative progress, or progress
towards a limited goal of value (e.g., the development of nootropic drugs to
enhance human cognition, or the creation of superintelligent machines). But
instances of comparative progress do not necessarily add or cumulate to create
a transhistorical trend of absolute
progress, and in fact the anthropological and futurological evidence strongly
suggests that technology is a primary cause of human non-well-being. And, as I suggested above, the advanced technologies of the GNR revolution may very well precipitate an existential risk singularity.

But while technology has created a rather dismal
future – one full of new and rapidly ramifying eschatological scenarios – does
this mean we should abandon it? Should we restrict technological development according to kind – that is, according to whether the artifacts produced target the human organism for modification? Or should we get rid of it in a comprehensive manner, as the primitivists contend?

3. Rational capitulationism

An argument for the anti-progressionist version of
transhumanism here advocated goes as follows:

Premise 1. The futurological program of transhumanism would by all accounts increase the likelihood of self-annihilation.

By now, the truth of this proposition should be
obvious: the philosophy of transhumanism asserts both that (i) the advanced
technologies of the GNR revolution will enable us to world-engineer and
person-engineer in radically new ways – possibly even enabling us to construct
a novel species of technologized posthumans to take our place on the
phylogenetic tree30 – and (ii) the “enhancement” technologies that
promise to make the creation of posthumans possible ought to be pursued, albeit in a circumspect if “proactionary”
manner (More 2005). As expounded in the futurological argument of Section 2, of
all the technology kinds that humans have devised since the Oldowan industry
(circa 2.6 mya), those of the GNR revolution are by far the riskiest: not only has the number of existential risks
and probability of their actualization increased significantly in the past 50
years, but this trend may actually be exponential.
In the worst case scenario, this exponential growth of existential risks would persist into the next couple of centuries, thus precipitating an existential risk singularity.

Premise 2. The alternative futurological programs proposed would almost certainly increase the likelihood of self-annihilation more than transhumanism would.

There are myriad reasons for accepting this claim. One
line of argumentation suggests that humanity has (so to speak) crossed the
Rubicon of technological development: there is no turning back now, at least
not without further exacerbating our existential plight or significantly increasing
human suffering.31 Consider the option of broadly relinquishing
(say) genetic engineering, due to the profound risks associated with its dual
use properties. How might this be accomplished? As Walker explains:

Relinquishment requires us to not only stop
future developments but also to turn back the hands of time, technologically
speaking. If we want to keep ourselves completely immune from the potential
negative effects of genetic engineering we would have to destroy all the tools
and knowledge of genetic engineering. It is hard to imagine how this might be
done. […] Think of the alcohol prohibition experiment in the early part of the
century in the U.S. Part of the reason that prohibition was unsuccessful was
because the knowledge and rudimentary equipment necessary for brewing was
ubiquitous. It is these two features, availability of knowledge and equipment,
that has made biohacking possible. And where would a relinquishment policy be
implemented? If it is truly a viable and long-term strategy then relinquishment
will have to be adopted globally. Naturally very few countries with advanced
genetic technologies are going to be enthusiastic about genetically disarming
unless they have some pretty good assurances that all other countries will also
genetically disarm. This leads us to the usual disarmament impasse. (Walker
2009.)

Indeed, just as a community of computer hackers emerged in the second half of the twentieth century, so too has a group of biohackers emerged – hobbyists who experiment with genetic material (Ayres 2008).

In contrast, the steady-as-she-goes option
relinquishes not any particular field of technological research but rather the
use of GNR technologies to modify the human organism. This position is
exemplified by the bioconservative Francis Fukuyama (2002), who argues that
political liberalism is predicated on the existence of a common metaphysical
essence shared by all humans, since it is in virtue of this essence that we
humans are moral beings with an “inherent value” (and therefore deserving of
equal rights). Thus, by modifying this essence with person-engineering
technologies, the transhumanist project would extract a necessary ingredient
from political liberalism’s moral recipe. It follows that only world-engineering technologies ought to be pursued. But again,
we are left with the crucial question: How might one enforce such a
restriction? Wouldn’t any attempt to prevent person-engineering just drive
experimentation underground? And might these underground person-engineers
actually emerge as superior to us “normals” in some important respect?
Kurzweil, in fact, gestures at the plausibility of the latter scenario in a mock dialogue with Ned Ludd.

Before concluding this subsection, though, it is worth
taking a brief look at why the anarcho-primitivist option of comprehensive
relinquishment also fails.33 To begin, recall the problems with
Kurzweil’s arguments against primitivism: all those given involve fallacious
mischaracterizations of the position or inaccurate portrayals of our
“primitive” ancestors according to the specious “Hobbesian ideology” (see Zerzan
1998, 258). Nonetheless, there are a
number of cogent and compelling reasons for rejecting the proposition that an anarcho-primitivist
revolution ought to be pursued: for one, recreating the mode of life had by our
Pleistocene forebears would entail a massive, albeit transitory, increase in
human suffering. As Ellul (a major intellectual source for Kaczynski) notes, “arrest
and retreat only occur when an entire society collapses” (Ellul 1964, 89).
Indeed, given the world population today, which far exceeds what could be
supported by hunting, gathering, and fishing (especially after the many
deleterious alterations of the environment brought about by human activity –
see the “Holocene extinction event”), a primitivist revolution would entail
realizing at once all the Malthusian
catastrophes that technology has obviated over the centuries, such as that
avoided by the Green Revolution (which of course introduced a myriad of new and
more serious anthropogenic problems). While one could, and Kaczynski in fact
does, argue along utilitarian lines that the suffering caused by transitioning
to a long lost modus vivendi – the
“primitivist singularity” – would ultimately be less than that resulting from
the GNR revolution, the thought of effectuating such suffering via an overthrow
of industrial capitalism and its heteronomous megatechnics is for most thinkers
(present company included) too morally repugnant. We may thus eliminate the
anarcho-primitivist position as a viable alternative plan for the future. And with these negative appraisals we come to our…

Conclusion. The futurological program of transhumanism ought to be implemented rather than the alternative options available, that is, if one wishes to maximally minimize the inevitable increase in the probability of self-annihilation.
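Schematically – and this is only a gloss on the process of elimination, not a formal decision theory – the conclusion selects

\[ a^{*} = \arg\min_{a \in A} \; P(\text{self-annihilation} \mid a), \]

where A contains the four programs considered here (transhumanism, broad relinquishment, steady-as-she-goes, and comprehensive relinquishment), every member of which raises that probability relative to the pre-GNR baseline; the argument of this paper is that transhumanism is the minimizer.

There are, I have attempted to show, dire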
eschatological consequences to all
the possible routes into the future thus far proposed: no matter which is
ultimately implemented, our chances of survival have fallen nontrivially. And,
as I have also attempted to establish, technology constitutes a crucial
enabling factor in the network of causes responsible for our existential
plight. But what are the practical implications of this thesis with respect to
transhumanism? If absolute progress driven by technology is illusory and our
future dismal, then why not jettison – so to speak – technology from the ship
of humanity’s future? My line of reasoning to the conclusion above follows a
simple process of elimination: transhumanism offers (what one might call) the safest unsafe passage into the future,
that is, compared to the alternatives specified. But not only does the
transhumanist program appear to constitute the best option for the future by
avoiding the problems associated with certain forms of relinquishment, but it
might actually contribute positively – in ways the alternatives could not – to
the amelioration of our predicament. I refer here specifically to the creation
and use of cognitive enhancement technologies, including neural implants,
tissue grafts and nootropic drugs (Walker 2008b; Bostrom and Sandberg 2006;
Bostrom and Sandberg 2009). After all, who
better to grasp, manipulate and control the problems unique to the GNR
revolution than an advanced “species” of cognitively enhanced posthumans?34
Indeed, as many authors have noted, the rapid expansion of human
knowledge in the past several centuries has entailed a corresponding increase
in individual ignorance (Winner 1977, 283; see also Kelly 2008). No doubt, a
major obstacle to effectively guarding against the worst possible scenarios
considered by Bostrom (2002) is epistemic
or cognitive in nature. It thus
follows that enhancing our ability to think carefully, comprehensively and
deeply about the (impending) problems confronting intelligent life on Earth
will greatly augment our collective ability to survive. Person-engineering must
not be wholly restricted. The general view defended here is, I believe, already
implicit in certain corners of the transhumanist literature: one finds in
several authors a recognition of the technogenicity of our worsening situation as well as a sense that the best way to fix
this situation – now that we have crossed the Rubicon of technology – is more
technology, designed and implemented in a strategically prudent manner.35
Walker, for example, argues that “even though creating posthumans may be a very
dangerous social experiment, it is even more dangerous not to attempt it:
technological advances mean that there is a high probability that a human-only
future will end in extinction” (Walker 2009). And as I have already discussed,
Bostrom has not only recently suggested that transhumanists eschew the term
“progress”, but he continues to be a major intellectual figure in the exploding
field of techno-eschatology. In closing, a primary impetus behind this paper was to
make the position that I have termed rational capitulationism explicit. This involved refining and elaborating a view already implicit in certain corners of the transhumanist literature.

Notes

1. One might term these different strategies
“world-engineering” or “niche construction” (following Odling-Smee et al. 2003)
and “person-engineering” or “organism construction” (following Verdoux 2009b).
The latter is accomplishable either by (a) enhancing a native capacity of the
human organism, such as improving visual acuity or increasing the speed of
cerebration using a “nootropic” drug, or (b) adding an entirely new capacity to
the individual. There are, for example, vast universes of knowledge to which
the mental apparatus of Homo sapiens
has no epistemic access due to limitations inherent in our biological wet-ware
and the “mind” program it runs. In other words, just as chimpanzees are cognitively closed to learning a human
natural language (no matter how hard they might try), so too are we humans
unable to entertain an infinite range of ideas that a cognitively enhanced
posthuman might possibly come to understand. One way of putting this is that future cognitive enhancement technologies
will allow us to radically redefine the boundaries between what some
philosophers have called “mysteries”
(in principle insoluble for unenhanced humans) and “problems” (in principle soluble, even if presently
unsolved). And similarly, the advanced person-engineering technologies favored
by transhumanists will allow us to augment our sensorium to include entirely
new modalities, such as echolocation and magnetoception.

2. Here there is an important ambiguity with respect
to the term “progress” – an ambiguity that obfuscates talk about the reality of
progress. On the one hand, progress can be understood in a local or comparative
sense, while on the other hand the term can refer to a global or absolute phenomenon (see Verdoux 2009a).

3. Note that Kurzweil resists the term
“transhumanism.” In fact, he doesn’t mention it even once in his 2005 book on the Singularity. Nonetheless, he stands as
a central intellectual figure in the transhumanist movement, and indeed Michael
Anissimov (2006) lists singularitarianism as one of nine distinct but
overlapping “sects” of the transhumanist movement.

4. It should be noted that transhumanism is an internally heterogeneous movement. As mentioned in note 3, Anissimov
identifies a total of nine different variants of transhumanist philosophy. Most
relevant to this paper is the fact that transhumanism contains both utopian and apocalyptic tendencies. The
burgeoning literature on existential risks, in fact, is motivated primarily by
the work of transhumanists. Nonetheless, the utopian position – with its
millennialist eschatology involving a future “techno-rapture” (the Singularity) – seems to predominate strongly within the tradition.

5. I should make explicit that I take “progressionism”
and “progressivism” to be semantically equivalent terminological variants (at
least for the present purposes). Both signify a worldview built around the idea
that progress – in the absolute sense distinguished in note 2 – is historically real.

6. For example, consider the following passage from
the Humanity+ website: “The history of economic and technological development,
and the concomitant growth of civilization, is appropriately regarded with awe,
as humanity’s most glorious achievement. Thanks to the gradual accumulation of
improvements over the past several thousand years, large portions of humanity
have been freed from illiteracy, life-expectancies of twenty years, alarming
infant-mortality rates, horrible diseases endured without palliatives, and
periodic starvation and water shortages” (Bostrom 2005a). It is precisely this view that I find extremely problematic and in need
of serious revision.

7. Unfortunately, Bostrom then proceeds to talk
approvingly about technological progress in the very same paper! See also
Niiniluoto 2007 for more on the use of “development” and “change” as neutral
alternatives to “progress.”

8. Simon Young argues that “A pessimistic
Transhumanist is a contradiction in terms” (personal
communication). In the present paper, I attempt to show this claim to be
completely false.

9. Note that Bostrom’s definition is highly
theory-dependent, since it makes implicit reference to a future posthuman state
assumed to be desirable.

10. Where the former term is superordinate of the
latter: to say that something is technogenic is necessarily to say that it is
anthropogenic, but the reverse is not necessarily true.

11. Note, however, that Bostrom does not accept all of Kurzweil’s prognostications.

12. As Bostrom asseverates: “Considering that many of
the existential risks that now seem to be among the most significant were
conceptualized only in recent decades, it seems likely that further ones still
remain to be discovered” (Bostrom 2009).

13. Not only are
types of existential and
sub-existential risks increasing in number, but the tokens of these types are
increasing as well. Consider the following fact, articulated nicely by Joy in
his 2000 manifesto: “The 21st-century technologies – genetics, nanotechnology, and robotics
(GNR) – are so powerful that they can spawn whole new
classes of accidents and abuses. Most dangerously, for the first time, these
accidents and abuses are widely within the reach of individuals or small
groups. They will not require large facilities or rare raw materials. Knowledge
alone will enable the use of them.” A prime example of this is the growing
phenomenon of biohacking. No doubt
there will also emerge nanohackers,
as well as computer whizzes able to create new AI systems in the privacy of
their own homes. See Section 3 for more.

14. It is worth pointing out the possibility of a
publication bias here. As Bostrom writes: “It is possible that a publication
bias is responsible for the alarming picture presented by these opinions.
Scholars who believe that the threats to human survival are severe might be
more likely to write books on the topic, making the threat of extinction seem
greater than it really is. Nevertheless, it is noteworthy that there seems to
be a consensus among those researchers who have seriously looked into the
matter that there is a serious risk that humanity’s journey will come to a
premature end” (Bostrom 2009). Indeed, Russell and Einstein’s statement (quoted
in the body text) still appears to accurately describe the situation.

15. This will be the case even with “risk reduction”
strategies such as differential development (e.g., nano-immune systems to guard
against ecophagy) or the use of preemptive war to prevent rogue states from
acquiring extremely dangerous weaponry, which both Bostrom (2002) and Kurzweil
(2005) endorse.

17. For example, Kurzweil writes that “degenerative
(progressive) diseases – heart disease, stroke, cancer, type 2 diabetes, liver
disease, and kidney disease – account for about 90 percent of the deaths in our
society. Our understanding of the principal components of degenerative disease
and human aging is growing rapidly, and strategies have been identified to halt
and even reverse each of these processes” (Kurzweil 2005, 217). Surely, curing
these conditions would create a robust impression of medical progress. But
consider their cause: it turns out that
most of these are actually so-called “diseases of civilization.” Thus, such
“progress” would actually involve solving a problem that civilization itself
has created – merely returning us to a pre-technological state of good health.

18. Winner similarly writes that “technology is most
productive when its ultimate range of results is neither foreseen nor
controlled” (1977, 98).

19. Basalla (1988, 14) makes a similar claim in his
argument that humans have “chosen an excessively complex, technological means
of satisfying basic necessities.” Indeed, this leads Basalla to endorse Jose
Ortega y Gasset’s characterization of technology as “the production of the
superfluous,” since “primitive” humans survived just as well as – in some cases much better than – modern humans, despite our sophisticated megatechnics
(Verdoux 2009b).

20. Note also that, as Basalla (1988) convincingly argues, a sizable portion of technology has resulted from “play.” We may be Homo faber, but we are also Homo ludens (“man the player”).

21. As Bostrom (2002) writes: “Without technology, our
chances of avoiding existential risks would therefore be nil. With technology,
we have some chance, although the greatest risks now turn out to be those
generated by technology itself.”

22. Note that there is a continuum of relinquishment.
In fact, Kurzweil himself advocates a limited kind of “fine-grained”
relinquishment (2005, 411).

23. Most neo-Luddites reject the neutralist hypothesis
that technologies are mere tools, neither “good” nor “bad” (e.g., “Guns don’t
kill people, people kill people”). Nonetheless, one need not believe that
technologies have moral or political properties to worry about the GNR
revolution and its eschatological ramifications.

24. From all this, I trust it is clear that I
am not a Luddite. I have always, rather, had a strong belief in the value of
the scientific search for truth and in the ability of great engineering to
bring material progress. The Industrial Revolution has immeasurably improved
everyone’s life over the last couple hundred years, and I always expected my
career to involve the building of worthwhile solutions to real problems, one
problem at a time. (Joy 2000.)

25. For instance, reverse adaptation might lead one –
in an insidious manner – to evaluate interpersonal relationships in terms of technological norms like reliability,
efficiency, productivity, and so on. But this would normally be considered an
inappropriate way of thinking about such relationships – or so the argument
goes. Thus, in this way, human activity becomes “adapted” to technology, rather
than technology to human activity. See Kurzweil 2005, 25-29 for some
interesting examples of this putative phenomenon.

26. To put this in perspective, consider again
Kurzweil’s notion of the Singularity. One can, I believe, talk about the
radical transition from the modern mode of life to a pre-Neolithic-like mode as
involving a kind of “primitivist singularity.” Consider the fact that, on
Kurzweil’s view, the Singularity will be catalyzed by the creation of
superintelligence – “the last invention that humans will ever need to make,
since superintelligences could themselves take care of further scientific and
technological development” (WTA FAQ, 2.3). Note an important corollary of this
view, namely that our epistemic access to what life will be like after the Singularity has occurred is
completely restricted; one might say, employing a cosmological metaphor, that
the Singularity constitutes a kind of “event horizon” that renders meaningless
any prognostication of post-Singularity existence and what it might be like.
This is in fact very similar to the primitivist’s understanding of an
anti-civilization revolution: leading exponents of primitivism explicitly eschew speculating about what exactly life might be like post-civilization. All
one can say is that it will be novel,
unlike any state-of-being that has been. Thus, primitivism entails its own
“Singularity” of sorts – that is, a singular
future event in human history that would inaugurate an era about which
humans cannot at present know.

27. Personal communication.

28. Veenhoven (2005) optimistically adds that “the
transition to modern industrial society brought a change for the better.”

29. As Bostrom (2002) argues, “We should not blame
civilization or technology for imposing big existential risks.”

30. Indeed, this will not only be a new token species,
but a new type of species too: one
that is biotechnological in constitution, or even entirely artificial (as in
the case of mind-uploading or Strong AI systems).

31. The sentiment expressed here is not, of course,
new. For example, “returning to a Mayan folk community he had studied in the
early 1930s,” the anthropologist Robert Redfield “wrote of the ‘Village that
Chose Progress’ as having now ‘no choice but to go forward with technology,
with a declining religious faith and moral conviction, into a dangerous world’”
(quoted in Stocking 1996).

32. Furthermore, Kurzweil worries that enforcing such moratoria would require a totalitarian state (2005, 406).

33. That is, comparatively speaking.

34. As Walker (2009) writes:

The application to our problem is obvious:
our fears about the misuse of 21st century technology reduce down to
fears about stupidity or viciousness. [The] worry is that we may be the authors
of an accident, but this time one of apocalyptic proportions: the end of
civilization. Likewise, our moral natures may also cause our demise. Or, to put
a more positive spin on it, the best candidates amongst us to lead civilization
through such perilous times are the brightest and most virtuous: posthumans.
(Walker 2009)

35. With respect to restoring the environment, this view finds expression in the bright green environmentalist movement called “Technogaianism” (see Hughes 2004,
212).

References

Anissimov, M. 2006. Transhumanist sects. Accelerating Future. URL = <http://www.acceleratingfuture.com/michael/blog/2006/12/sects-in-transhumanism/>.

Ayala, F. J. 1988. Can “progress” be defined as a biological concept? In Evolutionary progress, ed. M. Nitecki, 75-91.

Ayres, C. 2008. Biohackers attempt to unstitch the fabric of life. The Times. URL = <http://technology.timesonline.co.uk/tol/news/tech_and_web/the_web/article5400645.ece>.

Basalla, G. 1988. The evolution of technology.

Bimber, B. 1994. Three faces of technological determinism. In Does technology drive history? The dilemma of technological determinism, ed. M. R. Smith and L. Marx, 79-101.

Bostrom, N. 2002. Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology 9.

Bostrom, N. 2005a. Transhumanist values. World Transhumanist Association. URL = <http://www.transhumanism.org/index.php/WTA/more/transhumanist-values/>.

Bostrom, N. 2005b. A history of transhumanist thought. Journal of Evolution and Technology 14(1): 1-25.

Bostrom, N. 2005c. Letter from Utopia. Journal of Evolution and Technology 19(1): 67-72.

Bostrom, N. 2007. The future of humanity. In New waves in philosophy of technology, ed. J. B. Olsen, E. Selinger and S. Riis, 186-216.

Bostrom, N. 2009. Why I want to be a posthuman when I grow up. In Medical enhancement and posthumanity, ed. B. Gordijn and R. Chadwick, 107-137.

Bostrom, N., and A. Sandberg. 2006. Converging cognitive enhancements. Annals of the New York Academy of Sciences.

Bostrom, N., and A. Sandberg. 2008. The wisdom of nature: An evolutionary heuristic for human enhancement. In Human enhancement, ed. J. Savulescu and N. Bostrom.

Bostrom, N., and A. Sandberg. 2009. Cognitive enhancement: Methods, ethics, regulatory challenges. Science and Engineering Ethics 15(3): 311-341.

Bostrom, N., and M. Cirkovic. 2008. Introduction. In Global catastrophic risks, ed. N. Bostrom and M. Cirkovic.

Carrico, D. 2005. Progress as a natural force versus progress as the great work. Institute for Ethics and Emerging Technologies. URL = <http://ieet.org/index.php/IEET/more/130/>.

Cirkovic, M. M. 2008. Observation selection effects and global catastrophic risks. In Global catastrophic risks, ed. N. Bostrom and M. Cirkovic.

Cohen, M. N. 1989. Health and the rise of civilization.

de Grey, A., B. Ames, J. Andersen, A. Bartke, J. Campisi, C. Heward, R. McCarter, and G. Stock. 2002. Time to talk SENS: Critiquing the immutability of human aging. Annals of the New York Academy of Sciences.

Drexler, E. 1987. Engines of creation: The coming era of nanotechnology.

Dusek, V. 2006. Philosophy of technology: An introduction.

Ellul, J. 1964. The technological society.

Ember, C. R., M. Ember, and P. N. Peregrine. 2005. Anthropology.

Glendinning, C. 1990. Notes toward a neo-Luddite manifesto. Utne Reader 38(1): 50-53.

Gowdy, J. M. 1997. Limited wants, unlimited means: A reader on hunter-gatherer economics and the environment.

Hanson, R. 1998. The great filter – Are we almost past it? URL = <http://hanson.gmu.edu/greatfilter.html>.

Heinberg, R. 1995. The primitivist critique of civilization. URL = <http://www.eco-action.org/dt/critique.html>.

Hughes, J. 2004. Citizen cyborg: Why democratic societies must respond to the redesigned human of the future.

Hughes, J. 2008. Millennial tendencies in responses to apocalyptic threats. In Global catastrophic risks, ed. N. Bostrom and M. Cirkovic.

Huxley, J. 1962. The coming new religion of humanism. The Humanist, January/February.

Jacobs, G. 2003. The ancestral mind: Reclaim the power.

Joy, B. 2000. Why the future doesn’t need us. Wired Magazine 8.04.

Kaczynski, T. 1995. Industrial society and its future. URL = <http://en.wikisource.org/wiki/Industrial_Society_and_Its_Future>.

Kelly, K. 2008. The expansion of ignorance. The Technium. URL = <http://www.kk.org/thetechnium/archives/2008/10/the_expansion_o.php>.

Kranzberg, M. 1986. Technology and history: “Kranzberg’s laws.” Technology and Culture 27(3): 544-60.

Kurzweil, R. 2001. The law of accelerating returns. KurzweilAI.net. URL = <http://www.kurzweilai.net/articles/art0134.html?printable=1>.

Kurzweil, R. 2005. The singularity is near: When humans transcend biology.

Marx, L., and B. Mazlish. 1998. Introduction. In Progress: Fact or illusion?, ed. L. Marx and B. Mazlish, 1-7.

Merton, R. K. 1936. The unanticipated consequences of purposive social action. American Sociological Review 1(6): 894-904.

Moravec, H. 2000. Robot: Mere machine to transcendent mind.

More, M. 1998. The extropian principles, version 3.0: A transhumanist declaration. URL = <http://www.maxmore.com/extprn3.htm>.

More, M. 2005. The proactionary principle. URL = <http://www.maxmore.com/proactionary.htm>.

Nagel, E. 1979. The structure of science: Problems in the logic of scientific explanation.

Napier, W. 2008. Hazards from comets and asteroids. In Global catastrophic risks, ed. N. Bostrom and M. Cirkovic.

Niiniluoto, I. 2007. Scientific progress. In The Stanford Encyclopedia of Philosophy (Spring 2009 edition). URL = <http://plato.stanford.edu/archives/spr2009/entries/scientific-progress/>.

Nisbet, R. 1994. History of the idea of progress.

Nouri, A., and C. F. Chyba. 2008. Biotechnology and biosecurity. In Global catastrophic risks, ed. N. Bostrom and M. Cirkovic.

Odling-Smee, J., K. Laland, and M. Feldman. 2003. Niche construction: The neglected process in evolution.

Putnam, H.

Rees, M. J. 2004. Our final hour: A scientist’s warning: How terror, error, and environmental disaster threaten humankind’s future in this century on Earth and beyond.

Rifkin, J. 1980. Entropy: A new world view.

Russell, B., and A. Einstein. 1955. The Russell-Einstein manifesto. URL = <http://en.wikisource.org/wiki/Russell-Einstein_Manifesto>.

Shanahan, T. 2000. Evolutionary progress? BioScience 50(5): 451-59.

Shanahan, T. 2004. The evolution of Darwinism: Selection, adaptation, and progress in evolutionary biology.

Smith, M. R. 1994. Technological determinism in American culture. In Does technology drive history? The dilemma of technological determinism, ed. M. R. Smith and L. Marx, 1-36.

Sober, E. 1994. Progress and direction in evolution. In Creative evolution?, ed. J. H. Campbell and J. W. Schopf, 19-29.

Stocking, G. W., Jr. 1996. Rousseau redux, or historical reflections on the ambivalence of anthropology to the idea of progress. In Progress: Fact or illusion?, ed. L. Marx and B. Mazlish, 65-82.

Veenhoven, R. 2005. Is life getting better? How long and happily do people live in modern society? European Psychologist 10: 330-43.

Verdoux, P. 2009a. Progress and its varieties: A conceptual analysis. Unpublished manuscript.

Verdoux, P. 2009b. The construction of niches and organisms: Human evolution and technology’s role in it. Unpublished manuscript. URL = <http://philosophicalfallibilism.blogspot.com/2009/09/constructing-niches-and-organisms.html>.

Walker, M. 2009. Ship of fools: Why transhumanism is the best bet to prevent the extinction of civilization. URL = <http://www.metanexus.net/magazine/tabid/68/id/10682/Default.aspx>.

Winner, L. 1977. Autonomous technology: Technics-out-of-control as a theme in political thought.

Wright, R.

Young, S. 2006. Designer evolution: A transhumanist manifesto.

Zerzan, J. 1998. Future primitive. In Limited wants, unlimited means: A reader on hunter-gatherer economics and the environment, ed. John Gowdy, 255-80.