
Cosmological Forecast and
Its Practical Significance

Journal of Evolution and Technology - Vol. 12 - September 2002
http://jetpress.org/volume12/CosmologicalForecast.htm

 

Milan M. Ćirković

  Astronomical Observatory Belgrade
Volgina 7
11000 Belgrade

YUGOSLAVIA

e-mail: arioch@eunet.yu

 

Abstract. Cosmology seems extremely remote from everyday human practice and experience. It is usually taken for granted that cosmological data cannot rationally influence our beliefs about the fate of humanity except perhaps in the extremely distant future, when the question of heat death (in an ever-expanding universe) becomes pressing. In this note, an attempt is made to show that it may become a practical issue much sooner, if humanity wishes to maximize its creative potential. Recent developments in the fields of anthropic self-selection and physical eschatology give solid foundations for such a conclusion. This may open some new (and possibly urgent) issues in the areas of future policy making and transhumanist studies generally.

 

 

The end of our foundation is the knowledge of causes, and secret motions of things; and the enlarging of the bounds of human empire, to the effecting of all things possible.

Francis Bacon, The New Atlantis (1626)

 

1. INTRODUCTION: PHYSICAL ESCHATOLOGY

 

Physical eschatology is a rather young branch of astrophysics, dealing with the future fate of astrophysical objects, as well as the universe itself. Landmark studies in physical eschatology are those of Rees (1969), Dyson (1979), Tipler (1986) and Adams and Laughlin (1997). Some relevant issues have been discussed in the monograph of Barrow and Tipler (1986), as well as several popular-level books (Islam 1983; Davies 1994; Adams and Laughlin 1999). Since the distinction between knowledge in classical cosmology and physical eschatology depends on the distinction between past and future, several issues in the physics and philosophy of time are relevant to the assessment of eschatological results and vice versa.

A necessary ingredient in most serious discussions of physical eschatology is the presence of living and intelligent systems in the future of the universe (which ex hypothesi did not exist in its past). Dyson was the first to boldly spell this out, in 1979:

It is impossible to calculate in detail the long-range future of the universe without including the effects of life and intelligence. It is impossible to calculate the capabilities of life and intelligence without touching, at least peripherally, philosophical questions. If we are to examine how intelligent life may be able to guide the physical development of the universe for its own purposes, we cannot altogether avoid considering what the values and purposes of intelligent life may be. But as soon as we mention the words value and purpose, we run into one of the most firmly entrenched taboos of twentieth-century science.

The future of universes containing life and intelligence is essentially different from the future of universes devoid of such forms of complex organization of matter, as well as different from the past of the same universes, in which complexity was lower. In a similar vein, John A. Wheeler, in a beautiful paper on the relationship between quantum mechanics and cosmology, wrote (Wheeler 1988):

 

Minuscule though the part is today that such acts of observer-participancy play in the scheme of things, there are billions of years to come. There are billions upon billions of living places yet to be inhabited. The coming explosion of life opens the door to an all-encompassing role for observer-participancy: to build, in time to come, no minor part of what we call its past—our past, present and future—but this whole vast world.

Obviously, most discussions of the role and long-term future of intelligent observers in the universe rely on some assumptions pertaining to the relevant motivations of intelligent communities. Various assumptions have been used in the existing literature; the most interesting one is that the expansion of such communities and the consequent technologization of space are carried out by particular technical means, notably von Neumann probes (Tipler 1986, 1994). However, even its most fervent supporters do not claim that such actions on the part of intelligent communities are necessary, exclusive, or even dominant. To the various arguments invoked to support the conjecture that expansion and colonization of space are generic characteristics of intelligent communities, we shall attempt to add one more in this essay, formulated as a generalization of the concept of self-interest. Before we do so, it is necessary to define an extremely useful auxiliary notion.

 

 

2. OBSERVER-MOMENTS AND A SELF-SAMPLING ASSUMPTION

When examining the possibility of life and intelligence playing a significant role on ever-larger spatial and temporal scales, one essential constraint to take into account is the so-called Doomsday Argument (henceforth DA; for a survey of the already voluminous literature on the subject, see Leslie 1996; Bostrom 2001a, 2002). Roughly, the DA reasons from our temporal position according to a principle that is directly analogous to other applications of anthropic reasoning from the expected typicality of our position in the multiverse, or of our spatial position within a universe. Here we are not interested in the DA per se, but in one notion whose introduction into the field of anthropic thinking has been motivated by the DA.

            Namely, the DA and similar probabilistic arguments have been grounded in the basic equality of all observers within a reference class.[1] However, this may be insufficient in most realistic situations, and may also misrepresent the actual contribution of the attribute “intelligent” to the ontological status of an “intelligent observer”. Therefore, Bostrom (2002) makes the following attempt at specification, which we shall accept in the further discussion:

 

...We can take a first step towards specifying the sampling density by substituting “observer-moments” for “observers”. Different observers may live differently long lives, be awake different amounts of time, spend different amounts of time engaging in anthropic reasoning etc. If we chop up the stretch of time an observer exists into discrete observer-moments then we have a natural way of weighing in these differences. We can redefine the reference class to consist of all observer-moments that will ever have existed. That is, we can upgrade SSA to something we can call the Strong Self-Sampling Assumption:

 

(SSSA) Every observer at every moment should reason as if their present observer-moment were randomly sampled from the set of all observer-moments.

 

An additional motivation for introducing observer-moments comes directly from thinking about the future: it is difficult to predict the properties of future observers, in particular their longevity and their metabolic/information-processing rates. For instance, the DA conclusion may turn out to be perfectly correct if humanity achieves immortality coupled with zero population growth; obviously, it seems unfair to count observers (instead of observer-moments) in the same manner before and after the transition to the “immortal” regime. Thus, counting observer-moments may be a much more tractable approach, since one may absorb all changes in, say, metabolic rate—via Dyson's biological scaling hypothesis (Dyson 1979), or a convenient generalization of it—into simple arithmetic changes in the budget of observer-moments. This also offers the simplest unifying framework for treating various kinds of observers, originating at various locations in spacetime. As we shall now see, however, the tally of observer-moments is influenced by cosmological factors in two different ways. The first, and the most obvious one, is contained in the relevant limits following from cosmological boundary conditions. The second, dealing with the impact of cosmological studies on possible social and technological policies of intelligent communities, has not been treated in the literature so far.

Following the SSSA, we obtain a method of quantitatively comparing the measure of “success” of different (actual or possible) civilizations. Plausibly, one may expect that advanced civilizations will seek to maximize their total tally of observer-moments, which we shall denote by Θ. Thus, the variational form of

Θ → max

over all possible histories of the civilization describes the desired future in the most general form.[2] However, it is illusory to hope to explicate the functional Θ in such general terms. Instead, we shall use a greatly simplified temporal model, in which we assume that the civilization is characterized by discrete individual observers, countable (together with their observer-moments) at any given time. This may be mathematically expressed as:

 

Θ = ∫_{tmin}^{tmax} N(t) ⟨σ(t)⟩ dt                                        (*)

 

where N(t) is the number of observers at epoch t of cosmic time, and ⟨σ(t)⟩ is the corresponding average density of their observer-moments.[3] The lifetime of the civilization considered spans the interval from tmin to tmax, where the upper limit may—in principle—be infinite. It is important to emphasize that we use physical time here (i.e. we acknowledge the validity of the Weyl postulate, which enables one to define a universal “cosmic” timescale), although it is possible to change coordinates to some subjective timescale if more appropriate, in the manner of Dyson's biological scaling hypothesis (Dyson 1979; Krauss and Starkman 2000) or Tipler's Omega-point theory (Tipler 1994).[4] There are at least two distinct ways in which cosmological parameters enter into eq. (*):
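
For concreteness, a minimal numerical sketch of eq. (*) is given below; the particular forms chosen for N(t) and ⟨σ(t)⟩ are purely illustrative assumptions, not quantities derived in the text.

# Minimal numerical sketch of eq. (*): the observer-moment tally Theta
# for a toy civilization history. The forms of N(t) and <sigma(t)> below
# are illustrative assumptions only.
import numpy as np

def theta(N, sigma, t_min, t_max, steps=100_000):
    """Approximate Theta = integral over [t_min, t_max] of N(t) * <sigma(t)> dt."""
    t = np.linspace(t_min, t_max, steps)
    return np.trapz(N(t) * sigma(t), t)

# Toy inputs (hypothetical): population saturating at 10^10 observers,
# a constant 10^9 observer-moments per observer per Gyr.
N = lambda t: 1e10 * (1.0 - np.exp(-t / 5.0))   # t measured in Gyr from tmin
sigma = lambda t: 1e9 * np.ones_like(t)         # observer-moments per observer per Gyr

print(f"Theta ~ {theta(N, sigma, 0.0, 100.0):.3e} observer-moments")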

            1. Most obviously, the values of cosmological parameters determine absolute limits on tmin and tmax. If the entire lifetime of the universe is equal to τ, then tmax ≤ τ. In addition, tmin > 0, and one may further state that tmin ≥ t*, where t* is the epoch of formation of the first stars of sufficiently high metallicity for the processes of chemical and biological evolution to take place.

            2. The shape of the function N(t) depends on the cosmological parameters when the nature of the matter distribution is taken into account. Namely, the power spectrum of density perturbations determines which objects form as a result of gravitational attraction and decoupling from the universal Hubble expansion (for a modern textbook treatment see Peebles 1993). On the other hand, the size of matter aggregates like stars, galaxies, etc. is essential for answering the question of how large a part of the rest mass can be converted into energy for the purposes of (intelligent) information processing. It is plausible to assume that the maximal number of observers is proportional to the energy consumed for such purposes, which can be mathematically written as

 

Nmax(t) ∝ θ ∫ ρi dV                                  (**)

 

where ρi denotes the relevant energy density, and θ < 1 is the efficiency of whatever energy extraction process is used by the civilization. The reason why we consider the maximal number of observers is that the exact number, of course, depends on sociological factors which are completely outside the scope of the present study. It may also strongly depend on the level of technology (e.g. Sandberg 2000), and may radically decrease with further scientific and technological advancement (as in the cyberpunk scenarios of “collective consciousness” development). Neglecting this, we perceive that at least this upper limit is still cosmologically determined, since both the relevant densities ρi and the integration bounds are contained in the cosmological discourse. Of course, the density ⟨σ(t)⟩ is even less tractable from the point of view of present knowledge, since it may be expected to hinge crucially upon biological factors of which we know little. However, for the purposes of the present study, it is enough to assume that it is a non-zero function of time which either increases, or decreases more slowly than exponentially.

 

 

3. COSMOLOGICAL REVOLUTION: A STORY

How does the number of observer-moments Θ tally with various cosmological models, including the realistic one? Let us first note that it may be doubted whether such a thing as the exact model can ever be reached. Several simplifications come in handy at this point. A sufficiently high degree of symmetry leads to the familiar Friedmann models (or generalizations of them including the cosmological constant), and sufficiently small perturbations can be treated in a familiar way. However, even the general outline on which the future fate of a universe depends may not be obvious to any internal observers until some critical epoch. In particular, as discussed in detail in an illuminating essay by Krauss and Turner (1999), realistic universes are notoriously difficult to analyze completely, due to the possible presence of very large (super-horizon) perturbations which enter the visible universe only at some later epoch. From the point of view of internal observers, there is no possibility of avoiding this ambiguity. In such a position, it is natural that the priorities leading to maximization of the number of observer-moments in (*) are contingent on contemporary cosmological knowledge. As Krauss and Starkman (2000) vividly put it, “funding priorities for cosmological observations will become exponentially more important as time goes on.”

            Let us investigate the following imaginary situation. A civilization inhabiting a particular, sufficiently symmetric universe develops both theoretical and observational astronomy to the point where it can make useful working models of its universe as a whole. After an equivalent of Einstein of that particular world develops a formalism describing curved spacetime at the largest scales, an equivalent of Hubble discovers the universal expansion, and equivalents of Penzias and Wilson discover the remnants of the primordial fireball, leading cosmologists begin to support a flat baryonic universe with ΩB = Ω ≈ 1. At first it seems that all observations can be accommodated in the framework of such a model (we suppose that the light elements' abundances, for instance, are not inconsistent with such a high baryonic density, contrary to the situation in our observable universe!). Some circumstantial support for this model comes from ingenious theoreticians of that civilization, who discover that the coupling of a universal scalar field to gravity leads to exponential expansion during the very early epochs. This inflationary phase in the history of such a universe leads to the prediction that |Ω − 1| = ε ≈ 10^-5, while it is not clear whether the universe is marginally closed or marginally open. In the latter case (favored by most of the theoreticians in such a universe), the number of galaxies in their universe is infinite, and therefore such a universe offers very optimistic prospects for the survival of intelligence and life. There is no event horizon in such a universe, and the particle horizon is (very) roughly given as the age of the universe in light years, i.e. the maximal path traversed by light along the observer's past light cone. What are the prospects for intelligent beings to survive indefinitely in such a universe?

Gradually, bolder scientists begin to tackle physical eschatological issues. An equivalent of Dyson in that world reckons that this civilization can, in principle, survive indefinitely while exploiting sources of energy in a larger and larger volume (tmax = ∞). In addition, it is suggested by some extremely speculative and ingenious cosmologists that non-zero cosmological shear may manifest itself at later epochs, providing in this manner additional energy which will be proportional to the volume of the technologized space (although this option has not been studied enough). The predominant attitude toward maximization of (*) is, therefore, very optimistic and not characterized by any sense of urgency. There are physical grounds to expect Θmax = ∞.

            Suddenly, a new and unexpected twist occurs. New cosmological observations, and in particular two superbly designed projects detecting standard candles at large distances in order to make the best-fit estimate of the Hubble constant, indicate a spectacular overthrow of the ruling paradigm. After the dust settles (which takes years, and probably decades), the new paradigm suggests that the universe is still geometrically flat, but dominated by the cosmological constant term Λ in such a way that Ω = ΩB + ΩΛ = 1, ΩB = 0.1, ΩΛ = 0.9. Now the situation changes radically with respect to the envisaged number of possible observer-moments given by (*). The universe is now found to possess not only a particle horizon, but also an event horizon, defined as the surface through which any form of communication is impossible at all epochs. This is a consequence of the fact that after a phase of power-law expansion, the exponential expansion generated by Λ sets in, thus creating a second (future and final) inflationary phase in the history of the universe (see Appendix I for some technical details).

            There is further bad news for such a civilization. The decrease in metabolic temperature envisaged by the Dyson-equivalent cannot continue indefinitely, as was possible before the cosmological revolution, since the de Sitter universe possesses a minimal temperature, a circumstance following from quantum field theory and described in some detail in Appendix I. This is an extremely small temperature, but still finite, and below it nothing can be cooled without expending precious free energy. Thus, the temperature scaling may be continued only up to the final value of tmax in (*). In addition, one may not use any shear energy, since the equivalent of the so-called “cosmological no-hair” theorem guarantees that no significant shear remains during the exponential expansion (Gibbons and Hawking 1977).

            It seems obvious that the “cosmological revolution” will have important social and political consequences if maximizing Θ in (*) remains the legitimate goal of the civilization considered. There could be no more leisurely attitude within the framework of the second paradigm. Although survival cannot be indefinite, it still seems that it can be prolonged for a very, very long time—but only if one starts early enough. Besides funding for cosmological observations, one may expect that funding for interstellar and even intergalactic expansion will suddenly rise. Colonization of other stellar and (ultimately) galactic systems had better start early in the Λ-dominated universe!

 

 

4. DIFFICULTIES INVOLVED IN ESTIMATES

This story can teach us several lessons. It seems that we are currently in the middle of the “cosmological revolution” described above, although a less dramatic one, since there was never a consensus on the values of the cosmological parameters or the nature of the matter constituents in actual human cosmology. Also, the currently inferred value of the vacuum density ΩΛ is somewhat smaller, being about 0.7 (e.g. Perlmutter et al. 1999; Zehavi and Dekel 1999). However, the qualitative nature of the revolution and the implied potential change in the entire spectrum of human social and technological activities are analogous.

Of course, this counterfactual example may be regarded as rather conservative. One may imagine much more drastic changes in the dominant cosmological paradigm. Let us, for instance, suppose that for some reason most cosmologists had accepted the classical steady state theory of Bondi, Gold and Hoyle in the late 1940s, and that at the same time the development of radio astronomy had been delayed for several more decades. The attitude of humanitarian thinkers seeking to maximize Θ could very well have been encouraged by the steady state concept of continuous creation of low-entropy matter in a manner conserving the density of matter fields. Not only would one have had tmax = ∞, one would also expect lim t→∞ N(t) = ∞, and there would have been no plausible reason to expect σ(t) to be anything but a constant or even increasing function of time. From the particular human point of view, therefore, the steady state cosmology offered one of the most optimistic visions of the future.[5] (This is somewhat ironic, since the steady state model predicts essentially the same exponentially expanding spacetime as the Λ-dominated models.) As we know, after the fierce cosmological battles of the 1950s and early 1960s, the steady state theory was finally overthrown by the discoveries of QSOs and the cosmic microwave background, as described in the colorful recent history by Kragh (1996). There has been no consensus about the exact cosmological model accounting for the observations ever since, but it seems that we are on the verge of reaching one. However, it is conceivable that the cosmology of some other civilization passes directly from the steady state into the Λ-dominated paradigm. This seems, curiously enough, at least in one respect easier and more natural than what has occurred in actual history (see Appendix II). Such a paradigm shift must be accompanied by a shift in technological and social priorities if one expects Θ to be maximized.

However, the changes in cosmological paradigm currently underway in the real world should not be regarded as the end of the story. As mentioned above, perturbations on scales larger than the horizon scale are expected to enter our visible universe only at some late epochs. In the light of the argument above, one may expect that whatever cosmological paradigm is established on the timescale of the next ~10^1 years may be upset by the observation of perturbations on super-horizon scales (Krauss and Turner 1999). A recent intriguing study by Tipler (1999) shows that cosmological conclusions reached by local observations (i.e. those in the vicinity of the Milky Way) can be highly misleading, and that one should be on guard with respect to the results of any local measurement of cosmological parameters.

            Let us try to estimate the effects of belated technologization to the lowest order. It perhaps goes without saying that any such estimate is notoriously difficult, speculative, and on the very fringe of the domain of well-founded scientific speculation; some of the reasons, already mentioned, include our almost perfect ignorance of the evolutionary possibilities in the social domain, as well as of the influence of various technological advances on the average census of observer-moments per observer, ⟨σ(t)⟩. Even the simpler part of the problem, the estimate of the possibilities and modes of evolution of the number of observers N(t), poses almost intractable difficulties. We may be virtually certain that the current exponential population growth of humanity will be arrested at some future date, but whether it will result in a transition to some other (power-law?) growth function, or tend to a stable asymptotic limit, is impossible to establish at this time. There are certainly several timescales relevant for the history of an advanced technological community, which are related to the “quantized” nature of the physical resources alluded to above (and which are, ultimately, consequences of the cosmological power spectrum). These may roughly correspond to Kardashev's famous classification of advanced intelligent communities into three types, depending on the energy resources available (e.g. Tarter 2001 and references therein). However, there have been no estimates of the timescales required for transitions between the types (and of possible intermediate timescales corresponding to radically new technologies of energy extraction).

The baryonic mass of the Local Supercluster (henceforth LS) is of the order of 10^15 solar masses (Oort 1983, and references therein), and its luminosity several times 10^12 solar luminosities. Let us suppose that humanity will eventually technologize the entire spatial volume of the LS, and gather all its negentropy resources for information processing. Let us also suppose that, at whatever time humans (or posthumans) embark on the process of galactic and intergalactic colonization, the historical path of such colonization will be essentially the same; this is a reasonable assumption, since we expect the colonization timescale to be significantly smaller than the cosmological timescales characterizing large-scale changes in the distribution of matter within the LS. If we further assume (as many of the prominent anthropic thinkers, following Carter's well-known argument, do) that we are the first technological civilization within the LS, we may ask how many observer-moments (or conceivable human lives and experiences) we lose by postponing the onset of colonization by Δt. The simplest (“zero-order”) estimate is just to assume that the entropy produced by physical processes in the LS during that interval is proportional to the loss of information from the “pool” available to the presumed “Type IV” future hypercivilization (i.e. the one exploiting the energy resources of the LS). The major entropy-producing process at present (and on the timescales relevant to the issue; see Adams and Laughlin 1997) is stellar nucleosynthesis. Its products are high-entropy photons escaping to intergalactic (and inter-supercluster) space, where they are further redshifted due to the universal expansion. Using the Brillouin (1962) inequality (essentially the integral version of eq. (**)), we may write

 

ΔI ≤ θ [LLS / (10^12 L⊙)] (10^12 L⊙ Δt) / (kT ln 2) bits,

 

where L⊙ is the Solar luminosity, LLS is the luminosity of the LS, k is the Boltzmann constant, and θ is the (time-averaged) fraction of free energy which the hypercivilization converts into the work of its computing devices. We expect the temperature T at which computations are performed to be close to the temperature of the cosmic microwave background, since the timescale even for colonization of a huge object like the LS is short by cosmological standards, so that such colonization is essentially isothermal. The quantity of information lost per century of delay in starting the colonization is astonishing by any standard. For a conservative estimate of θ = 0.1, and using Dyson’s (1979) estimate of the “complexity” of an average present-day human being, ~10^23 bits (a quantity which is likely to grow in the future, especially in the posthuman stage, but which is still useful as a benchmark), the number of potentially viable human lifetimes lost per century of postponing the onset of galactic colonization is simply (if we assume that the luminosity fraction in the equation above is unity, which is probably an underestimate by a factor of a few)

 

Nlost ≈ ΔI / (10^23 bits) ≈ 10^46. (!!!)

 

Of course, this is only the total integrated loss; if for some currently unknown reason the colonization of the LS is impossible or unfeasible, while colonization of some of its substructures is possible and feasible, this huge number should be multiplied by the fraction of accessible baryonic matter currently undergoing significant entropy increase (essentially luminous stars). On the other hand, our estimate is conservative for the following reasons. There are other entropy-producing processes apart from stellar radiation (notably, stellar black-hole formation becomes more and more important as time passes), so the lost quantity of information is likely to be higher. Another reason why this estimate should be taken as an absolute lower limit is the entire spectrum of existential risks (see Bostrom 2001b), which have not been taken into account here. Namely, the realistic history of a posthuman civilization would be the convolution of the integrand functions in (*) with a risk function frisk(t) describing the cumulative probability of existential risks up to the epoch t (and their presumed impact on the observer-moment tally). Obviously, this function would be biased toward higher values at small values of t (as measured, for instance, from the present epoch for humans), since smaller civilizations—i.e. those not colonizing the universe—are more prone to all sorts of existential risks. Thus, the risk inherent in a “colonize later” policy makes our estimate very conservative (or “optimistic” from the point of view of lost observer-moments). However, this estimate possesses the virtue of being a natural extension of Dyson’s concept of the development of a Type II (Kardashev) civilization: in order to truly technologize its domicile planetary system, an advanced society must strive to capture and exploit the entire energy output of its home star, via Dyson spheres or similar contraptions (Dyson 1960). Mutatis mutandis, the same arguments apply to larger scales of density fluctuations, and in the Λ-dominated cosmological model we are supplied with a natural cut-off at large scales.
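
The zero-order estimate above can be reproduced with a few lines of arithmetic. The sketch below assumes the inputs used in the text (T close to the cosmic microwave background temperature, a luminosity fraction of unity, θ = 0.1, and Dyson's ~10^23 bits per human lifetime); it is an order-of-magnitude illustration, not a definitive calculation.

# Back-of-the-envelope sketch of the Section 4 estimate: information-processing
# capacity lost per century of delayed colonization of the Local Supercluster
# (Brillouin inequality), expressed as equivalent human lifetimes.
# Assumed inputs: T = CMB temperature, luminosity fraction of unity
# (i.e. L_LS = 1e12 L_sun), theta = 0.1, ~1e23 bits per human lifetime (Dyson 1979).
import math

K_B   = 1.380649e-23          # Boltzmann constant [J/K]
L_SUN = 3.828e26              # Solar luminosity [W]
YEAR  = 3.156e7               # one year [s]

def lost_lifetimes(delay_years=100.0, theta=0.1, lum_fraction=1.0,
                   T=2.73, bits_per_life=1e23):
    """Potential human lifetimes lost per `delay_years` of postponed colonization."""
    L_LS = lum_fraction * 1e12 * L_SUN               # LS luminosity [W]
    energy = L_LS * delay_years * YEAR               # free energy radiated away [J]
    bits = theta * energy / (K_B * T * math.log(2))  # Brillouin inequality bound
    return bits / bits_per_life

print(f"~{lost_lifetimes():.1e} potential human lifetimes per century of delay")

With these assumptions the result is of the order of 10^46 lifetimes per century, in agreement with the estimate above.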

 

 

5. SUMMARY

 

The above testifies to the simple truth that awareness of the cosmological situation is a first step toward true long-term planning for any community of intelligent observers interested in self-preservation and in achieving the maximum of its creative potential. However, in an evolving universe, the factor of timing seems to set stringent limits on the efficiency with which such intelligent communities can fulfill their goals. While those limits are certain to be the subject of much debate and discussion in the future, the very fact of their existence makes cosmology interesting from a transhumanist perspective. Decision-making performed today, as far as humanity is concerned, may have enormous consequences on very long timescales. In particular, an overly conservative approach to space colonization and technologization may result (and in fact might already have resulted) in the loss of a substantial fraction of all possible observer-moments humanity could have achieved. It is our modest hope that this cursory study will contribute to a wider and livelier discussion of these issues, and to reaching other, more precise predictions for intelligence's cosmological future.

Finally, let us note that this approach is not necessarily the only manner in which cosmology may enter our everyday life. If some approaches to the foundations of quantum mechanics and its links to human consciousness are correct, we may find ourselves in a situation where the cosmological boundary conditions determine the nature of our perceptions and self-awareness (Wheeler 1988; Dugić, Raković and Ćirković 2000). This differs markedly from our approach in this essay, which is based on classical cosmology (as well as classical logic and probability theory). One may imagine that the future correct physical theory of consciousness will incorporate these elements, and that they will a fortiori play some role in any policy-making attempts based on such a theory.

 

APPENDIX I

 

The behavior of a universe with a large positive vacuum energy density Λ—commonly (and somewhat imprecisely) known as the cosmological constant—has been investigated in several publications even before the cosmological supernovae began to throw light on its reality (Carroll, Press and Turner 1992; Krauss and Turner 1999; Ćirković and Bostrom 2000). In the Λ-dominated epoch, the scale factor behaves according to the de Sitter law, i.e.

 

R(t) ∝ exp(HΛ t),

 

where the effective Hubble constant is given as HΛ = H0 √ΩΛ. In such a universe, after a transition period between matter-domination and vacuum-domination, there appears an event horizon of the size

 

dEH = c / (H0 √ΩΛ) ≈ 3 × 10^9 h^-1 ΩΛ^-1/2 pc,

 

where c is the speed of light, H0 ≡ 100 h km s^-1 Mpc^-1 is the present-day Hubble constant (parametrized in such a way that h is a dimensionless number of order unity), and ΩΛ is the cosmological density of the vacuum. Beyond this distance no communication is possible at any time. This is very different from the situation in matter-dominated universes, where the contribution of the cosmological constant is very small or completely vanishing; there, only the so-called particle horizons exist, representing temporary obstacles to communication (i.e. any two arbitrarily chosen points will come into causal contact in finite time).

The minimal temperature of the exponentially expanding (de Sitter) universe characterized by the cosmological constant Λ is given by (Gibbons and Hawking 1977):

 

Tmin = ħ HΛ / (2π k) = (ħ H0 / 2π k) √ΩΛ ≈ 4 × 10^-30 h √ΩΛ K,                                     (I.2)

 

where k is the Boltzmann constant and ħ is the reduced Planck constant. The expression under the square root on the right-hand side of (I.2) is close to unity, and h ≈ 0.6. Therefore, this temperature is low beyond description; but as longer and longer future timescales unfold, its finite value precludes the asymptotic lowering of the metabolic rate of the intelligent creatures of the far future, suggested by Dyson (1979) as a method for achieving immortality (Krauss and Starkman 2000).
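
For illustration, the two quantities above can be evaluated numerically. The sketch below assumes the values quoted in the text (h ≈ 0.6, ΩΛ ≈ 0.7) and standard physical constants; it is a rough check, not a precise cosmological calculation.

# Numerical sketch of the Appendix I quantities: the de Sitter event-horizon
# scale c / H_Lambda and the Gibbons-Hawking minimal temperature
# T_min = hbar * H_Lambda / (2 * pi * k), for H0 = 100 h km/s/Mpc.
import math

C    = 2.998e8      # speed of light [m/s]
HBAR = 1.055e-34    # reduced Planck constant [J s]
K_B  = 1.381e-23    # Boltzmann constant [J/K]
MPC  = 3.086e22     # one megaparsec [m]
PC   = 3.086e16     # one parsec [m]

def de_sitter_quantities(h=0.6, omega_lambda=0.7):
    """Return (event-horizon distance in pc, minimal temperature in K)."""
    H0 = 100.0 * h * 1.0e3 / MPC                       # Hubble constant [1/s]
    H_lambda = H0 * math.sqrt(omega_lambda)            # effective Hubble constant
    d_eh = C / H_lambda                                # event-horizon distance [m]
    T_min = HBAR * H_lambda / (2.0 * math.pi * K_B)    # Gibbons-Hawking temperature
    return d_eh / PC, T_min

d_pc, T = de_sitter_quantities()
print(f"event horizon ~ {d_pc:.2e} pc, minimal temperature ~ {T:.2e} K")

For h ≈ 0.6 and ΩΛ ≈ 0.7 this gives an event horizon of a few times 10^9 pc and a minimal temperature of the order of 10^-30 K, consistent with the expressions above.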

 

 

APPENDIX II

 

Ironically enough, it would not be extremely difficult to confuse the classical steady-state cosmology with Λ-dominated ones if the level of sophistication of (neo)classical cosmological tests (e.g. Sandage 1988) were not very high. Namely, the major observational parameter used in the empirical discrimination between world models is the deceleration parameter q0, defined as

 

q0 ≡ − (d²R/dt²) R / (dR/dt)²  evaluated at the present epoch t0,

 

where R is the cosmological scale factor. Of course, this definition is not of much practical value. Instead, it can be shown that in standard relativistic Friedmann-Robertson-Walker cosmologies, q0 is related to the densities of matter and vacuum in the following way (with the usual assumption of negligible pressure):

 

q0 = Ωm/2 − ΩΛ,

 

which delivers the classical value of 0.5 for the Einstein-de Sitter model (Ω = Ωm = 1, ΩΛ = 0), but becomes strongly negative for vacuum-dominated models. In particular, for the extreme model considered above (Ωm = 0.1, ΩΛ = 0.9), we have

 

            q0 = – 0.85.

 

On the other hand, it is well known that the deceleration parameter in the steady-state model is

 

q0 = const. = – 1.

 

Obviously, the last two values are close enough for the clear and unequivocal discrimination between them to be an extremely hard observational task.
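
This comparison can be reproduced directly; the short sketch below simply evaluates q0 = Ωm/2 − ΩΛ for the two Friedmann models mentioned above and sets it beside the steady state value.

# Sketch of the Appendix II comparison: the deceleration parameter
# q0 = Omega_m / 2 - Omega_Lambda for pressureless FRW models, versus the
# constant q0 = -1 of the classical steady state model.
def q0(omega_m, omega_lambda):
    """Deceleration parameter of a pressureless FRW model."""
    return omega_m / 2.0 - omega_lambda

models = {
    "Einstein-de Sitter (Om = 1.0, OL = 0.0)": q0(1.0, 0.0),   # +0.50
    "Lambda-dominated   (Om = 0.1, OL = 0.9)": q0(0.1, 0.9),   # -0.85
    "steady state (by definition)":            -1.0,
}
for name, value in models.items():
    print(f"{name:42s} q0 = {value:+.2f}")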

 

 

Acknowledgements. I use this opportunity to express my gratitude to Olga Latinović, Vesna Milošević-Zdjelar, Srdjan Samurović, Milan Bogosavljević and Branislav Nikolić for their help in finding some of the references. The manuscript enormously benefited from discussions with Nick Bostrom, Petar Grujić and Fred C. Adams. Kind advice of Robert J. Bradbury, Mark A. Walker and Mašan Bogdanovski is also appreciated. Technical help of my mother, Danica Ćirković, has been invaluable in concluding this project.

 

 

References

Adams, F. C. and Laughlin, G. 1997, Reviews of Modern Physics 69, 337.

Adams, F. C. and Laughlin, G. 1999, The Five Ages of the Universe (The Free Press, New York).

Barrow, J. D. and Tipler, F. J. 1986, The Anthropic Cosmological Principle (Oxford University Press, New York).

Bondi, H. and Gold, T. 1948, Monthly Notices of the Royal Astronomical Society 108, 252.

Bostrom, N. 2001a, Synthese 127, 359.

Bostrom, N. 2001b, preprint available at http://www.nickbostrom.com/existential/risks.pdf.

Bostrom, N. 2002, Anthropic Bias: Observation Selection Effects (Routledge, New York).

Brillouin, L. 1962, Science and Information Theory (Academic Press, New York).
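
Carroll, S. M., Press, W. H. and Turner, E. L. 1992, Annual Reviews of Astronomy and Astrophysics 30, 499.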

Ćirković, M. M. 2000, Serbian Astronomical Journal 161, 33 (preprint astro-ph/0010210).

Ćirković, M. M. and Bostrom, N. 2000, Astrophysics and Space Science 274, 675.

Ćirković, M. M. and Radujkov, M. 2001, Serbian Astronomical Journal 163, 53 (preprint astro-ph/0112543).

Davies, P. C. W. 1994, The Last Three Minutes (Basic Books, New York).

Dugić, M., Raković, D. and Ćirković, M. M. 2000, contributed paper to 3rd International Conference on Cognitive Science, Ljubljana, Slovenia, October 17-19, 2000 (preprint quant-ph/0010042).

Dyson, F. J. 1960, Science 131, 1667.

Dyson, F. J. 1979, Reviews of Modern Physics 51, 447.

Frautschi, S. 1982, Science 217, 593.

Gibbons, G. W. and Hawking, S. W. 1977, Physical Review D 15, 2738.

Gott, J. R. 1993, Nature 363, 315.

Hanson, R. 1998, Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization, preprint available at http://hanson.gmu.edu/filluniv.pdf

Hoyle, F. 1948, Monthly Notices of the Royal Astronomical Society 108, 372.

Islam, J. N. 1983, The Ultimate Fate of the Universe (Cambridge University Press, Cambridge).

Kragh, H. 1996, Cosmology and Controversy (Princeton University Press, Princeton).

Krauss, L. M. and Turner, M. S. 1999, General Relativity and Gravitation 31, 1453.

Krauss, L. M. and Starkman, G. D. 2000, The Astrophysical Journal 531, 22.

Leslie, J. 1996, The End of the World: The Ethics and Science of Human Extinction (Routledge, London).

Milne, E. A. 1940, Astrophysical Journal 91, 129.

Olum, K. D. 2001, Philosophical Quarterly 52, 164.

Oort, J. H. 1983, Annual Reviews of Astronomy and Astrophysics 21, 373.

Peebles, P. J. E. 1993, Principles of Physical Cosmology (Princeton University Press, Princeton).

Perlmutter, S. et al. 1999, Astrophysical Journal 517, 565.

Rees, M. J. 1969, The Observatory 89, 193.

Sandage, A. 1988, Annual Reviews of Astronomy and Astrophysics 26, 561.

Sandberg, A. 2000, Journal of Evolution and Technology 5 (available at http://www.jetpress.org/volume5/Brains2.pdf).

Tarter, J. 2001, Annual Reviews of Astronomy and Astrophysics 39, 511.

Tipler, F. J. 1982, The Observatory 102, 36.

Tipler, F. J. 1986, International Journal for Theoretical Physics 25, 617.

Tipler, F. J. 1994, The Physics of Immortality (Doubleday, New York).

Tipler, F. J. 1999, Astrophysical Journal 511, 546.

Wheeler, J. A. 1988, IBM Journal of Research and Development 32, 4.

Zehavi, I. and Dekel, A. 1999, Nature 401, 252.

 

Footnotes

[1] The latter presents a separate problem, far from being solved in anthropic thinking. What constitutes a reference class is by no means clear; some recent discussions (from different premises!) can be found in Bostrom (2001a) and Olum (2001).

[2] We tacitly assume that Θ is well defined for each history. This conjecture may be impossible to prove, but it does seem plausible in the light of our belief that the reference class problem will eventually be solved.

[3] An important assumption here is that the histories of intelligent species are ergodic, i.e. that ensemble averaging is the same as temporal averaging. Since ergodicity conjectures are notoriously difficult to prove even for simple physical systems, we cannot hope to improve upon this assumption in the present case. Note, however, that most transhumanist issues are inherently ergodic.

[4] From the mathematical point of view, such a transformation should be non-singular except possibly at the boundary of the relevant region. Such is the case with the usually suggested transformations; for instance, in the classical Milne universe, the connection between the two timescales is τ = ln (t/t0) + t0, where t0 is a constant (e.g. Milne 1940). The zero point of t-time occurs in the infinite past of τ-time.

[5] Although, of course, such a future could hardly be called eschatological, since physical eschatology is trivial in an unchanging universe. In addition, there is an entire host of very problematic features of the steady state theory following from the application of the Strong Anthropic Principle, since the very absence of obstacles to the unlimited growth of civilizations in such a universe would be a clear sign that there must be a factor sharply limiting their growth—since we have not perceived advanced civilizations of arbitrary age in our past light cone (Tipler 1982; Barrow and Tipler 1986). For the purposes of our present discussion, however, we are justified in neglecting this complication, since it is always possible to imagine a logically consistent cosmological model that very slowly passes from a quasi-stationary to an evolutionary phase (similar to the historically interesting Eddington-Lemaître model; see Ćirković 2000).


 



© 2004 Journal of Evolution and Technology.  All Rights Reserved.

Published by the Institute for Ethics and Emerging Technologies

Mailing Address: James Hughes Ph.D., Williams 229B, Trinity College, 300 Summit St., Hartford CT 06106 USA


ISSN: 1541-0099