PEER COMMENTARY ON MORAVEC'S PAPER


Robin Hanson: 18/3/98

Moravec's article offers a provocative and hopeful hypothesis, and some
evidence and reasoning to support it. The article has made me
seriously consider becoming much more hopeful about AI timescales. The
major flaw in the article, however, is that it does not attempt to be
scholarly in the sense of anticipating and responding to possible
objections. The article seems more like a chapter of a book aimed at a
popular audience.

I'm sure Moravec could think of the following objections, but I'll
mention them because he didn't.

1. Moravec argues that AI marked time for 30 years because in 1960 AI
pioneers had 1 MIPS supercomputers, and in 1990 typical AI workstations
did 1 MIPS. Is the argument that progress is driven by the MIPS of the
median researcher's computer? If so, the implication would be that we
could increase progress greatly by giving supercomputers to a few
researchers and firing all the rest. Would Moravec endorse this
suggestion?

Alternatively, is the argument that progress is driven by the maximum
MIPS available to any AI researcher? If so, then Moravec needs to give
evidence about what this max was between 1960 and 1990. I thought
Connection Machines were used by AI researchers before 1990, for
example. Is it only the exceptional fields he names that had access to
more than 1 MIPS?

2. The fields he mentions where progress has tracked the speed of
machines used, chess, image analysis, voice recognition, and
handwriting recognition, are all fields which many AI researchers long
avoided exactly because they perceived them to be strongly
CPU-limited. Those researchers instead chose fields they perceived to
be knowledge-limited, limited by how much their programs knew. And
such researchers explain slow progress in their chosen fields via large
estimates of the total knowledge which needs to be encoded.

So what is the argument that these fields are actually CPU-limited,
contrary to their researchers' impressions? After all, if these fields
are knowledge limited, then there is no particular reason to expect AI
abilities in these fields to track available CPU.

These are the sorts of issues I would think would be addressed in a
more scholarly version of this paper.

Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884 140
Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614


Moravec replies: 18/3/98

Well, yes, it IS a popular chapter! That's pretty much my style, even
in technical papers. I'm better at making up ideas than fighting for
them, and prefer to leave the battle to any others who more enjoy that
sort of thing. Leaves me free to cause more mischief elsewhere!

1. AI didn't have greater computer power for a couple of reasons.

A minor one was that McCarthy and others didn't believe it was
necessary, an attitude conveyed to generations of students,
especially on the abstract reasoning side, and still held by many.

A major reason was that AI never had enough money to afford a
supercomputer. Even pooling the few millions spent on it over decades
wouldn't have bought a serious supercomputer, let alone supported its
upkeep. A lot of effort was wasted over the decades in robotics
programs trying to build cheap special-purpose supercomputers for
vision and the like. It always took five years or so before the
hardware and compilers were working well enough to make them usable
for actual vision, and by then the power could be had cheaper in
general-purpose computers.

The Connection Machine was an especially big one of those efforts.
Several 4,096-processor CM-2 machines were given to a handful of AI
places, like SRI. The CM-2 was an array of tiny processors linked in a
grid. It was very good for cellular automata and finite element
calculations, but the slow communication made it a pain for less
straightforwardly gridlike things. I tried to fit my "sensor evidence
rays into spatial grid map" robot program onto a 4,096-processor CM-2
during a 1992 sabbatical at Thinking Machines, in a half-dozen
different ways, but because there were two separate grids that had to
be brought into different registrations repeatedly, the communications
delays prevented me from ever getting more than about 40 MIPS
effectively. At that time the computer on my desk was a 20 MIPS
Sparc-2, so the advantage of using a limited, expensive, occasionally
available machine with idiosyncratic programming was pretty limited.
Far better, cheaper and more convenient to simply use two Sparc-2s.
Other users had the same experience, and the CM-2s in AI labs got very
little use. The later CM-5 machine, a bunch of Sparcs interconnected
by a more flexible tree network, would have been more useful, but at a
few million $ for the smallest, they were too expensive for use by any
AI project that I know of.

Anyway, it was cheaper to use the workstations already on your
network. These earned their keep by being available for individual
users, but could be used in parallel occasionally. I myself have run
learning programs on a few dozen machines at a time, for weeks over
some holiday periods. So have many others. But it's impractical to
use this approach routinely to control a robot: the users start to
complain about slowdowns in their interaction.

Robin suggests that pooling resources could have increased
productivity greatly. But if we had confiscated the equipment of 99%
of the AI sites, and given them to the remaining 1%, we would have
increased individual computer power 100 fold, about a seven year
advantage. But the political fallout would probably have reduced
funding by 90%.

So, yes, only a few exceptional areas had supercomputer power
available. Remember, there were only a handful of supercomputers
available, and almost all of them were at the national labs designing
nuclear weapons or at the NSA cracking codes. Even the national
weather service was relegated to lower cost machines. The CDC and
Cray machines used in chess were just being tested before being
shipped to the weapons labs.

2. I think Newell & Simon, McCarthy and their followers made a giant
mistake when they thought they could achieve full intelligence by just
skimming off the conscious surface of human thought. Most of our
intuitive smarts are unconscious, and require teraops as well as
terabytes of accumulated knowledge. In another chapter of the book I
propose a several stage bottom up evolution of robots, paralleling the
evolution of our own brain, to create the necessary foundation.


Robin Hanson follows up: 23/3/98

I asked:

Is the argument that progress is driven by the MIPS of the
median researcher's computer? If so, the implication would be that we
could increase progress greatly by giving supercomputers to a few
researchers and firing all the rest. Would Moravec endorse this
suggestion?

Alternatively, is the argument that progress is driven by the maximum
MIPS available to any AI researcher?

Moravec responded:

Robin suggests that pooling resources could have
increased productivity greatly. But if we had confiscated the
equipment of 99% of the AI sites, and given them to the remaining
1%, we would have increased individual computer power 100 fold,
about a seven year advantage. But the political fallout would
probably have reduced funding by 90%.

But a 90% funding cut, with the remaining funding given to 1% of
researchers, would still have increased progress according to
your logic. And this logic would apply just as well to today.
So may we assume you endorse such a funding proposal?

I also asked:

Those researchers instead chose fields they perceived to
be knowledge-limited, ...
So what is the argument that these fields are actually CPU-limited,
contrary to their researchers' impressions? After all, if these fields
are knowledge limited, then there is no particular reason to expect AI
abilities in these fields to track available CPU.

Moravec replied:

2. I think Newell & Simon, McCarthy and their followers made a giant
mistake when they thought they could achieve full intelligence by just
skimming off the conscious surface of human thought. Most of our
intuitive smarts are unconscious, and require teraops as well as
terabytes of accumulated knowledge. In another chapter of the book I
propose a several stage bottom up evolution of robots, paralleling the
evolution of our own brain, to create the necessary foundation.

So do you or don't you grant that your claim that "the performance
of AI machines tends to improve at the same pace that AI researchers
get access to faster hardware" may not hold regarding the project
of acquiring those "terabytes of accumulated knowledge"?


Moravec replies: 24/3/98

To paraphrase, Robin probes the depth of my conviction in the direct
connection between computer power and AI.

I'm sure that in extreme scenarios (say 100 Teraops dumped on a few
researchers overnight) other bottlenecks would come to the fore. But,
under current circumstances, I think computer power is the pacing
factor for AI. As personal computers become smarter, commercial
research will become more important, and academic AI will be more in
the position of training and filling niches. Maybe Microsoft or
someone else will decide to greatly increase the computer power
available to its researchers, speeding up the work somewhat, even if
not in proportion to the power increase. Anyway, I expect those
decisions will be more distributed and competitively motivated than
they are now. Commercial competition will seek the optimum trade-off
between faster typewriters and more monkeys.

I assume that AI can be evolved by a feasible (but non-zero!) amount of
engineering trial and error because biological evolution evolved
natural intelligence in a limited number of survival experiments (no
more than about 10^18, including all the failures), and engineering has
recapitulated a lot of that ground already.

I think it will be appropriate soon to make bigger AI systems, and
perfecting those will require a lot more attention to detail,
experimentation and data gathering than has been mustered so far. My
hope for achieving it is a soon-to-begin commercial growth of
intelligent robotics, eventually into an industry much bigger than
today's information industry. Incremental steps in most areas critical
to AI will translate into commercial advantage in robots more directly
than they do in normal computers. Computers must constantly interact
with humans anyway, and so have the option of relying on human intelligence
to avoid the hard parts of various problems (like getting their data
from the physical world, or manifesting their results in it). For
robots, the hard parts are front and center. I lay this out in the new
book.


Moravec expands: 28/3/98

Loosely inspired by Robin Hanson's engaging economic and social models
of the consequences of various extreme technological contingencies, I
decided to make a simple model of my AI progress/computer power
intuition. Using simplified versions of my assumptions, we get the
following:

Suppose a researcher costs $100K per year, and a baseline workstation,
with full support, also costs $100K per year.

In year 1, let a baseline computer have 100 MIPS. Assume that 10^8
MIPS is required to achieve an AI with full human performance. In any
given year, let the amount of computer power vary linearly with the
cost of the computer. Also assume that the cost of computer power
halves each year.

Scenario 1 is like today: let there be 1,000 AI researchers, each with
baseline computing. This costs $200 million per year. With a 10%
return, this represents a capital investment of $2 billion. These
researchers will work to produce full AI, but won't succeed until the
baseline computer grows to 10^8 MIPS. That will be year 20.

Scenario 2, we fire half the researchers, and use the money to double
the computer power for the rest. Now full AI arrives in year 19, if
the remaining 500 researchers can make all the necessary discoveries in
19 years that the 1,000 researchers above made in 20 years.

Scenario 3, we fire 7/8 of the researchers. Now each survivor has 8
times as much computing, and AI could be ready in year 17, if the
remaining 125 researchers can pull the accelerated load.

Scenario 4, we fire all but 10 researchers. We'd better make sure
they're the best ones; they have a big load to pull. Each has a $10
million/year supercomputer to work with and nursemaid. Being uncommon
machines, supercomputers don't have the software support or reliability
of standard machines. But their power will be adequate for full AI in
year 14. If the 10 researchers manage to complete their Herculean task
in 14 years, they may still have to wait several more years before
their results become affordable to the masses, because few applications
are worth the $10 million per year an AI costs in year 14.
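
[The arithmetic behind the four scenarios can be checked with a short
script. The sketch below is a reconstruction under the assumptions
stated above; the even split of a fixed $100M/year hardware budget and
the rounding convention are inferences chosen to reproduce the quoted
years, not details given in the text.]

    # Reconstruction of the scenario arithmetic (assumptions as stated above).
    import math

    BASELINE_COST   = 100_000        # $/year for a baseline workstation
    BASELINE_MIPS   = 100            # baseline machine speed in year 1
    TARGET_MIPS     = 1e8            # assumed requirement for human-level AI
    HARDWARE_BUDGET = 1000 * BASELINE_COST   # the $100M/year now spent on machines

    def arrival_year(researchers: int) -> int:
        """Year in which each remaining researcher's machine reaches TARGET_MIPS,
        assuming the hardware budget is split evenly among them, power scales
        linearly with spending, and MIPS per dollar doubles every year."""
        spend_each = HARDWARE_BUDGET / researchers                # $/year per machine
        year1_mips = BASELINE_MIPS * spend_each / BASELINE_COST   # power in year 1
        return math.ceil(math.log2(TARGET_MIPS / year1_mips))     # doublings needed

    for n in (1000, 500, 125, 10):
        print(f"{n:4d} researchers -> full AI around year {arrival_year(n)}")
    # prints years 20, 19, 17 and 14, matching scenarios 1 through 4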

Anyway, viewing AI as a recapitulation of the evolution of natural
intelligence, I think ten researchers can't do enough trial and error
to do the job in such a short time. Information technology overall has
been recapitulating nervous system evolution at about ten-million-fold speed, but
that's because hundreds of thousands of workers have made frequent
small and occasional large contributions. A lot of the contributions
depend on luck, and luck depends on having enough lottery tickets.


Robin Hanson replies: 28/3/98

This is the start of a model, but to complete it we need to say how
many trial and error steps are needed, how much each one costs, and how
the number of trials varies with the number of researchers. Or better
yet, we need an economic "production function" describing the rate of
successful trials given the MIPS of machines and the number of
researchers involved. Then given the number of trials needed, and the
expected rate of hardware improvement, we could derive the optimal
research plan.

Note that if there were no diseconomies with respect to the number of workers, we'd
want to stop research now, then hire millions of researchers the day
the hardware is cheap enough.
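
[As one entirely hypothetical illustration of such a production
function: a Cobb-Douglas form with diminishing returns in both inputs.
The functional form, the exponents, and the trial target below are
made-up parameters for the sake of the example, not anything proposed
in this exchange.]

    # Toy production function: successful trials per year given the number of
    # researchers and the MIPS of each researcher's machine.
    def trials_per_year(researchers, mips, A=1.0, alpha=0.5, beta=0.5):
        # alpha < 1 encodes diseconomies with respect to the number of workers;
        # alpha = 1 corresponds to the no-diseconomies case noted above.
        return A * researchers**alpha * mips**beta

    def years_to_finish(researchers, trials_needed, mips0=100.0):
        """Years until cumulative successful trials reach trials_needed,
        with per-machine MIPS doubling every year."""
        done, year, mips = 0.0, 0, mips0
        while done < trials_needed:
            done += trials_per_year(researchers, mips)
            mips *= 2
            year += 1
        return year

    # Compare a large and a small team against an arbitrary trial target.
    for n in (1000, 10):
        print(n, "researchers:", years_to_finish(n, trials_needed=1e6), "years")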


Anders Sandberg: 10/3/98

General comments: A readable essay on a popular level. The estimates
for human brain capacity appear to be fairly robust.

It would be a good idea to include more references, especially as
examples in the first paragraph and the discussion of quantum and
nano-logic.

The inclusion of the data in an appendix is a good idea. I tried to
fit it to a hyperbolic curve, but it seems to be just
superexponential. :-)

A big problem is the use of MIPS as a measure of computation. It is
very sensitive to the kind of benchmark used and the architecture
(RISC vs. CISC). For comparisons between similar programs running on
similar machines it probably works well, but it is not clear that it
gives any useful information when we try to compare systems that
are very different. However, since there are no better measures, MIPS
will have to do. Most likely, estimates will just be order-of-magnitude
estimates, and then the uncertainty in the measure will become less
important.

A more serious problem is that we do not know if the retina and visual
system really can be taken as a good estimate for the brain and
cognitive systems (just as computer vision may not be representative of
AI). The retina is a highly optimized and fairly stereotypical neural
structure, which can introduce a significant bias. Its 2D structure may
also fool us into mis-estimating its capacity; it has to be 2D to
function, which means that distances are increased. In the cortex the structure appears to
be a dense 3D network, which can have significantly more computing
power. So using the retina as an estimate for the brain is very
uncertain.

The calculation of total brain power as estimated from the retina
seems to be slightly wrong (most likely a trivial error, given that
the correct number of neurons is mentioned later; volume cannot be
compared due to the differences in tissue structure and
constraints). The human brain has around 10^11-10^12 neurons, which
makes it just 1,000-10,000 times larger than the retina with its 10^8
neurons. Hence the estimate of 10^8 MIPS to match human performance
may be one or two orders of magnitude too small.

Another rough estimate would be based on cortical assemblies and what
is known from neural simulations. The 30*10^9 cells of the cortex are
apparently organized into cortical columns, each containing around 100
neurons and representing a single "state" or unit of cognition. That
gives us around 10^8 columns. These are sparsely interconnected, with
around 1% connections, giving a total number of 10^14 column-column
links. Given around 100 spikes per second, we get 10^16 spike-events
per second along these links. If each spike-event requires one
instruction to handle (not unreasonable on dedicated hardware), then we
get 10^10 MIPS.
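
[The estimate strings together a handful of round numbers; the check
below just reproduces that arithmetic, using the figures as rounded in
the text.]

    # Order-of-magnitude check of the cortical-column estimate above.
    columns      = 1e8      # ~3e10 cortical neurons / 100 per column, rounded to ~10^8
    connectivity = 0.01     # each column linked to ~1% of the others
    links        = columns * columns * connectivity    # ~10^14 column-column links
    spike_rate   = 100      # spikes per second along each link
    spike_events = links * spike_rate                  # ~10^16 spike-events per second
    instructions_per_event = 1                         # assumed, on dedicated hardware
    mips = spike_events * instructions_per_event / 1e6
    print(f"{mips:.0e} MIPS")                          # -> 1e+10 MIPS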

A small factual error in the section started by the discussion of
insect nervous systems: only synapses seem to be trimmed away, not
whole neurons.

The estimate of one byte per synapse seems to be borne out by
modelling experience. This would give the brain an approximate
capacity of 10^14 bytes.

The quantum computer section curiously lacks a reference to the bulk
spin resonance results of Gershenfeld and Chuang (N. Gershenfeld and
I. Chuang, Science, 275, pp. 350-356, 1997,
http://physics.www.media.mit.edu/publications/papers/97.01.science.pdf ,
http://physics.www.media.mit.edu/publications/papers/97.09.itp.pdf ).

What about special purpose hardware for neural modelling?

How much do algorithms matter?


Moravec replies: 18/3/98

I just use MIPS as a convenient common notation. My numbers for
recent machines are obtained from various benchmark suites, Spec-92 (1
Spec92 = 1 MIPS), Spec-95 (1 Spec95 = 40 MIPS), MacBench (1 Macbench =
0.66 MIPS). These exercise cache, calculation, memory and various
other aspects in a fair way, so are pretty representative of the
performance most programs get. They usually agree to within better
than a factor of two.
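
[The quoted factors amount to a small lookup table; the snippet below
merely restates them, for rough conversions of published benchmark
scores.]

    # Rough benchmark-to-MIPS conversions quoted above (order of magnitude only).
    MIPS_PER_UNIT = {"SPEC92": 1.0, "SPEC95": 40.0, "MacBench": 0.66}

    def to_mips(score: float, benchmark: str) -> float:
        return score * MIPS_PER_UNIT[benchmark]

    print(to_mips(10, "SPEC95"))   # a 10-SPEC95 machine rates at about 400 MIPS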

The retina is untypical, and I would use some other structure if I had
convincing computational analogs. But I think volume (or mass: it's
all water) is a far better extrapolator than neuron count. Evolution
can just as easily choose two small neurons as one twice as large.
The cost in metabolism and materials is the same. So I would expect
brain structures to maximize for effective computation per volume, not
per neuron. After all, one neuron with ten thousand synapses might be
the computational match of 50 neurons with 50 synapses each.

The retina gives one measure of computation per volume. Because
vision is so important, and because the retina must be transparently
thin, the retina may be evolutionarily more perfected,
i.e. computationally dense, than the average neural structure. If so,
my estimate for the brain is an overestimate.

On the other hand, not having the transparency constraint may have
given evolution more degrees of freedom for optimization in the rest
of the brain, and thus allowed for a better solution there. In that
case, my brain computation number would be an underestimate.

Unlike the reviewer, I don't think counting neural switching events is
a very useful way to measure computation, because structural
constraints can make a huge difference in the relation between
primitive switching and end-result computation. And it is the final
computation that matters, not the fuss in doing it.

In [a forthcoming book, Robot, Being: from mere machine to transcendent mind. Oxford Univ. Press.] I discuss why control and learning organizations
more situation-specialized than neural nets seem to be much superior
for robots. The brain is stuck with shallow masses of very slow
components, which limit the possible solutions, but robots, with fast
serial processors, are not! But I think that discussion is beyond the
scope of this article.

Dan Clemmensen: 21/3/98

Dr. Moravec's paper looks like a good overview, and is very readable.
The paper provides strong support for its thesis that for human-level
AI, "required hardware will be available in cheap machines in the
2020s". However, the paper makes the assumption that the "cheap
machine" must be a general-purpose computer.

There are strong historical precedents for this assumption. In general,
specialized hardware has been more trouble than it's worth. In his
response to Robin Hanson, Dr. Moravec relates some of his personal
experiences of this with the CM-2 and CM-5. Typically, by the time the
specialized tools and techniques to employ a specialized computer are
available, general-purpose computer technology will have advanced
to the same performance level. The history of computing is littered
with additional examples.

However, it's not clear to me that this rule will hold in all cases.
The paper actually gives part of a counterexample with Deep Blue. Deep
Blue combines a powerful general-purpose multiprocessing computer with
specialized chess-position evaluators to achieve human-level
chess-playing ability. This same model may be generalizable by taking
advantage of software-reprogrammable logic devices such as those made by
XILINX or Altera. I would guess that a chess-position evaluator could
be programmed into a single Altera Flex 10K part that costs $20 today.
Deep Blue has 256 evaluators. If my guess is correct, an engineer can
create a machine with Deep Blue's capability by adding less than $6000
of hardware to a high-end desktop. The difference is that the result is
general-purpose, because the evaluators are reprogrammable. Note that
there is no reason for all the evaluators to run the same program.
Since this architecture is based on general-purpose parts that are
widely used in commercial designs, it will become smaller, faster,
cheaper and more powerful at roughly the same rate as general-purpose
computers.
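
[The dollar figure is simple arithmetic on the quoted part count and
unit price; any allowance for boards and integration beyond the parts
themselves is a guess.]

    # Back-of-envelope cost of 256 reprogrammable evaluators.
    evaluators = 256           # Deep Blue's evaluator count
    part_cost  = 20            # ~$20 per Altera Flex 10K part, as estimated above
    print(evaluators * part_cost)   # 5120 dollars in parts, under the ~$6,000 figure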

Dr. Hugo de Garis, http://www.hip.atr.co.jp/~degaris , is attempting
to build an AI using XILINX parts to simulate neurons. This is not
quite what I had in mind. I'm thinking more in terms of a model with a
single-threaded program that uses the evaluators to perform incredibly
powerful, highly specialized instructions.

Dr. Moravec estimates that Deep Blue can apply about 3 million MIPS in
its problem domain. I'm guessing that we can build an equivalent,
affordable machine today that is not restricted to the chess domain. If
so, the hardware for human-level AI is available today, and human-level
AI is "merely" a small matter of programming.

Dan Clemmensen
Systems Architect, Netrix Corporation
Dan@Clemmensen.ShireNet.Com
http://www.ShireNet.Com/~dgc


Moravec replies: 21/3/98

Dan's comments regarding the occasional benefit of specialized
hardware are well taken. Other strong, if not AI, examples are the
DSPs in modems and the graphics hardware now augmenting processors.

But even there the advantage may be fleeting. Motorola is dropping out
of the hardware modem business, because the functionality can now be
achieved more flexibly with software, in multi-hundred-MIPS computers
to which audio bandwidth is a minor distraction.

I look forward to seeing how effectively programmable logic contributes
to AI.


Dan Clemmensen follows up: 21/3/98

This is one of the continuing oscillations in our industry. A task
that's only achievable with specialized hardware becomes amenable to
cost-effective solution with the main CPU instead. But the hardware
guys then find other more complex tasks for special hardware. For
example, modems for phone lines are now "soft" as you say, but ADSL
modems and 100BaseT transceivers need special hardware, as evidenced in
Motorola's newer QUICC devices.

Another interesting oscillation is the relative costs of processing
versus communications bandwidth.

What I was proposing is really the next generation of the "customizable
instruction set" idea. In the early '70s, this was called
"microprogramming". I just pointed out that we could adapt the Deep
Blue concept by permitting programmable evaluators. Interestingly, the
skills and tools used by "hardware designers" to program XILINX or FLEX
10K parts are more akin to software skills than to traditional
logic-gate design skills. A programmer can read and understand a VHDL
manual more quickly than a "traditional EE" can, unless the EE is also
a programmer.


Paul Hughes: 22/3/98

I found Hans's paper to be overall highly consistent, logical and well
thought out.

However, there has always been an estimate made by Hans regarding the
capacity of the human brain that doesn't take into consideration the
elaborate cytoskeletal structure of microtubules within each neuronal
cell. The sheer complexity within these cyto-networks, combined with
their influence on neurotransmitter activity, would seem to cast a great
deal of doubt on Hans's and many other neuro-computer scientists'
continued treatment of individual neurons as simple on/off transistors.
For a brief tutorial on these networks see:

http://www.reed.edu/~rsavage/microtubules.html

and its larger section integrated with quantum cosmology at:

http://galaxy.cau.edu/tsmith/ManyWorlds.html

I would like to know why Hans and others continue to treat the neuron
as a simple on/off switch in the face of the evidence of greater
intra-neuronal complexity.

If the cytoskeletal/microtubule networks do turn out to play a vital
role in neuro-computation, then Hans will have to revise his estimates
of human-level MIPS/Memory by at least 2 orders of magnitude.


Moravec replies: 23/3/98

My brain-to-computer comparison doesn't start with neurons but with the
whole retina as a functional unit. It assumes the computational
performance of the postage stamp of retina is representative of other
neural tissue.

There is extensive evidence that the human retina accommodates huge
variations in light, and detects edges and motion at a million
locations at about ten hertz. Similar performance can be obtained from
a 1,000 MIPS computer processing a million-pixel high-definition TV
image.
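
[The implicit arithmetic: a million locations updated about ten times
per second is 10^7 detection operations per second, so 1,000 MIPS
corresponds to roughly a hundred instructions per edge or motion
detection. That per-detection budget is an inference from the two
quoted numbers, not a figure given in the text.]

    # Implied instruction budget behind the 1,000 MIPS retina comparison.
    locations  = 1e6      # edge/motion detections per frame
    frame_rate = 10       # updates per second (about ten hertz)
    instructions_per_detection = 100   # inferred, not stated in the text
    mips = locations * frame_rate * instructions_per_detection / 1e6
    print(mips, "MIPS")   # -> 1000.0 MIPS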

Unless the retina provides results no one yet suspects, this approach
would seem to weigh the contribution from all relevant mechanisms.


Wlodzislaw Duch replies to Hughes' comment: 16/4/98

I (in agreement with almost all other physicists) do not see any
evidence that microtubules have anything to do with computations in the
brain. Cognitive computational neuroscience makes great progress
modeling real phenomena and the behavior of neurons in vitro is very
well described by the Hodgkin-Huxley model (not by the on-off switch).
Experiments and simulations go hand in hand here. Microtubules are in
all eukaryotic cells, so why are our minds so dependent on our brains?
Please first explain the behavior of the paramecium (I am sure that it
is due to biochemical reactions, not quantum computing), prove it
experimentally, and then talk about human brains.

Włodzisław Duch
Computational Intelligence Lab,
Nicholas Copernicus University
duch@phys.uni.torun.pl
http://www.phys.uni.torun.pl/~duch


Wlodzislaw Duch comments on Moravec's article: 16/4/98

In the article by Hans Moravec I was surprised to see so much emphasis
on computer speed. The Fifth Generation AI project emphasized how
many LIPS (logical inferences per second) its machines would provide,
and not much came out of it. Will the classical problems of AI be
solved by speed/memory?

These problems include the representation of complex knowledge
structures, the creation of huge knowledge bases to simulate
common-sense reasoning (addressed by the CYC project), the
representation of time and space, behavior-based intelligence
(addressed by the Cog project), and the importance of embodiment.
Speed is just one necessary condition; the proper structure of
intelligent machines is the other. It is relatively simple in chess
(graphs and heuristic search) but already much more complex in the game
of Go, and even more complex in everyday thinking.

Simulations of the human brain by neural networks, the second route to
AI, are still at quite a primitive stage. Either we simulate the spiking
neurons well, and then are able to take only a few of them, or we have a
very crude approximation and may take more neurons, but then they are
not able to do the same job. Neurodynamical processes are very complex and
we still struggle with a few degrees of freedom. Not to mention that
many connections between brain structures and functions of these
structures are still unknown.

This is not to deny that AI will make progress, but to stress that
estimates of speed/memory are only a small part of the story.


Moravec replies: 16/4/98

But I think scale is much more important than most in the AI community
thought, or many still think. Could a mouse achieve human
intelligence? There is only a factor of 1,000 in brain size between
mouse and man. The deficit in computer power relative to human brain
power was, by my estimate, a hundred-million-fold.

[In my forthcoming book (Robot, Being) there is a chapter that] outlines a strategy for four decades of robot evolution that very coarsely parallels stages in four hundred
megayears of vertebrate mental evolution. The evolutionary framework
is reassuring, because nature tells us there is an incremental
development path along it, made of small, individually advantageous,
steps. If Darwinian trial and error made those little steps, so can
human technological search. Especially since we can cheat by
occasionally peeking at the biological answers.

Generally, I see compelling evidence that availability of processing
power is the pacing factor in the improving performance of the many
research robots around me. For instance, in the 1980s mobile robots
could not reliably cross a room, now they drive cross-country. And
this with still insectlike processing power (or like a tiny chordate,
if you want phylogenetic purity). Lizardlike motor-perceptual
competence is the first "vertebrate" target in my speculative
evolution. We're not there yet, but I expect we will be by 2010.

David Villa: 19/4/98

On the whole this was a very readable and interesting paper. I have
one comment, though. You wrote:

The most powerful experimental supercomputers in 1998,
composed of thousands or tens of thousands of the fastest
microprocessors and costing tens of millions of dollars,
can do a few million MIPS. They are within striking
distance of being powerful enough to match human
brainpower, but are unlikely to be applied to that end.
Why tie up a rare twenty-million-dollar asset to develop
one ersatz-human, when millions of inexpensive original
model humans are available? Such machines are needed for
high-value scientific calculations, mostly physical
simulations, having no cheaper substitutes. AI research
must wait for the power to become more affordable.

I can think of at least two reasons why a twenty-million-dollar
investment to reproduce human-level intelligence would be worthwhile.

1) Simply to prove that it is possible.
There are still those, even some penetrating and deep thinkers
(Roger Penrose springs to mind) who doubt this. It may seem a less
than noble reason for such expense, but it is not inherently different
from the vastly greater sums spent verifying one theory of particle
physics over another.

2) If a twenty-million-dollar investment would bring us to within
striking distance of human-level intelligence, thirty or forty million
dollars may take us beyond it. This done, the whole process would
potentially bootstrap, ultimately leading to very cheap, very powerful
super-minds - and everything their existence would imply.


Moravec replies: 19/4/98

David Villa asks, why not invest big bucks in supercomputers for AI?

1) Simply to prove that it is possible.
There are still those, even some penetrating and deep thinkers
(Roger Penrose springs to mind) who doubt this. It may seem a less
than noble reason for such expense, but it is not inherently
different from the vastly greater sums spent verifying one theory of
particle physics over another.

Atomic physics was considered an oddball interest, with very limited
support before World War II, comparable to AI now (goofball scientists
splitting atoms? weird, weird). Only the atomic bomb raised its
interest in the halls of power. No one, outside a small circle of
irrelevant goofballs, sees anything of comparable interest imminent
from AI. (Before WWII, it was chemists who got the bucks, because they
had developed the gas and explosives that mattered in the last war.)

2) If a twenty-million-dollar investment would bring us to within
striking distance of human-level intelligence, thirty or forty
million dollars may take us beyond it. This done, the whole process
would potentially bootstrap, ultimately leading to very cheap, very
powerful super-minds - and everything their existence would imply.

The investment would have to be in the hundreds of millions of dollars
at least. Buying the computer creates the need to keep it fed. There
simply isn't enough perceived need or plausibility that it would pay
off. There were times when such a perception did exist. In the 1960s,
AI type efforts towards automatic translation, management of nuclear
war and, in the USSR, management of the economy, got huge, national
interest types of funding. The gap in required power was then so
large that even that investment didn't bridge it. (But Strategic Air
Command probably still uses some of the original SAGE equipment that
was developed then.)

Given the fast exponential increase of computer power over time,
compared to the merely linear increases bought by money, I'm happy to
spend my time hacking patiently towards AI around 2020 rather than
campaigning for a wildly expensive crash project that might, possibly,
bring it a few years sooner.

Actually, I think we may get the best of both worlds if commercial
development of mass-market utility robots takes off in the next decade,
as I hope (and outline in the book). Market forces will then generate
investment dwarfing any government program.


D. Lloyd Jarmusch: 7/3/99

Hans Moravec wrote:

"Advancing computer performance is like water slowly flooding the
landscape. A half century ago it began to drown the lowlands,
driving out human calculators and record clerks, but leaving
most of us dry. Now the flood has reached the foothills, and our
outposts there are contemplating retreat. We feel safe on our
peaks, but, at the present rate, those too will be submerged within
another half century. I propose (Moravec 1998) that we build Arks
as that day nears, and adopt a seafaring life!"

How do we build or board these Arks? Is human mind/computer interface a
near term probability? How do I find out more? It seems that virtual
immortality through artificial consciousness is a possibility for the
future. How does one best go about achieving virtual immortality? Where
is the best information on the subject?

D. Lloyd Jarmusch

 

