By our estimate, today's very biggest supercomputers are within a factor of a hundred of having the power to
mimic a human mind. Their successors a decade hence will be more than powerful enough. Yet, it is unlikely
that machines costing tens of millions of dollars will be wasted doing what any human can do, when they
could instead be solving urgent physical and mathematical problems nothing else can touch. Machines with
humanlike performance will make economic sense only when they cost less than humans, say when their
"brains" cost about $1,000. When will that day arrive?
The expense of computation has fallen rapidly and persistently for a century. Steady improvements in
mechanical and electromechanical calculators before World War II had increased the speed of calculation a
thousandfold over hand calculation. The pace quickened with the appearance of electronic computers during
the war: from 1940 to 1980 the amount of computation available at a given cost increased a millionfold.
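That millionfold gain over forty years works out to a doubling of computation per dollar roughly every two years. A quick back-of-the-envelope check makes the rate explicit (a sketch, using the factor-of-100 estimate from the opening paragraph):

```python
import math

# A millionfold increase in computation per dollar from 1940 to 1980.
growth_factor = 1_000_000
years = 1980 - 1940

# Solve growth_factor = 2 ** (years / doubling_time) for the doubling time.
doubling_time = years / math.log2(growth_factor)
print(f"doubling time: {doubling_time:.2f} years")  # about 2 years

# At that pace, the factor-of-100 gap between today's biggest
# supercomputers and the human brain closes quickly.
years_to_close = doubling_time * math.log2(100)
print(f"years to gain another 100x: {years_to_close:.1f}")  # about 13 years
```

The thirteen-odd years implied by the arithmetic is consistent with the text's forecast of suitable machines arriving a decade or so hence.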
At the present rate, computers suitable for humanlike robots will appear in the 2020s. Can the pace be
sustained for another three decades? The graph shows no sign of abatement. If anything, it hints that further
contractions in time scale are in store. But one often encounters thoughtful articles by knowledgeable people
in the semiconductor industry giving detailed reasons why the decades of phenomenal growth must soon
come to an end.
Wilder possibilities are brewing. Switches and memory cells made of single molecules have been
demonstrated; they might enable a given volume to hold a billion times more circuitry than today's chips. Potentially
blowing everything else away are "quantum computers," in which a whole computer, not just individual
signals, acts in a wavelike manner. Like a conventional computer, a quantum computer consists of a number
of memory cells whose contents are modified in a sequence of logical transformations. Unlike a conventional
computer, whose memory cells are either 1 or 0, each cell in a quantum computer is started in a quantum
superposition of both 1 and 0. The whole machine is a superposition of all possible combinations of memory
states. As the computation proceeds, each component of the superposition individually undergoes the logic
operations. It is as if an exponential number of computers, each starting with a different pattern in memory,
were working on the problem simultaneously. When the computation is finished, the memory cells are
examined, and an answer emerges from the wavelike interference of all the possibilities. The trick is to devise
the computation so that the desired answers reinforce, while the others cancel. In the last several years,
quantum algorithms have been devised that factor numbers and search for encryption keys much faster than
any classical computer. Toy quantum computers, with three or four "qubits" stored as states of single atoms or
photons, have been demonstrated, but they can do only short computations before their delicate superpositions
are scrambled by outside interactions. More promising are computers using nuclear magnetic resonance, as in
hospital scanners. There, quantum bits are encoded as the spins of atomic nuclei, and gently nudged by
external magnetic and radio fields into magnetic interactions with neighboring nuclei. The heavy nuclei,
swaddled in diffuse orbiting electron clouds, can maintain their quantum coherence for hours or longer. A
quantum computer with a thousand or more qubits could tackle problems astronomically beyond the reach of
any conceivable classical computer.
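The reinforce-and-cancel behavior described above can be seen in the smallest possible case: a single qubit, simulated as a two-entry vector of amplitudes. This is a toy sketch in plain Python, not anything from the original text; the Hadamard gate used here is the standard one-qubit mixing operation.

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a one-qubit state [amp0, amp1].

    The gate mixes the two amplitudes so that different computational
    paths to the same outcome can reinforce or cancel.
    """
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

# Start in the definite state 0, then put the qubit into superposition:
# both outcomes become equally likely.
state = hadamard([1.0, 0.0])

# Applying the gate again sends the two paths to outcome 1 in with
# opposite signs (+1/2 and -1/2), so they cancel, while the paths to
# outcome 0 reinforce. Interference restores certainty.
state = hadamard(state)
probabilities = [amp ** 2 for amp in state]
print(probabilities)  # approximately [1.0, 0.0]
```

A quantum algorithm is engineered so that this same cancellation suppresses the wrong answers across an exponentially large superposition, leaving the desired answer to dominate when the memory is finally examined.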
Molecular and quantum computers will be important sooner or later, but humanlike robots are likely to arrive
without their help. Research within semiconductor companies, including working prototype chips, makes it
quite clear that existing techniques can be nursed along for another decade, to chip features below 0.1
micrometers, memory chips with tens of billions of bits and multiprocessor chips with over 100,000 MIPS.
Towards the end of that period, the circuitry will probably incorporate a growing number of quantum
interference components. As production techniques for those tiny components are perfected, they will begin to
take over the chips, and the pace of computer progress may steepen further. The 100 million MIPS to match
human brain power will then arrive in home computers before 2030.
The Game's Afoot
A summerlike air already pervades the few applications of artificial intelligence that have retained access to
the largest computers. Some of these, like pattern analysis for satellite images and other kinds of spying, and
seismic data analysis for oil exploration, are closely held secrets. Another, though, basks in the limelight. The best
chess-playing computers are so interesting they generate millions of dollars of free advertising for the
winners, and consequently have enticed a series of computer companies to donate time on their best machines
and other resources to the cause. Since 1960 IBM, Control Data, AT&T, Cray, Intel and now again IBM have
been sponsors of computer chess. The "knights" in the AI power graph show the effect of this largesse,
relative to mainstream AI research. The top chess programs have competed in tournaments powered by
supercomputers, or specialized machines whose chess power is comparable. In 1958 IBM had both the first
checker program, by Arthur Samuel, and the first full chess program, by Alex Bernstein. They ran on an IBM
704, the biggest and last vacuum-tube computer. The Bernstein program played atrociously, but Samuel's
program, which automatically learned its board scoring parameters, was able to beat Connecticut checkers
champion Robert Nealey. Since 1994, Chinook, a program written by Jonathan Schaeffer of the University of
Alberta, has consistently bested the world's human checker champion. But checkers isn't very glamorous, and
this portent received little notice.
The Great Flood
Computers are universal machines; their potential extends uniformly over a boundless expanse of tasks.
Human potentials, on the other hand, are strong in areas long important for survival, but weak in things far
removed. Imagine a "landscape of human competence," having lowlands with labels like "arithmetic" and
"rote memorization", foothills like "theorem proving" and "chess playing," and high mountain peaks labeled
"locomotion," "hand-eye coordination" and "social interaction." We all live on the solid mountaintops, but it
takes great effort to reach the rest of the terrain, and only a few of us work each patch.
Advancing computer performance is like water slowly flooding the landscape. A half century ago it began to
drown the lowlands, driving out human calculators and record clerks, but leaving most of us dry. Now the
flood has reached the foothills, and our outposts there are contemplating retreat. We feel safe on our peaks,
but, at the present rate, those too will be submerged within another half century. I propose (Moravec 1998)
that we build Arks as that day nears, and adopt a seafaring life! For now, though, we must rely on our
representatives in the lowlands to tell us what water is really like.
Our representatives on the foothills of chess and theorem-proving report signs of intelligence. Why didn't we
get similar reports decades before, from the lowlands, as computers surpassed humans in arithmetic and rote
memorization? Actually, we did, at the time. Computers that calculated like thousands of mathematicians
were hailed as "giant brains," and inspired the first generation of AI research. After all, the machines were
doing something beyond any animal, that needed human intelligence, concentration and years of training. But
it is hard to recapture that magic now. One reason is that computers' demonstrated stupidity in other areas
biases our judgment. Another relates to our own ineptitude. We do arithmetic or keep records so painstakingly
and externally that the small mechanical steps in a long calculation are obvious, while the big picture often
escapes us. Like Deep Blue's builders, we see the process too much from the inside to appreciate the subtlety
that it may have on the outside. But there is a non-obviousness in the snowstorms or tornadoes that emerge from
the repetitive arithmetic of weather simulations, or in rippling tyrannosaur skin from movie animation
calculations. We rarely call it intelligence, but "artificial reality" may be an even more profound concept than
artificial intelligence (Moravec 1998).
The mental steps underlying good human chess playing and theorem proving are complex and hidden, putting
a mechanical interpretation out of reach. Those who can follow the play naturally describe it instead in
mentalistic language, using terms like strategy, understanding and creativity. When a machine manages to be
simultaneously meaningful and surprising in the same rich way, it too compels a mentalistic interpretation. Of
course, somewhere behind the scenes, there are programmers who, in principle, have a mechanical
interpretation. But even for them, that interpretation loses its grip as the working program fills its memory
with details too voluminous for them to grasp.
As the rising flood reaches more populated heights, machines will begin to do well in areas a greater number
can appreciate. The visceral sense of a thinking presence in machinery will become increasingly widespread.
When the highest peaks are covered, there will be machines that can interact as intelligently as any human on
any subject. The presence of minds in machines will then become self-evident.
So if things stay on track, these extrapolations put a "sentient" computer at around 2020, or 2030 in the worst
case. Beyond quantum-computing hardware, expect further advances in software as well, notably in neural-network programming.