John von Neumann Sees the Future
JOHN VON NEUMANN CREATES THE FUTURE
If you want to understand the world we live in, you
have to know about the thinking of three men – Alan Turing, Claude Shannon and
John von Neumann. I’ve talked a little
about Alan Turing in an earlier essay.
All three men knew each other, discussed their ideas with each other and
promoted each other’s ideas.
John von Neumann has been called the smartest
individual of the 20th Century.
Even an outline of his accomplishments, which you can read about in Wikipedia, is hard to believe. He invented game theory, which I discussed in
two earlier posts. He headed up a group
of mathematicians who invented new statistical techniques at Los Alamos that
made the atomic bomb possible. In 1944,
he wrote a memo that outlined the structure of the modern computer. He then managed the design and building of a modern general-purpose computer and consulted on the building of virtually all the early computers.
JOHN VON NEUMANN SEES THE FUTURE
In the early 1950s, von Neumann, like Alan Turing, became interested in
the similarities and differences between computers and the human brain. The study of both was in its infancy. But von Neumann already saw that comparisons
between computers and human brains could benefit both computer science and
neuroscience. In the far future, he
believed they might even converge.
In 1956, von Neumann was asked to give a series of
guest lectures at Yale, summarizing his thinking. Unfortunately, he was dying of cancer and
couldn’t personally deliver the lectures. In 1958, they were published posthumously as a short book, The Computer and the Brain.
Although it was not certain at the time, von Neumann assumed
(correctly) that the output of neurons was digital. Either a neuron fired or it didn’t. From this, he argued that a computer could
simulate the processing of the brain but that the converse wasn’t true.
Von Neumann calculated that the processing speed of the brain is very slow, but the brain overcomes this through massive parallel processing: all of its roughly 100 billion neurons, linked through synapses, are computing at the same time. This is also how supercomputers work – parallel processing through the interaction of many processors – only at vastly faster speeds than the brain. Their total processing power, measured in operations per second, is approaching that of the brain; a rough back-of-envelope comparison is sketched below.
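To see why those two claims can both be true, here is a minimal back-of-envelope sketch in Python. Every number in it is an assumed, commonly cited order-of-magnitude figure (neuron count, connections per neuron, firing rate, 2013-era supercomputer speed), not a value taken from von Neumann's lectures or from the article discussed later.

```python
# Rough, order-of-magnitude comparison of the brain's "operations per second"
# with a circa-2013 supercomputer. Every figure below is an assumption chosen
# for illustration, not a measured value.

NEURONS = 1e11               # ~100 billion neurons (assumed)
SYNAPSES_PER_NEURON = 1e3    # ~1,000 connections per neuron (assumed, low end)
FIRINGS_PER_SECOND = 200     # a neuron fires at most a few hundred times per second (assumed)

brain_ops = NEURONS * SYNAPSES_PER_NEURON * FIRINGS_PER_SECOND
supercomputer_ops = 3.4e16   # ~34 petaflops, roughly the fastest machine in 2013 (assumed)

print(f"Brain (rough estimate):  {brain_ops:.1e} operations per second")
print(f"Supercomputer (2013):    {supercomputer_ops:.1e} operations per second")

# Each neuron is millions of times slower than a transistor, yet the two totals
# come out comparable -- the brain's advantage is massive parallelism.
```

Under these assumptions the two totals land within roughly a factor of two of each other, which is all the comparison in the text requires.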
But von Neumann saw even further into the
future. He believed that we were at the
beginning of a turning point in human history. He foresaw that the exponential increase in knowledge of computer
technology and how the brain worked would have profound effects on humanity’s
future. In the early 1950s, he told Stan
Ulam, himself a brilliant mathematician, that
the ever accelerating progress of
technology and changes in the mode of human life give the appearance of
approaching some essential singularity in the history of the race beyond which
human affairs, as we know them, could not continue.
The possible implications of this statement,
especially the use of the word “singularity,” are scary. If you would like to know just how scary, and
how near we are, read Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology. I would not recommend reading this just
before you try to go to sleep.
WATSON PLAYS JEOPARDY
In 2011, IBM’s supercomputer Watson played against the two best Jeopardy players. Watson’s predecessor, Deep Blue, had defeated the world’s best chess player, Garry Kasparov. Chess is a highly structured game where Deep Blue’s ability to evaluate millions of combinations of future moves gave it an advantage over Kasparov, whose human memory and calculation are far more limited. But Jeopardy was different. Besides knowing a vast and varied amount of information, the supercomputer had to understand natural language and come up with the most probable answers. Jeopardy clues included puns, metaphors, double entendres, humor and, worse, word combinations and rhymes not found in normal speech. Rather surprisingly, Watson beat the two human competitors. For some computer scientists, this meant that Watson had passed the Turing Test. For a few others, Watson had gone beyond it.
THE FUTURE ARRIVES
The following is a summary of an article in The New
York Times, “Brainlike Computers, Learning From Experience,” December 29, 2013.
“Computers have entered the age when they are able to
learn from their own mistakes.” They can automate tasks that once required explicit programming, such as how to move a robot’s arm, and they can tolerate errors. The new computer chips are based on
neuroscience, “how neurons react to stimuli and connect with other neurons to
interpret information.” The new approach
to artificial intelligence will allow computers to do many things humans do
with ease – “see, speak, listen, navigate, manipulate and control.” Some of this, such as Siri voice recognition,
is already here.
The big difference is that computers will no longer
be limited to what they have been specifically programmed to do. Computers use statistical algorithms to
learn. In 2012, Google researchers loaded a type of algorithm called a neural network into a computer, which was able to learn without detailed instructions or human supervision. The computer was fed 10 million images and
taught itself how to recognize cats.
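Google’s system was an enormous neural network, but the underlying idea – finding structure in unlabeled data – can be illustrated with a much smaller unsupervised algorithm. The sketch below uses k-means clustering (my choice for illustration, not the method Google used) to group unlabeled points into two clusters without ever being told what the groups are.

```python
import numpy as np

# Toy unsupervised learning: group unlabeled 2-D points into k clusters.
# (Illustration only -- Google's cat experiment used a huge neural network,
# not k-means, but both find structure in data without labels.)

rng = np.random.default_rng(0)
# Two unlabeled "blobs" of points; the algorithm is never told which is which.
data = np.vstack([rng.normal(0, 0.5, (50, 2)),
                  rng.normal(3, 0.5, (50, 2))])

k = 2
centers = data[rng.choice(len(data), k, replace=False)]  # random initial centers

for _ in range(10):
    # Assign each point to its nearest center...
    labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
    # ...then move each center to the mean of the points assigned to it.
    centers = np.array([data[labels == i].mean(axis=0) for i in range(k)])

print("Learned cluster centers (no labels were ever provided):")
print(centers)
```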
The new processors are not programmed in the usual
sense.
Rather, the connections between the
circuits are “weighted” according to correlations in data that the processor
has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
This is how the brain works. A neuron (nerve cell in the brain) receives
information from hundreds or thousands of other neurons. If a critical level of cumulative inputs is
reached, the cell “spikes” and sends an electrical impulse down a filament
called an axon. This releases chemicals
called neurotransmitters that are picked up by other neurons and may contribute
to their “spiking,” possibly changing other neurons. These changes lead to changes in human
thoughts and actions.
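To tie the two preceding paragraphs together, here is a minimal sketch of a single “threshold” neuron in Python. It is a deliberate simplification of both the chips and the biology: inputs arrive over weighted connections, the neuron “spikes” when the weighted sum crosses a threshold, and the weights are then nudged – a crude analogue of learning. The numbers and the learning rule are illustrative assumptions, not a description of any particular chip.

```python
# A single simplified "neuron": weighted inputs, a firing threshold,
# and a crude Hebbian-style weight update when it spikes.
# All numbers are illustrative; real neuromorphic chips and real neurons
# are far more complicated.

threshold = 1.0
weights = [0.2, 0.5, 0.3]           # strength of each incoming connection

def step(inputs, weights, threshold, learning_rate=0.1):
    """Process one round of inputs; return (spiked, updated_weights)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    spiked = total >= threshold     # fire only if cumulative input is large enough
    if spiked:
        # Connections that contributed to the spike are strengthened a little --
        # the rough analogue of "weights altered as data flows in."
        weights = [w + learning_rate * x for w, x in zip(weights, inputs)]
    return spiked, weights

for inputs in ([1, 0, 0], [1, 1, 0], [1, 1, 1]):
    spiked, weights = step(inputs, weights, threshold)
    print(inputs, "-> spike" if spiked else "-> no spike",
          "weights:", [round(w, 2) for w in weights])
```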
An advantage of the new approach is that the algorithms can adapt and continue working even when earlier tasks fail or are left incomplete.
Computers are combining biological and statistical
techniques to overcome the limitations of traditional programming.
It seems to me that the logic of this approach is similar to Bayesian statistics: an initial belief is repeatedly revised as new data arrives. The general classes of algorithms involved – neural networks and genetic algorithms – have already been developed.
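To make the Bayesian analogy concrete, here is a minimal sketch (my own illustration, not something from the article): a prior belief about whether an image contains a cat is revised observation by observation, much as the chip’s connection weights are adjusted as data flows in. All the probabilities are invented for illustration.

```python
# Minimal Bayesian updating: a prior belief is revised as each new
# piece of evidence arrives (all probabilities are made up for illustration).

prior = 0.5           # initial belief: 50% chance the image shows a cat

# Likelihoods for one kind of evidence, "a pointy-ear shape was detected":
p_evidence_if_cat = 0.8
p_evidence_if_not = 0.3

belief = prior
for observation in [True, True, False, True]:    # was the feature detected?
    if observation:
        numerator = p_evidence_if_cat * belief
        denominator = numerator + p_evidence_if_not * (1 - belief)
    else:
        numerator = (1 - p_evidence_if_cat) * belief
        denominator = numerator + (1 - p_evidence_if_not) * (1 - belief)
    belief = numerator / denominator              # Bayes' rule
    print(f"After observation {observation}: belief it's a cat = {belief:.2f}")
```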
This is another step away from the rigid programming
and error-free hardware of computers.
There will be feedback effects as scientists learn more about how the
brain works and how computers are programmed to simulate the brain. Already, there is a field of research called
computational neuroscience.
The largest class at Stanford in the fall of 2013 was a graduate course on applying biological and statistical techniques to computer learning.
All of this returns us to the early speculations in
the 1940s and 1950s on how information theory and computers were going to
influence other disciplines, particularly the biology of the mind
(neuroscience). The difference is the
incredible advances in our knowledge of the brain and the equally incredible
increases in the processing capacity of computers. A “thinking computer” is no longer a metaphor or an impossibility. The
convergence and feedback of the two areas are leading us into a future even
beyond the wildest dreams of the early thinkers.
Except John von Neumann.
====================================================================
Related Earlier Posts:
Limits to Strategic Planning
The Limits of Negotiation: A Little Applied Game Theory
President Obama Learns Some Game Theory