Excerpts from George Dyson, Turing's Cathedral: The Origins of the Digital Universe, 2012, Vintage.
It is no coincidence that the most destructive and the most constructive of human inventions appeared at exactly the same time. Only the collective intelligence of computers could save us from the destructive powers of the weapons they had allowed us to invent.
Turing’s underlying interest in the “decision problem” is easily overlooked. In answering the Entscheidungsproblem, Turing proved that there is no systematic way to tell, by looking at a code, what that code will do. That’s what makes the digital universe so interesting, and that’s what brings us here.
“Perhaps the consciousness of animals is more shadowy than ours and perhaps their perceptions are always dreamlike,” physicist Eugene Wigner recalled in 1964. “On the opposite side, whenever I talked with von Neumann, I always had the impression that only he was fully awake.”
for people who had power. He admired people who could influence events. In addition, being softhearted, I think he had a hidden admiration for people or organizations that could be tough.”51
that you don’t have to be responsible for the world that you’re in. So I have developed a very powerful sense of social irresponsibility as a result of Von Neumann’s advice. It’s made me a very happy man ever since.”67
Let the whole outside world consist of a long paper tape. —John von Neumann, 1948
“Preliminary Discussion of the Logical Design of an Electronic Computing Instrument,” drafted in the annex to Gödel’s office, would end up fulfilling Leibniz’s dreams of digital computing and universal language that Gödel believed had been overlooked.
According to Leibniz, relation gave rise to substance, not, as Newton had it, the other way around. “Back to Leibniz!” is how Norbert Wiener titled an article on quantum mechanics in 1932. “I can see no essential difference between the materialism which includes soul as a complicated type of material particle and a spiritualism which includes material particles as a primitive type of soul,” Wiener added in 1934.43
We owe the existence of high-speed digital computers to pilots who preferred to be shot down intentionally by their enemies rather than accidentally by their friends.
6J6 tubes, twenty in each bank, for a total of eighty tubes, were installed in a test rack so they were oriented up, down, and in the two horizontal positions (cathode edge-wise and cathode flat). The entire rack was mounted on a vibrating aluminum plate, and the tubes left to run for three thousand hours. “A total of six failed, four within the first few hours, one about 3 days and one after 10 days,” was the final report. “There were four heater failures, one grid short and one seal failure.”62 The problem wasn’t tubes that failed completely—built-in self-diagnostic routines made these easy to identify and replace—it was tubes that either were not up to specification in the first place, or that drifted off specification with age. How could you count on getting correct results?

While von Neumann was beginning to formulate, from the top down, the ideas that would develop into his 1951 “Reliable Organizations of Unreliable Elements” and 1952 “Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components,” the technicians at the Institute were facing the same problem, from the bottom up. It took an engineer fluent in wartime electronics and wartime ingenuity to solve the problem of building a reliable computer from unreliable war surplus parts.

Jack Rosenberg, from New Brunswick, New Jersey, was the first in his family to attend college, entering MIT in 1934 at age sixteen. As a senior in high school he had attended the Century of Progress exhibit in Chicago, and “spent nearly the whole week in what was called the Hall of Science. And I saw there a booth by MIT, and I talked to the man at the booth, and he said MIT is probably the toughest school to get into. So I applied to MIT.” Rosenberg started out in mathematics but switched to electrical engineering, graduating at the top of his class with two degrees. “When I went around for interviews in 1939, I saw many of my classmates already working,” he says. 
“I knew that I was more intelligent than them, but that’s the way it went.” So he took a job as a civilian engineer for the U.S. Army Signal Corps, becoming an officer when the United States joined the war. July of 1945 found Rosenberg on board an army troop ship sailing at eight knots across the Pacific to the Philippines, to prepare for the invasion of Japan. “As a radio ham, I spent most of my waking time in the radio room listening to the short wave,” he says. Since the slow transport was a sitting duck, transmission was not allowed. On August 6, 1945, he heard news of the atomic bombing of Hiroshima, followed by news of the bombing of Nagasaki on August 9. “The ship’s troop commander was as startled as I had been,” he says. “He told me to keep listening to the radio. His orders for the invasion had not been changed.” Then the news of Japan’s unconditional surrender arrived. “The bombs had saved our lives,” says Rosenberg, and however difficult von Neumann (and Oppenheimer) proved to be as his employers, he never forgot that.63 Rosenberg remained in the southern Philippines until April 1946. In the post exchange, he found a copy of Atomic Energy for Military Purposes, a swiftly declassified nontechnical account of the Manhattan Project by Henry Smyth, chairman of the physics department at Princeton University. Upon his discharge from the army, at Fort Dix, New Jersey, in July of 1946—having returned across the Pacific on a turbine-driven steamship at thirty knots—Rosenberg went to Princeton to seek a job in nuclear energy research. 
He was hired by the Physics Department to work on the instrumentation for the university’s new cyclotron, but, he says, “my enthusiasm lasted about a month.” “Early in 1947,” he continues, “I was informed that at the Institute for Advanced Study, a famous scientist was looking for an engineer to develop an electronic machine of a sort no one but he understood.” Rosenberg interviewed with Bigelow and von Neumann, and started work in July. “There was a lot of anti-Semitism in the army. But there wasn’t anti-Semitism with Johnny,” he says. “Johnny used to meet with each of us individually about once a week, asking what we had built, how it worked, what problems we had, what symptoms we observed, what causes we had diagnosed,” says Rosenberg. “Each question was precisely the best one based on the information he had uncovered so far. His logic was faultless—he never asked a question that was irrelevant or erroneous. His questions came in rapid-fire order, revealing a mind that was lightning-fast and error-free. In about an hour he led each of us to understand what we had done, what we had encountered, and where to search for the problem’s cause. It was like looking into a very accurate mirror with all unnecessary images eliminated, only the important details left.”
The part that is stable we are going to predict. And the part that is unstable we are going to control. —John von Neumann, 1948
Von Neumann appended a cover letter, adding that “the mathematical problem of predicting weather is one which can be tackled, and should be tackled, since the most conspicuous meteorological phenomena originate in unstable or metastable situations which could be controlled, or at least directed, by the release of perfectly practical amounts of energy.”22
practical method than ‘abstract thinking’ might not be to lay it out say one hundred times and simply observe and count the number of successful plays.” This, he noted, was a far easier way to arrive at an approximate answer than “to try to compute all the combinatorial possibilities which are an exponentially increasing number so great that, except in very elementary cases, there is no way to estimate it.”46
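Von Neumann’s “lay it out one hundred times” can be sketched in a few lines of Python. The snippet below is an illustration, not his calculation: it plays a derangement game (“success” if no card lands in its home slot), a stand-in chosen here because the exact answer, 1/e ≈ 0.368, is known and the sampling estimate can be checked against it.

```python
import random

def trial(n=52):
    """One random shuffle; 'success' if no card lands in its original slot."""
    deck = list(range(n))
    random.shuffle(deck)
    return all(pos != card for pos, card in enumerate(deck))

def monte_carlo(plays=100_000):
    """Estimate the success probability by playing many times and counting wins."""
    wins = sum(trial() for _ in range(plays))
    return wins / plays

print(monte_carlo())  # settles near 1/e ≈ 0.368
```

A hundred thousand plays pin the answer to a few decimal places; enumerating the 52! ≈ 8 × 10⁶⁷ shuffles directly is the “exponentially increasing number” von Neumann was avoiding.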
“You could see the numbers, and Johnny could see that numbers are numbers, whether data or orders. His insight was how the machine with fixed programming could be changed.”61
Instead of tabulating the statistics of human populations, Klári was tabulating the statistics of populations of neutrons, as they underwent scattering (equivalent to travel), fission (equivalent to reproduction), escape (equivalent to emigration), or absorption (equivalent to death). By following enough generations, it was possible to determine whether a given configuration would go critical or not. Klári could hardly have better prepared herself for bomb design than by her apprenticeship at the Office of Population Research.
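Klári’s tabulation amounts to a branching process, easy to sketch in Python. The rates below are invented for illustration; a configuration goes critical when the mean number of neutrons produced per neutron exceeds one.

```python
import random

def step(population, p_fission=0.3, p_absorb=0.35, p_escape=0.15, offspring=2):
    """Advance one generation: each neutron fissions (reproduction), is
    absorbed (death), escapes (emigration), or scatters (survives)."""
    nxt = 0
    for _ in range(population):
        r = random.random()
        if r < p_fission:
            nxt += offspring                      # fission: new neutrons
        elif r < p_fission + p_absorb + p_escape:
            pass                                  # absorbed or escaped: gone
        else:
            nxt += 1                              # scattered: carries on
    return nxt

def goes_critical(start=100, generations=40, **rates):
    """Follow enough generations to see whether the population explodes."""
    n = start
    for _ in range(generations):
        n = step(n, **rates)
        if n == 0:
            return False
        if n > 100 * start:
            return True
    return n > start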
The genius of Monte Carlo—and its search-engine descendants—lies in the ability to extract meaningful solutions, in the face of overwhelming information, by recognizing that meaning resides less in the data at the end points and more in the intervening paths.
Ulam’s self-reproducing cellular automata—patterns of information persisting across time—evolve by letting order in but not letting order out.
Ulam began imagining how, in a one-dimensional universe, cosmology might evolve. “Has anybody considered the following problem—which appears to me very pretty,” he wrote to von Neumann in February 1949. “Imagine that on the infinite line –∞ to +∞ I have occupied the integer points each with probability say ½ by material point masses—i.e. I have this situation,” and he sketched a random distribution of points on a line. “This is a distribution at time t=0.”
This puzzle—how life translates between sequence and structure, and in doing so not only tolerates but takes advantage of ambiguity—would hold Ulam’s interest for the rest of his life.
On one level, Barricelli was applying the powers of digital computing to evolution. On another level, he was applying the powers of evolution to digital computing. According to Julian Bigelow, “Barricelli was the only person who really understood the path toward genuine artificial intelligence at that time.”12
Information theorists, including Claude Shannon with his 1940 PhD thesis on “An Algebra for Theoretical Genetics” (which was followed by a year at IAS), had already built a framework into which the double helix neatly fit.
We now know that lateral gene transfer and other non-neo-Darwinian mechanisms are far more prevalent, especially in microbiology, than was evident in 1953. “If, instead of using living organisms, one could experiment with entities which, without any doubt could evolve exclusively by ‘mutations’ and selection,” Barricelli argued, “then and only then would a successful evolution experiment give conclusive evidence;
Make life difficult but not impossible,” Barricelli recommended. “Let the difficulties be various and serious but not too serious; let the conditions be changing frequently but not too radically and not in the whole universe at the same time.”19 Self-reproducing numerical coalitions rapidly evolved. “The conditions for an evolution process according to the principle of Darwin’s theory would appear to be present,”
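The Darwinian ingredients the passage names—mutation, selection, reproduction—fit in a few lines. This is a generic toy loop, not Barricelli’s actual shift-and-collision rules for numerical symbioorganisms; the target, population size, and mutation step are all invented for illustration.

```python
import random

def evolve(target=42, pop=None, generations=200):
    """Mutation and selection on plain integers: fitness is closeness
    to `target` (a stand-in for 'difficult but not impossible')."""
    pop = pop or [random.randint(0, 100) for _ in range(20)]
    for _ in range(generations):
        pop = [x + random.choice((-1, 0, 1)) for x in pop]  # mutation
        pop.sort(key=lambda x: abs(x - target))             # selection
        pop = pop[:10] * 2                                  # reproduction
    return pop
```

After a few hundred generations the surviving numbers cluster at the target—evolution by mutation and selection alone, with nothing else smuggled in.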
body would be disintegrated as soon as its objective had been fulfilled.”35
“Are they the beginning of, or some sort of, foreign life forms? Are they only models?” he asked. “They are not models, not any more than living organisms are models. They are a particular class of self-reproducing structures already defined.” As to whether they are living, “it does not make sense to ask whether symbioorganisms are living as long as no clear-cut definition of ‘living’ has been given.”40 A clear-cut definition of “living” remains elusive to this day.
He drew analogies between this language and the languages used by other collector societies, such as social insects, but warned against “trying to use the ant and bee languages as an explanation of the origin of the genetic code.”43
everything, from the point of view of the order codes, exactly as it was when they first came into existence, in 1951, among the 40 Williams tubes at the end of Olden Lane. Aggregations of order codes evolved into collector societies, bringing memory allocations and other resources back to the collective nest.
They learned how to divide into packets, traverse the network, correct any errors suffered along the way, and reassemble themselves at the other end. By representing music, images, voice, knowledge, friendship, status, money, and sex—the things people value most—they secured unlimited resources, forming complex metazoan organisms running on a multitude of individual processors the way a genome runs on a multitude of cells.
“If humans, instead of transmitting to each other reprints and complicated explanations, developed the habit of transmitting computer programs allowing a computer-directed factory to construct the machine needed for a particular purpose, that would be the closest analogue to the communication methods among cells.”45 Twenty-five years later, much of the communication between computers is not passive data, but active instructions to construct specific machines, as needed, on the remote host.
This vulnerability is exploited by malevolent viruses—so why maintain a capability that has such costs? One reason is to facilitate the acquisition of new, useful genes that would otherwise remain the property of someone else. “The power of horizontal gene transfer is so great that it is a major puzzle to understand why it would be that the eukaryotic world would turn its back on such a wonderful source of genetic novelty and innovation,”
The origin of species was not the origin of evolution, and the end of species will not be its end. And the evening and the morning were the fifth day.
THE HISTORY OF DIGITAL computing can be divided into an Old Testament whose prophets, led by Leibniz, supplied the logic, and a New Testament whose prophets, led by von Neumann, built the machines. Alan Turing arrived in between.
Turing’s arrival in Princeton was followed, five days later, by the proofs of his “On Computable Numbers, with an Application to the Entscheidungsproblem.” These thirty-five pages would lead the way from logic to machines.
“Beyond the way they speak there is only one (no two!) feature[s] of American life which I find really tiresome, the impossibility of getting a bath in the ordinary sense, and their ideas on room temperature,” Turing complained after he had settled in.
All Turing machines, and therefore all computable functions, can be encoded by strings of finite length. Since the number of possible machines is countable but the number of possible functions is not, noncomputable functions (and what Turing referred to as “uncomputable numbers”) must exist.
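The counting argument compresses to one diagonal line. A sketch:

```latex
% Machines: each Turing machine is a finite string over a finite alphabet,
% so the machines are countable and can be listed $M_1, M_2, M_3, \dots$
% Functions: suppose the functions $f\colon \mathbb{N} \to \{0,1\}$ could
% likewise be listed $f_1, f_2, f_3, \dots$  Define the diagonal function
\[
  d(n) \;=\; 1 - f_n(n).
\]
% Then $d$ differs from every $f_n$ at the argument $n$, so no list of
% functions is complete: they are uncountable, and since each machine
% computes at most one function, some functions must be noncomputable.
```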
This put a halt to the Hilbert program, while Hitler’s purge of German universities put a halt to Göttingen’s position as the mathematical center of the world, leaving a vacuum for Turing’s Cambridge, and von Neumann’s Princeton, to fill.
Church’s thesis—equating computability with effective calculability—would be the Church-Turing thesis from then on.
“It was no coincidence that the stored program computer came to fruition about ten years after … Post and Turing set the framework for this kind of thinking,” he confirms. Von Neumann “knew Gödel’s work, Post’s work, Church’s work very, very well.… So that’s how he knew that with these tools, and a fast method of doing it, you’ve got the universal tool.”21
The Universal Turing Machine of 1936 gets all the attention, but Turing’s O-machines of 1939 may be closer to the way intelligence (real and artificial) works: logical sequences are followed for a certain number of steps, with intuition bridging the intervening gaps.
Since the British did not know the constantly changing state of the Fish, they had to guess. Colossus, trained to sense the direction of extremely faint gradients that distinguished enciphered German from random alphabetic noise, was the distant progenitor of the search engine: scanning the Precambrian digital universe for fragments of the missing key, until the pieces fit.
Among the bound volumes of the Proceedings of the London Mathematical Society, on the shelves of the Institute for Advanced Study library, there is one volume whose binding has disintegrated from repeated handling: Volume 42, with Turing’s “On Computable Numbers,” on pages 230–65.
“An unwillingness to admit the possibility that mankind can have any rivals in intellectual power,” Turing wrote in his sabbatical report submitted to the NPL in 1948, “occurs as much amongst intellectual people as amongst others: they have more to lose.”46
Turing summarized the essence (and weakness) of this convoluted argument in 1947, saying that “in other words then, if a machine is expected to be infallible, it cannot also be intelligent.”47 Instead of trying to build infallible machines, we should be developing fallible machines able to learn from their mistakes.
An Internet search engine is a finite-state, deterministic machine, except at those junctures where people, individually and collectively, make a nondeterministic choice as to which results are selected as meaningful and given a click. These clicks are then immediately incorporated into the state of the deterministic machine, which grows ever so incrementally more knowledgeable with every click. This is what Turing defined as an oracle machine.
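That feedback loop can be sketched in a few lines: a deterministic ranker whose state absorbs each nondeterministic human click. Every name here is invented for illustration; real search engines are vastly more elaborate, but the structure is the same.

```python
from collections import defaultdict

def make_engine():
    """A toy oracle machine: ranking is deterministic, but each human
    click (the nondeterministic step) is folded back into the state."""
    clicks = defaultdict(int)  # accumulated choices, query-independent here

    def search(query, documents):
        # Deterministic part: rank matches by clicks, ties broken by name.
        matches = [d for d in documents if query in d]
        return sorted(matches, key=lambda d: (-clicks[d], d))

    def click(document):
        # Nondeterministic part: a human chooses, and the choice
        # becomes part of the machine's state.
        clicks[document] += 1

    return search, click
```

Each click reorders all subsequent searches—the machine grows incrementally more knowledgeable, exactly as the passage describes.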
Von Neumann believed that conflicted loyalties during the development of the atomic bomb should be left behind. “We were all little children with respect to the situation which had developed, namely, that we suddenly were dealing with something with which one could blow up the world,” he had testified, in defense of Oppenheimer, in 1954. “We had to make our rationalization and our code of conduct as we went along.”17
“Someone should write a novel for the future which is in the past,” says von Neumann’s Los Alamos colleague Harris Mayer. “And that is: what would science and mathematics be if Fermi and Johnny von Neumann didn’t die young?”20
Lewis Strauss would later recall “the extraordinary picture, of sitting beside the bed of this man, in his f[if]ties, who had been an immigrant, and there surrounding him, were the Secretary of Defense, the Deputy Secretary of Defense, the Secretaries of Air, Army, Navy, and the Chiefs of Staff.”24
“A few days before he died,” adds Ulam, “I was reading to him in Greek from his worn copy of Thucydides a story he liked especially about the Athenians’ attack on Melos, and also the speech of Pericles. He remembered enough to correct an occasional mistake or mispronunciation on my part.”
the illness had gone to his brain and that he could no longer think, and he asked me to test him on really simple arithmetic problems, like seven plus four, and I did this for a few minutes, and then I couldn’t take it anymore; I left the room,” she remembers, overcome by “the mental anguish of recognizing that that by which he defined himself had slipped away.”27
Nicholas Vonneumann believes that his brother asked for a Catholic priest because he wanted someone he could discuss the classics with. “With our background it would have been inconceivable to turn overnight into a devout Catholic,” he says.
Ulam was now left alone to witness the revolutions in both biology and computing that von Neumann had launched but would not see fulfilled. “He died so prematurely, seeing the promised land but hardly entering it,” Ulam wrote in 1976.
you examined the structure of a computer, “you could not possibly tell what it is doing at any moment,” Bigelow explained. “The importance of structure to how logical processes take place is beginning to diminish as the complexity of the logical process increases.” Bigelow then pointed out that the significance of Turing’s 1936 result was “to show in a very important, suggestive way how trivial structure really is.”36 Structure can always be replaced by code.
In a digital computer, the instructions are in the form of COMMAND (ADDRESS) where the address is an exact (either absolute or relative) memory location, a process that translates informally into “DO THIS with what you find HERE and go THERE with the result.” Everything depends not only on precise instructions, but also on HERE, THERE, and WHEN being exactly defined. In biology, the instructions say, “DO THIS with the next copy of THAT which comes along.”
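The contrast can be made concrete in a few lines of Python. Both halves are illustrative sketches, not drawn from the book; the function and variable names are invented.

```python
# Digital style: DO THIS with what you find HERE, put the result THERE.
# Every operand is an exact memory address.
memory = [0, 7, 5, 0]

def add(here_a, here_b, there):
    memory[there] = memory[here_a] + memory[here_b]

add(1, 2, 3)  # memory[3] is now 12; the wrong address gives the wrong answer

# Biological style: DO THIS with the next copy of THAT which comes along.
# No addresses -- the instruction binds to whatever matches a template.
def first_matching(pool, template):
    """Bind to whichever item matching the template drifts by next."""
    for item in pool:
        if template(item):
            return item
```

The digital instruction fails if HERE or THERE is off by one; the biological one is indifferent to position, which is the tolerance for ambiguity the passage is pointing at.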
Search engines and social networks are just the beginning—the Precambrian phase. “If the only demerit of the digital expansion system were its greater logical complexity, nature would not, for this reason alone, have rejected it,” von Neumann admitted in 1948.48 Search engines and social networks are analog computers of unprecedented scale.
“His automata are purely computing machines. Their output is a piece of tape with zeros and ones on it. What is needed … is an automaton whose output is other automata.”7
The triplicate appearance of “Turing!” reflects how central Turing’s proof of universality was to any theory of self-reproduction, whether applied to mathematics, biology, or machines.
“There are a number of offspring of the Princeton machine, not all of them closely resembling the parent,” Willis Ware reported in March 1953. “From about 1949 on, engineering personnel visited us rather steadily and took away designs and drawings for duplicate machines.”15
They exist as precisely defined entities in the digital universe, but have no fixed existence in ours. And they are proliferating so fast that real machines are struggling to catch up with the demand. Physical machines spawn virtual machines that in turn spawn demand for more physical machines. Evolution in the digital universe now drives evolution in our universe, rather than the other way around.
The new theory would apply to biological systems, technological systems, and every conceivable and inconceivable combination of the two. It would apply to automata, whether embodied in the physical world, the digital universe, or both, and would extend beyond existing life and technology on Earth.
Over long distances, it is expensive to transport structures, and inexpensive to transmit sequences. Turing machines, which by definition are structures that can be encoded as sequences, are already propagating themselves, locally, at the speed of light. The notion that one particular computer resides in one particular location at one time is obsolete. If life, by some chance, happens to have originated, and survived, elsewhere in the universe, it will have had time to explore an unfathomable diversity of forms.
masterpiece Dr. Strangelove, nuclear weapons in the hands of Teller are, to me, less terrifying than they are in the hands of a new generation of nuclear weaponeers who have never witnessed an atmospheric test firsthand.
“And that’s why von Neumann and you other Martians got us to build all these computers, to create a home for this kind of life.” There was a long, drawn-out pause. “Look,” Teller finally said, lowering his voice to a raspy whisper, “may I suggest that instead of explaining this, which would be hard … you write a science-fiction book about it.” “Probably someone has,” I said. “Probably,” answered Teller, “someone has not.”
The unspoken agreement between the military and the scientists was that the military would not tell the scientists how to do science, and the scientists would not tell the military how to use the bombs. Oppenheimer had stepped out of bounds.
was all of it a large system of on and off, binary gates,” Bigelow reiterated fifty years later. “No clocks. You don’t need clocks. You only need counters. There’s a difference between a counter and a clock. A clock keeps track of time. A modern general purpose computer keeps track of events.”15 This distinction separates the digital universe from our universe, and is one of the few distinctions left.
Among the computers populating this network, most processing cycles are going to waste. Most processors, most of the time, are waiting for instructions. Even within an active processor, as Bigelow explained, most computational elements are waiting around for something to do next. The global computer, for all its powers, is perhaps the least efficient machine that humans have ever built. There is a thin veneer of instructions, and then there is a dark, empty 99.9 percent.
Facebook defines who we are, Amazon defines what we want, and Google defines what we think. Teletotal was the personal computer; minitotal is the iPhone; neurototal will be next. “How much human life can we absorb?” answers one of Facebook’s founders, when asked what the goal of the company really is.19 “We want Google to be the third half of your brain,” says Google cofounder Sergey Brin.20
The power of the genetic code, as both Barricelli and von Neumann immediately recognized, lies in its ambiguity: exact transcription but redundant expression. In this lies the future of digital code.
How can this be intelligence, since we are just throwing statistical, probabilistic horsepower at the problem, and seeing what sticks, without any underlying understanding? There’s no model. And how does a brain do it? With a model? These are not models of intelligent processes. They are intelligent processes.
Instead of human beings having to learn to write code in machine language, machines began learning to read codes written in human language, a trend that has continued ever since.
“There was a clear but not formally stated understanding,” explained Paul Baran, a RAND colleague who helped develop the communication architecture now known as packet switching, “that a survivable communications network is needed to stop, as well as to help avoid, a war.”46
A similar decision was made not to patent or classify the work. “We felt that it properly belonged in the public domain,” explained Baran. “Not only would the US be safer with a survivable command and control system, the US would be even safer if the USSR also had a survivable command and control system as well!”48
The wilderness, even if only a digital wilderness, will always win. There are codes, and machines, that can do almost anything that can be given an exact description, but it will never be possible to determine, simply by looking at a code, what that code will do.
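Turing’s argument can be acted out in a few lines: hand any claimed halting analyzer a program built to do the opposite of whatever the analyzer predicts. (A raised exception stands in for “runs forever”; the names are invented for illustration.)

```python
def make_g(will_halt):
    """Given any claimed analyzer `will_halt`, construct a program g
    that the analyzer must misjudge."""
    def g():
        if will_halt(g):                      # analyzer says g halts...
            raise RuntimeError("diverges")    # ...so g does the opposite
        return "halted"                       # analyzer says g runs forever,
    return g                                  # so g halts immediately
```

Whichever answer the analyzer gives about g, g does the opposite—so no code, however clever, can determine by inspection what every code will do.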