📚

Complexity - M. Mitchell Waldrop

Type
book
Created on

January 4, 2021

Summary

Highlights

a great many independent agents are interacting with each other in a great many ways.

the very richness of these interactions allows the system as a whole to undergo spontaneous self-organization.

all these complex systems have somehow acquired the ability to bring order and chaos into a special kind of balance. This balance point—often called the edge of chaos—is where the components of a system never quite lock into place, and yet never quite dissolve into turbulence, either.

Economics had to take that ferment into account. And now he believed he’d found the way to do that, using a principle known as “increasing returns”—or in the King James translation, “To them that hath shall be given.”

Winner takes all

“All the Irish heroes were revolutionaries. The highest peak of heroism is to lead an absolutely hopeless revolution, and then give the greatest speech of your life from the dock—the night before you’re hanged.

neoclassical theorists had embroidered the basic model with all sorts of elaborations to cover things like uncertainty about the future, or the transfer of property from one generation to the next.

The theory still didn’t describe the messiness and the irrationality of the human world that Arthur had seen in the valley of the Ruhr—or,

The standard advice at the time tended to place a heavy reliance on economic determinism: to achieve its “optimum” population, all a country had to do was give its people the right economic incentives to control their reproduction, and they would automatically follow their own rational self-interest. In particular, many economists were arguing that when and if a country became a modern industrial state—organized along Western lines, of course—its citizenry would naturally undergo a “demographic transition,” automatically lowering their birthrates to match those that prevailed in European countries.

the quantitative engineering approach—the idea that human beings will respond to abstract economic incentives like machines—was highly limited at best. Economics, as any historian or anthropologist could have told him instantly, was hopelessly intertwined with politics and culture.

Life develops. It has a history. Maybe, he thought, maybe that’s why this biological world seems so spontaneous, organic, and—well, alive. Come to think of it, maybe that was also why the economists’ imaginary world of perfect equilibrium had always struck him as static, machinelike, and dead. Nothing much could ever happen there; tiny chance

real economy was not a machine but a kind of living system, with all the spontaneity and complexity that Judson was showing him in the world of molecular biology.

the other kinds of cells that make up a newborn baby. Each different

An entire sprawling set of self-consistent patterns that formed and evolved and changed in response to the outside world. It reminded him of nothing so much as a kaleidoscope, where a handful of beads will lock in to one pattern and hold it—until a slow turn of the barrel causes them to suddenly cascade into a new configuration. A handful of pieces and an infinity of possible patterns. Somehow, in a way he couldn’t quite express, this seemed to be the essence of

the DNA residing in a cell’s nucleus was not just a blueprint for the cell—a catalog of how to make this protein or that protein. DNA was actually the foreman in charge of construction. In effect, DNA was a kind of molecular-scale computer that directed how the cell was to build itself and repair itself and interact with the outside world.

Nature seems to be less interested in creating structures than in tearing structures apart and mixing things up into a kind of average.

Left to themselves, says the second law, atoms will mix and randomize themselves as much as possible.

Yet for all of that, we do see plenty of order and structure around.

In the real world, atoms and molecules are almost never left to themselves, not completely; they are almost always exposed to a certain amount of energy and material flowing in from the outside. And if that flow of energy and material is strong enough, then the steady degradation demanded by the second law can be partially reversed. Over a limited region, in fact, a system can spontaneously organize itself into a whole series of complex structures.

E.g., boiling soup

In mathematical terms, Prigogine’s central point was that self-organization depends upon self-reinforcement: a tendency for small effects to become magnified when conditions are right, instead of dying away. It was precisely the same message that had been implicit in Jacob and Monod’s work on DNA.

yet, positive feedback is precisely what conventional economics didn’t have, Arthur realized. Quite the opposite. Neoclassical theory assumes that the economy is entirely dominated by negative feedback:

The dying-away tendency was implicit in the economic doctrine of “diminishing returns”:

negative feedback/diminishing returns is what underlies the whole neoclassical vision of harmony, stability, and equilibrium in the economy.

And now that QWERTY is a standard used by millions of people, it’s essentially locked in forever.

the video stores hated having to stock everything in two different formats, and consumers hated the idea of being stuck with obsolete VCRs. So everyone had a big incentive to go with the market leader. That pushed up VHS’s market share even more, and the small initial difference grew rapidly.

Increasing returns can take a trivial happenstance—who bumped into whom in the hallway, where the wagon train happened to stop for the night, where trading posts happened to be set up, where Italian shoemakers happened to emigrate—and magnify it into something historically irreversible.
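
My own toy illustration of this lock-in dynamic (a sketch, not from the book): a Polya-urn-style market in which each new adopter picks a technology with probability proportional to its current installed base. The "VHS"/"Beta" labels are just for flavor; the point is that identical starting conditions lock in to different winners.

```python
import random

def simulate_market(steps=10_000, seed=None):
    """Polya-urn-style adoption: each new buyer picks a technology
    with probability proportional to its current installed base."""
    rng = random.Random(seed)
    adopters = {"VHS": 1, "Beta": 1}  # one early adopter each
    for _ in range(steps):
        total = sum(adopters.values())
        # The probability of choosing VHS grows with its market share.
        if rng.random() < adopters["VHS"] / total:
            adopters["VHS"] += 1
        else:
            adopters["Beta"] += 1
    total = sum(adopters.values())
    return {k: v / total for k, v in adopters.items()}

# Identical starting conditions, wildly different locked-in outcomes:
for seed in range(5):
    shares = simulate_market(seed=seed)
    print(f"run {seed}: VHS {shares['VHS']:.1%}, Beta {shares['Beta']:.1%}")
```

Every run starts from a dead-even market, yet each one freezes into a different split—trivial early happenstance magnified into something irreversible.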

The point was that you have to look at the world as it is, not as some elegant theory says it ought to be.

“The important thing is to observe the actual living economy out there,” he says. “It’s path-dependent, it’s complicated, it’s evolving, it’s open, and it’s organic.”

Predictions are nice, if you can make them. But the essence of science lies in explanation, laying bare the fundamental mechanisms of nature.

Increasing returns isn’t an isolated phenomenon at all: the principle applies to everything in high technology.

High technology could almost be defined as “congealed knowledge,” says Arthur. “The marginal cost is next to zilch, which means that every copy you produce makes the product cheaper and cheaper.”

More than that, every copy offers a chance for learning: getting the yield up on microprocessor chips, and so on. So there’s a tremendous reward for increasing production—in short, the system is governed by increasing returns.

“Increasing returns isn’t displacing the standard theory at all,” says Arthur. “It’s helping complete the standard theory. It just applies in a different domain.”

They were successful because increasing returns make high-tech markets unstable, lucrative, and possible to corner—and because Japan understood this better and earlier than other countries. The Japanese are very quick at learning from other nations. And they are very good at targeting markets, going in with huge volume, and taking advantage of the dynamics of increasing returns to lock in their advantage.”

What this means in practical terms, he adds, is that U.S. policy-makers ought to be very careful about their economic assumptions regarding, say, trade policy vis-à-vis Japan.

he suspects that one of the main reasons the United States has had such a big problem with “competitiveness” is that government policy-makers and business executives alike were very slow to recognize the winner-take-all nature of high-tech markets.

“If we want to continue manufacturing our wealth from knowledge,” he says, “we need to accommodate the new rules.”

“Wherever there is more than one equilibrium point possible, the outcome was deemed to be indeterminate,” he says. “End of story. There was no theory of how an equilibrium point came to be selected.

shortly before his year there drew to a close he got a call from the dean: “What would it take to keep you here?” “Well,” Arthur replied, secure in the knowledge that he already had a fistful of job offers from the World Bank, the London School of Economics, and Princeton, “I see there’s this endowed chair coming open.”

“We don’t negotiate with endowed chairs!” she declared. “I wasn’t negotiating,” said Arthur. “You just asked me what it would take to keep me here.”

the free-market ideal had become bound up with American ideals of individual rights and individual liberty: both are grounded in the notion that society works best when people are left alone to do what they want.

increasing returns cut to the heart of that myth. If small chance events can lock you in to any of several possible outcomes, then the outcome that’s actually selected may not be the best. And that means that maximum individual freedom—and the free market—might not produce the best of all possible worlds.

“The royal road to a Nobel Prize has generally been through the reductionist approach,” he says—dissecting the world into the smallest and simplest pieces you can. “You look for the solution of some more or less idealized set of problems, somewhat divorced from the real world, and constrained sufficiently so that you can find a solution,” he says. “And that leads to more and more fragmentation of science. Whereas the real world demands—though I hate the word—a more holistic approach.” Everything affects everything else, and you have to understand that whole web of connections.

even some of the hard-core physical scientists were getting fed up with mathematical abstractions that ignored the real complexities of the world. They seemed to be half-consciously groping for a new approach—and in the process, he thought, they were cutting across the traditional boundaries in a way they hadn’t done in years. Maybe centuries.

physicists have been deeply involved with molecular biology from the beginning. Many of the pioneers in the field had actually started out as physicists; one of their big motivations to switch was a slim volume entitled What Is Life?, a series of provocative speculations about the physical and chemical basis of life published in 1944 by the Austrian physicist Erwin Schrödinger, a coinventor of quantum mechanics.

equations can sometimes produce astonishing behavior. The mathematics of a thunderstorm actually describes how each puff of air pushes on its neighbors, how each bit of water vapor condenses and evaporates, and other such small-scale matters;

But when the computer integrates those equations over miles of space and hours of time, that is exactly the behavior they produce.

except for the very simplest physical systems, virtually everything and everybody in the world is caught up in a vast, nonlinear web of incentives and constraints and connections. The slightest change in one place causes tremors everywhere else. We can’t help but disturb the universe, as T. S. Eliot almost said.

they started to take advantage of that fact, applying that computer power to more and more kinds of nonlinear equations, they began to find strange, wonderful behaviors that their experience with linear systems had never prepared them for.

In part because of their computer simulations, and in part because of new mathematical insights, physicists had begun to realize by the early 1980s that a lot of messy, complicated systems could be described by a powerful theory known as “nonlinear dynamics.” And in the process, they had been forced to face up to a disconcerting fact: the whole really can be greater than the sum of its parts.

a flap of that butterfly’s wings a millimeter to the left might have deflected the hurricane in a totally different direction.

Butterfly effect in chaos theory

the message was the same: everything is connected, and often with incredible sensitivity. Tiny perturbations won’t always remain tiny. Under the right circumstances, the slightest uncertainty can grow until the system’s future becomes utterly unpredictable—or, in a word, chaotic.
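
A quick sketch of that sensitivity (my addition, not from the book): the Lorenz system is the classic demonstration, here integrated with a crude Euler step. Two trajectories that start a billionth apart end up nowhere near each other.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # identical except for one part in a billion

for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}: separation in x = {abs(a[0] - b[0]):.6f}")
```

The gap grows roughly exponentially until it is as large as the attractor itself, at which point the two futures are simply unrelated.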

If the world can organize itself into many, many possible patterns, they asked, and if the pattern it finally chooses is a historical accident, then how can you predict anything? And if you can’t predict anything, then how can what you’re doing be called science?

it became apparent that what was really getting his critics riled up was this concept of the economy locking itself in to an unpredictable outcome.

Conversely, researchers began to realize that even some very simple systems could produce astonishingly rich patterns of behavior. All that was required was a little bit of nonlinearity.

The drip-drip-drip of water from a leaky faucet, for example, could be as maddeningly regular as a metronome—so long as the leak was slow enough. But if you ignored the leak for a while and let the flow rate increase ever so slightly, then the drops would soon start to alternate between large and small: DRIP-drip-DRIP-drip. If you ignored it a while longer and let the flow increase still more, the drops would soon start to come in sequences of 4—and then 8, 16, and so forth. Eventually, the sequence would become so complex that the drops would seem to come at random—again, chaos.
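
The faucet's cascade is the period-doubling route to chaos, which you can watch in the logistic map, the standard one-line toy model (my sketch, not from the book): as the parameter r rises, the settled orbit doubles from period 1 to 2 to 4 to 8, then goes chaotic.

```python
def attractor(r, transient=1000, sample=16):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and report the distinct values the orbit settles onto."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1 - x)
    seen = []
    for _ in range(sample):
        x = r * x * (1 - x)
        seen.append(round(x, 4))
    return sorted(set(seen))

for r in (2.8, 3.2, 3.5, 3.56, 3.9):
    orbit = attractor(r)
    label = f"period {len(orbit)}" if len(orbit) < 16 else "chaotic"
    print(f"r = {r}: {label}  {orbit[:8]}")
```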

Cowan had a suspicion that they were only the beginning. It was more a gut feeling than anything else. But he sensed that there was an underlying unity here, one that would ultimately encompass not just physics and chemistry, but biology, information processing, economics, political science, and every other aspect of human affairs.

The Manhattan Project started with a specific research challenge—building the bomb—and brought together scientists from every relevant specialty to tackle that challenge as a team.

this institute ought to be a place where you could take very good scientists—people who would really know what they were talking about in their own fields—and offer them a much broader curriculum than they usually get.

After thirty years as an administrator, he was convinced that the only way to make something like this happen was to get a lot of people excited about it. “You have to persuade very good people that this is an important thing to do,” he says. “And by the way, I’m not talking about a democracy, I’m talking about the top one-half of one percent. An elite. But once you do that, then the money is—well, not easy, but a smaller part of the problem.”

“I said I felt that what we should look for were great syntheses that were emerging today, that were highly interdisciplinary,” says Gell-Mann. Some were already well on their way: Molecular biology. Nonlinear science. Cognitive science. But surely there were other emerging syntheses out there, he said, and this new institute should seek them out.

These narrow conceptions weren’t grand enough, he told the fellows. “We had to set ourselves a really big task. And that was to tackle the great, emerging syntheses in science—ones that involve many, many disciplines.”

he jotted down three or four pages of suggestions for how to organize the institute so as to avoid the pitfalls. (The main point: Don’t have separate departments!)

needed people who had demonstrated real expertise and creativity in an established discipline, but who were also open to new ideas. That turned out to be a depressingly rare combination, even among (or especially among) the most prestigious scientists.

Millions of species have gotten along just fine without brains as large as ours. Why was our species different?

Douglas Schwartz of the School for American Research, the Santa Fe–based archeology center that was hosting the workshop, argued that archeology was a subject that was especially ripe for interactions with other disciplines.

Researchers in the field were confronted with three fundamental mysteries,

Second, said Schwartz, why did agriculture and fixed settlements replace nomadic hunting and gathering?

third, what forces triggered the development of cultural complexity, including specialization of crafts, the rise of elites, and the emergence of power based on factors such as economics and religion?

Increasingly, he said, he was beginning to think of the rise and fall of civilizations as a kind of self-organizing phenomenon, in which human beings chose different clusters of cultural alternatives at different moments in response to different perceptions of environment.

This self-organization theme was also taken up in a quite different form by Stephen Wolfram of the Institute for Advanced Study, a twenty-five-year-old wunderkind from England who was trying to investigate the phenomenon of complexity at the most fundamental level.

Whenever you look at very complicated systems in physics or biology, he said, you generally find that the basic components and the basic laws are quite simple; the complexity arises because you have a great many of these simple components interacting simultaneously. The complexity is actually in the organization—the myriad possible ways that the components of the system can interact.

The challenge for theorists, he said, is to formulate universal laws that describe when and how such complexities emerge in nature.
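
Wolfram's own laboratory for exactly this claim was cellular automata, so here is a minimal sketch (my addition): Rule 30, where every cell obeys the same trivial three-neighbor rule, yet a single seeded cell unfolds into an intricate, effectively random pattern.

```python
def rule30(cells):
    """One step of Wolfram's Rule 30, with periodic (wraparound) edges."""
    n = len(cells)
    alive = {(1, 0, 0), (0, 1, 1), (0, 1, 0), (0, 0, 1)}
    return [int((cells[(i - 1) % n], cells[i], cells[(i + 1) % n]) in alive)
            for i in range(n)]

row = [0] * 64
row[32] = 1                                  # one 'on' cell in the middle
for _ in range(30):
    print("".join("#" if c else " " for c in row))
    row = rule30(row)
```

Simple components, simple law; all the complexity is in the interactions.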

Lewis Branscomb, chief scientist of IBM, strongly endorsed the idea of an institute without departmental walls, where people could talk and interact creatively. “It’s important to have people who steal ideas!” he said.

the founding workshops made it clear that every topic of interest had at its heart a system composed of many, many “agents.” These agents might be molecules or neurons or species or consumers or even corporations. But whatever their nature, the agents were constantly organizing and reorganizing themselves into larger structures through the clash of mutual accommodation and mutual rivalry.

Complexity, in other words, was really a science of emergence. And the challenge that Cowan had been trying to articulate was to find the fundamental laws of emergence.

The existing neoclassical theory and the computer models based on it simply did not give him the kind of information he needed to make real-time decisions in the face of risk and uncertainty.

yet none of the models really dealt with social and political factors, which were often the most important variables of all. Most of them assumed that the modelers would put in interest rates, currency exchange rates, and other such variables by hand—even though these are precisely the quantities that a banker wants to predict.

virtually all of them tended to assume that the world was never very far from static economic equilibrium, when in fact the world was constantly being shaken by economic shocks and upheavals.

A case in point was the most recent world economic upheaval, which was symbolized by President Carter’s 1979 appointment of Paul Volcker to head the Federal Reserve Board. The story of that upheaval actually began in the 1940s, explained Reed, at a time when governments around the world found themselves struggling to cope with the economic consequences of two World Wars and a Great Depression in between. Their efforts, which culminated in the Bretton Woods agreements of 1944, led to a widespread recognition that the world economy had become far more interconnected than ever before. Under the new regime, nations shifted away from isolationism and protectionism as instruments of national policy; instead, they agreed to operate through international institutions such as the World Bank, the International Monetary Fund, and the General Agreement on Tariffs and Trade. And it worked, said Reed. In financial terms, at least, the world remained remarkably stable for a quarter of a century.

But then came the 1970s. The oil shocks of 1973 and 1979, the Nixon administration’s decision to let the price of the dollar float on the world currency market, rising unemployment, rampant “stagflation”—the system cobbled together at Bretton Woods began to unravel, said Reed. Money began flowing around the world at an ever-increasing rate. And Third World countries that had once been starving for investment capital now began borrowing heavily to build their own economies—helped along by U.S. and European companies that were moving their production offshore to minimize costs. Following the advice of their in-house economists, said Reed, Citicorp and many other international banks had happily lent billions of dollars to these developing countries. No one had really believed it when Paul Volcker came to the Fed vowing to rein in inflation no matter what it took, even if it meant raising interest rates through the roof and causing a recession. In fact, the banks and their economists had failed to appreciate the similar words being voiced in ministerial offices all over the world. No democracy could tolerate that kind of pain. Could it? And so, said Reed, Citicorp and the other banks had continued loaning money to the developing nations throughout the early 1980s—right up until 1982, when first Mexico, and then Argentina, Brazil, Venezuela, the Philippines, and many others revealed that the worldwide recession triggered by the anti-inflation fight would make it impossible to meet their loan payments. Since becoming CEO in 1984, said Reed, he’d spent the bulk of his time cleaning up this mess. It had already cost Citibank several billion dollars—so far—and had caused

Reed didn’t expect that any new economic theory would be able to predict the appointment of a specific person such as Paul Volcker. But a theory that was better attuned to social and political realities might have predicted the appointment of someone like Volcker—who, after all, was just doing the politically necessary job of inflation control superbly well.

Kenneth Arrow

For all that Arrow was one of the founders of establishment economics, he had also, like Anderson, remained a bit of an iconoclast himself. He knew full well what the drawbacks of the standard theory were. In fact, he could articulate them better than most of the critics could. Occasionally he even published what he called his “dissident” papers, calling for new approaches. He’d urged economists to pay more attention to real human psychology, for example, and most recently he had gotten intrigued with the possibility of using the mathematics of nonlinear science and chaos theory in economics.

He didn’t mind people criticizing the standard model, but they’d better damn well understand what it was they were criticizing.

He also displayed a very high ratio of talking to listening. Indeed, that seemed to be the way he thought things through: by talking about his ideas out loud. And talking about them. And talking about them.

For Kauffman, order told us how we could indeed be an accident of nature—and yet be very much more than just an accident.

Charles Darwin was absolutely right: human beings and all other living things are undoubtedly the heirs of four billion years of random mutation, random catastrophes, and random struggles for survival;

Nor did Darwin know that the forces of order and self-organization apply to the creation of living systems just as surely as they do to the formation of snowflakes or the appearance of convection cells in a simmering pot of soup.

But it is also the story of order: a kind of deep, inner creativity that is woven into the very fabric of nature.

the thrill of being taken seriously by a real adult—Todd was twenty-four at the time—was for Kauffman a crucial step in his intellectual awakening.

“They say that time heals,” he adds. “But that’s not quite true. It’s simply that the grief erupts less often.”

technology isn’t really like a commodity at all. It is much more like an evolving ecosystem.

“So there’d be a network of possible technologies, all interconnected and growing as more things became possible. And therefore the economy could become more complex.”

this process is an excellent example of what he meant by increasing returns: once a new technology starts opening up new niches for other goods and services, the people who fill those niches have every incentive to help that technology grow and prosper.

“There was something too marvelous about DNA. I simply didn’t want it to be true that the origin of life depended on something quite as special as that. The way I phrased it to myself was, ‘What if God had hung another valence bond on nitrogen? [Nitrogen atoms are abundant in DNA molecules.] Would life be impossible?’ And it seemed to me to be an appalling conclusion that life should be that delicately poised.” But then, thought Kauffman, who says that the critical thing about life is DNA? For that matter, who says that the origin of life was a random event? Maybe there was another way to get a self-replicating system started, a way that would have allowed living systems to bootstrap their way into existence from simple reactions.

In other words, Kauffman realized, if the conditions in your primordial soup were right, then you wouldn’t have to wait for random reactions at all. The compounds in the soup could have formed a coherent, self-reinforcing web of reactions.

Taken as a whole, in short, the web would have catalyzed its own formation. It would have been an “autocatalytic set.” Kauffman was in awe when he realized all this. Here it was again: order. Order for free. Order arising naturally from the laws of physics and chemistry. Order emerging spontaneously from molecular chaos and manifesting itself as a system that grows. The idea was indescribably beautiful.

to Kauffman, this autocatalytic set story was far and away the most plausible explanation for the origin of life that he had ever heard. If it were true, it meant the origin of life didn’t have to wait for some ridiculously improbable event to produce a set of enormously complicated molecules; it meant that life could indeed have bootstrapped its way into existence from very simple molecules. And it meant that life had not been just a random accident, but was part of nature’s incessant compulsion for self-organization.
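
Kauffman's core counting argument can be checked in a few lines (my sketch, assuming the simplest possible toy chemistry: polymers are strings over a two-letter alphabet, and every way of joining two polymers is a reaction). The number of reactions outgrows the number of molecule types as the maximum polymer length rises, so if each molecule has even a small fixed chance of catalyzing any given reaction, a rich enough soup must eventually catalyze its own formation.

```python
def soup_counts(max_len):
    """Count species (all strings over {A, B} up to max_len) and
    joining reactions (ordered pairs whose product still fits)."""
    species = sum(2 ** n for n in range(1, max_len + 1))
    reactions = sum(2 ** a * 2 ** b
                    for a in range(1, max_len)
                    for b in range(1, max_len)
                    if a + b <= max_len)
    return species, reactions

for length in (4, 6, 8, 10, 12):
    s, r = soup_counts(length)
    print(f"max length {length:2d}: {s:6d} species, {r:9d} reactions, "
          f"ratio {r / s:5.1f}")
```

Since the reactions-per-molecule ratio keeps climbing, a fixed per-pair catalysis probability eventually puts a catalyst inside the set for essentially every reaction—the threshold for "order for free."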

an autocatalytic set can bootstrap its own evolution in precisely the same way that an economy can, by growing more and more complex over time.

If innovations result from new combinations of old technologies, then the number of possible innovations would go up very rapidly as more and more technologies became available. In fact, he argued, once you get beyond a certain threshold of complexity you can expect a kind of phase transition analogous to the ones he had found in his autocatalytic sets. Below that level of complexity you would find countries dependent upon just a few major industries, and their economies would tend to be fragile and stagnant.

if a country ever managed to diversify and increase its complexity above the critical point, then you would expect it to undergo an explosive increase in growth and innovation—what some economists have called an “economic takeoff.”

The existence of that phase transition would also help explain why trade is so important to prosperity, Kauffman told Arthur. Suppose you have two different countries, each one of which is subcritical by itself. Their economies are going nowhere. But now suppose they start trading, so that their economies become interlinked into one large economy with a higher complexity. “I expect that trade between such systems will allow the joint system to become supercritical and explode outward.”
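
That supercritical-by-merger story is the giant-component transition in random graphs, which is easy to see directly (my sketch, with made-up numbers: goods are nodes, and an edge means one good creates a niche for another). Each economy alone has average degree below 1 and only small clusters; merge two of them with the same linking probability and the combined average degree crosses 1, so a giant connected web snaps into existence.

```python
import random

def largest_component(n, p, seed=0):
    """Largest connected cluster in a random graph G(n, p), via union-find.
    A 'giant' cluster appears once the average degree p*(n-1) exceeds 1."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

n, degree = 400, 0.8                 # each 'country' alone: subcritical
p = degree / (n - 1)
print("one country alone:  ", largest_component(n, p))      # small clusters
print("two countries merged:", largest_component(2 * n, p))  # giant web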

an autocatalytic set can undergo exactly the same kinds of evolutionary booms and crashes that an economy does. Injecting one new kind of molecule into the soup could often transform the set utterly, in much the same way that the economy was transformed when the horse was replaced by the automobile. This was the part of autocatalysis that really captivated Arthur. It had the same qualities that had so fascinated him when he first read about molecular biology: upheaval and change and enormous consequences flowing from trivial-seeming events—and yet with deep law hidden beneath.

a network analysis wouldn’t help anybody predict precisely what new technologies are going to emerge next week. But it might help economists get statistical and structural measures of the process. When you introduce a new product, for example, how big an avalanche does it typically cause? How many other goods and services does it bring with it, and how many old ones go out? And how do you recognize when a good has become central to an economy, as opposed to being just another hula-hoop?

“The point is that the phase transitions may be lawful, but the specific details are not. So maybe we have the starts of models of historical, unfolding processes for such things as the Industrial Revolution, for example, or the Renaissance as a cultural transformation, and why it is that an isolated society, or ethos, can’t stay isolated when you start plugging some new ideas into it.”

In this sense a spin glass was quite a good metaphor for the economy. “It naturally has a mixture of positive and negative feedbacks, which gives it an extremely high number of natural ground states, or equilibria.” That’s exactly the point he’d been trying to make all along with his increasing-returns economics.

“It seemed as though they were dazzling themselves with fancy mathematics, until they really couldn’t see the forest for the trees. So much time was being spent on trying to absorb the mathematics that I thought they often weren’t looking at what the models were for, and what they did, and whether the underlying assumptions were any good.

what most of the economists didn’t know—and were startled to find out—was that physicists are comparatively casual about their math. “They use a little rigorous thinking, a little intuition, a little back-of-the-envelope calculation—so their style is really quite different,”

And the reason is that physical scientists are obsessive about founding their assumptions and their theories on empirical fact.

the physicists were nonetheless disconcerted at how seldom the economists seemed to pay attention to the empirical data that did exist. Again and again, for example, someone would ask a question like “What about noneconomic influences such as political motives in OPEC oil pricing, and mass psychology in the stock market? Have you consulted sociologists, or psychologists, or anthropologists, or social scientists in general?” And the economists—when they weren’t curling their lips at the thought of these lesser social sciences, which they considered horribly mushy—would come back with answers like “Such noneconomic forces really aren’t important”; “They are important, but they are too hard to treat”; “They aren’t always too hard to treat, and in fact, we’re doing so in specific cases”; and “We don’t need to treat them because they’re automatically satisfied through economic effects.”

“Our particles in economics are smart, whereas yours in physics are dumb.” In physics, an elementary particle has no past, no experience, no goals, no hopes or fears about the future. It just is. That’s why physicists can talk so freely about “universal laws”: their particles respond to forces blindly, with absolute obedience. But in economics, said Arthur, “Our particles have to think ahead, and try to figure out how other particles might react if they were to undertake certain actions. Our particles have to act on the basis of expectations and strategies. And regardless of how you model that, that’s what makes economics truly difficult.”

The only problem, of course, is that real human beings are neither perfectly rational nor perfectly predictable—as the physicists pointed out at great length. Furthermore, as several of them also pointed out, there are real theoretical pitfalls in assuming perfect predictions, even if you do assume that people are perfectly rational. In nonlinear systems—and the economy is most certainly nonlinear—chaos theory tells you that the slightest uncertainty in your knowledge of the initial conditions will often grow inexorably. After a while, your predictions are nonsense.

“The physicists were shocked at the assumptions the economists were making—that the test was not a match against reality, but whether the assumptions were the common currency of the field. I can just see Phil Anderson, laid back with a smile on his face, saying, ‘You guys really believe that?’” The economists, backed into a corner, would reply, “Yeah, but this allows us to solve these problems. If you don’t make these assumptions, then you can’t do anything.” And the physicists would come right back, “Yeah, but where does that get you—you’re solving the wrong problem if that’s not reality.”

Holland started by pointing out that the economy is an example par excellence of what the Santa Fe Institute had come to call “complex adaptive systems.”

Once you learned how to recognize them, in fact, these systems were everywhere. But wherever you found them, said Holland, they all seemed to share certain crucial properties.

First, he said, each of these systems is a network of many “agents” acting in parallel. In a brain the agents are nerve cells, in an ecology the agents are species, in a cell the agents are organelles such as the nucleus and the mitochondria, in an embryo the agents are cells, and so on. In an economy, the agents might be individuals or households. Or if you were looking at business cycles, the agents might be firms. And if you were looking at international trade, the agents might even be whole nations. But regardless of how you define them, each agent finds itself in an environment produced by its interactions with the other agents in the system. It is constantly acting and reacting to what the other agents are doing. And because of that, essentially nothing in its environment is fixed.

Furthermore, said Holland, the control of a complex adaptive system tends to be highly dispersed. There is no master neuron in the brain, for example, nor is there any master cell within a developing embryo. If there is to be any coherent behavior in the system, it has to arise from competition and cooperation among the agents themselves. This is true even in an economy. Ask any president trying to cope with a stubborn recession: no matter what Washington does to fiddle with interest rates and tax policy and the money supply, the overall behavior of the economy is still the result of myriad economic decisions made every day by millions of individual people.

Second, said Holland, a complex adaptive system has many levels of organization, with agents at any one level serving as the building blocks for agents at a higher level.

Furthermore, said Holland—and this was something he considered very important—complex adaptive systems are constantly revising and rearranging their building blocks as they gain experience

Third, he said, all complex adaptive systems anticipate the future.

More generally, said Holland, every complex adaptive system is constantly making predictions based on its various internal models of the world—its implicit or explicit assumptions about the way things are out there.

Finally, said Holland, complex adaptive systems typically have many niches, each one of which can be exploited by an agent adapted to fill that niche.

Moreover, the very act of filling one niche opens up more niches—for

And that, in turn, means that it’s essentially meaningless to talk about a complex adaptive system being in equilibrium: the system can never get there. It is always unfolding, always in transition. In fact, if the system ever does reach equilibrium, it isn’t just stable. It’s dead. And by the same token, said Holland, there’s no point in imagining that the agents in the system can ever “optimize” their fitness, or their utility, or whatever. The space of possibilities is too vast; they have no practical way of finding the optimum. The most they can ever do is to change and improve themselves relative to what the other agents are doing. In short, complex adaptive systems are characterized by perpetual novelty.

it’s no wonder that complex adaptive systems were so hard to analyze with standard mathematics.

to really get a deep understanding of the economy, or complex adaptive systems in general, what you need are mathematics and computer simulation techniques that emphasize internal models, the emergence of new building blocks, and the rich web of interactions between multiple agents.

It wasn’t simply that Holland’s point about perpetual novelty was exactly what he’d been trying to say for the past eight years with his increasing-returns economics. Nor was it that Holland’s point about niches was exactly what he and Stuart Kauffman had been thrashing out for the past two weeks in the context of autocatalytic sets. It was that Holland’s whole way of looking at things had a unity, a clarity, a rightness that made you slap your forehead and say, “Of course! Why didn’t I think of that?”

However new Holland’s thinking might have seemed to Arthur and the other visitors at the economics meeting, he had long since become a familiar and profoundly influential figure among the Santa Fe regulars. His first contact with the institute had come in 1985 during a conference entitled “Evolution, Games, and Learning,” which had been organized at Los Alamos by Doyne Farmer and Norman Packard. (As it happens, this was the same meeting in which Farmer, Packard, and Kauffman first reported the results of their autocatalytic set simulation.) Holland’s talk there was on the subject of emergence, and it seemed to go quite well. But he remembers being peppered with sharp-edged questions from this person out in the audience—a white-haired guy with an intent, slightly cynical face peering out from behind dark-rimmed glasses. “I was fairly flip in my answers,” says Holland. “I didn’t know him—and I’d probably have been scared to death if I had!” Flip answers or not, however, Murray Gell-Mann clearly liked what Holland had to say. Shortly thereafter, Gell-Mann called him up and asked him to serve on what was then called the Santa Fe Institute’s advisory board, which was just being formed. Holland agreed. “And as soon as I saw the place, I really liked it,” he says. “What they were talking about, the way they went at things—my immediate response was, ‘I sure hope these guys like me, because this is for me!’” The feeling was mutual. When Gell-Mann speaks of Holland he uses words like “brilliant”—not a term he throws around casually. But, then, it’s not often that Gell-Mann has had his eyes opened quite so abruptly. In the early days, Gell-Mann, Cowan, and most of the other founders of the institute had thought about their new science of complexity almost entirely in terms of the physical concepts they were already familiar with, such as emergence, collective behavior, and spontaneous organization. Moreover, these concepts already seemed to promise an exceptionally rich research agenda, if only as metaphors for studying the same ideas—emergence, collective behavior, and spontaneous organization—in realms such as economics and biology. But then Holland came along with his analysis of adaptation—not to mention his working computer models. And suddenly Gell-Mann and the others realized that they’d left a gaping hole in their agenda: What do these emergent structures actually do? How do they respond and adapt to their environment? Within months they were talking about the institute’s program being not just complex systems, but complex adaptive systems.

So there you have the economic problem in a nutshell, he told Holland. How do we make a science out of imperfectly smart agents exploring their way into an essentially infinite space of possibilities?

if the underlying rules of evolution of the themes are in control and not me, then I’ll be surprised. And if I’m not surprised, then I’m not very happy, because I know I’ve built everything in from the start.” Nowadays, of course, this sort of thing is called “emergence.”

What captivated him wasn’t that science allowed you to reduce everything in the universe to a few simple laws. It was just the opposite: that science showed you how a few simple laws could produce the enormously rich behavior of the world.

be renamed the IBM 701. At the time the machine represented a major and rather dubious gamble for the

Those heady, early days of computers were a ferment of new ideas about information, cybernetics, automata—concepts that hadn’t even existed ten years earlier. Who knew where the limits were? Almost anything you tried was liable to break new ground. And more than that, for the more philosophically minded pioneers like Holland, these big, clumsy banks of wire and vacuum tubes were opening up whole new ways to think about thinking.

At the time, of course, nobody knew to call this sort of thing “artificial intelligence” or “cognitive science.” But even so, the very act of programming computers—itself a totally new kind of endeavor—was forcing people to think much more carefully than ever before about what it meant to solve a problem.

clear at the time. (In fact, they are far from clear now.) But the questions were being asked with unprecedented clarity and precision.

For the past five years he had been trying to write a program that could play checkers—and not only play the game, but learn to play it better and better with experience. In retrospect, Samuel’s checker player is considered one of the milestones of artificial intelligence research; by the time he finally finished revising and refining it in 1967, it was playing at a world championship level. But even in the 701 days, it was doing remarkably well. Holland remembers being very impressed with it, particularly with its ability to adapt its tactics to what the other player was doing. In effect, the program was making a simple model of “opponent” and using that model to make predictions about the best line of play. And somehow, without being able to articulate it very well at the time, Holland felt that this aspect of the checker player captured something essential and right about learning and adaptation.

Through a microscope, most of the brain appears to be a study in chaos, with each nerve cell sending out thousands of random filaments that connect it willy-nilly to thousands of other nerve cells. And yet this densely interconnected network is obviously not random. A healthy brain produces perception, thought, and action quite coherently. Moreover, the brain is obviously not static. It refines and adapts its behavior through experience. It learns. The question is, How?

Hebb had published his answer in a book entitled The Organization of Behavior. His fundamental idea was to assume that the brain is constantly making subtle changes in the “synapses,” the points of connection where nerve impulses make the leap from one cell to the next.

he argued that these synaptic changes were in fact the basis of all learning and memory. A sensory impulse coming in from the eyes,

a network that started out at random would rapidly organize itself. Experience would accumulate through a kind of positive feedback: the strong, frequently used synapses would grow stronger, while the weak, seldom-used synapses would atrophy.

Licklider went on to explain Hebb’s second assumption: that the selective strengthening of the synapses would cause the brain to organize itself into “cell assemblies”—subsets of several thousand neurons in which circulating nerve impulses would reinforce themselves and continue to circulate. Hebb considered these cell assemblies to be the brain’s basic building blocks of information. Each one would correspond to a tone, a flash of light, or a fragment of an idea. And yet these assemblies would not be physically distinct. Indeed, they would overlap, with any given neuron belonging to several of them. And because of that, activating one assembly would inevitably lead to the activation of others, so that these fundamental building blocks would quickly organize themselves into larger concepts and more complex behaviors. The cell assemblies, in short, would be the fundamental quanta of thought. Sitting there in the audience, Holland was transfixed by all this. This wasn’t just the arid stimulus/response view of psychology being pushed at the time by behaviorists such as Harvard’s B. F. Skinner. Hebb was talking about what was going on inside the mind. His connectionist theory had the richness, the perpetual surprise that Holland responded so strongly to. It felt right. And Holland couldn’t wait to do something with it. Hebb’s theory was a window onto the essence of thought, and he wanted to watch. He wanted to see cell assemblies organize themselves out of random chaos and grow. He wanted to see them interact. He wanted to see them incorporate experience and evolve. He wanted to see the emergence of the mind itself. And he wanted to see all of it happen spontaneously, without external guidance. No sooner had Licklider finished his lecture on Hebb than Holland turned to his leader on the 701 team, Nathaniel Rochester, and said, “Well, we’ve got this prototype machine. Let’s program a neural network simulator.”

The basic idea would still look familiar enough. In their programs, Holland and Rochester modeled their artificial neurons as “nodes”—in effect, tiny computers that can remember certain things about their internal state. They modeled their artificial synapses as abstract connections between various nodes, with each connection having a certain “weight” corresponding to the strength of the synapse. And they modeled Hebb’s learning rule by adjusting the strengths as the network gained experience.
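
A stripped-down version of what such a simulator does (my sketch, assuming the simplest Hebbian rule plus decay; this is not Holland and Rochester's actual code): connections between co-active nodes strengthen, seldom-used connections atrophy, and the strong links that survive trace out two overlapping "cell assemblies."

```python
import random

def hebbian_demo(steps=2000, rate=0.01, decay=0.001, seed=1):
    """Minimal Hebbian sketch: links between co-active nodes strengthen,
    seldom-used links slowly atrophy."""
    rng = random.Random(seed)
    n = 8
    w = [[0.0] * n for _ in range(n)]
    # Two overlapping 'experiences' (note that both contain node 3):
    assemblies = [{0, 1, 2, 3}, {3, 4, 5, 6}]
    for _ in range(steps):
        active = rng.choice(assemblies)
        for i in range(n):
            for j in range(i + 1, n):
                if i in active and j in active:
                    w[i][j] += rate * (1.0 - w[i][j])  # strengthen, cap at 1
                else:
                    w[i][j] *= 1.0 - decay             # atrophy
    strong = [(i, j) for i in range(n) for j in range(i + 1, n)
              if w[i][j] > 0.5]
    print("strong synapses:", strong)

hebbian_demo()   # the two cell assemblies emerge from a blank network
```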

But in the end, by golly, the simulations worked. “There was a lot of emergence,” says Holland, still sounding excited about it. “You could start with a uniform substrate of neurons and see the cell assemblies form.”

“I’d like to be able to take themes from all over and see what emerges when I put them together,”

The Glass-Bead Game, but in English translations the book is usually called Master of the Game, or its Latin equivalent, Magister Ludi.

the novel describes a game that was originally played by musicians; the idea was to set up a theme on a kind of abacus with glass beads, and then try to weave all kinds of counterpoint and variation on the theme by moving the beads back and forth.

“The fact that you could take calculus and differential equations and all the other things I had learned in my math classes to start a revolution in genetics—that was a real eye-opener.

Fisher’s whole analysis of natural selection focused on the evolution of just one gene at a time, as if each gene’s contribution to the organism’s survival was totally independent of all the other genes.

A single gene for green eyes isn’t worth very much unless it’s backed up by the dozens or hundreds of genes that specify the structure of the eye itself. Each gene had to work as part of a team, realized Holland. And any theory that didn’t take that fact into account was missing a crucial part of the story. Come to think of it, that was also what Hebb had been saying in the mental realm. Hebb’s cell assemblies were a bit like genes, in that they were supposed to be the fundamental units of thought. But in isolation the cell assemblies were almost nothing.

it bothered Holland that Fisher kept talking about evolution achieving a stable equilibrium—that state in which a given species has attained its optimum size, its optimum sharpness of tooth, its optimum fitness to survive and reproduce. Fisher’s argument was essentially the same one that economists use to define economic equilibrium: once a species’ fitness is at a maximum, he said, any mutation will lower the fitness.

Fisher seemed to be talking about the attainment of some pristine, eternal perfection. “But with Darwin, you see things getting broader and broader with time, more diverse,” says Holland. “Fisher’s math didn’t touch on that.” And with Hebb, who was talking about learning instead of evolution, you saw the same thing: minds getting richer, more subtle, more surprising as they gained experience with the world.

Holland couldn’t help thinking of Art Samuel’s checker-playing program, which took advantage of exactly this kind of feedback: the program was constantly updating its tactics as it gained experience and learned more about the other player.

To Holland, evolution and learning seemed much more like—well, a game.

trying to win enough of what it needed to keep going. In evolution that payoff is literally survival, and a chance for the agent to pass its genes on to the next generation.

the payoff (or lack of it) gives agents the feedback they need to improve their performance: if they’re going to be “adaptive” at all, they somehow have to keep the strategies that pay off well, and let the others die out.

This game analogy seemed to be true of any adaptive system.

And that meant, in turn, that all of them are fundamentally like checkers or chess: the space of possibilities is vast beyond imagining. An agent can learn to play the game better—that’s what adaptation is, after all. But it has just about as much chance of finding the optimum, stable equilibrium point of the game as you or I have of solving chess.

Equilibrium implies an endpoint. But to Holland, the essence of evolution lay in the journey, the endlessly unfolding surprise:

postdoc—he set himself the goal of turning his vision into a complete and rigorous theory of adaptation. “The belief was that if I looked at genetic adaptation as the longest-term adaptation, and the nervous system as the shortest term,” he says, “then the general theoretical framework would be the same.”

he was determined to crack this problem of selection based on more than one gene—and not just because Fisher’s independent-gene assumption had bugged him more than anything else about that book. Moving to multiple genes was also the key to moving away from this obsession with equilibrium.

That’s 2^1000, or about 10^300—a number so vast that it makes even the number of moves in chess seem infinitesimal. “Evolution can’t even begin to try out that many things,” says Holland. “And no matter how good we get with computers, we can’t do it.” Indeed, if every elementary particle in the observable universe were a supercomputer that had been number-crunching away since the Big Bang, they still wouldn’t be close. And remember, that’s just for seaweed. Humans and other mammals have roughly 100 times as many genes—and most of those genes come in many more than two varieties.
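
Python's big integers make the arithmetic easy to check (my one-liner, not from the book; the text's "about 10^300" is off by one order of magnitude, which hardly matters at this scale):

```python
# 2**1000 written out in decimal: how many digits?
print(len(str(2 ** 1000)))   # 302 -> 2^1000 is roughly 10^301
```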

But now, says Holland, look what happens with that 1000-gene seaweed when you assume that the genes are not independent.

So once again, says Holland, you have a system exploring its way into an immense space of possibilities, with no realistic hope of ever finding the single “best” place to be. All evolution can do is look for improvements, not perfection. But that, of course, was precisely the question he had resolved to answer back in 1962: How? Understanding evolution with multiple genes obviously wasn’t just a trivial matter of replacing Fisher’s one-variable equations with many-variable equations. What Holland wanted to know was how evolution could explore this immense space of possibilities and find useful combinations of genes—without having to search over every square inch of territory. As it happens, a similar explosion of possibilities was already well known to mainstream artificial intelligence researchers. At Carnegie Tech (now Carnegie Mellon University) in Pittsburgh, for example, Allen Newell and Herbert Simon had been conducting a landmark study of human problem-solving since the mid-1950s. By asking experimental subjects to verbalize their thoughts as they struggled through a wide variety of puzzles and games, including chess, Newell and Simon had concluded that problem-solving always involves a step-by-step mental search through a vast “problem space” of possibilities, with each step guided by a heuristic rule of thumb: “If this is the situation, then that step is worth taking.” By building their theory into a program known as General Problem Solver, and by putting that program to work on those same puzzles and games, Newell and Simon had shown that the problem-space approach could reproduce human-style reasoning remarkably well. Indeed, their concept of heuristic search was already well on its way to

the Newell-Simon approach didn’t help him with biological evolution. The whole point of evolution is that there are no heuristic rules, no guidance of any sort; succeeding generations explore the space of possibilities by mutations and random reshuffling of genes among the sexes—in short, by trial and error. Furthermore, those succeeding generations don’t conduct their search in a step-by-step manner. They explore it in parallel: each member of the population has a slightly different set of genes and explores a slightly different region of the space. And yet, despite these differences, evolution produces just as much creativity and surprise as mental activity does, even if it takes a little longer. To Holland, this meant that the real unifying principles in adaptation had to be found at a deeper level.

certain sets of genes worked well together, forming coherent, self-reinforcing wholes. An example might be the cluster of genes that tells a cell how to extract energy from glucose molecules, or the cluster that controls cell division, or the cluster that governs how a cell combines with other cells to form a certain kind of tissue. Holland could also see analogs in Hebb’s theory of the brain, where a set of resonating cell assemblies might form a coherent concept such as “car,” or a coordinated motion such as lifting your arm. But the more Holland thought about this idea of coherent, self-reinforcing clusters, the more subtle it began to seem. For one thing, you could find analogous examples almost anywhere you looked. Subroutines in a computer program. Departments in a bureaucracy. Gambits in the larger strategy of a chess game. Furthermore, you could find examples at every level of organization. If a cluster is coherent enough and stable enough, then it can usually serve as a building block for some larger cluster. Cells make tissues, tissues make organs, organs make organisms, organisms make ecosystems—on and on. Indeed, thought Holland, that’s what this business of “emergence” was all about: building blocks at one level combining into new building blocks at a higher level. It seemed to be one of the fundamental organizing principles of the world. It certainly seemed to appear in every complex adaptive system you looked at. But why? This hierarchical, building-block structure of things is as commonplace as air. It’s so widespread that we never think much about it. But when you do think about it, it cries out for an explanation: Why is the world structured this way? Well, there are actually any number of reasons. Computer programmers are taught to break things

As Holland thought about it, however, he became convinced that the most important reason lay deeper still, in the fact that a hierarchical, building-block structure utterly transforms a system’s ability to learn, evolve, and adapt.

Certainly that’s a much more efficient way to create something new than starting all over from scratch. And that fact, in turn, suggests a whole new mechanism for adaptation in general. Instead of moving through that immense space of possibilities step by step, so to speak, an adaptive system can reshuffle its building blocks and take giant leaps.

The idea was to divide the face up into, say, 10 building blocks: hairline, forehead, eyes, nose, and so on down to the chin. Then the artist would have strips of paper with a variety of options for each:

the artist could talk to the witness, assemble the appropriate pieces, and produce a sketch of the suspect very quickly. Of course, the artist couldn’t reproduce every conceivable face that way. But he or she could almost always get pretty close: by shuffling those 100 pieces of paper, the artist could make a total of 10 billion different faces, enough to sample the space of possibilities quite widely. “So if I have a process that can discover building blocks,” says Holland, “the combinatorics start working for me instead of against me. I can describe a great many complicated things with relatively few building blocks.”

And that, he realized, was the key to the multiple-gene puzzle. “The cut and try of evolution isn’t just to build a good animal, but to find good building blocks that can be put together to make many good animals.” His challenge now was to show precisely and rigorously how that could happen. And the first step, he decided, was to make a computer model, a “genetic algorithm” that would both illustrate the process and help him clarify the issues in his own mind.

the genetic algorithm that Holland finally came up with was weird. Except in the most literal sense, in fact, it wasn’t really a computer program at all. In its inner workings it was more like a simulated ecosystem—a kind of digital Serengeti in which whole populations of programs would compete and have sex and reproduce for generation after generation, always evolving their way toward the solution of whatever problem the programmer might set for them. This wasn’t the way programs were usually written, to put it mildly.

The upshot is that the genetic algorithm will converge to the solution of the problem at hand quite rapidly—without ever having to know beforehand what the solution is.

the whole art of programming is to make sure that you’ve written precisely the right instructions in precisely the right order. And that’s obviously the most effective way to do it—if you already know precisely what you want the computer to do. But suppose you don’t know, said Holland. Suppose, for example, that you’re trying to find the maximum value of some complicated mathematical function. The function could represent profit, or factory output, or vote counts, or almost anything else; the world is full of things that need to be maximized. Indeed, programmers have devised any number of sophisticated computer algorithms for doing so. And yet, not even the best of those algorithms is guaranteed to give you the correct maximum value in every situation. At some level, they always have to rely on old-fashioned trial and error—guessing. But if that’s the case, Holland told his colleagues, if you’re going to be relying on trial and error anyway, maybe it’s worth seeing what you can do with nature’s method of trial and error—namely, natural selection. Instead of trying to write your programs to perform a task you don’t quite know how to do, evolve them. The genetic algorithm was a way of doing that. To see how it works, said Holland, forget about the FORTRAN code and go down into the guts of the computer, where the program is represented as a string of binary ones and zeros: 11010011110001100100010100111011 
, et cetera. In that form the program looks a heck of a lot like a chromosome, he said, with each binary digit being a single “gene.” And once you start thinking of the binary code in biological terms, then you can use that same biological analogy to make it evolve. First, said Holland, you have the computer generate a population of maybe 100 of these digital chromosomes, with lots of random variation from one to the next.

test each individual chromosome on the problem at hand by running it as a computer program, and then giving it a score that measures how well it does.

you take those individuals you’ve selected as being fit enough to reproduce, and create a new generation of individuals through sexual reproduction.

Reproduction and crossover provided the mechanism for building blocks of genes to emerge and evolve together—and, not incidentally, provided a mechanism for a population of individuals to explore the space of possibilities with impressive efficiency.
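
A minimal genetic algorithm in the spirit of that description: bitstring chromosomes, fitness-proportional selection, single-point crossover, and rare mutation. The bit-counting fitness function is a stand-in for whatever problem the programmer sets:

```python
import random

CHROMOSOME_LEN, POP_SIZE, GENERATIONS = 32, 100, 50

def fitness(chromosome):
    # Stand-in score: count the 1-bits. A real application would run the
    # bitstring as a program or parameter set and measure how well it does.
    return sum(chromosome)

def crossover(mom, dad):
    # Single-point crossover: the digital version of sexual reproduction.
    point = random.randrange(1, CHROMOSOME_LEN)
    return mom[:point] + dad[point:]

def mutate(chromosome, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

# A population of digital chromosomes with lots of random variation.
population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    scores = [fitness(c) for c in population]
    # Fitness-proportional ("roulette wheel") selection of parents.
    parents = random.choices(population, weights=scores, k=2 * POP_SIZE)
    population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                  for i in range(POP_SIZE)]

# Converges toward 32 without ever being told what the answer is.
print(max(fitness(c) for c in population))
```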

Published in 1975, Adaptation in Natural and Artificial Systems was dense with equations and analysis. It summarized two decades of Holland’s thinking about the deep interrelationships of learning, evolution, and creativity. It laid out the genetic algorithm in exquisite detail. And in the wider world of computer science outside Michigan, it was greeted with resounding silence. In a community of people who like their algorithms to be elegant, concise, and provably correct, this genetic algorithm stuff was still just too weird. The artificial intelligence community was a little more receptive, enough to keep the book selling at the rate of 100 to 200 copies per year. But even so, when there were any comments on the book at all, they were most often along the lines of, “John’s a real bright guy, but…”

he simply didn’t play the game of academic self-promotion.

“I think it would have bothered me if nobody had been willing to listen,” he adds. “But I’ve always been very lucky in having bright, interested graduate students to bounce ideas off of.”

“Some of them have been really brilliant—and great fun for that reason,” he says. Holland deliberately took a rather hands-off approach to guidance, having seen too many professors build up a huge publication list by publishing “joint” research papers that were in fact written entirely by their graduate students. “So they all followed their noses and did things they thought were interesting. Then we’d all meet around a table about once a week, one of them would tell where he stood on his dissertation, and we’d all critique it. That was usually a lot of fun for everybody involved.”

so, he couldn’t help but feel that the genetic algorithm’s bare-bones version of evolution was just too bare.

The genetic algorithm was all very nice. But by itself, it simply wasn’t an adaptive agent.

Nearly twenty-five years after he’d first heard about Donald Hebb’s ideas, he was still convinced that adaptation in the mind and adaptation in nature were just two different aspects of the same thing. Moreover, he was still convinced that if they really were the same thing, they ought to be describable by the same theory.

what actually has to happen for game-playing agents to survive and prosper? Two things, Holland decided: prediction and feedback.

Prediction is what helps you seize an opportunity or avoid getting suckered into a trap. An agent that can think ahead has an obvious advantage over one that can’t.

Very often, moreover, the models are literally inside our head,

We use these “mental models” so often, in fact, that many psychologists are convinced they are the basis of all conscious thought.

Holland, the concept of prediction and models actually ran far deeper than conscious thought—or for that matter, far deeper than the existence of a brain. “All complex, adaptive systems—economies, minds, organisms—build models that allow them to anticipate the world,” he declares. Yes, even bacteria.

standard operating procedures are often taught by rote, without a lot of whys and wherefores.

as the standard operating procedure collectively unfolds, the company as a whole will behave as if it understood that model perfectly.

Instead, they just set up the production run by invoking a “standard operating procedure”—a set of rules of the form, “If the situation is ABC, then take action XYZ.” And just as with a bacterium or the viceroy, says Holland, those rules encode a model of the company’s world and a prediction:

In the cognitive realm, says Holland, anything we call a “skill” or “expertise” is an implicit model—or more precisely, a huge, interlocking set of standard operating procedures that have been inscribed on the nervous system and refined by years of experience.

Holland’s favorite example of implicit expertise is the skill of the medieval architects who created the great Gothic cathedrals. They had no way to calculate forces or load tolerances, or anything else that a modern architect might do. Modern physics and structural analysis didn’t exist in the twelfth century. Instead, they built those high, vaulted ceilings and massive flying buttresses using standard operating procedures passed down from master to apprentice—rules of thumb that gave them a sense of which structures would stand up and which would collapse. Their model of physics was completely implicit and intuitive.

Ordinarily, for example, we think of prediction as being something that humans do consciously, based on some explicit model of the world.

DNA itself is an implicit model: “Under the conditions we expect to find,” say the genes, “the creature we specify has a chance of doing well.” Human culture is an implicit model, a rich complex of myths and symbols that implicitly define a people’s beliefs about their world and their rules for correct behavior.

where do the models come from? How can any system, natural or artificial, learn enough about its universe to forecast future events?

Most models are quite obviously not conscious: witness the nutrient-seeking bacterium, which doesn’t even have a brain.

feedback from the environment. This was Darwin’s great insight, that an agent can improve its internal models without any paranormal guidance whatsoever. It simply has to try the models out, see how well their predictions work in the real world, and—if it survives the experience—adjust the models to do better the next time. In biology, of course, the agents are individual organisms, the feedback is provided by natural selection, and the steady improvement of the models is called evolution. But in cognition, the process is essentially the same: the agents are individual minds, the feedback comes from teachers and direct experience, and the improvement is called learning.

there was only one way to pin the ideas down: he would have to build a computer-simulated adaptive agent, just as he had done fifteen years earlier with genetic algorithms.

Learning was as fundamental to cognition as evolution was to biology. And that meant that learning had to be built into the cognitive architecture from the beginning,

Holland’s ideal was still the Hebbian neural network, where the neural impulses from every thought strengthen and reinforce the connections that make thinking possible in the first place. Thinking and learning were just two aspects of the same thing in the brain, Holland was convinced. And he wanted to capture that fundamental insight in his adaptive agent.

He was still convinced that concepts had to be understood in Hebbian terms, as emergent structures growing from some deeper neural substrate that is constantly being adjusted and readjusted by input from the environment.

if the system has been told what to do in advance, then it’s a fraud to call the thing artificial intelligence: the intelligence isn’t in the program but in the programmer. No, Holland wanted control to be learned. He wanted to see it emerging from the bottom up, just as it did from the neural substrate of the brain.

“In contrast to mainstream artificial intelligence, I see competition as much more essential than consistency,”

Consistency is a chimera, because in a complicated world there is no guarantee that experience will be consistent.

“despite all the work in economics and biology, we still haven’t extracted what’s central in competition.” There’s a richness there that we’ve only just begun to fathom. Consider the magical fact that competition can produce a very strong incentive for cooperation, as certain players spontaneously forge alliances and symbiotic relationships with each other for mutual support. It happens at every level and in every kind of complex, adaptive system, from biology to economics to politics. “Competition and cooperation may seem antithetical,” he says, “but at some very deep level, they are two sides of the same coin.”

When you thought about it, in fact, the Darwinian metaphor and the Adam Smith metaphor fit together quite nicely: Firms evolve over time, so why shouldn’t classifiers?

The upshot was that the population of rules would change and evolve over time, constantly exploring new regions of the space of possibilities. And there you would have it: by adding the genetic algorithm as a third layer on top of the bucket brigade and the basic rule-based system, Holland could make an adaptive agent that not only learned from experience but could be spontaneous and creative.
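
The middle layer is easiest to see in miniature. Below is a hedged sketch of bucket-brigade credit assignment, not Holland's actual implementation (his rules bid to post messages on a shared list); here a fixed chain of rules simply passes strength backward from the rewarded action:

```python
class Rule:
    """A condition-action rule with a strength that credit flows through."""
    def __init__(self, name, strength=10.0):
        self.name, self.strength = name, strength

BID_FRACTION = 0.1

def run_chain(chain, reward):
    # Each rule pays a fraction of its strength to the rule that set the
    # stage for it; the environment pays only the final rule in the chain.
    for earlier, later in zip(chain, chain[1:]):
        bid = BID_FRACTION * later.strength
        later.strength -= bid
        earlier.strength += bid
    chain[-1].strength += reward

rules = [Rule("sense-food"), Rule("turn-toward"), Rule("eat")]
for _ in range(100):
    run_chain(rules, reward=1.0)

# Reward earned by "eat" has propagated back to the early-acting rules.
print([(r.name, round(r.strength, 1)) for r in rules])
```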

general cognitive theory of learning, reasoning, and intellectual discovery. As they later recounted in their 1986 book, Induction, all four of them had independently come to believe that such a theory had to be founded on the three basic principles that happened to be the same three that underlay Holland’s classifier system: namely, that knowledge can be expressed in terms of mental structures that behave very much like rules; that these rules are in competition, so that experience causes useful rules to grow stronger and unhelpful rules to grow weaker; and that plausible new rules are generated from combinations of old rules.

In psychology this kind of knowledge organization is known as a default hierarchy,

they argued that these three principles ought to cause the spontaneous emergence of default hierarchies as the basic organizational structure of all human knowledge—as indeed they appear to do. The cluster of rules forming a default hierarchy is essentially synonymous with what Holland calls an internal model. We use weak general rules with stronger exceptions to make predictions about how things should be assigned to categories: “If it’s streamlined and has fins and lives in the water, then it’s a fish”—but “If it also has hair and breathes air and is big, then it’s a whale.”
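
That fish/whale pair drops straight into code. A minimal sketch in which the most specific matching rule overrides the weak default, with specificity taken (as a simplifying assumption) to be the number of conditions:

```python
# (conditions, conclusion) pairs: a weak general rule plus a stronger exception.
rules = [
    ({"streamlined", "fins", "lives-in-water"}, "fish"),
    ({"streamlined", "fins", "lives-in-water",
      "hair", "breathes-air", "big"}, "whale"),
]

def classify(observed):
    # Keep every rule whose conditions are all satisfied by the observation...
    matches = [(len(cond), concl) for cond, concl in rules if cond <= observed]
    # ...and let the most specific one win; the general rule is the default.
    return max(matches)[1] if matches else "unknown"

print(classify({"streamlined", "fins", "lives-in-water"}))   # fish
print(classify({"streamlined", "fins", "lives-in-water",
                "hair", "breathes-air", "big"}))             # whale
```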

The classifier system had started with nothing. Its initial set of rules had been totally random, the computer equivalent of primordial chaos. And yet, here was this marvelous structure emerging out of the chaos to astonish and surprise them. “We were elated,” says Holland. “It was the first case of what someone could really call an emergent model.”

the two of them had rather sleepily begun to bat around an approach that might just crack this problem of rational expectations in economics: instead of assuming that your economic agents are perfectly rational, why not just model a bunch of them with Holland-style classifier systems and let them learn from experience like real economic agents?

“You know,” he adds, “it’s perfectly possible for a scientist to feel that he has what it takes—but that he isn’t accepted in the community. John Holland went through that for decades. I certainly felt like that—until I walked into the Santa Fe Institute, and all these incredibly smart people, people I’d only read about, were giving me the impression of ‘What took you so long to get here?’” For ten days, he had been talking and listening nonstop. His head was so full of ideas that it hurt. He was exhausted. He needed to catch up on about three weeks of sleep. And he felt as though he were in heaven.

artificial life was analogous to artificial intelligence. The difference was that, instead of using computers to model thought processes, you used computers to model the basic biological mechanisms of evolution and life itself.

something right out of the Vietnam-era counterculture.

the Game of Life wasn’t actually a game that you played; it was more like a miniature universe that evolved as you watched. You started out with the computer screen showing a snapshot of this universe: a two-dimensional grid full of black squares that were “alive” and white squares that were “dead.” The initial pattern could be anything you liked. But once you set the game going, the squares would live or die from then on according to a few simple rules. Each square in each generation would first look around at its immediate neighbors. If too many of those neighbors were already alive, then in the next generation the square would die of overcrowding. And if too few neighbors were alive, then the square would die of loneliness. But if the number of neighbors was just right (two or three for a square that was already alive, exactly three for a square that wasn’t), then in the next generation that central square would be alive, either by surviving or by being “born.” That was all. The rules were nothing but a kind of cartoon biology. But what made the Game of Life wonderful was that when you turned these simple rules into a program, they really did seem to make the screen come alive.

You could start up the game with a random scattering of live squares, and watch them instantly organize themselves into all manner of coherent structures.
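
The rules translate almost line for line into a program. A minimal sketch on a small wrap-around grid:

```python
import random

SIZE = 20

def step(grid):
    """One generation of the Game of Life on a wrap-around grid."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            neighbors = sum(grid[(i + di) % SIZE][(j + dj) % SIZE]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)
                            if (di, dj) != (0, 0))
            # Survival with two or three live neighbors; birth with exactly
            # three; death by loneliness or overcrowding otherwise.
            new[i][j] = int(neighbors == 3 or (grid[i][j] and neighbors == 2))
    return new

# A random scattering of live squares, evolved for fifty generations.
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(50):
    grid = step(grid)
print(sum(map(sum, grid)), "squares alive")
```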

realized that it must have been the Game of Life. There was something alive on that screen.

“It was one of those clear, frosty nights when the stars were sort of sparkling. Across the Charles River in Cambridge you could see the Science Museum and all the cars driving around. I thought about the patterns of activity, all the things going on out there. The city was sitting there, just living. And it seemed to be the same sort of thing as the Game of Life. It was certainly much more complex. But it was not necessarily different in kind.”

that night of epiphany changed his life. But at the time it was little more than an intuition, a certain feeling he had. “It was one of those things where you have this flash of insight, and then it’s gone. Like a thunderstorm, or a tornado, or a tidal wave that comes through and changes the landscape, and then it’s past. The actual mental image itself was no longer really there, but it had set me up to feel certain ways about certain things.

among the corals and fishes, he had come to love moving in that third dimension. It was intoxicating. But once he was back in Boston, he’d soon discovered that scuba diving in the cold, brown waters of New England just wasn’t the same. So as a substitute he’d tried hang-gliding. And he’d become hooked the first day. Sailing over the world, riding upward from thermal to thermal—this was the ultimate in three dimensions. He became a fanatic, buying his own hang-glider and spending every spare minute aloft. All of which explains why, at the beginning of the summer of 1975, Langton set out for Tucson along with a couple of hang-gliding buddies who were moving to San Diego and who had a truck. Their plan was to spend the next few months making their way across the country at the slowest speed possible, while they went hang-gliding off any hill that looked halfway inviting. And that’s exactly what they started to do, working their way down the Appalachians until they came to Grandfather Mountain, North Carolina.

“I had this weird experience of watching my mind come back,” he says. “I could see myself as this passive observer back there somewhere. And there were all these things happening in my mind that were disconnected from my consciousness. It was very reminiscent of virtual machines, or like watching the Game of Life. I could see these disconnected patterns self-organize, come together, and merge with me in some way. I don’t know how to describe it in any objectively verifiable way, and maybe it was just a figment of all these funny drugs they were giving me, but it was as if you took an ant colony and tore it up, and then watched the ants come back together, reorganize, and rebuild the colony.

I did a lot of non-specific, nondirected generic thinking about biology, physical science, and ideas of the universe, and about how those ideas changed with time. Then there was this scent I talk about. Through all of this, I was always following it, but without any direction.

I very rapidly realized that I was much more interested, not in our specific, current understanding of the universe, but in how our world view had changed through time. What I was really interested in was the history of ideas.

One of Langton’s favorite cartoons is a panel of Gary Larson’s The Far Side, which shows a fully equipped mountaineer about to descend into an immense hole in the ground. As a reporter holds up a microphone, he proclaims, “Because it is not there!” “That’s how I felt,” laughs Langton. The more he studied anthropology, he says, the more he sensed that the subject had a gaping hole. “It was a fundamental dichotomy.

the time he left physics he was pursuing what eventually turned into a philosophy-anthropology double major.

So on every side, says Langton, “I was just immersed in this idea of the evolution of information. That quickly became my chief interest. It just smelled right.” Indeed, the scent was overpowering. Somehow, he says, he knew he was getting very close.

If you could create a real theory of cultural evolution, as opposed to some pseudoscientific justification for the status quo, he reasoned, then you might be able to understand how cultures really worked—and among other things, actually do something about war and social inequity.

wasn’t just cultural evolution, Langton realized. It was biological evolution, intellectual evolution, cultural evolution, concepts combining and recombining and leaping from mind to mind over miles and generations—all wrapped together.

There was a unity here, a common story that involved elements coming together, structures evolving, and complicated systems acquiring the capacity to grow and be alive. And if he could only learn to look at that unity in the right way, if he could only abstract its laws of operation into the right kind of computer program, then he would have captured everything that was important about evolution.

His basic argument was that biological and cultural evolution were simply two aspects of the same phenomenon, and that the “genes” of culture were beliefs—which in turn were recorded in the basic “DNA” of culture: language. In retrospect it was a pretty naive attempt, he says. But it was his manifesto—and

“I kept getting this look you get when they think you’re a crackpot,” he says. “It was very discouraging—especially coming as it did after the accident, when I felt unsure of what I was or who I was.” Objectively, Langton had made enormous progress by this point; he could concentrate, he was strong, and he could run five miles at a stretch. But to himself, he still felt bizarre, grotesque, and mentally impaired. “I couldn’t tell. Because of this neurological scrambling, I couldn’t be sure of any of my thoughts anymore. So I couldn’t be sure of this one. And it wasn’t helping that nobody understood what I was trying to say.”

And I kept seeing things out there that related to it. I didn’t know anything about nonlinear dynamics at the time, but there were all these intuitions for emergent properties, the interaction of lots of parts, the kinds of things that the group could do collectively that the individual couldn’t.”

“By now I’d had the epiphany and I was a religious convert,” he says. “This was clearly my life from now on. I knew I wanted to go on and do a Ph.D. in this general area. It’s just that the path to take wasn’t obvious.”

Not knowing even how to begin, Langton decided it was time to go to the University of Arizona library, where he could do a computerized literature search. He tried the key words “self-reproduction.” “I got zillions of things back!” he says.

“This was right. When I found all that, I said, ‘Hey, I may be crazy, but these people are at least as crazy as I am!’”

was all there: evolution, the Game of Life, self-assembly, emergent reproduction, everything. Von Neumann, he discovered, had gotten interested in the issue of self-reproduction back in the late 1940s, in the aftermath of his work with Burks and Goldstine on the design of a programmable digital computer.

To get a feel for the issues, von Neumann started out with a thought experiment. Imagine a machine that floats around on the surface of a pond, he said, together with lots of machine parts. Furthermore, imagine that this machine is a universal constructor: given a description of any machine, it will paddle around the pond until it locates the proper parts, and then construct that machine. In particular, given a description of itself, it will construct a copy of itself. Now that sounds like self-reproduction, said von Neumann. But it isn’t—at least, not quite. The newly created copy of the first machine will have all the right parts. But it won’t have a description of itself, which means that it won’t be able to make any further copies of itself. So von Neumann also postulated that the original machine should have a description copier: a device that will take the original description, duplicate it, and then attach the duplicate description to the offspring machine. Once that happens, he said, the offspring will have everything it needs to continue reproducing indefinitely. And then that will be self-reproduction. As a thought experiment, von Neumann’s analysis of self-reproduction was simplicity itself. To restate it in a slightly more formal way, he was saying that the genetic material of any self-reproducing system, natural or artificial, has to play two fundamentally different roles. On the one hand, it has to serve as a program, a kind of algorithm that can be executed during the construction of the offspring. On the other hand, it has to serve as passive data, a description that can be duplicated and given to the offspring. But as a scientific prediction, that analysis turned out to be breathtaking: when Watson and Crick finally unraveled the molecular structure of DNA a few years later, in 1953, they discovered that it fulfilled von Neumann’s two requirements precisely.
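
The dual role von Neumann identified, the same string executed as program and copied as passive data, is exactly the trick a quine plays. A minimal Python example (mine, not the book's, but a faithful miniature of the argument):

```python
# The string s plays both of von Neumann's roles at once: it is executed
# as a program (the print statement) and duplicated as passive data (the
# %r substitution), so the output is an exact copy of the source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```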

One of the highlights was von Neumann’s proof that there existed at least one cellular automaton pattern that could indeed reproduce itself. The pattern he’d found was immensely complicated, requiring a huge lattice and 29 different states per cell. It was far beyond the simulation capacity of any existing computer. But the very fact of its existence settled the essential question of principle: self-reproduction, once considered to be an exclusive characteristic of living things, could indeed be achieved by machines.

Langton wasted no time. By that point he’d learned that Michigan’s Computer and Communication Sciences program was famous for exactly the kind of perspective that he was after. “To them,” says Langton, “information processing was writ large across all of nature. However information is processed is worthwhile understanding. So I applied under that philosophy.” By and by he got a letter back from Professor Gideon Frieder, the chairman of the department. “Sorry,” it said, “you don’t have the proper background.” Application denied. Langton was enraged. He fired back a seven-page letter, the gist of which was, What the hell!? “Here is your whole philosophy, the purpose you claim to exist, live, and breathe for. This is exactly what I’ve been pursuing. And you’re telling me no?” A few weeks afterward Frieder wrote again, saying in effect, “Welcome aboard.” As he told Langton later, “I just liked the idea of having somebody around who would say that to the chairman.”

Second-order phase transitions are much less common in nature, Langton learned. (At least, they are at the temperatures and pressures humans are used to.) But they are much less abrupt, largely because the molecules in such a system don’t have to make that either-or choice. They combine chaos and order. Above the transition temperature, for example, most of the molecules are tumbling over one another in a completely chaotic, fluid phase. Yet tumbling among them are myriads of submicroscopic islands of orderly, latticework solid, with molecules constantly dissolving and recrystallizing around the edges. These islands are neither very big nor very long-lasting, even on a molecular scale. So the system is still mostly chaos. But as the temperature is lowered, the largest islands start to get very big indeed, and they begin to live for a correspondingly long time. The balance between chaos and order has begun to shift. Of course, if the temperature were taken all the way past the transition, the roles would reverse: the material would go from being a sea of fluid dotted with islands of solid, to being a continent of solid dotted with lakes of fluid. But right at the transition, the balance is perfect: the ordered structures fill a volume precisely equal to that of the chaotic fluid. Order and chaos intertwine in a complex, ever-changing dance of submicroscopic arms and fractal filaments. The largest ordered structures propagate their fingers across the material for arbitrarily long distances and last for an arbitrarily long time. And nothing ever really settles down.

“It reminded me of the feelings I experienced when I learned to scuba dive in Puerto Rico,” he explains. “For most of our dives we were fairly close to shore, where the water was crystal clear and you could see the bottom perfectly about 60 feet down. However, one day our instructor took us to the edge of the continental shelf, where the 60-foot bottom gave way to an 80-degree slope that disappeared into the depths—I believe at that point the transition was to about 2000 feet. It made you realize that all the diving you had been doing, which had certainly seemed adventurous and daring, was really just playing around on the beach. The continental shelves are like puddles compared to ‘The Ocean.’ “Well, life emerged in the oceans,” he adds, “so there you are at the edge, alive and appreciating that enormous fluid nursery. And that’s why ‘the edge of chaos’ carries for me a very similar feeling: because I believe life also originated at the edge of chaos. So here we are at the edge, alive and appreciating the fact that physics itself should yield up such a nursery. 
”

the only rules that allow you to build a universal computer are those that are in Class IV, like the Game of Life. These are the only rules that provide enough stability to store information and enough fluidity to send signals over arbitrary distances—the two things that seem essential for computation. And, of course, these are also the rules that sit right in this phase transition at the edge of chaos.

one of the interesting questions we can ask about living things is, Under what conditions do systems whose dynamics are dominated by information processing arise from things that just respond to physical forces? When and where does the processing of information and the storage of information become important?”

phase transitions, complexity, and computation were all wrapped together, Langton realized. Or, at least, they were in the von Neumann universe. But Langton was convinced that the connections held true in the real world as well—in everything from social systems to economies to living cells. Because once you got to computation, you were getting awfully close to the essence of life itself. “Life is based to an incredible degree on its ability to process information,” he says. “It stores information. It maps sensory information. And it makes some complex transformations on that information to produce action.

In the von Neumann universe, likewise, the Class IV rules might eventually produce a frozen configuration, or they might not. But either way, Langton says, the phase transition at the edge of chaos would correspond to what computer scientists call “undecidable” algorithms. These are the algorithms that might halt very quickly with certain inputs—equivalent to starting off the Game of Life with a known stable structure. But they might run on forever with other inputs. And the point is, you can’t always tell ahead of time which it will be—even in principle. In fact, says Langton, there’s even a theorem to that effect: the “undecidability theorem” proved by the British logician Alan Turing back in the 1930s. Paraphrased, the theorem essentially says that no matter how smart you think you are, there will always be algorithms that do things you can’t predict in advance. The only way to find out what they will do is to run them. And, of course, those are exactly the kind of algorithms you want for modeling life and intelligence. So it’s no wonder the Game of Life and other Class IV cellular automata seem so lifelike. They exist in the only dynamical regime where complexity, computation, and life itself are possible: the edge of chaos.

Langton now had four very detailed analogies—

Cellular Automata Classes: I & II → “IV” → III
Dynamical Systems: Order → “Complexity” → Chaos
Matter: Solid → “Phase Transition” → Fluid
Computation: Halting → “Undecidable” → Nonhalting

—along with a fifth and far more hypothetical one:

Life: Too static → “Life/Intelligence” → Too noisy

But what did they all add up to? Just this, Langton decided: “solid” and “fluid” are not just two fundamental phases of matter, as in water versus ice. They are two fundamental classes of dynamical behavior in general—including dynamical behavior in such utterly nonmaterial realms as the space of cellular automaton rules or the space of abstract algorithms. Furthermore, he realized, the existence of these two fundamental classes of dynamical behavior implies the existence of a third fundamental class: “phase transition” behavior at the edge of chaos, where you would encounter complex computation and quite possibly life itself.

he did have this irresistible vision of life as eternally trying to keep its balance on the edge of chaos, always in danger of falling off into too much order on the one side, and too much chaos on the other. Maybe that’s what evolution is, he thought: just a process of life’s learning how to seize control of more and more of its own parameters, so that it has a better and better chance to stay balanced on the edge.

“What are you working on?” asked Doyne Farmer. “I don’t really know how to describe it,” admitted Langton. “I’ve been calling it artificial life.” “Artificial life!” exclaimed Farmer. “Wow, we gotta talk!” So they had talked. A lot. After the conference, moreover, they had kept on talking via electronic mail. And Farmer had made it a point to bring Langton out to Los Alamos on several occasions to give talks and seminars. (Indeed, it was at the “Evolution, Games, and Learning” conference in May 1985 that Langton gave his first public discussion of his lambda parameter and the phase transition work. Farmer, Wolfram, Norman Packard, and company were profoundly impressed.) This was the same period when Farmer was busy with Packard and Stuart Kauffman on the autocatalytic set simulation for the origin of life—not to mention helping to get the Santa Fe Institute up and running—and he was getting deeply involved with issues of complexity himself.
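
Langton's lambda parameter has a compact reading: for a cellular automaton rule table, lambda is roughly the fraction of entries that map to a non-quiescent state. A hedged one-dimensional sketch of sweeping it, with illustrative constants and an activity measure that is not Langton's exact statistic:

```python
import random

K, RADIUS = 2, 1          # states per cell (0 = quiescent); neighborhood radius
SIZE, STEPS = 200, 200

def random_rule(lam):
    # Each neighborhood maps to a non-quiescent state with probability ~lam.
    table = {}
    for n in range(K ** (2 * RADIUS + 1)):
        hood = tuple((n // K ** i) % K for i in range(2 * RADIUS + 1))
        table[hood] = random.randrange(1, K) if random.random() < lam else 0
    return table

def activity(lam):
    rule = random_rule(lam)
    row = [random.randrange(K) for _ in range(SIZE)]
    for _ in range(STEPS):
        row = [rule[tuple(row[(i + d) % SIZE]
                          for d in range(-RADIUS, RADIUS + 1))]
               for i in range(SIZE)]
    return sum(1 for cell in row if cell) / SIZE  # fraction of active cells

for lam in (0.05, 0.25, 0.50):
    print(lam, activity(lam))  # low lambda freezes, high lambda churns;
                               # the interesting regime lies in between
```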

Langton had no idea what most of the speakers would say until they got up to say it. “The meeting was a very emotional experience for me,” he says. “You’ll never re-create that feeling. Everybody had been doing artificial life on his own, on the side, often at home. And everybody had had this feeling of, ‘There must be something here.’ But they didn’t know who to turn to. It was a whole collection of people who’d had the same uncertainties, the same doubts, who’d wondered if they were crazy. And at the meeting we almost embraced each other. There was this real camaraderie, this sense of ‘I may be crazy—but so are all these other people.’” There were no breakthroughs in any of the presentations, he says. But you could see the potential in almost all of them. The talks ranged from the collective behavior of a simulated ant colony, to the evolution of digital ecosystems made out of assembly-language computer code, to the power of sticky protein molecules to assemble themselves into a virus.

“I was so hyped up, it was like an altered state of consciousness,” he says. “I have this image of a sea of gray matter, with ideas swimming around, ideas recombining, ideas leaping from mind to mind.” For that space of five days, he says, “it was like being incredibly alive.”

had been enormously impressed by Holland’s genetic algorithms, and classifier systems, and the boids, and so on. I thought a good deal about them, and the possibilities they opened up. My instinct was that this was an answer. The problem was, What was the question in economics?

In particular, he says, he wanted to see the program take some of the classical problems in economics, the hoary old chestnuts of the field, and see how they changed when you looked at them in terms of adaptation, evolution, learning, multiple equilibria, emergence, and complexity—all the Santa Fe themes. Why, for example, are there speculative bubbles and crashes in the stock market? Or why is there money? (That is, how does one particular good such as gold or wampum become widely accepted as a medium of exchange?)

“Either someone gets glassy-eyed or the communication begins,” he says. “And if it does, then you’re exercising a form of power that’s extremely compelling: intellectual power. If you can get a person who understands the concept somewhere down in the bowels of the brain, where that same idea’s been sitting forever, then you have a grasp on that person. You don’t do it by physical coercion, but by a kind of intellectual appeal that amounts to a coercion. You grab them by the brains instead of by the balls.”

we would thrash out what economists could do about bounded rationality.” That is, what would really happen to economic theory if they quit assuming that people could instantaneously compute their way to the solution of any economic problem—even if that problem were just as hard as chess?

the way to set the dial of rationality was to leave it alone. Let the agents set it by themselves.

there is only one way to be perfectly rational, while there are an infinity of ways to be partially rational. So which way is correct for human beings? “Where,” he asked, “do you set the dial of rationality?”

If people are perfectly rational, then theorists can say exactly how they will react. But what would perfect irrationality be like? Hahn wondered.

No matter where you put them, in fact, the agents would try to do something. So unlike the neoclassical theory, which has almost nothing at all to say about dynamics and change in the economy, a model full of adaptive agents would come with dynamics already built in.

“So all the agents could start off as perfectly stupid. That is, they would just make random, blundering decisions. But they would get smarter and smarter as they reacted to one another.”

Instead of emphasizing decreasing returns, static equilibrium, and perfect rationality, as in the neoclassical view, the Santa Fe team would emphasize increasing returns, bounded rationality, and the dynamics of evolution and learning. Instead of basing their theory on assumptions that were mathematically convenient, they would try to make models that were psychologically realistic. Instead of viewing the economy as some kind of Newtonian machine, they would see it as something organic, adaptive, surprising, and alive.

“In fact,” Arthur adds, “the key intellectual influence that whole first year was machine learning in general and John Holland in particular—not condensed-matter physics, not increasing returns, not computer science, but learning and adaptation.

“Economics, as it is usually practiced, operates in a purely deductive mode,” he says. “Every economic situation is first translated into a mathematical exercise, which the economic agents are supposed to solve by rigorous, analytical reasoning. But then here were Holland, the neural net people, and the other machine-learning theorists. And they were all talking about agents that operate in an inductive mode, in which they try to reason from fragmentary data to a useful internal model.” Induction is what allows us to infer the existence of a cat from the glimpse of a tail vanishing around a corner. Induction is what allows us to walk through the zoo and classify some exotic feathered creature as a bird, even though we’ve never seen a scarlet-crested cockatoo before. Induction is what allows us to survive in a messy, unpredictable, and often incomprehensible world.

They try to fill in the gaps on the fly by forming hypotheses, by making analogies, by drawing from past experience, by using heuristic rules of thumb. Whatever works, works—even if they don’t understand why. And for that very reason, induction cannot depend upon precise, deductive logic.

Holland’s answer was essentially that you learn in that environment because you have to: “Evolution doesn’t care whether problems are well defined or not.” Adaptive agents are just responding to a reward,

they could operate in an environment that was not well defined at all.

Moreover, because the system was always testing those hypotheses to find out which ones were useful and led to rewards, it could continue to learn even in the face of crummy, incomplete information—and even while the environment was changing in unexpected ways.

<<Action produces information>>

“But its behavior isn’t optimal!” the economists complained, having convinced themselves that a rational agent is one who optimizes his “utility function.” “Optimal relative to what?” Holland replied. Talk about your ill-defined criterion: in any real environment, the space of possibilities is so huge that there is no way an agent can find the optimum—or even recognize it. And that’s before you take into account the fact that the environment might be changing in unforeseen ways.

“This whole induction business fascinated me,” says Arthur. “Here you could think about doing economics where the problem facing the economic agent was not even well defined, where the environment is not well defined, where the environment might be changing, where the changes were totally unknown. And, of course, you just had to think for about a tenth of a second to realize, that’s what life is all about. People routinely make decisions in contexts that are not well defined, without even realizing it. You muddle through, you adapt ideas, you copy, you try what worked in the past, you try out things. And, in fact, economists had talked about this kind of behavior before. But we were finding ways to make it analytically precise, to build it into the heart of the theory.”

“Arrow, Hahn, Holland, myself, maybe half a dozen of us. We had just begun to realize that if you do economics this way—if there was this Santa Fe approach—then there might be no equilibrium in the economy at all. The economy would be like the biosphere: always evolving, always changing, always exploring new territory. “Now, what worried us was that it didn’t seem possible to do economics in that case,” says Arthur. “Because economics had come to mean the investigation of equilibria.

Frankie Hahn said, ‘If things are not repeating, if things are not in equilibrium, what can we, as economists, say? How could you predict anything? How could you have a science?’”

Look at meteorology, he told them. The weather never settles down. It never repeats itself exactly. It’s essentially unpredictable more than a week or so in advance. And yet we can comprehend and explain almost everything that we see up there.

we have a real science of weather—without full prediction. And we can do it because prediction isn’t the essence of science. The essence is comprehension and explanation.

“Well, Holland’s answer was to me a revelation,” says Arthur. “It left me almost gasping. I had been thinking for almost ten years that much of the economy would never be in equilibrium. But I couldn’t see how to ‘do’ economics without equilibrium. John’s comment cut through the knot for me. After that it seemed—straightforward.”

“A lot of people, including myself, had naively assumed that what we’d get from the physicists and the machine-learning people like Holland would be new algorithms, new problem-solving techniques, new technical frameworks. But what we got was quite different—what we got was very often a new attitude, a new approach, a whole new world view.”

Evolution, of course, was a lot more than just random mutation and natural selection. It was also emergence and self-organization. And that, despite the best efforts of Stuart Kauffman, Chris Langton, and a great many other people, was something that no one understood very well.

Holland also realized that he was going to have to face up to the major philosophical flaw in classifier systems. In the spontaneous emergence paper, he says, the spontaneity had been real, and the emergence had been completely intrinsic. But in classifier systems, for all their learning ability and for all their power to discover emergent clusters of rules, there was still a deus ex machina; the systems still depended on the shadowy hand of the programmer. “A classifier system gets a payoff only because I assign winning or losing,” says Holland. It was something that had always bugged him. Leaving aside questions of religion, he says, the real world seems to get along just fine without a cosmic referee.

Any given organism’s ability to survive and reproduce depends on what niche it is filling, what other organisms are around, what resources it can gather, even what its past history has been.

Indeed, evolutionary biologists consider it so important that they’ve made up a special word for it: organisms in an ecosystem don’t just evolve, they coevolve. Organisms don’t change by climbing uphill to the highest peak of some abstract fitness landscape, the way biologists of R. A. Fisher’s generation had it. (The fitness-maximizing organisms of classical population genetics actually look a lot like the utility-maximizing agents of neoclassical economics.) Real organisms constantly circle and chase one another in an infinitely complex dance of coevolution.

In the human world, moreover, the dance of coevolution has produced equally exquisite webs of economic and political dependencies—alliances, rivalries, customer-supplier relationships, and on and on. It is the dynamic that underlay Arthur’s vision of an economy under glass, in which artificial economic agents would adapt to each other as you watched. It is the dynamic that underlay Arthur and Kauffman’s analysis of autocatalytic technology change. It is the dynamic that underlies the affairs of nations in a world that has no central authority.

he was ever going to understand these phenomena at the deepest level, he was going to have to start by eliminating this business of outside reward.

he wanted to understand a deep paradox in evolution: the fact that the same relentless competition that gives rise to evolutionary arms races can also give rise to symbiosis and other forms of cooperation. Indeed, it was no accident that cooperation in its various guises actually underlay quite a few items on Holland’s list. It was a fundamental problem in evolutionary biology—not to mention economics, political science, and all of human affairs. In a competitive world, why do organisms cooperate at all? Why do they leave themselves open to “allies” who could easily turn on them?

Or in the natural world, consider that an overly trusting creature might very well get eaten. So once again: Why should any organism ever dare to cooperate with another?

Nice guys—or more precisely, nice, forgiving, tough, and clear guys—can indeed finish first.

TIT FOR TAT would start out by cooperating on the first move, and from there on out would do exactly what the other program had done on the move before. That is, the TIT FOR TAT strategy incorporated the essence of the carrot and the stick. It was “nice” in the sense that it would never defect first. It was “forgiving” in the sense that it would reward good behavior by cooperating the next time. And yet it was “tough” in the sense that it would punish uncooperative behavior by defecting the next time. Moreover, it was “clear” in the sense that its strategy was so simple that the opposing programs could easily figure out what they were dealing with.
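
The strategy itself is two lines of code. A minimal iterated prisoner's dilemma sketch, assuming Axelrod's tournament payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 when a defector meets a cooperator):

```python
# 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
          ('D', 'C'): (5, 0), ('C', 'D'): (0, 5)}

def tit_for_tat(mine, theirs):
    # Nice: cooperate first. Tough and forgiving: echo their last move.
    return theirs[-1] if theirs else 'C'

def always_defect(mine, theirs):
    return 'D'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): cooperation locks in
print(play(tit_for_tat, always_defect))  # (199, 204): suckered once, never again
```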

In his 1984 book, The Evolution of Cooperation, Axelrod pointed out that TIT FOR TAT interaction can lead to cooperation in a wide variety of social settings—including some of the most unpromising situations imaginable. His favorite example was the “live-and-let-live” system that spontaneously developed during World War I,

Axelrod also pointed out that TIT FOR TAT interactions lead to cooperation in the natural world even without the benefit of intelligence.

This TIT FOR TAT mechanism for the origin of cooperation was exactly the sort of thing Holland meant when he said that people at the institute ought to be looking for the analog of “weather fronts” in the social sciences.

With it he’s been able to demonstrate both the evolution of cooperation and the evolution of predator-prey relationships simultaneously, in the same ecosystem. And that success has inspired him to start work on still more sophisticated variations of Echo: “There’s a

Axelrod produced a computer simulation of this scenario in collaboration with Holland’s then-graduate student Stephanie Forrest. The question was whether a population of individuals coevolving via the genetic algorithm could discover TIT FOR TAT. And the answer was yes: in the computer runs, either TIT FOR TAT or a strategy very much like it would appear and spread through the population very quickly.

They all had ‘trade,’ in that there were goods being exchanged in one way or another. They all had ‘resource transformation,’ such as might be produced by enzymes or production processes. And they all had ‘mate selection,’ which acted as a source of technological innovation.

physicists were telling us that there were three ways now to proceed in science: mathematical theory, laboratory experiment, and computer modeling. You have to go back and forth. You discover something with a computer model that seems out of whack, and then you go and do theory to try to understand it. And then with the theory, you go back to the computer or to the laboratory for more experiments. To many of us, it seemed as though we could do the same thing in economics with great profit. We began to realize that we’d been restricting ourselves in economics unnaturally, by exploring only problems that might yield to mathematical analysis. But now that we were getting into this inductive world, where things started to get very complicated, we could extend ourselves to problems that might only be studied by computer experiment. I saw this as a necessary development—and a liberation.”

neoclassical theory finds Wall Street utterly incomprehensible. Since all economic agents are perfectly rational, goes the argument, then all investors must be perfectly rational. Moreover, since these perfectly rational investors also have exactly the same information about the expected earnings of all stocks infinitely far into the future, they will always agree about what every stock is worth—namely, the “net present value” of its future earnings when they are discounted by the interest rate. So this perfectly rational market will never get caught up in speculative bubbles and crashes; at most it will go up or down a little bit as new information becomes available about the various stocks’ future earnings. But either way, the logical conclusion is that the floor of the New York Stock Exchange must be a very quiet place. In reality, of course, the floor of the New York Stock Exchange is a barely controlled riot. The place is wracked by bubbles and crashes all the time, not to mention fear, uncertainty, euphoria, and mob psychology in every conceivable combination.
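
The "net present value" in that argument is a single discounted sum. A minimal sketch with purely illustrative numbers:

```python
def net_present_value(expected_earnings, interest_rate):
    # Each future earning E_t is worth E_t / (1 + r)^t today.
    return sum(earning / (1 + interest_rate) ** t
               for t, earning in enumerate(expected_earnings, start=1))

# Illustrative only: $5 of expected earnings per share each year for 30
# years, discounted at 5%; the value every perfectly rational investor
# in the neoclassical story would agree on.
print(round(net_present_value([5.0] * 30, 0.05), 2))
```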

Its credo is that life is not a property of matter per se, but a property of the organization of that matter.

Its promise is that by exploring other possible biologies in a new medium—computers and perhaps robots—artificial life researchers can achieve what space scientists have achieved by sending probes to other planets: a new understanding of our own world through a cosmic perspective on what happened on other worlds. “Only when we are able to view life-as-we-know-it in the context of life-as-it-could-be will we really understand the nature of the beast,” Langton declared. The idea of viewing life in terms of its abstract organization is perhaps the single most compelling vision to come out of the workshop, he said. And it’s no accident that this vision is so closely associated with computers: they share many of the same intellectual roots. Human beings have been searching for the secret of automata—machines that can generate their own behavior—at least since the time of the Pharaohs, when Egyptian craftsmen created clocks based on the steady drip of water through a small orifice.

the first century A.D., Hero of Alexandria produced his treatise Pneumatics, in which he described (among other things) how pressurized air could generate simple movements in various gadgets shaped like animals and humans. In Europe, during the great age of clockworks more than a thousand years later, medieval and Renaissance craftsmen devised increasingly elaborate figures known as “jacks,” which would emerge from the interior of the clock to strike the hours; some of their public clocks eventually grew to include large numbers of figures that acted out entire plays. And during the Industrial Revolution, the technology of clockwork automata gave rise to the still more sophisticated technology of process control, in which factory machines were guided by intricate sets of rotating cams and interlinked mechanical arms.

That effort culminated in the early decades of the twentieth century with the work of Alonzo Church, Kurt Gödel, Alan Turing, and others, who pointed out that the essence of a mechanical process—the “thing” responsible for its behavior—is not a thing at all. It is an abstract control structure, a program that can be expressed as a set of rules without regard to the material the machine is made of. Indeed, said Langton, this abstraction is what allows you to take a piece of software from one computer and run it on another computer: the “machineness” of the machine is in the software, not the hardware. And once you’ve accepted that, he said, echoing his own epiphany at Massachusetts General Hospital nearly eighteen years before, then it’s a very small step to say that the “aliveness” of an organism is also in the software—in the organization of the molecules, not the molecules themselves.

living systems are machines, all right, but machines with a very different kind of organization from the ones we’re used to. Instead of being designed from the top down, the way a human engineer would do it, living systems always seem to emerge from the bottom up, from a population of much simpler systems.

“The most surprising lesson we have learned from simulating complex physical systems on computers is that complex behavior need not have complex roots,” he wrote, complete with italics. “Indeed, tremendously interesting and beguilingly complex behavior can emerge from collections of extremely simple components.”

the statement applied equally well to one of the most vivid demonstrations at the artificial life workshop: Craig Reynolds’ flock of “boids.” Instead of writing global, top-down specifications for how the flock should behave, or telling his creatures to follow the lead of one Boss Boid, Reynolds had used only the three simple rules of local, boid-to-boid interaction. And it was precisely that locality that allowed his flock to adapt to changing conditions so organically. The rules always tended to pull the boids together, in somewhat the same way that Adam Smith’s Invisible Hand tends to pull supply into balance with demand. But just as in the economy, the tendency to converge was only a tendency, the result of each boid reacting to what the other boids were doing in its immediate neighborhood. So when a flock encountered an obstacle such as a pillar, it had no trouble splitting apart and flowing to either side as each boid did its own thing.
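
Reynolds' three rules of local interaction are usually summarized as separation, alignment, and cohesion. A hedged two-dimensional sketch, with weights and radii that are illustrative rather than Reynolds' own:

```python
import random

NUM_BOIDS, RADIUS, TOO_CLOSE = 30, 5.0, 1.0

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 50), random.uniform(0, 50)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids):
    for b in boids:
        near = [o for o in boids if o is not b and
                (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < RADIUS ** 2]
        if not near:
            continue
        n = len(near)
        # Cohesion: steer toward the neighbors' center of mass.
        cx, cy = sum(o.x for o in near) / n, sum(o.y for o in near) / n
        b.vx += 0.01 * (cx - b.x)
        b.vy += 0.01 * (cy - b.y)
        # Alignment: match the neighbors' average velocity.
        avx, avy = sum(o.vx for o in near) / n, sum(o.vy for o in near) / n
        b.vx += 0.05 * (avx - b.vx)
        b.vy += 0.05 * (avy - b.vy)
        # Separation: back away from any neighbor that is too close.
        for o in near:
            if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < TOO_CLOSE ** 2:
                b.vx -= 0.05 * (o.x - b.x)
                b.vy -= 0.05 * (o.y - b.y)
    for b in boids:
        b.x += b.vx
        b.y += b.vy

# No Boss Boid, no global plan: flock structure emerges from local rules.
boids = [Boid() for _ in range(NUM_BOIDS)]
for _ in range(100):
    step(boids)
```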

Try doing that with a single set of top-level rules, said Langton. The system would be impossibly cumbersome and complicated, with the rules telling each boid precisely what to do in every conceivable situation.

since it’s effectively impossible to cover every conceivable situation, top-down systems are forever running into combinations of events they don’t know how to handle. They tend to be touchy and fragile, and they all too often grind to a halt in a dither of indecision.

The theme was heard over and over again at the workshop, said Langton: the way to achieve lifelike behavior is to simulate populations of simple units instead of one big complex unit. Use local control instead of global control. Let the behavior emerge from the bottom up, instead of being specified from the top down. And while you’re at it, focus on ongoing behavior instead of the final result. As Holland loved to point out, living systems never really settle down.

there was a third great idea to be distilled from the workshop presentations: the possibility that life isn’t just like a computation, in the sense of being a property of the organization rather than the molecules. Life literally is a computation.

For example, Why is life quite literally full of surprises? Because, in general, it is impossible to start from a given set of GTYPE rules and predict what their PTYPE behavior will be—even in principle. This is the undecidability theorem, one of the deepest results of computer science: unless a computer program is utterly trivial, the fastest way to find out what it will do is to run it and see. There is no general-purpose procedure that can scan the code and the input and give you the answer any faster than that. That’s why the old saw about computers only doing what their programmers tell them to do is both perfectly true and virtually irrelevant; any piece of code that’s complex enough to be interesting will always surprise its programmers. That’s why any decent software package has to be endlessly tested and debugged before it is released—and that’s why the users always discover very quickly that the debugging was never quite perfect. And, most important for artificial life purposes, that’s why a living system can be a biochemical machine that is completely under the control of a program, a GTYPE, and yet still have a surprising, spontaneous behavior in the PTYPE.
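
The point can be made concrete with a toy GTYPE/PTYPE pair. In the sketch below (an illustration, not anything from the workshop), the GTYPE is the eight-entry rule table of a one-dimensional cellular automaton, Wolfram's rule 110, and the PTYPE is the pattern it unfolds; no known analysis of the table predicts that pattern faster than running it:

```python
# GTYPE: an 8-entry lookup table (Wolfram's rule 110, chosen as an example).
# PTYPE: the pattern of cells the rule generates over time. The only general
# way to learn the PTYPE is to run the GTYPE and see.
RULE = 110
gtype = [(RULE >> i) & 1 for i in range(8)]      # indexed by 3-cell neighborhood

def run(width=64, steps=32):
    row = [0] * width
    row[width // 2] = 1                          # start from a single live cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = [gtype[(row[(i - 1) % width] << 2)
                     | (row[i] << 1)
                     | row[(i + 1) % width]]
               for i in range(width)]

run()
```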

in the poorly defined, constantly changing environments faced by living systems, said Langton, there seems to be only one way to proceed: trial and error, also known as Darwinian natural selection. The process may seem terribly cruel and wasteful, he pointed out. In effect, nature does its programming by building a lot of different machines with a lot of randomly differing GTYPES, and then smashing the ones that don’t work very well. But, in fact, that messy, wasteful process may be the best that nature can do. And by the same token, John Holland’s genetic algorithm approach may be the only realistic way of programming computers to cope with messy, ill-defined problems. “It is quite likely that this is the only efficient, general procedure that could find GTYPEs with specific PTYPE traits,”
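
A minimal sketch of that build-and-smash loop, in the style of Holland's genetic algorithm (the bitstring GTYPEs, the toy fitness function, and all the rates below are invented for illustration):

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]    # toy PTYPE trait to find

def fitness(g):                                   # how well a GTYPE matches the trait
    return sum(a == b for a, b in zip(g, TARGET))

# Build a lot of machines with randomly differing GTYPEs...
pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    survivors = pop[:10]                          # ...and smash the ones that don't work
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, len(TARGET))    # sexual recombination (crossover)
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                 # occasional random mutation
            i = random.randrange(len(TARGET))
            child[i] ^= 1
        children.append(child)
    pop = survivors + children

print(gen, fitness(pop[0]), pop[0])
```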

True, computer viruses lived their lives entirely within the cyberspace of computers and computer networks. They didn’t have any independent existence out in the material world. But that didn’t necessarily rule them out as living things. If life was really just a question of organization, as Langton claimed, then a properly organized entity would literally be alive, no matter what it was made of.

The future effects of the changes we make now are unpredictable, he pointed out, even in principle. Yet we are responsible for the consequences, nonetheless. And that, in turn, meant that the implications of artificial life had to be debated in the open, with public input.

“By the middle of this century,” he wrote, “mankind had acquired the power to extinguish life on Earth. By the middle of the next century, he will be able to create

Furthermore, he said, suppose that you could create life. Then suddenly you would be involved in something a lot bigger than some technical definition of living versus nonliving. Very quickly, in fact, you would find yourself engaged in a kind of empirical theology. Having created a living creature, for example, would you then have the right to demand that it worship you and make sacrifices to you? Would you have the right to act as its god? Would you have the right to destroy it if it didn’t behave the way you wanted it to?

“Chris was definitely worth it,” he says. “People like him, who have a real dream, a vision of what they want to do, are rare. Chris hadn’t learned to be very efficient. But I think he had a good vision, one that was really needed. And I think he was doing a really good job carrying it out. He wasn’t afraid to tackle the details.”

If entropy is always increasing, he asked himself, and if atomic-scale randomness and disorder are inexorable, then why is the universe still able to bring forth stars and planets and clouds and trees? Why is matter constantly becoming more and more organized on a large scale, at the same time that it is becoming more and more disorganized on a small scale? Why hasn’t everything in the universe long since dissolved into a formless miasma?

this sensitivity to initial conditions could be described by the emerging science of “chaos,” more generally known as “dynamical systems theory”;

“I’m of the school of thought that life and organization are inexorable,” he says, “just as inexorable as the increase in entropy. They just seem more fluky because they proceed in fits and starts, and they build on themselves. Life is a reflection of a much more general phenomenon that I’d like to believe is described by some counterpart of the second law of thermodynamics—some law that would describe the tendency of matter to organize itself, and that would predict the general properties of organization we’d expect to see in the universe.”

chaos theory actually had very little to say about the fundamental principles of living systems or of evolution. It didn’t explain how systems starting out in a state of random nothingness could then organize themselves into complex wholes. Most important, it didn’t answer his old question about the inexorable growth of order and structure in the universe.

people have recently been finding so many hints about things like emergence, adaptation, and the edge of chaos that they can begin to sketch at least a broad outline of what this hypothetical new second law might be like.

this putative law would have to give a rigorous account of emergence: What does it really mean to say that the whole is greater than the sum of its parts?

Flying boids (and real birds) adapt to the actions of their neighbors, thereby becoming a flock. Organisms cooperate and compete in a dance of coevolution, thereby becoming an exquisitely tuned ecosystem. Atoms search for a minimum energy state by forming chemical bonds with each other, thereby becoming the emergent structures known as molecules. Human beings try to satisfy their material needs by buying, selling, and trading with each other, thereby creating an emergent structure known as a market. Humans likewise interact with each other to satisfy less quantifiable goals, thereby forming families, religions, and cultures. Somehow, by constantly seeking mutual accommodation and self-consistency, groups of agents manage to transcend themselves and become something more. The trick is to figure out how, without falling back into sterile philosophizing or New Age mysticism.

connectionism: the idea of representing a population of interacting agents as a network of “nodes” linked by “connections.”

Exhibit A has to be the neural network movement, in which researchers use webs of artificial neurons to model such things as perception and memory retrieval—and, not incidentally, to mount a radical attack on the symbol-processing methods of mainstream artificial intelligence. But close behind are many of the models that have found a home at the Santa Fe Institute, including John Holland’s classifier systems, Stuart Kauffman’s genetic networks, the autocatalytic set model for the origin of life, and the immune system model that he and Packard did in the mid-1980s with Los Alamos’ Alan Perelson.

In John Holland’s classifier system the node-and-connection structure is considerably less obvious, says Farmer, but it’s there. The set of nodes is just the set of all possible internal messages, such as 1001001110111110. And the connections are just the classifier rules, each of which looks for a certain message on the system’s internal bulletin board, and then responds to it by posting another. By activating certain input nodes—that is, by posting the corresponding input messages on the bulletin board—the programmer can cause the classifiers to activate more messages, and then still more. The result will be a cascade of messages analogous to the spreading activation in a neural network. And, just as the neural net eventually settles down into a self-consistent state, the classifier system will eventually settle down into a stable set of active messages and classifiers that solves the problem at hand—or, in Holland’s picture, that represents an emergent mental model.
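
A minimal sketch of that bulletin-board cascade (the messages and rules here are invented, and a real classifier system adds rule strengths, bidding, and the genetic algorithm on top):

```python
def matches(cond, msg):
    # '#' is the classic "don't care" symbol in classifier conditions.
    return all(c in ("#", m) for c, m in zip(cond, msg))

# Connections: each classifier watches for one message and posts another.
# These rules and messages are invented for illustration only.
classifiers = [
    ("1###", "0110"),   # if any message starts with 1, post 0110
    ("01##", "0011"),
    ("0011", "1111"),
]

board = {"1010"}         # the programmer activates an input node/message
while True:
    posted = {act for cond, act in classifiers
              if any(matches(cond, msg) for msg in board)}
    if posted <= board:  # stable: no new messages, an emergent "answer"
        break
    board |= posted      # the cascade of spreading activation
print(sorted(board))
```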

a common framework should help the people working on these models to communicate a lot more easily than they usually do, without the babel of different jargons. “The thing I considered important in that paper was that I hammered out the actual translation machinery for going from one model to another. I could take a model of the immune system and say, ‘If that were a neural net, here’s what it would look like.’” But perhaps the most important reason for having a common framework, says Farmer, is that it helps you distill out the essence of the models, so that you can focus on what they actually have to say about emergence. And in this case, the lesson is clear: the power really does lie in the connections. That’s what gets so many people so excited about connectionism. You can start with very, very simple nodes—linear “polymers,” “messages” that are just binary numbers, “neurons” that are essentially just on-off switches—and still generate surprising and sophisticated outcomes just from the way they interact.

you can change them in two different ways. The first way is to leave the connections in place but modify their “strength.” This corresponds to what Holland calls exploitation learning: improving what you already have.

The second, more radical way of adjusting the connections is to change the network’s whole wiring diagram. Rip out some of the old connections and put in new ones. This corresponds to what Holland calls exploration learning: taking the risk of screwing up big in return for the chance of winning big. In Holland’s classifier systems, for example, this is exactly what happens when the genetic algorithm mixes rules together through its inimitable version of sexual recombination; the new rules that result will often link messages that have never been linked before. This is also what happens in the autocatalytic set model when occasional new polymers are allowed to form spontaneously—as they do in the real world. The resulting chemical connections can give the autocatalytic set an opening to explore a whole new realm of polymer space.
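
The two kinds of adjustment are easy to see side by side in code; a minimal sketch on a toy weighted network (the nodes, weights, and rates are arbitrary illustrations):

```python
import random

random.seed(2)
nodes = ["A", "B", "C", "D"]
conns = {("A", "B"): 0.5, ("B", "C"): 0.2, ("C", "D"): 0.8}  # (from, to) -> strength

def exploit(conns, rate=0.1):
    """Exploitation learning: keep the wiring, nudge the strengths."""
    return {edge: w + rate * random.uniform(-1, 1) for edge, w in conns.items()}

def explore(conns):
    """Exploration learning: rip out one connection, wire in a new one."""
    new = dict(conns)
    del new[random.choice(list(new))]
    while True:
        edge = tuple(random.sample(nodes, 2))
        if edge not in new:
            new[edge] = random.uniform(0, 1)     # a link that never existed before
            return new

conns = exploit(conns)   # small, safe improvement
conns = explore(conns)   # risky rewiring of the diagram itself
print(conns)
```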

the connectionist idea shows how the capacity for learning and evolution can emerge even if the nodes, the individual agents, are brainless and dead. More generally, by putting the power in the connections and not the nodes, it points the way toward a very precise theory of what Langton and the artificial lifers mean when they say that the essence of life is in the organization and not the molecules. And it likewise points the way toward a deeper understanding of how life and mind could have gotten started in a universe that began with neither.

connectionist models are a long way from telling you everything you’d like to know about the new second law. To begin with, they don’t tell you much about how emergence works in economies, societies, or ecosystems, where the nodes are “smart” and constantly adapting to each other. To understand systems like that, you have to understand the coevolutionary dance of cooperation and competition. And that means studying them with coevolutionary models such as Holland’s Echo,

More important, says Farmer, neither connectionist models nor coevolutionary models tell you what makes life and mind possible in the first place. What is it about the universe that allows these things to happen? It isn’t enough to say “emergence”; the cosmos is full of emergent structures like galaxies and clouds and snowflakes that are still just physical objects; they have no independent life whatsoever. Something more is required. And this hypothetical new second law will have to tell us what that something is. Clearly, this is a job for models that try to get at the basic physics and chemistry of the world, such as the cellular automata that Chris Langton is so fond of. And by no coincidence, says Farmer, Langton’s discovery of this weird, edge-of-chaos phase transition in cellular automata seems to be a big part of the answer.

Langton is basically saying that the mysterious “something” that makes life and mind possible is a certain kind of balance between the forces of order and the forces of disorder. More precisely, he’s saying that you should look at systems in terms of how they behave instead of how they’re made. And when you do, he says, then what you find are the two extremes of order and chaos.

right in between the two extremes, he says, at a kind of abstract phase transition called “the edge of chaos,” you also find complexity: a class of behaviors in which the components of the system never quite lock into place, yet never quite dissolve into turbulence, either. These are the systems that are both stable enough to store information, and yet evanescent enough to transmit it. These are the systems that can be organized to perform complex computations, to react to the world, to be spontaneous, adaptive, and alive.
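
Langton made this quantitative with a parameter, usually written lambda, that roughly measures the fraction of a cellular automaton's rule-table entries mapping to a non-quiescent state. The sketch below is a deliberately crude illustration of the idea, using random binary radius-2 rules and the fraction of still-changing cells as a stand-in for "behavior"; low-lambda rules freeze, high-lambda rules churn, and the interesting long transients live in between:

```python
import random

random.seed(3)

def activity(lam, width=200, steps=200):
    # Random radius-2 binary rule table with a fraction `lam` of entries
    # mapping to the non-quiescent state 1 (a crude version of lambda).
    table = [1 if random.random() < lam else 0 for _ in range(32)]
    table[0] = 0                          # an all-quiet neighborhood stays quiet
    row = [random.randint(0, 1) for _ in range(width)]
    changed = 0
    for _ in range(steps):
        new = [table[sum(row[(i + k) % width] << (k + 2)
                         for k in range(-2, 3))]
               for i in range(width)]
        changed = sum(a != b for a, b in zip(row, new))
        row = new
    return changed / width                # fraction of cells still changing

for lam in (0.05, 0.15, 0.25, 0.35, 0.50):
    print(f"lambda={lam:.2f}  final activity={activity(lam):.2f}")
```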

in the mid-1980s, says Farmer, it was much the same story with the autocatalytic set model. The model had a number of parameters such as the catalytic strength of the reactions, and the rate at which “food” molecules are supplied. He, Packard, and Kauffman had to set all these parameters by hand, essentially by trial and error. And one of the first things they discovered was that nothing much happened in the model until they got those parameters into a certain range—whereupon the autocatalytic sets would take off and develop very quickly.

the totalitarian, centralized approach to the organization of society doesn’t work very well.” In the long run, the system that Stalin built was just too stagnant, too locked in, too rigidly controlled to survive. Or look at the Big Three automakers in Detroit in the 1970s. They had grown so big and so rigidly locked in to certain ways of doing things that they could barely recognize the growing challenge from Japan, much less respond to it. On the other hand, says Farmer, anarchy doesn’t work very well, either—as certain parts of the former Soviet Union seemed determined to prove in the aftermath of the breakup. Nor does an unfettered laissez-faire system: witness the Dickensian horrors of the Industrial Revolution in England or, more recently, the savings and loan debacle in the United States. Common sense, not to mention recent political experience, suggests that healthy economies and healthy societies alike have to keep order and chaos in balance—and not just a wishy-washy, average, middle-of-the-road kind of balance, either. Like a living cell, they have to regulate themselves with a dense web of feedbacks and regulation, at the same time that they leave plenty of room for creativity, change, and response to new conditions. “Evolution thrives in systems with a bottom-up organization, which gives rise to flexibility,” says Farmer. “But at the same time, evolution has to channel the bottom-up approach in a way that doesn’t destroy the organization. There has to be a hierarchy of control—with information flowing from the bottom up as well as from the top down.” The dynamics of complexity at the edge of chaos, he says, seems to be ideal for this kind of behavior.

the hypothetical new second law will still have to explain how emergent systems get there, how they keep themselves there, and what they do there.

it’s easy to persuade yourself that those first two questions have already been answered by Charles Darwin (as generalized by John Holland). Since the systems that are capable of the most complex, sophisticated responses will always have the edge in a competitive world, goes the argument, then frozen systems can always do better by loosening up a bit, and turbulent systems can always do better by getting themselves a little more organized. So if a system isn’t on the edge of chaos already, you’d expect learning and evolution to push it in that direction. And if it is on the edge of chaos, then you’d expect learning and evolution to pull it back if it ever starts to drift away. In other words, you’d expect learning and evolution to make the edge of chaos stable, the natural place for complex, adaptive systems to be.

In the space of all possible dynamical behaviors, the edge of chaos is like an infinitesimally thin membrane, a region of special, complex behaviors separating chaos from order. But then, the surface of the ocean is only one molecule thick, too; it’s just a boundary separating water from air. And the edge of chaos region, like the surface of the ocean, is still vast beyond all imagining. It contains a near-infinity of ways for an agent to be both complex and adaptive. Indeed, when John Holland talks about “perpetual novelty,” and adaptive agents exploring their way into an immense space of possibilities, he may not say it this way—but he’s talking about adaptive agents moving around on this immense edge-of-chaos membrane.

at heart it will not be about mechanism so much as direction: the deceptively simple fact that evolution is constantly coming up with things that are more complicated, more sophisticated, more structured than the ones that came before.

It seems that learning and evolution don’t just pull agents to the edge of chaos; slowly, haltingly, but inexorably, learning and evolution move agents along the edge of chaos in the direction of greater and greater complexity. Why? “It’s a thorny question,” says Farmer. “It’s very hard to articulate a notion of ‘progress’ in biology.” What does it mean for one creature to be more advanced than another? Cockroaches, for example, have been around for several hundred million years longer than human beings, and they are very, very good at being cockroaches. Are we more advanced than they are, or just different? Were our mammalian ancestors of 65 million years ago more advanced than Tyrannosaurus rex, or just luckier in surviving the impact of a marauding comet? With no objective definition of fitness, says Farmer, “survival

One of the wonderful things about autocatalysis is that you can follow emergence from the ground up, he says. The concentration of a few chemicals gets spontaneously pumped up by orders of magnitude over their equilibrium concentration because they can collectively catalyze each other’s formation. And that means that the set as a whole is now like a new, emergent individual sticking up from the equilibrium background—exactly what you want for explaining the origin of life.

subjected the model autocatalytic sets to fluctuations in their “food” supply: the stream of small molecules that served as raw material for the sets. “What was really cool was that some sets were like Panda bears, which can only digest bamboo,” says Farmer. “If you changed their food supply they just collapsed. But others were like omnivores; they had lots of metabolic pathways that allowed them to substitute one food molecule for another. So when you played around with the food supply they were virtually unchanged.” Such robust sets, presumably, would have been the kind that survived on the early Earth.

made another modification in the autocatalytic model to allow for occasional spontaneous reactions, which are known to happen in real chemical systems. These spontaneous reactions caused many of the autocatalytic sets to fall apart. But the ones that crashed paved the way for an evolutionary leap. “They triggered avalanches of novelty,” he says. “Certain variations would get amplified, and then would stabilize again until the next crash. We saw a succession of autocatalytic metabolisms, each replacing the other.” Maybe that’s a clue, says Farmer. “It will be interesting to see if we can articulate a notion of ‘progress’ that would involve emergent structures having certain feedback loops [for stability] that weren’t present in what went before. The key is that there would be a sequence of evolutionary events structuring the matter in the universe in the Spencerian sense, in which each emergence sets the stage and makes it easier for the emergence of the next level.”

People are thrashing around trying to define things like ‘complexity’ and ‘tendency for emergent computation.’ I can only evoke vague images in your brain with words that aren’t precisely defined in mathematical terms. It’s like the advent of thermodynamics—but we’re where they were in about 1820. They knew there was something called ‘heat,’ but they were talking about it in terms that would later sound ridiculous.”

Only a minority thought that heat might represent some kind of microscopic motion in the poker’s atoms. (The minority was right.) Moreover, no one at the time seems to have imagined that messy, complicated things like steam engines, chemical reactions, and electric batteries could all be governed by simple, general laws.

we now have a good understanding of chaos and fractals, which show us how simple systems with simple parts can generate very complex behaviors. We know quite a bit about gene regulation in the fruit fly, Drosophila. In a few very specific contexts we have some hints as to how self-organization is achieved in the brain. And in artificial life we are creating a new repertoire of ‘toy universes.’ Their behavior is a pale reflection of what actually goes on in natural systems. But we can simulate them completely, we can alter them at will, and we can understand exactly what makes them do what they do. The hope is that we will eventually be able to stand back and assemble all these fragments into a comprehensive theory of evolution and self-organization.

“But what makes it exciting is the very fact that things aren’t laid in stone. It’s still happening. I don’t see anybody with a clear path to an answer. But there are lots of little hints flying around. Lots of little toy systems and vague ideas. So it’s conceivable to me that in twenty or thirty years we will have a real theory.”

What we’re really looking for in the science of complexity is the general law of pattern formation in non-equilibrium systems throughout the universe.

“It’s so annoying because I can almost taste it, almost see it. I’m not being a careful scientist. Nothing’s finished. I’ve only had a first glance at a bunch of things. I feel more like a howitzer shell piercing through wall after wall, leaving a mess behind. I feel that I’m rushing through topic after topic, trying to see where the end of the arc of the howitzer shell is, without knowing how to clean up anything on the way back.”

Self-organization couldn’t do it all alone. After all, mutant genes can self-organize themselves just as easily as normal ones can. And when the result is, say, a fruit fly monstrosity with legs where its antennae should be, or no head, then you still need natural selection to sort out what’s viable from what’s hopeless.

Langton had recognized what Kauffman had not: that the edge of chaos was much more than just a simple boundary separating purely ordered systems from purely chaotic systems. Indeed, it was Langton who finally got Kauffman to understand the point after several long conversations: the edge of chaos was a special region unto itself, the place where you could find systems with lifelike, complex behaviors.

It had crossed my mind that you could get complex computation at the phase transition. But the thought that I hadn’t had, which was silly, was that selection would get you there. The thought just didn’t cross my mind.” Now that it had crossed his mind, however, his old problem of self-organization versus natural selection took on a wonderful new clarity. Living systems are not deeply entrenched in the ordered regime, which was essentially what he’d been saying for the past twenty-five years with his claim that self-organization is the most powerful force in biology. Living systems are actually very close to this edge-of-chaos phase transition, where things are much looser and more fluid. And natural selection is not the antagonist of self-organization. It’s more like a law of motion—a force that is constantly pushing emergent, self-organizing systems toward the edge of chaos.

Big avalanches are rare, and small ones are frequent. But the steadily drizzling sand triggers cascades of all sizes—a fact that manifests itself mathematically as the avalanches’ “power-law” behavior: the average frequency of a given size of avalanche is inversely proportional to some power of its size. Now the point of all this, says Bak, is that power-law behavior is very common in nature. It’s been seen in the activity of the sun, in the light from galaxies, in the flow of current through a resistor, and in the flow of water through a river. Large pulses are rare, small ones are common, but all sizes occur with this power-law relationship in frequency.
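
Stated as a formula (the exponent, written here as $\tau$, is a generic placeholder; each system has its own value):

$$N(s) \propto s^{-\tau}$$

where $N(s)$ is the average frequency of avalanches of size $s$. On a log-log plot this appears as a straight line of slope $-\tau$, which is how such power laws are usually spotted in data.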

The sand pile metaphor suggests an answer, he says. Just as a steady trickle of sand drives a sand pile to organize itself into a critical state, a steady input of energy or water or electrons drives a great many systems in nature to organize themselves the same way. They become a mass of intricately interlocking subsystems just barely on the edge of criticality—with breakdowns of all sizes ripping through and rearranging things just often enough to keep them poised on the edge. A prime example is the distribution of earthquakes, says Bak. Anyone who lives in California knows that little earthquakes that rattle the dishes are far more common than the big earthquakes that make international headlines. In 1956, geologists Beno Gutenberg and Charles Richter (of the famous Richter scale) pointed out that these tremors in fact follow a power law: in any given area, the number of earthquakes each year that release a certain amount of energy is inversely proportional to a certain power of the energy.

The standard earthquake model says that the rocks on either side are locked together by enormous pressure and friction; they resist the motion until suddenly they slip catastrophically. In Bak and Tang’s version, however, the rocks on either side bend and deform until they are just ready to slip past each other—whereupon the fault undergoes a steady cascade of little slips and bigger slips that are just sufficient to keep the tension at that critical point. So a power law for earthquakes is exactly what you would expect, they argued; it’s just a statement that the earth has long since tortured all its fault zones into a state of self-organized criticality. And indeed, their simulated earthquakes follow a power law very similar to the one found by Gutenberg and Richter.

Unfortunately, he adds, self-organized criticality only tells you about the overall statistics of avalanches; it tells you nothing about any particular avalanche. This is yet another case where understanding is not the same thing as prediction.

it doesn’t even make any difference if the sand pile scientists try to prevent the collapse they’ve predicted. They can certainly do so by putting up braces and support structures and such. But they just end up shifting the avalanche somewhere else. The global power law stays the same.

It was one thing to talk about individual agents being on the edge of chaos. That’s precisely the dynamical region that allows them to think and be alive. But what about a collection of agents taken as a whole? The economy, for example: people talk as if it had moods and responses and passing fevers. Is it at the edge of chaos? Are ecosystems? Is the immune system? Is the global community of nations?

arguing by analogy, it seems reasonable to think that each new level is “alive” in the same sense—by virtue of being at or very near the edge of chaos.

how could you even test such a notion? Langton had been able to recognize a phase transition by watching for cellular automata that showed manifestly complex behavior on a computer screen. Yet it was not at all obvious how to do that for economies or ecosystems out in the real world. How are you supposed to tell what’s simple and what’s complex when you’re looking at the behavior of Wall Street? Precisely what does it mean to say that global politics or the Brazilian rain forest is on the edge of chaos?

Bak’s self-organized criticality suggested an answer, Kauffman realized. You can tell that a system is at the critical state and/or the edge of chaos if it shows waves of change and upheaval on all scales and if the size of the changes follows a power law.

For the best and most vivid metaphor, he says, imagine a pile of sand on a tabletop, with a steady drizzle of new sand grains raining down from above. (This experiment has actually been done, by the way, both in computer simulations and with real sand.) The pile grows higher and higher until it can’t grow any more: old sand is cascading down the sides and off the edge of the table as fast as the new sand dribbles down.

the resulting sand pile is self-organized, in the sense that it reaches the steady state all by itself without anyone explicitly shaping it. And it’s in a state of criticality, in the sense that sand grains on the surface are just barely stable.
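
The experiment is easy to rerun in software. A minimal sketch of the standard sandpile toppling rule (the grid size, grain count, and threshold of four are conventional modeling choices, not details from the text):

```python
import random
from collections import Counter

random.seed(4)
N = 30                                   # the "tabletop" is an N x N grid
pile = [[0] * N for _ in range(N)]
sizes = Counter()

for _ in range(20000):
    # one grain of the steady drizzle lands on a random site
    x, y = random.randrange(N), random.randrange(N)
    pile[x][y] += 1
    unstable = [(x, y)]
    size = 0
    while unstable:
        i, j = unstable.pop()
        if pile[i][j] < 4:               # stale entry or already stable
            continue
        pile[i][j] -= 4                  # topple: shed one grain per side
        size += 1
        if pile[i][j] >= 4:
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < N and 0 <= nj < N):
                continue                 # grains pushed off the edge are lost
            pile[ni][nj] += 1
            if pile[ni][nj] >= 4:
                unstable.append((ni, nj))
    if size:
        sizes[size] += 1                 # record this avalanche

# small avalanches are frequent, big ones rare: roughly a power law
for s in sorted(sizes)[:10]:
    print(s, sizes[s])
```

Once the pile reaches its steady state, tallying avalanche sizes over many dropped grains reproduces the signature: a huge number of tiny cascades and a rare handful of system-spanning ones.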

imagine a stable ecosystem or a mature industrial sector where all the agents have gotten themselves well adapted to each other. There is little or no evolutionary pressure to change. And yet the agents can’t stay there forever, he says, because eventually one of the agents is going to suffer a mutation large enough to knock him out of equilibrium. Maybe the aging founder of a firm finally dies and a new generation takes over with new ideas. Or maybe a random genetic crossover gives a species the ability to run much faster than before. “So that agent starts changing,” says Kauffman, “and then he induces changes in one of his neighbors, and you get an avalanche of changes until everything stops changing again.” But then someone else mutates. Indeed, you can expect the population to be showered with a steady rain of random mutations, much as Bak’s sand pile is showered with a steady drizzle of sand grains—which means that you can expect any closely interacting population of agents to get themselves into a state of self-organized criticality, with avalanches of change that follow a power law.

you can argue that these avalanches lie behind the great extinction events in the earth’s past, where whole groups of species vanish from the fossil record and are replaced by totally new ones.

terms of human organizations, it’s as if the jobs are so subdivided that no one has any latitude; all they can do is learn how to perform the one job they’ve been hired for, and nothing else. Whatever the metaphor, however, it’s clear that if each individual in the various organizations is allowed a little more freedom to march to a different drummer, then everyone will benefit. The deeply frozen system will become a little more fluid, says Kauffman, the aggregate fitness will go up, and the agents will collectively move a bit closer to the edge of chaos.

In organizational terms, it’s as if the lines of command in each firm are so screwed up that nobody has the slightest idea what they’re supposed to do—and half the time they are working at cross-purposes anyway. Either way, it obviously pays for individual agents to tighten up their couplings a bit, so that they can begin to adapt to what other agents are doing. The chaotic system will become a little more stable, says Kauffman, the aggregate fitness will go up, and once again, the ecosystem as a whole will move a bit closer to the edge of chaos.

the evidence consists of a sort of power law in the fossil record suggesting that the global biosphere is near the edge of chaos; a couple of computer models showing that systems can adapt their way to the edge of chaos through natural selection; and now one computer model showing that ecosystems may be able to get to the edge of chaos through coevolution.

Fontana started with one of those cosmic observations that sound so deceptively simple. When we look at the universe on size scales ranging from quarks to galaxies, he pointed out, we find the complex phenomena associated with life only at the scale of molecules. Why? Well, said Fontana, one answer is just to say “chemistry”: life is clearly a chemical phenomenon, and only molecules can spontaneously undergo complex chemical reactions with one another. But again, Why? What is it that allows molecules to do what quarks and quasars can’t? Two things, he said. The first source of chemistry’s power is simple variety: unlike quarks, which can only combine to make protons and neutrons in groups of three, atoms can be arranged and rearranged to form a huge number of structures. The space of molecular possibilities is effectively limitless. The second source of power is reactivity: structure A can manipulate structure B to form something new—structure C.

“So when we ask questions like how life emerges, and why living systems are the way they are—these are the kind of questions that are really fundamental to understanding what we are and what makes us different from inanimate matter. The more we know about these things, the closer we’re going to get to fundamental questions like, ‘What is the purpose of life?’ Now, in science we can never even attempt to make a frontal assault on questions like that. But by addressing a different question—like, Why is there an inexorable growth in complexity?—we may be able to learn something fundamental about life that suggests its purpose, in the same way that Einstein shed light on what space and time are by trying to understand gravity. The analogy I think of is averted vision in astronomy: if you want to see a very faint star, you should look a little to the side because your eye is more sensitive to faint light that way—and as soon as you look right at the star, it disappears.”

Likewise, says Farmer, understanding the inexorable growth of complexity isn’t going to give us a full scientific theory of morality. But if a new second law helps us understand who and what we are, and the processes that led to us having brains and a social structure, then it might tell us a lot more about morality than we know now. “Religions try to impose rules of morality by writing them on stone tablets,” he says. “We do have a real problem now, because when we abandon conventional religion, we don’t know what rules to follow anymore. But when you peel it all back, religion and ethical rules provide a way of structuring human behavior in a way that allows a functioning society. My feeling is that all of morality operates at that level. It’s an evolutionary process in which societies constantly perform experiments, and whether or not those experiments succeed determines which cultural ideas and moral precepts propagate into the future.”

If so, he says, then a theory that rigorously explains how coevolutionary systems are driven to the edge of chaos might tell us a lot about cultural dynamics, and how societies reach that elusive, ever-changing balance between freedom and control.

“I draw a lot of fairly speculative conclusions about the implications of all this,” says Chris Langton. “It comes from viewing the world very much through these phase transition glasses: you can apply the idea to a lot of things and it kind of fits.” Witness the collapse of communism in the former Soviet Union and its Eastern European satellites, he says: the whole situation seems all too reminiscent of the power-law distribution of stability and upheaval at the edge of chaos.

“When you think of it,” he says, “the Cold War was one of these long periods where not much changed. And although we can find fault with the U.S. and Soviet governments for holding a gun to the world’s head—the only thing that kept it from blowing up was Mutual Assured Destruction—there was a lot of stability. But now that period of stability is ending. We’ve seen upheaval in the Balkans and all over the place. I’m more scared about what’s coming in the immediate future. Because in the models, once you get out of one of these metastable periods, you get into one of these chaotic periods where a lot of change happens. The possibilities for war are much higher—including the kind that could lead to a world war. It’s much more sensitive now to initial conditions.

“So what’s the right course of action?” he asks.
“I don’t know, except that this is like punctuated equilibrium in evolutionary history. It doesn’t happen without a great deal of extinction. And it’s not necessarily a step for the better. There are models where the species that dominate in the stable period after the upheaval may be less fit than the species that dominated beforehand. So these periods of evolutionary change can be pretty nasty times. This is the kind of era when the United States could disappear as a world power. Who knows what’s going to come out the other end?

“The thing to do is to try to determine whether we can apply this sort of thing to history—and if so, whether we also see this kind of punctuated equilibrium. Things like the fall of Rome. Because in that case, we really are part of the evolutionary process. And if we really study that process, we may be able to incorporate this thinking into our political, social, and economic theories, where we realize that we have to be very careful and put some global agreements and treaties in place to get us through. But then the question is, do we want to gain control of our own evolution or not? If so, does that stop evolution? It’s good to have evolution progress.

If single-celled things had found a way to stop evolution to maintain themselves as dominant life-forms, then we wouldn’t be here. So you don’t want to stop it. On the other hand, maybe you want to understand how it can keep going without the massacres and the extinctions. “So maybe the lesson to be learned is that evolution hasn’t stopped,” says Langton. “It’s still going on, exhibiting many of the same phenomena it did in biological history—except that now it’s taking place on the social-cultural plane. And we may be seeing a lot of the same kinds of extinctions and upheaval.”

suppose that these models about the origin of life are correct. Then life doesn’t hang in the balance. It doesn’t depend on whether some warm little pond just happens to produce template-replicating molecules like DNA or RNA. Life is the natural expression of complex matter. It’s a very deep property of chemistry and catalysis and being far from equilibrium. And that means that we’re at home in the universe. We’re to be expected.

that reminds us that we make the world we live in with one another. We’re participants in the story as it unfolds. We aren’t victims and we aren’t outsiders. We are part of the universe, you and me, and the goldfish. We make our world with one another. “And now suppose it’s really true that coevolving, complex systems get themselves to the edge of chaos,” he says. “Well, that’s very Gaia-like. It says that there’s an attractor, a state that we collectively maintain ourselves in, an ever-changing state where species are always going extinct and new ones are coming into existence. Or if we imagine that this really carries over into economic systems, then it’s a state where technologies come into existence and replace others, et cetera. But if this is true, it means that the edge of chaos is, on average, the best that we can do. The ever-open and ever-changing world that we must make for ourselves is in some sense as good as it possibly can be. “Well, that’s a story about ourselves,” says Kauffman. “Matter has managed to evolve as best it can. And we’re at home in the universe. It’s not Panglossian, because there’s a lot of pain. You can go extinct, or broke. But here we are on the edge of chaos because that’s where, on average, we all do the best.”

“By about 1985,” says Arthur, “it seems to me that all sorts of economists were getting antsy, starting to look around and sniff the air. They sensed that the conventional neoclassical framework that had dominated over the past generation had reached a high water mark. It had allowed them to explore very thoroughly the domain of problems that are treatable by static equilibrium analysis. But it had virtually ignored the problems of process, evolution, and pattern formation—problems where things were not at equilibrium, where there’s a lot of happenstance, where history matters a great deal, where adaptation and evolution might go on forever.

We can deal with inductive learning rather than deductive logic, we can cut the Gordian knot of equilibrium and deal with open-ended evolution, because many of these problems have been dealt with by other disciplines. Santa Fe provided the jargon, the metaphors, and the expertise that you needed in order to get the techniques started in economics. But more than that, Santa Fe legitimized this different vision of economics.

It wasn’t that the standard formulation was wrong, he said, but that we were exploring into a new way of looking at parts of the economy that are not amenable to conventional methods. So this new approach was complementary to the standard ones. He also said that we didn’t know where this new sort of economics was taking us. It was the beginnings of a research program. But he found it very interesting and exciting.

He said, ‘I think we can safely say we have another type of economics here. One type is the standard stuff that we’re all familiar with’—he was too modest to call it the Arrow-Debreu system, but he basically meant the neoclassical, general equilibrium theory—‘and then this other type, the Santa Fe–style evolutionary economics.’

“Nonscientists tend to think that science works by deduction,” he says. “But actually science works mainly by metaphor. And what’s happening is that the kinds of metaphor people have in mind are changing.” To put it in perspective, he says, think of what happened to our view of the world with the advent of Sir Isaac Newton. “Before the seventeenth century,” he says, “it was a world of trees, disease, human psyche, and human behavior. It was messy and organic. The heavens were also complex. The trajectories of the planets seemed arbitrary. Trying to figure out what was going on in the world was a matter of art. But then along comes Newton in the 1660s. He devises a few laws, he devises the differential calculus—and suddenly the planets are seen to be moving in simple, predictable orbits!

So in the Enlightenment, which lasted from about 1680 all through the 1700s, the era shifted to a belief in the primacy of nature: if you just left things alone, nature would see to it that everything worked out for the common good.” The metaphor of the age, says Arthur, became the clockwork motion of the planets: a simple, regular, predictable Newtonian machine that would run of itself. And the model for the next two and a half centuries of reductionist science became Newtonian physics. “Reductionist science tends to say, ‘Hey, the world out there is complicated and a mess—but look! Two or three laws reduce it all to an incredibly simple system!’ “So all that remained was for Adam Smith, at the height of the Scottish Enlightenment around Edinburgh, to understand the machine behind the economy,” says Arthur. “In 1776, in The Wealth of Nations, he made the case that if you left people alone to pursue their individual interests, the ‘Invisible Hand’ of supply and demand would see to it that everything worked out for the common good.” Obviously, this was not the whole story: Smith himself pointed to such nagging problems as worker alienation and exploitation. But there was so much about his Newtonian view of the economy that was simple and powerful and right that it has dominated Western economic thought ever since. “Smith’s idea was so brilliant that it just dazzled us,” says Arthur.

“People realized that logic and philosophy are messy, that language is messy, that chemical kinetics is messy, that physics is messy, and finally that the economy is naturally messy. And it’s not that this is a mess created by the dirt that’s on the microscope glass. It’s that this mess is inherent in the systems themselves. You can’t capture any of them and confine them to a neat box of logic.” The result, says Arthur, has been the revolution in complexity. “In a sense it’s the opposite of reductionism. The complexity revolution began the first time someone said, ‘Hey, I can start with this amazingly simple system, and look—it gives rise to these immensely complicated and unpredictable consequences.’” Instead of relying on the Newtonian metaphor of clockwork predictability, complexity seems to be based on metaphors more closely akin to the growth of a plant from a tiny seed, or the unfolding of a computer program from a few lines of code, or perhaps even the organic, self-organized flocking of simpleminded birds.

“If I had a purpose, or a vision, it was to show that the messiness and the liveliness in the economy can grow out of an incredibly simple, even elegant theory. That’s why we created these simple models of the stock market where the market appears moody, shows crashes, takes off in unexpected directions, and acquires something that you could describe as a personality.”

When you look at the subject through Chris Langton’s phase transition glasses, for example, all of neoclassical economics is suddenly transformed into a simple assertion that the economy is deep in the ordered regime, where the market is always in equilibrium and things change slowly if at all. The Santa Fe approach is likewise transformed into a simple assertion that the economy is at the edge of chaos, where agents are constantly adapting to each other and things are always in flux. Arthur always knew which assertion he thought was more realistic.

If you think that you’re a steamboat and can go up the river, you’re kidding yourself. Actually, you’re just the captain of a paper boat drifting down the river. If you try to resist, you’re not going to get anywhere. On the other hand, if you quietly observe the flow, realizing that you’re part of it, realizing that the flow is ever-changing and always leading to new complexities, then every so often you can stick an oar into the river and punt yourself from one eddy to another.

“So what’s the connection with economic and political policy? Well, in a policy context, it means that you observe, and observe, and observe, and occasionally stick your oar in and improve something for the better. It means that you try to see reality for what it is, and realize that the game you are in keeps changing, so that it’s up to you to figure out the current rules of the game as it’s being played. It means that you observe the Japanese like hawks, you stop being naive, you stop appealing for them to play fair, you stop adhering to standard theories that are built on outmoded assumptions about the rules of play, you stop saying, ‘Well, if only we could reach this equilibrium we’d be in fat city.’ You just observe. And where you can make an effective move, you make a move.”

Notice that this is not a recipe for passivity, or fatalism, says Arthur. “This is a powerful approach that makes use of the natural nonlinear dynamics of the system. You apply available force to the maximum effect. You don’t waste it. This is exactly the difference between Westmoreland’s approach in South Vietnam versus the North Vietnamese approach. Westmoreland would go in with heavy forces and artillery and barbed wire and burn the villages. And the North Vietnamese would just recede like a tide. Then three days later they’d be back, and no one knew where they came from. It’s also the principle that lies behind all of Oriental martial arts. You don’t try to stop your opponent, you let him come at you—and then give him a tap in just the right direction as he rushes by. The idea is to observe, to act courageously, and to pick your timing extremely well.”

Arthur is reluctant to get into the implications of all this for policy issues. But he does remember one small workshop that Murray Gell-Mann persuaded him to cochair in the fall of 1989, shortly before he left the institute. The purpose of the workshop was to look at what complexity might have to say about the interplay of economics, environmental values, and public policy in a region such as Amazonia, where the rain forest is being cleared for roads and farms at an alarming rate.

The answer Arthur gave during his own talk was that you can approach policy-making for the rain forest (or for any other subject) on three different levels. The first level, he says, is the conventional cost-benefit approach: What are the costs of each specific course of action, what are the benefits, and how do you achieve the optimum balance between the two?

“There is a place for that kind of science,” says Arthur. “It does force you to think through the implications of the alternatives. And certainly at that meeting we had a number of people arguing the costs and benefits of rain forests. The trouble is that this approach generally assumes that the problems are well defined, that the options are well defined, and that the political wherewithal is there, so that the analyst’s job is simply to put numbers on the costs and benefits of each alternative. It’s as though the world were a railroad switch yard: We’re going down this one track, and we have switches we can turn to guide the train onto other tracks.”

Unfortunately for the standard theory, however, the real world is almost never that well defined—particularly when it comes to environmental issues. All too often, the apparent objectivity of cost-benefit analyses is the result of slapping arbitrary numbers on subjective judgments, and then assigning the value of zero to the things that nobody knows how to evaluate.
“I ridicule some of these cost-benefit analyses in my classes,” he says. “The ‘benefit’ of having spotted owls is defined in terms of how many people visit the forest, how many will see a spotted owl, and what’s it worth to them to see a spotted owl, et cetera. It’s all the greatest rubbish. This type of environmental cost-benefit analysis makes it seem as though we’re in front of the shop window of nature looking in, and saying, ‘Yes, we want this, or this, or this’—but we’re not inside, we’re not part of it. So these studies have never appealed to me. By asking only what is good for human beings, they are being presumptuous and arrogant.”

The second level of policy-making is a full institutional-political analysis, says Arthur: figuring out who’s doing what, and why. “Once you start to do that for, say, the Brazilian rain forest, you find that there are various players: landowners, settlers, squatters, politicians, rural police, road builders, indigenous peoples. They aren’t out to get the environment, but they are all playing this elaborate, interactive Monopoly game, in which the environment is being deeply affected. Moreover, the political system isn’t some exogenous thing that stands outside the game. The political system is actually an outcome of the game—the alliances and coalitions that form as a result of it.”

In short, says Arthur, you look at the system as a system, the way a Taoist in his paper boat would observe the complex, ever-changing river. Of course, a historian or a political scientist would look at the situation this way instinctively. And some beautiful studies in economics have recently started to take this approach. But at the time of the workshop in 1989, he says, the idea still seemed to be a revelation to many economists.

“If you really want to get deeply into an environmental issue, I told them, you have to ask these questions of who has what at stake, what alliances are likely to form, and basically understand the situation. Then you might find certain points at which intervention may be possible.

“So all of that is leading up to the third level of analysis,” says Arthur. “At this level we might look at what two different world views have to say about environmental issues. One of these is the standard equilibrium viewpoint that we’ve inherited from the Enlightenment—the idea that there’s a duality between man and nature, and that there’s a natural equilibrium between them that’s optimal for man. And if you believe this view, then you can talk about ‘the optimization of policy decisions concerning environmental resources,’ which was a phrase I got from one of the earlier speakers at the workshop.

“The other viewpoint is complexity, in which there is basically no duality between man and nature,” says Arthur. “We are part of nature ourselves. We’re in the middle of it. There’s no division between doers and done-to because we are all part of this interlocking network. If we, as humans, try to take action in our favor without knowing how the overall system will adapt—like chopping down the rain forest—we set in motion a train of events that will likely come back and form a different pattern for us to adjust to, like global climate change.

“So once you drop the duality,” he says, “then the questions change. You can’t then talk about optimization, because it becomes meaningless. It would be like parents trying to optimize their behavior in terms of ‘us versus the kids,’ which is a strange point of view if you see yourself as a family. You have to talk about accommodation and coadaptation—what would be good for the family as a whole.

“Basically, what I’m saying is not at all new to Eastern philosophy. It’s never seen the world as anything else but a complex system. But it’s a world view that, decade by decade, is becoming more important in the West—both in science and in the culture at large. Very, very slowly, there’s been a gradual shift from an exploitative view of nature—man versus nature—to an approach that stresses the mutual accommodation of man and nature. What has happened is that we’re beginning to lose our innocence, or naivete, about how the world works. As we begin to understand complex systems, we begin to understand that we’re part of an ever-changing, interlocking, nonlinear, kaleidoscopic world.

“So the question is how you maneuver in a world like that. And the answer is that you want to keep as many options open as possible. You go for viability, something that’s workable, rather than what’s ‘optimal.’ A lot of people say to that, ‘Aren’t you then accepting second best?’ No, you’re not, because optimization isn’t well defined anymore. What you’re trying to do is maximize robustness, or survivability, in the face of an ill-defined future. And that, in turn, puts a premium on becoming aware of nonlinear relationships and causal pathways as best we can. You observe the world very, very carefully, and you don’t expect circumstances to last.”

So what is the role of the Santa Fe Institute in all this? Certainly not to become another policy think tank, says Arthur, although there always seem to be a few people who expect it to. No, he says, the institute’s role is to help us look at this ever-changing river and understand what we’re seeing.
“If you have a truly complex system,” he says, “then the exact patterns are not repeatable. And yet there are themes that are recognizable. In history, for example, you can talk about ‘revolutions,’ even though one revolution might be quite different from another. So we assign metaphors. It turns out that an awful lot of policy-making has to do with finding the appropriate metaphor. Conversely, bad policy-making almost always involves finding inappropriate metaphors. For example, it may not be appropriate to think about a drug ‘war,’ with guns and assaults. “So from this point of view, the purpose of having a Santa Fe Institute is that it, and places like it, are where the metaphors and a vocabulary are being created in complex systems.

“So I would argue that a wise use of the SFI is to let it do science,” he says. “To make it into a policy shop would be a great mistake. It would cheapen the whole affair. And in the end it would be counterproductive, because what we’re missing at the moment is any precise understanding of how complex systems operate. This is the next major task in science for the next 50 to 100 years.”

“I think there’s a personality that goes with this kind of thing,” Arthur says. “It’s people who like process and pattern, as opposed to people who are comfortable with stasis and order. I know that every time in my life that I’ve run across simple rules giving rise to emergent, complex messiness, I’ve just said, ‘Ah, isn’t that lovely!’ And I think that sometimes, when other people run across it, they recoil.”

The brutal fact was that there was very little research funding out there even for mainstream economics, much less for this speculative Santa Fe stuff. “It turns out that economics is very poorly supported in this country,” says Cowan. “Individual economists are paid very well, but they don’t get paid for doing basic research. They get paid from corporate sources, for doing programmatic things. At the same time, the field gets remarkably little in the way of research money from the National Science Foundation and other government agencies because it’s a social science, and the government is not a big patron of the social sciences. It smacks of ‘planning,’ which is a bad word.”

“adaptive computation”: an effort to develop a set of mathematical and computational tools that could be applied to all the sciences of complexity—including economics.

John Holland’s ideas about genetic algorithms and classifier systems had long since permeated the institute, and would presumably form the backbone of adaptive computation.
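
For readers who have not met them, a genetic algorithm in Holland’s sense can be sketched in a few lines: a population of candidate solutions is repeatedly selected for fitness, recombined, and mutated. The sketch below is a minimal illustration, not Holland’s own code; the bit-counting fitness function and all parameters are arbitrary choices.

```python
import random

def fitness(genome):
    # Toy fitness: count of 1-bits; a stand-in for any measure of success.
    return sum(genome)

def evolve(pop_size=30, genome_len=20, generations=50, mutation_rate=0.01):
    # Start from a random population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half become parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover and mutation: rebuild the population from parent pairs.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < mutation_rate) for bit in child])
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward genome_len as the population adapts
```
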

A lively cross-fertilization was well under way—witness Doyne Farmer’s “Rosetta Stone for Connectionism” paper, in which he pointed out that neural networks, the immune system model, autocatalytic sets, and classifier systems were essentially just variations on the same underlying theme. Indeed, Mike Simmons had invented the phrase “adaptive computation” one day in 1989, as he and Cowan had been sitting in Cowan’s office kicking around names that would be broad enough to cover all these ideas—but that wouldn’t carry the intellectual baggage of a phrase like “artificial intelligence.”
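
Farmer’s unifying observation can be caricatured in code: each of those systems is a network of nodes whose activity flows along weighted connections, and whose weights themselves adapt. The sketch below is one illustrative reading of that shared skeleton, not Farmer’s formalism; the clipped-sum dynamics and Hebbian-style weight update are assumptions made for concreteness.

```python
import random

class AdaptiveNetwork:
    """Nodes, weighted connections, adapting weights. Read 'node' as neuron,
    antibody type, chemical species, or classifier rule."""

    def __init__(self, n, learning_rate=0.1):
        self.n = n
        self.rate = learning_rate
        self.weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
        self.state = [random.uniform(0, 1) for _ in range(n)]

    def step(self):
        # Dynamics: each node's new activation is a bounded sum of its inputs.
        new = []
        for i in range(self.n):
            total = sum(self.weights[i][j] * self.state[j] for j in range(self.n))
            new.append(max(0.0, min(1.0, total)))
        self.state = new
        # Adaptation: strengthen connections between co-active nodes.
        # (Real models normalize or decay weights; omitted here for brevity.)
        for i in range(self.n):
            for j in range(self.n):
                self.weights[i][j] += self.rate * self.state[i] * self.state[j]

net = AdaptiveNetwork(8)
for _ in range(10):
    net.step()
print(net.state)
```
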

he was also hoping that the program would give economists, sociologists, political scientists, and even historians some of the same precision and rigor that Newton brought to physics when he invented calculus. “What we’re still waiting for—it may take ten or fifteen years—is a really rich, vigorous, general set of algorithmic approaches for quantifying the way complex adaptive agents interact with one another,” he says. “The usual way debates are conducted now in the social sciences is that each person takes a two-dimensional slice through the problem, and then argues that theirs is the most important slice. ‘My slice is more important than your slice, because I can demonstrate that fiscal policy is much more important than monetary policy,’ and so forth. But you can’t demonstrate that, because in the end it’s all words, whereas a computer simulation provides a catalog of explicitly identified parameters and variables, so that people at least talk about the same things. And a computer lets you handle many more variables. So if a simulation has both fiscal policy and monetary policy in it, then you can start to say why one turns out to be more important than the other. The results may be right or they may be wrong. But it’s a much more structured debate. Even when the models are wrong, they have the enormous advantage of structuring the discussions.”
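
That point is easy to see in miniature. The toy below is invented for this note and has no economic authority; its only virtue is that “fiscal policy” and “monetary policy” appear as explicit, named parameters, so two debaters at least vary the same things.

```python
# A deliberately crude toy model. The coefficients are arbitrary; the point is
# that the policy levers are explicit parameters, not words in an argument.

def simulate(fiscal_stimulus, interest_rate, years=10):
    gdp, inflation = 100.0, 0.02
    for _ in range(years):
        growth = 0.02 + 0.5 * fiscal_stimulus - 0.3 * (interest_rate - 0.03)
        gdp *= 1 + growth
        inflation += 0.4 * fiscal_stimulus - 0.2 * (interest_rate - 0.03)
    return gdp, inflation

# Vary one lever at a time and compare outcomes directly.
print(simulate(fiscal_stimulus=0.02, interest_rate=0.03))
print(simulate(fiscal_stimulus=0.00, interest_rate=0.05))
```
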

the institute might actually be much better off without a permanent faculty. “The virtue was that we were more flexible than we would have been,” says Cowan. After all, he’d realized, once you hire a bunch of people full time, your research program is pretty well cast in concrete until those people leave or die. So why not just keep the institute going in its catalyst role? It had certainly worked beautifully so far. Keep going with a rotating cast of visiting academics who would stay for a while, mix it up in the intellectual donnybrook, and then go back to their home universities to continue their collaborations long distance—and, not incidentally, to spread the revolution among their stay-at-home colleagues.

“I feel as though I’ve taken a new lease on life, at the cranium part of my body. And that, to me, is a major accomplishment. It makes everything I’ve ever done here worthwhile.”

The issue that grabs him the hardest, he says, is adaptation—or, more precisely, adaptation under conditions of constant change and unpredictability. Certainly he considers it one of the central issues in the elusive quest for global sustainability. And, not incidentally, he finds it to be an issue that’s consistently slighted in all this talk about “transitions” to a sustainable world. “Somehow,” he says, “the agenda has been put into the form of talking about a set of transitions from state A, the present, to a state B that’s sustainable. The problem is that there is no such state. You have to assume that the transitions are going to continue forever and ever and ever. You have to talk about systems that remain continuously dynamic, and that are embedded in environments that themselves are continuously dynamic.” Stability, as John Holland says, is death; somehow, the world has to adapt itself to a condition of perpetual novelty, at the edge of chaos.

Of course, adds Cowan, it may be that concepts such as the edge of chaos and self-organized criticality are telling us that Class A catastrophes are inevitable no matter what we do. “Per Bak has shown that it’s a fairly fundamental phenomenon to have upheavals and avalanches on all scales, including the largest,” he says. “And I’m prepared to believe that.” But he also finds reason for optimism in this mysterious, seemingly inexorable increase in complexity over time. “The systems that Per Bak looks at don’t have memory or culture,” he says. “And for me it’s an article of faith that if you can add memory and accurate information from generation to generation—in some better way than we have in the past—then you can accumulate wisdom. I doubt very much whether the world is going to be transformed into a wonderful paradise free of trauma and tragedy. But I think it’s a necessary part of a human vision to believe we can shape the future. Even if we can’t shape it totally, I think that we can exercise some kind of damage control. Perhaps we can get the probability of catastrophe to decrease in each generation.”
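
The upheavals Cowan mentions come from Per Bak’s sandpile model (Bak-Tang-Wiesenfeld), the standard demonstration of self-organized criticality: grains are added one at a time, overloaded sites topple onto their neighbors, and avalanches of every size emerge without any tuning. A minimal sketch, with grid size and grain count chosen arbitrarily:

```python
import random

N = 20
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Add one grain at a random site, relax the pile, return the avalanche size."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    unstable = [(i, j)]
    topplings = 0
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue  # this site may already have been relaxed
        grid[x][y] -= 4
        topplings += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < N and 0 <= ny < N:  # grains falling off the edge are lost
                grid[nx][ny] += 1
                unstable.append((nx, ny))
    return topplings

sizes = [drop_grain() for _ in range(20000)]
print(max(sizes), sum(s == 0 for s in sizes))  # huge avalanches coexist with quiet drops
```

After a transient, the pile hovers at the critical state: the same rule produces tiny avalanches and enormous ones, with no parameter separating them, which is exactly the phenomenon Cowan is prepared to believe in.
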