Summary
Highlights
a great many independent agents are interacting with each other in a great many ways.
the very richness of these interactions allows the system as a whole to undergo spontaneous self-organization.
all these complex systems have somehow acquired the ability to bring order and chaos into a special kind of balance. This balance point - often called the edge of chaos - is where the components of a system never quite lock into place, and yet never quite dissolve into turbulence, either.
Economics had to take that ferment into account. And now he believed he'd found the way to do that, using a principle known as "increasing returns" - or, in the King James translation, "To them that hath shall be given."
Winner takes all
"All the Irish heroes were revolutionaries. The highest peak of heroism is to lead an absolutely hopeless revolution, and then give the greatest speech of your life from the dock - the night before you're hanged."
neoclassical theorists had embroidered the basic model with all sorts of elaborations to cover things like uncertainty about the future, or the transfer of property from one generation to the next.
The theory still didn't describe the messiness and the irrationality of the human world that Arthur had seen in the valley of the Ruhr - or,
The standard advice at the time tended to place a heavy reliance on economic determinism: to achieve its "optimum" population, all a country had to do was give its people the right economic incentives to control their reproduction, and they would automatically follow their own rational self-interest. In particular, many economists were arguing that when and if a country became a modern industrial state - organized along Western lines, of course - its citizenry would naturally undergo a "demographic transition," automatically lowering their birthrates to match those that prevailed in European countries.
the quantitative engineering approach - the idea that human beings will respond to abstract economic incentives like machines - was highly limited at best. Economics, as any historian or anthropologist could have told him instantly, was hopelessly intertwined with politics and culture.
Life develops. It has a history. Maybe, he thought, maybe that's why this biological world seems so spontaneous, organic, and - well, alive. Come to think of it, maybe that was also why the economists' imaginary world of perfect equilibrium had always struck him as static, machinelike, and dead. Nothing much could ever happen there; tiny chance
real economy was not a machine but a kind of living system, with all the spontaneity and complexity that Judson was showing him in the world of molecular biology.
the other kinds of cells that make up a newborn baby. Each different
An entire sprawling set of self-consistent patterns that formed and evolved and changed in response to the outside world. It reminded him of nothing so much as a kaleidoscope, where a handful of beads will lock in to one pattern and hold it - until a slow turn of the barrel causes them to suddenly cascade into a new configuration. A handful of pieces and an infinity of possible patterns. Somehow, in a way he couldn't quite express, this seemed to be the essence of
the DNA residing in a cell's nucleus was not just a blueprint for the cell - a catalog of how to make this protein or that protein. DNA was actually the foreman in charge of construction. In effect, DNA was a kind of molecular-scale computer that directed how the cell was to build itself and repair itself and interact with the outside world.
Nature seems to be less interested in creating structures than in tearing structures apart and mixing things up into a kind of average.
Left to themselves, says the second law, atoms will mix and randomize themselves as much as possible.
Yet for all of that, we do see plenty of order and structure around.
In the real world, atoms and molecules are almost never left to themselves, not completely; they are almost always exposed to a certain amount of energy and material flowing in from the outside. And if that flow of energy and material is strong enough, then the steady degradation demanded by the second law can be partially reversed. Over a limited region, in fact, a system can spontaneously organize itself into a whole series of complex structures.
E.g., boiling soup
In mathematical terms, Prigogine's central point was that self-organization depends upon self-reinforcement: a tendency for small effects to become magnified when conditions are right, instead of dying away. It was precisely the same message that had been implicit in Jacob and Monod's work on DNA.
yet, positive feedback is precisely what conventional economics didn't have, Arthur realized. Quite the opposite. Neoclassical theory assumes that the economy is entirely dominated by negative feedback:
The dying-away tendency was implicit in the economic doctrine of "diminishing returns":
negative feedback/diminishing returns is what underlies the whole neoclassical vision of harmony, stability, and equilibrium in the economy.
And now that QWERTY is a standard used by millions of people, it's essentially locked in forever.
the video stores hated having to stock everything in two different formats, and consumers hated the idea of being stuck with obsolete VCRs. So everyone had a big incentive to go with the market leader. That pushed up VHS's market share even more, and the small initial difference grew rapidly.
Increasing returns can take a trivial happenstance - who bumped into whom in the hallway, where the wagon train happened to stop for the night, where trading posts happened to be set up, where Italian shoemakers happened to emigrate - and magnify it into something historically irreversible.
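A minimal sketch of that lock-in dynamic (an illustrative toy model in the spirit of Arthur's work, not his actual model): each new adopter favors whichever format already has the bigger installed base, with a better-than-linear payoff to size, so tiny early accidents snowball until one format corners the market.

    # Toy increasing-returns simulation (illustrative only, not Arthur's model).
    # Each adopter chooses a format with probability weighted by the installed
    # base raised to a power > 1 (a bigger base is disproportionately
    # attractive), so chance early leads snowball into lock-in.
    import random

    def simulate(adopters=10_000, exponent=1.5, seed=None):
        random.seed(seed)
        counts = {"VHS": 1, "Beta": 1}                 # start essentially even
        for _ in range(adopters):
            w_vhs = counts["VHS"] ** exponent
            w_beta = counts["Beta"] ** exponent
            pick = "VHS" if random.random() < w_vhs / (w_vhs + w_beta) else "Beta"
            counts[pick] += 1
        return counts

    for run in range(5):
        c = simulate(seed=run)
        print(f"run {run}: VHS share = {c['VHS'] / (c['VHS'] + c['Beta']):.2f}")
    # Different runs typically lock in to different winners - positive feedback
    # turning trivial happenstance into something effectively irreversible.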
The point was that you have to look at the world as it is, not as some elegant theory says it ought to be.
"The important thing is to observe the actual living economy out there," he says. "It's path-dependent, it's complicated, it's evolving, it's open, and it's organic."
Predictions are nice, if you can make them. But the essence of science lies in explanation, laying bare the fundamental mechanisms of nature.
Increasing returns isn't an isolated phenomenon at all: the principle applies to everything in high technology.
High technology could almost be defined as "congealed knowledge," says Arthur. "The marginal cost is next to zilch, which means that every copy you produce makes the product cheaper and cheaper."
More than that, every copy offers a chance for learning: getting the yield up on microprocessor chips, and so on. So there's a tremendous reward for increasing production - in short, the system is governed by increasing returns.
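One conventional way to put numbers on that learning effect is an experience curve (a generic sketch, not a formula from the book): unit cost falls by a fixed fraction every time cumulative production doubles.

    # Experience-curve sketch (generic illustration; the 80% progress ratio and
    # the $1000 first-unit cost are assumptions, not figures from the book):
    # cost per unit falls ~20% each time cumulative output doubles, i.e.
    # cost(n) = first_unit_cost * n**(-b).
    import math

    def unit_cost(cumulative_units, first_unit_cost=1000.0, progress_ratio=0.8):
        b = -math.log2(progress_ratio)        # ~0.32 for an 80% curve
        return first_unit_cost * cumulative_units ** (-b)

    for n in (1, 100, 10_000, 1_000_000):
        print(f"after {n:>9,} units: ~{unit_cost(n):7.2f} per unit")
    # The millionth copy costs a small fraction of the first - "every copy
    # you produce makes the product cheaper and cheaper."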
increasing returns isn't displacing the standard theory at all," says Arthur. "It's helping complete the standard theory. It just applies in a different domain."
They were successful because increasing returns make high-tech markets unstable, lucrative, and possible to corner - and because Japan understood this better and earlier than other countries. The Japanese are very quick at learning from other nations. And they are very good at targeting markets, going in with huge volume, and taking advantage of the dynamics of increasing returns to lock in their advantage."
What this means in practical terms, he adds, is that U.S. policy-makers ought to be very careful about their economic assumptions regarding, say, trade policy vis-à-vis Japan.
he suspects that one of the main reasons the United States has had such a big problem with "competitiveness" is that government policy-makers and business executives alike were very slow to recognize the winner-take-all nature of high-tech markets.
"If we want to continue manufacturing our wealth from knowledge," he says, "we need to accommodate the new rules."
"Wherever there is more than one equilibrium point possible, the outcome was deemed to be indeterminate," he says. "End of story. There was no theory of how an equilibrium point came to be selected.
shortly before his year there drew to a close he got a call from the dean: "What would it take to keep you here?" "Well," Arthur replied, secure in the knowledge that he already had a fistful of job offers from the World Bank, the London School of Economics, and Princeton, "I see there's this endowed chair coming open...."
"We don't negotiate with endowed chairs!" she declared. "I wasn't negotiating," said Arthur. "You just asked me what it would take to keep me here."
the free-market ideal had become bound up with American ideals of individual rights and individual liberty: both are grounded in the notion that society works best when people are left alone to do what they want.
increasing returns cut to the heart of that myth. If small chance events can lock you in to any of several possible outcomes, then the outcome that's actually selected may not be the best. And that means that maximum individual freedom - and the free market - might not produce the best of all possible worlds.
"The royal road to a Nobel Prize has generally been through the reductionist approach," he says - dissecting the world into the smallest and simplest pieces you can. "You look for the solution of some more or less idealized set of problems, somewhat divorced from the real world, and constrained sufficiently so that you can find a solution," he says. "And that leads to more and more fragmentation of science. Whereas the real world demands - though I hate the word - a more holistic approach." Everything affects everything else, and you have to understand that whole web of connections.
even some of the hard-core physical scientists were getting fed up with mathematical abstractions that ignored the real complexities of the world. They seemed to be half-consciously groping for a new approach - and in the process, he thought, they were cutting across the traditional boundaries in a way they hadn't done in years. Maybe centuries.
physicists have been deeply involved with molecular biology from the beginning. Many of the pioneers in the field had actually started out as physicists; one of their big motivations to switch was a slim volume entitled What Is Life?, a series of provocative speculations about the physical and chemical basis of life published in 1944 by the Austrian physicist Erwin Schrödinger, a coinventor of quantum mechanics.
equations can sometimes produce astonishing behavior. The mathematics of a thunderstorm actually describes how each puff of air pushes on its neighbors, how each bit of water vapor condenses and evaporates, and other such small-scale matters;
But when the computer integrates those equations over miles of space and hours of time, that is exactly the behavior they produce.
except for the very simplest physical systems, virtually everything and everybody in the world is caught up in a vast, nonlinear web of incentives and constraints and connections. The slightest change in one place causes tremors everywhere else. We can't help but disturb the universe, as T. S. Eliot almost said.
As they started to take advantage of that fact, applying that computer power to more and more kinds of nonlinear equations, they began to find strange, wonderful behaviors that their experience with linear systems had never prepared them for.
In part because of their computer simulations, and in part because of new mathematical insights, physicists had begun to realize by the early 1980s that a lot of messy, complicated systems could be described by a powerful theory known as "nonlinear dynamics." And in the process, they had been forced to face up to a disconcerting fact: the whole really can be greater than the sum of its parts.
a flap of that butterflyâs wings a millimeter to the left might have deflected the hurricane in a totally different direction.
Butterfly effect in chaos theory
the message was the same: everything is connected, and often with incredible sensitivity. Tiny perturbations won't always remain tiny. Under the right circumstances, the slightest uncertainty can grow until the system's future becomes utterly unpredictable - or, in a word, chaotic.
If the world can organize itself into many, many possible patterns, they asked, and if the pattern it finally chooses is a historical accident, then how can you predict anything? And if you can't predict anything, then how can what you're doing be called science?
it became apparent that what was really getting his critics riled up was this concept of the economy locking itself in to an unpredictable outcome.
Conversely, researchers began to realize that even some very simple systems could produce astonishingly rich patterns of behavior. All that was required was a little bit of nonlinearity.
The drip-drip-drip of water from a leaky faucet, for example, could be as maddeningly regular as a metronome - so long as the leak was slow enough. But if you ignored the leak for a while and let the flow rate increase ever so slightly, then the drops would soon start to alternate between large and small: DRIP-drip-DRIP-drip. If you ignored it a while longer and let the flow increase still more, the drops would soon start to come in sequences of 4 - and then 8, 16, and so forth. Eventually, the sequence would become so complex that the drops would seem to come at random - again, chaos.
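The same period-doubling route to chaos is easy to see in the logistic map, the standard textbook toy for this behavior (the real faucet obeys messier fluid dynamics; this is just an illustration). Turning up the parameter r plays the role of opening the tap.

    # Period-doubling route to chaos in the logistic map x -> r*x*(1-x),
    # the standard toy model for the dripping-faucet story (illustrative).
    def long_run_values(r, transient=1000, keep=8):
        x = 0.5
        for _ in range(transient):            # discard transient behavior
            x = r * x * (1 - x)
        values = []
        for _ in range(keep):
            x = r * x * (1 - x)
            values.append(round(x, 4))
        return values

    for r in (2.8, 3.2, 3.5, 3.57, 3.9):
        print(f"r = {r}: {long_run_values(r)}")
    # r = 2.8 settles to one value; 3.2 alternates between two (DRIP-drip);
    # 3.5 cycles through four; by 3.9 the values wander without repeating.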
Cowan had a suspicion that they were only the beginning. It was more a gut feeling than anything else. But he sensed that there was an underlying unity here, one that would ultimately encompass not just physics and chemistry, but biology, information processing, economics, political science, and every other aspect of human affairs.
The Manhattan Project started with a specific research challenge - building the bomb - and brought together scientists from every relevant specialty to tackle that challenge as a team.
this institute ought to be a place where you could take very good scientists - people who would really know what they were talking about in their own fields - and offer them a much broader curriculum than they usually get.
After thirty years as an administrator, he was convinced that the only way to make something like this happen was to get a lot of people excited about it. "You have to persuade very good people that this is an important thing to do," he says. "And by the way, I'm not talking about a democracy, I'm talking about the top one-half of one percent. An elite. But once you do that, then the money is - well, not easy, but a smaller part of the problem."
"I said I felt that what we should look for were great syntheses that were emerging today, that were highly interdisciplinary," says Gell-Mann. Some were already well on their way: Molecular biology. Nonlinear science. Cognitive science. But surely there were other emerging syntheses out there, he said, and this new institute should seek them out.
These narrow conceptions weren't grand enough, he told the fellows. "We had to set ourselves a really big task. And that was to tackle the great, emerging syntheses in science - ones that involve many, many disciplines."
he jotted down three or four pages of suggestions for how to organize the institute so as to avoid the pitfalls. (The main point: Don't have separate departments!)
needed people who had demonstrated real expertise and creativity in an established discipline, but who were also open to new ideas. That turned out to be a depressingly rare combination, even among (or especially among) the most prestigious scientists.
Millions of species have gotten along just fine without brains as large as ours. Why was our species different?
Douglas Schwartz of the School for American Research, the Santa Fe-based archeology center that was hosting the workshop, argued that archeology was a subject that was especially ripe for interactions with other disciplines.
Researchers in the field were confronted with three fundamental mysteries,
Second, said Schwartz, why did agriculture and fixed settlements replace nomadic hunting and gathering?
third, what forces triggered the development of cultural complexity, including specialization of crafts, the rise of elites, and the emergence of power based on factors such as economics and religion?
Increasingly, he said, he was beginning to think of the rise and fall of civilizations as a kind of self-organizing phenomenon, in which human beings chose different clusters of cultural alternatives at different moments in response to different perceptions of environment.
This self-organization theme was also taken up in a quite different form by Stephen Wolfram of the Institute for Advanced Study, a twenty-five-year-old wunderkind from England who was trying to investigate the phenomenon of complexity at the most fundamental level.
Whenever you look at very complicated systems in physics or biology, he said, you generally find that the basic components and the basic laws are quite simple; the complexity arises because you have a great many of these simple components interacting simultaneously. The complexity is actually in the organization - the myriad possible ways that the components of the system can interact.
The challenge for theorists, he said, is to formulate universal laws that describe when and how such complexities emerge in nature.
Louis Branscomb, chief scientist of IBM, strongly endorsed the idea of an institute without departmental walls, where people could talk and interact creatively. "It's important to have people who steal ideas!" he said.
the founding workshops made it clear that every topic of interest had at its heart a system composed of many, many "agents." These agents might be molecules or neurons or species or consumers or even corporations. But whatever their nature, the agents were constantly organizing and reorganizing themselves into larger structures through the clash of mutual accommodation and mutual rivalry.
Complexity, in other words, was really a science of emergence. And the challenge that Cowan had been trying to articulate was to find the fundamental laws of emergence.
The existing neoclassical theory and the computer models based on it simply did not give him the kind of information he needed to make real-time decisions in the face of risk and uncertainty.
yet none of the models really dealt with social and political factors, which were often the most important variables of all. Most of them assumed that the modelers would put in interest rates, currency exchange rates, and other such variables by hand - even though these are precisely the quantities that a banker wants to predict.
virtually all of them tended to assume that the world was never very far from static economic equilibrium, when in fact the world was constantly being shaken by economic shocks and upheavals.
A case in point was the most recent world economic upheaval, which was symbolized by President Carter's 1979 appointment of Paul Volcker to head the Federal Reserve Board. The story of that upheaval actually began in the 1940s, explained Reed, at a time when governments around the world found themselves struggling to cope with the economic consequences of two World Wars and a Great Depression in between. Their efforts, which culminated in the Bretton Woods agreements of 1944, led to a widespread recognition that the world economy had become far more interconnected than ever before. Under the new regime, nations shifted away from isolationism and protectionism as instruments of national policy; instead, they agreed to operate through international institutions such as the World Bank, the International Monetary Fund, and the General Agreement on Tariffs and Trade. And it worked, said Reed. In financial terms, at least, the world remained remarkably stable for a quarter of a century.
But then came the 1970s. The oil shocks of 1973 and 1979, the Nixon administration's decision to let the price of the dollar float on the world currency market, rising unemployment, rampant "stagflation" - the system cobbled together at Bretton Woods began to unravel, said Reed. Money began flowing around the world at an ever-increasing rate. And Third World countries that had once been starving for investment capital now began borrowing heavily to build their own economies - helped along by U.S. and European companies that were moving their production offshore to minimize costs. Following the advice of their in-house economists, said Reed, Citicorp and many other international banks had happily lent billions of dollars to these developing countries. No one had really believed it when Paul Volcker came to the Fed vowing to rein in inflation no matter what it took, even if it meant raising interest rates through the roof and causing a recession. In fact, the banks and their economists had failed to appreciate the similar words being voiced in ministerial offices all over the world. No democracy could tolerate that kind of pain. Could it? And so, said Reed, Citicorp and the other banks had continued loaning money to the developing nations throughout the early 1980s - right up until 1982, when first Mexico, and then Argentina, Brazil, Venezuela, the Philippines, and many others revealed that the worldwide recession triggered by the anti-inflation fight would make it impossible to meet their loan payments. Since becoming CEO in 1984, said Reed, he'd spent the bulk of his time cleaning up this mess. It had already cost Citibank several billion dollars - so far - and had caused
Reed didn't expect that any new economic theory would be able to predict the appointment of a specific person such as Paul Volcker. But a theory that was better attuned to social and political realities might have predicted the appointment of someone like Volcker - who, after all, was just doing the politically necessary job of inflation control superbly well.
Kenneth Arrow
For all that Arrow was one of the founders of establishment economics, he had also, like Anderson, remained a bit of an iconoclast himself. He knew full well what the drawbacks of the standard theory were. In fact, he could articulate them better than most of the critics could. Occasionally he even published what he called his "dissident" papers, calling for new approaches. He'd urged economists to pay more attention to real human psychology, for example, and most recently he had gotten intrigued with the possibility of using the mathematics of nonlinear science and chaos theory in economics.
He didn't mind people criticizing the standard model, but they'd better damn well understand what it was they were criticizing.
He also displayed a very high ratio of talking to listening. Indeed, that seemed to be the way he thought things through: by talking about his ideas out loud. And talking about them. And talking about them.
For Kauffman, order told us how we could indeed be an accident of natureâand yet be very much more than just an accident.
Charles Darwin was absolutely right: human beings and all other living things are undoubtedly the heirs of four billion years of random mutation, random catastrophes, and random struggles for survival;
Nor did Darwin know that the forces of order and self-organization apply to the creation of living systems just as surely as they do to the formation of snowflakes or the appearance of convection cells in a simmering pot of soup.
But it is also the story of order: a kind of deep, inner creativity that is woven into the very fabric of nature.
the thrill of being taken seriously by a real adult - Todd was twenty-four at the time - was for Kauffman a crucial step in his intellectual awakening.
"They say that time heals," he adds. "But that's not quite true. It's simply that the grief erupts less often."
technology isn't really like a commodity at all. It is much more like an evolving ecosystem.
"So there'd be a network of possible technologies, all interconnected and growing as more things became possible. And therefore the economy could become more complex."
this process is an excellent example of what he meant by increasing returns: once a new technology starts opening up new niches for other goods and services, the people who fill those niches have every incentive to help that technology grow and prosper.
"There was something too marvelous about DNA. I simply didn't want it to be true that the origin of life depended on something quite as special as that. The way I phrased it to myself was, 'What if God had hung another valence bond on nitrogen? [Nitrogen atoms are abundant in DNA molecules.] Would life be impossible?' And it seemed to me to be an appalling conclusion that life should be that delicately poised." But then, thought Kauffman, who says that the critical thing about life is DNA? For that matter, who says that the origin of life was a random event? Maybe there was another way to get a self-replicating system started, a way that would have allowed living systems to bootstrap their way into existence from simple reactions.
In other words, Kauffman realized, if the conditions in your primordial soup were right, then you wouldn't have to wait for random reactions at all. The compounds in the soup could have formed a coherent, self-reinforcing web of reactions.
Taken as a whole, in short, the web would have catalyzed its own formation. It would have been an "autocatalytic set." Kauffman was in awe when he realized all this. Here it was again: order. Order for free. Order arising naturally from the laws of physics and chemistry. Order emerging spontaneously from molecular chaos and manifesting itself as a system that grows. The idea was indescribably beautiful.
to Kauffman, this autocatalytic set story was far and away the most plausible explanation for the origin of life that he had ever heard. If it were true, it meant the origin of life didn't have to wait for some ridiculously improbable event to produce a set of enormously complicated molecules; it meant that life could indeed have bootstrapped its way into existence from very simple molecules. And it meant that life had not been just a random accident, but was part of nature's incessant compulsion for self-organization.
an autocatalytic set can bootstrap its own evolution in precisely the same way that an economy can, by growing more and more complex over time.
If innovations result from new combinations of old technologies, then the number of possible innovations would go up very rapidly as more and more technologies became available. In fact, he argued, once you get beyond a certain threshold of complexity you can expect a kind of phase transition analogous to the ones he had found in his autocatalytic sets. Below that level of complexity you would find countries dependent upon just a few major industries, and their economies would tend to be fragile and stagnant.
if a country ever managed to diversify and increase its complexity above the critical point, then you would expect it to undergo an explosive increase in growth and innovation - what some economists have called an "economic takeoff."
The existence of that phase transition would also help explain why trade is so important to prosperity, Kauffman told Arthur. Suppose you have two different countries, each one of which is subcritical by itself. Their economies are going nowhere. But now suppose they start trading, so that their economies become interlinked into one large economy with a higher complexity. "I expect that trade between such systems will allow the joint system to become supercritical and explode outward."
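The subcritical/supercritical language maps onto a percolation-style threshold. A toy random-graph sketch (my illustration, not Kauffman's actual simulation) shows the jump: hold the number of nodes fixed - think molecular species or goods - and add random connections; once connections per node pass a critical ratio, a single web suddenly spans most of the system, so merging two subcritical webs can carry the combined one over the line.

    # Percolation-style toy (illustrative, not Kauffman's model): nodes stand
    # for molecular species or goods, random edges for catalyzed reactions or
    # trades. Track the largest connected cluster as edges are added.
    import random

    def largest_cluster_fraction(n_nodes, n_edges, seed=0):
        random.seed(seed)
        parent = list(range(n_nodes))          # union-find forest

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i

        for _ in range(n_edges):
            a, b = random.randrange(n_nodes), random.randrange(n_nodes)
            parent[find(a)] = find(b)          # merge the two clusters

        sizes = {}
        for i in range(n_nodes):
            root = find(i)
            sizes[root] = sizes.get(root, 0) + 1
        return max(sizes.values()) / n_nodes

    n = 20_000
    for ratio in (0.2, 0.4, 0.5, 0.6, 0.8, 1.0):
        frac = largest_cluster_fraction(n, int(ratio * n))
        print(f"edges/nodes = {ratio:.1f}: largest cluster spans {frac:.1%}")
    # Below the ~0.5 threshold the web stays fragmented; above it, one giant
    # cluster grows rapidly - two subcritical webs merged into one can cross
    # the threshold together and "explode outward."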
an autocatalytic set can undergo exactly the same kinds of evolutionary booms and crashes that an economy does. Injecting one new kind of molecule into the soup could often transform the set utterly, in much the same way that the economy was transformed when the horse was replaced by the automobile. This was the part of autocatalysis that really captivated Arthur. It had the same qualities that had so fascinated him when he first read about molecular biology: upheaval and change and enormous consequences flowing from trivial-seeming events - and yet with deep law hidden beneath.
a network analysis wouldn't help anybody predict precisely what new technologies are going to emerge next week. But it might help economists get statistical and structural measures of the process. When you introduce a new product, for example, how big an avalanche does it typically cause? How many other goods and services does it bring with it, and how many old ones go out? And how do you recognize when a good has become central to an economy, as opposed to being just another hula-hoop?
"The point is that the phase transitions may be lawful, but the specific details are not. So maybe we have the starts of models of historical, unfolding processes for such things as the Industrial Revolution, for example, or the Renaissance as a cultural transformation, and why it is that an isolated society, or ethos, can't stay isolated when you start plugging some new ideas into it."
In this sense a spin glass was quite a good metaphor for the economy. "It naturally has a mixture of positive and negative feedbacks, which gives it an extremely high number of natural ground states, or equilibria." That's exactly the point he'd been trying to make all along with his increasing-returns economics.
"It seemed as though they were dazzling themselves with fancy mathematics, until they really couldn't see the forest for the trees. So much time was being spent on trying to absorb the mathematics that I thought they often weren't looking at what the models were for, and what they did, and whether the underlying assumptions were any good.
what most of the economists didn't know - and were startled to find out - was that physicists are comparatively casual about their math. "They use a little rigorous thinking, a little intuition, a little back-of-the-envelope calculation - so their style is really quite different,"
And the reason is that physical scientists are obsessive about founding their assumptions and their theories on empirical fact.
the physicists were nonetheless disconcerted at how seldom the economists seemed to pay attention to the empirical data that did exist. Again and again, for example, someone would ask a question like "What about noneconomic influences such as political motives in OPEC oil pricing, and mass psychology in the stock market? Have you consulted sociologists, or psychologists, or anthropologists, or social scientists in general?" And the economists - when they weren't curling their lips at the thought of these lesser social sciences, which they considered horribly mushy - would come back with answers like "Such noneconomic forces really aren't important"; "They are important, but they are too hard to treat"; "They aren't always too hard to treat, and in fact, we're doing so in specific cases"; and "We don't need to treat them because they're automatically satisfied through economic effects."
"Our particles in economics are smart, whereas yours in physics are dumb." In physics, an elementary particle has no past, no experience, no goals, no hopes or fears about the future. It just is. That's why physicists can talk so freely about "universal laws": their particles respond to forces blindly, with absolute obedience. But in economics, said Arthur, "Our particles have to think ahead, and try to figure out how other particles might react if they were to undertake certain actions. Our particles have to act on the basis of expectations and strategies. And regardless of how you model that, that's what makes economics truly difficult."
The only problem, of course, is that real human beings are neither perfectly rational nor perfectly predictable - as the physicists pointed out at great length. Furthermore, as several of them also pointed out, there are real theoretical pitfalls in assuming perfect predictions, even if you do assume that people are perfectly rational. In nonlinear systems - and the economy is most certainly nonlinear - chaos theory tells you that the slightest uncertainty in your knowledge of the initial conditions will often grow inexorably. After a while, your predictions are nonsense.
"The physicists were shocked at the assumptions the economists were making - that the test was not a match against reality, but whether the assumptions were the common currency of the field. I can just see Phil Anderson, laid back with a smile on his face, saying, 'You guys really believe that?'" The economists, backed into a corner, would reply, "Yeah, but this allows us to solve these problems. If you don't make these assumptions, then you can't do anything." And the physicists would come right back, "Yeah, but where does that get you - you're solving the wrong problem if that's not reality."
Holland started by pointing out that the economy is an example par excellence of what the Santa Fe Institute had come to call "complex adaptive systems."
Once you learned how to recognize them, in fact, these systems were everywhere. But wherever you found them, said Holland, they all seemed to share certain crucial properties.
First, he said, each of these systems is a network of many "agents" acting in parallel. In a brain the agents are nerve cells, in an ecology the agents are species, in a cell the agents are organelles such as the nucleus and the mitochondria, in an embryo the agents are cells, and so on. In an economy, the agents might be individuals or households. Or if you were looking at business cycles, the agents might be firms. And if you were looking at international trade, the agents might even be whole nations. But regardless of how you define them, each agent finds itself in an environment produced by its interactions with the other agents in the system. It is constantly acting and reacting to what the other agents are doing. And because of that, essentially nothing in its environment is fixed.
Furthermore, said Holland, the control of a complex adaptive system tends to be highly dispersed. There is no master neuron in the brain, for example, nor is there any master cell within a developing embryo. If there is to be any coherent behavior in the system, it has to arise from competition and cooperation among the agents themselves. This is true even in an economy. Ask any president trying to cope with a stubborn recession: no matter what Washington does to fiddle with interest rates and tax policy and the money supply, the overall behavior of the economy is still the result of myriad economic decisions made every day by millions of individual people.
Second, said Holland, a complex adaptive system has many levels of organization, with agents at any one level serving as the building blocks for agents at a higher level.
Furthermore, said Holland - and this was something he considered very important - complex adaptive systems are constantly revising and rearranging their building blocks as they gain experience
Third, he said, all complex adaptive systems anticipate the future.
More generally, said Holland, every complex adaptive system is constantly making predictions based on its various internal models of the world - its implicit or explicit assumptions about the way things are out there.
Finally, said Holland, complex adaptive systems typically have many niches, each one of which can be exploited by an agent adapted to fill that niche.
Moreover, the very act of filling one niche opens up more niches - for
And that, in turn, means that it's essentially meaningless to talk about a complex adaptive system being in equilibrium: the system can never get there. It is always unfolding, always in transition. In fact, if the system ever does reach equilibrium, it isn't just stable. It's dead. And by the same token, said Holland, there's no point in imagining that the agents in the system can ever "optimize" their fitness, or their utility, or whatever. The space of possibilities is too vast; they have no practical way of finding the optimum. The most they can ever do is to change and improve themselves relative to what the other agents are doing. In short, complex adaptive systems are characterized by perpetual novelty.
it's no wonder that complex adaptive systems were so hard to analyze with standard mathematics.
to really get a deep understanding of the economy, or complex adaptive systems in general, what you need are mathematics and computer simulation techniques that emphasize internal models, the emergence of new building blocks, and the rich web of interactions between multiple agents.
It wasn't simply that Holland's point about perpetual novelty was exactly what he'd been trying to say for the past eight years with his increasing-returns economics. Nor was it that Holland's point about niches was exactly what he and Stuart Kauffman had been thrashing out for the past two weeks in the context of autocatalytic sets. It was that Holland's whole way of looking at things had a unity, a clarity, a rightness that made you slap your forehead and say, "Of course! Why didn't I think of that?"
However new Holland's thinking might have seemed to Arthur and the other visitors at the economics meeting, he had long since become a familiar and profoundly influential figure among the Santa Fe regulars. His first contact with the institute had come in 1985 during a conference entitled "Evolution, Games, and Learning," which had been organized at Los Alamos by Doyne Farmer and Norman Packard. (As it happens, this was the same meeting in which Farmer, Packard, and Kauffman first reported the results of their autocatalytic set simulation.) Holland's talk there was on the subject of emergence, and it seemed to go quite well. But he remembers being peppered with sharp-edged questions from this person out in the audience - a white-haired guy with an intent, slightly cynical face peering out from behind dark-rimmed glasses. "I was fairly flip in my answers," says Holland. "I didn't know him - and I'd probably have been scared to death if I had!" Flip answers or not, however, Murray Gell-Mann clearly liked what Holland had to say. Shortly thereafter, Gell-Mann called him up and asked him to serve on what was then called the Santa Fe Institute's advisory board, which was just being formed. Holland agreed. "And as soon as I saw the place, I really liked it," he says. "What they were talking about, the way they went at things - my immediate response was, 'I sure hope these guys like me, because this is for me!'" The feeling was mutual. When Gell-Mann speaks of Holland he uses words like "brilliant" - not a term he throws around casually. But, then, it's not often that Gell-Mann has had his eyes opened quite so abruptly. In the early days, Gell-Mann, Cowan, and most of the other founders of the institute had thought about their new science of complexity almost entirely in terms of the physical concepts they were already familiar with, such as emergence, collective behavior, and spontaneous organization. Moreover, these concepts already seemed to promise an exceptionally rich research agenda, if only as metaphors for studying the same ideas - emergence, collective behavior, and spontaneous organization - in realms such as economics and biology. But then Holland came along with his analysis of adaptation - not to mention his working computer models. And suddenly Gell-Mann and the others realized that they'd left a gaping hole in their agenda: What do these emergent structures actually do? How do they respond and adapt to their environment? Within months they were talking about the institute's program being not just complex systems, but complex adaptive systems.
So there you have the economic problem in a nutshell, he told Holland. How do we make a science out of imperfectly smart agents exploring their way into an essentially infinite space of possibilities?
if the underlying rules of evolution of the themes are in control and not me, then I'll be surprised. And if I'm not surprised, then I'm not very happy, because I know I've built everything in from the start." Nowadays, of course, this sort of thing is called "emergence."
What captivated him wasn't that science allowed you to reduce everything in the universe to a few simple laws. It was just the opposite: that science showed you how a few simple laws could produce the enormously rich behavior of the world.
be renamed the IBM 701. At the time the machine represented a major and rather dubious gamble for the
Those heady, early days of computers were a ferment of new ideas about information, cybernetics, automata - concepts that hadn't even existed ten years earlier. Who knew where the limits were? Almost anything you tried was liable to break new ground. And more than that, for the more philosophically minded pioneers like Holland, these big, clumsy banks of wire and vacuum tubes were opening up whole new ways to think about thinking.
At the time, of course, nobody knew to call this sort of thing "artificial intelligence" or "cognitive science." But even so, the very act of programming computers - itself a totally new kind of endeavor - was forcing people to think much more carefully than ever before about what it meant to solve a problem.
clear at the time. (In fact, they are far from clear now.) But the questions were being asked with unprecedented clarity and precision.
For the past five years he had been trying to write a program that could play checkers - and not only play the game, but learn to play it better and better with experience. In retrospect, Samuel's checker player is considered one of the milestones of artificial intelligence research; by the time he finally finished revising and refining it in 1967, it was playing at a world championship level. But even in the 701 days, it was doing remarkably well. Holland remembers being very impressed with it, particularly with its ability to adapt its tactics to what the other player was doing. In effect, the program was making a simple model of "opponent" and using that model to make predictions about the best line of play. And somehow, without being able to articulate it very well at the time, Holland felt that this aspect of the checker player captured something essential and right about learning and adaptation.
Through a microscope, most of the brain appears to be a study in chaos, with each nerve cell sending out thousands of random filaments that connect it willy-nilly to thousands of other nerve cells. And yet this densely interconnected network is obviously not random. A healthy brain produces perception, thought, and action quite coherently. Moreover, the brain is obviously not static. It refines and adapts its behavior through experience. It learns. The question is, How?
Hebb had published his answer in a book entitled The Organization of Behavior. His fundamental idea was to assume that the brain is constantly making subtle changes in the "synapses," the points of connection where nerve impulses make the leap from one cell to the next.
he argued that these synaptic changes were in fact the basis of all learning and memory. A sensory impulse coming in from the eyes,
a network that started out at random would rapidly organize itself. Experience would accumulate through a kind of positive feedback: the strong, frequently used synapses would grow stronger, while the weak, seldom-used synapses would atrophy.
Licklider went on to explain Hebb's second assumption: that the selective strengthening of the synapses would cause the brain to organize itself into "cell assemblies" - subsets of several thousand neurons in which circulating nerve impulses would reinforce themselves and continue to circulate. Hebb considered these cell assemblies to be the brain's basic building blocks of information. Each one would correspond to a tone, a flash of light, or a fragment of an idea. And yet these assemblies would not be physically distinct. Indeed, they would overlap, with any given neuron belonging to several of them. And because of that, activating one assembly would inevitably lead to the activation of others, so that these fundamental building blocks would quickly organize themselves into larger concepts and more complex behaviors. The cell assemblies, in short, would be the fundamental quanta of thought. Sitting there in the audience, Holland was transfixed by all this. This wasn't just the arid stimulus/response view of psychology being pushed at the time by behaviorists such as Harvard's B. F. Skinner. Hebb was talking about what was going on inside the mind. His connectionist theory had the richness, the perpetual surprise that Holland responded so strongly to. It felt right. And Holland couldn't wait to do something with it. Hebb's theory was a window onto the essence of thought, and he wanted to watch. He wanted to see cell assemblies organize themselves out of random chaos and grow. He wanted to see them interact. He wanted to see them incorporate experience and evolve. He wanted to see the emergence of the mind itself. And he wanted to see all of it happen spontaneously, without external guidance. No sooner had Licklider finished his lecture on Hebb than Holland turned to his leader on the 701 team, Nathaniel Rochester, and said, "Well, we've got this prototype machine. Let's program a neural network simulator."
The basic idea would still look familiar enough. In their programs, Holland and Rochester modeled their artificial neurons as "nodes" - in effect, tiny computers that can remember certain things about their internal state. They modeled their artificial synapses as abstract connections between various nodes, with each connection having a certain "weight" corresponding to the strength of the synapse. And they modeled Hebb's learning rule by adjusting the strengths as the network gained experience.
But in the end, by golly, the simulations worked. "There was a lot of emergence," says Holland, still sounding excited about it. "You could start with a uniform substrate of neurons and see the cell assemblies form."
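A minimal sketch of the kind of Hebbian update such a simulator needs (an illustration of Hebb's rule only, not Holland and Rochester's actual 1950s program): connections between nodes that fire together are strengthened, seldom-used ones slowly atrophy, and a recurring pattern of activity ends up wired together as a crude cell assembly.

    # Minimal Hebbian-learning sketch (illustrative; not the Holland-Rochester
    # simulator): co-active nodes strengthen their connection, idle connections
    # decay, and a recurring pattern wires itself together.
    import random

    N = 20
    weights = [[0.0] * N for _ in range(N)]       # connection strengths

    def hebb_step(firing, eta=0.1, decay=0.01):
        """firing[i] is 1 if node i fired on this time step, else 0."""
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                if firing[i] and firing[j]:
                    weights[i][j] += eta          # fired together: strengthen
                else:
                    weights[i][j] *= 1 - decay    # otherwise: atrophy slowly

    random.seed(0)
    assembly = set(range(5))                      # a recurring activity pattern
    for _ in range(500):
        firing = [1 if i in assembly or random.random() < 0.1 else 0
                  for i in range(N)]
        hebb_step(firing)

    print(f"within the assembly: {weights[0][1]:.1f}")
    print(f"assembly to outside: {weights[0][10]:.1f}")
    # The frequently co-active nodes end up bound by strong connections - a toy
    # version of a "cell assembly" emerging from a uniform substrate.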
"I'd like to be able to take themes from all over and see what emerges when I put them together,"
The Glass-Bead Game, but in English translations the book is usually called Master of the Game, or its Latin equivalent, Magister Ludi.
the novel describes a game that was originally played by musicians; the idea was to set up a theme on a kind of abacus with glass beads, and then try to weave all kinds of counterpoint and variation on the theme by moving the beads back and forth.
"The fact that you could take calculus and differential equations and all the other things I had learned in my math classes to start a revolution in genetics - that was a real eye-opener.
Fisher's whole analysis of natural selection focused on the evolution of just one gene at a time, as if each gene's contribution to the organism's survival was totally independent of all the other genes.
A single gene for green eyes isn't worth very much unless it's backed up by the dozens or hundreds of genes that specify the structure of the eye itself. Each gene had to work as part of a team, realized Holland. And any theory that didn't take that fact into account was missing a crucial part of the story. Come to think of it, that was also what Hebb had been saying in the mental realm. Hebb's cell assemblies were a bit like genes, in that they were supposed to be the fundamental units of thought. But in isolation the cell assemblies were almost nothing.
it bothered Holland that Fisher kept talking about evolution achieving a stable equilibrium - that state in which a given species has attained its optimum size, its optimum sharpness of tooth, its optimum fitness to survive and reproduce. Fisher's argument was essentially the same one that economists use to define economic equilibrium: once a species' fitness is at a maximum, he said, any mutation will lower the fitness.
Fisher seemed to be talking about the attainment of some pristine, eternal perfection. "But with Darwin, you see things getting broader and broader with time, more diverse," says Holland. "Fisher's math didn't touch on that." And with Hebb, who was talking about learning instead of evolution, you saw the same thing: minds getting richer, more subtle, more surprising as they gained experience with the world.
Holland couldn't help thinking of Art Samuel's checker-playing program, which took advantage of exactly this kind of feedback: the program was constantly updating its tactics as it gained experience and learned more about the other player.
To Holland, evolution and learning seemed much more like - well, a game.
trying to win enough of what it needed to keep going. In evolution that payoff is literally survival, and a chance for the agent to pass its genes on to the next generation.
the payoff (or lack of it) gives agents the feedback they need to improve their performance: if they're going to be "adaptive" at all, they somehow have to keep the strategies that pay off well, and let the others die out.
This game analogy seemed to be true of any adaptive system.
And that meant, in turn, that all of them are fundamentally like checkers or chess: the space of possibilities is vast beyond imagining. An agent can learn to play the game better - that's what adaptation is, after all. But it has just about as much chance of finding the optimum, stable equilibrium point of the game as you or I have of solving chess.
Equilibrium implies an endpoint. But to Holland, the essence of evolution lay in the journey, the endlessly unfolding surprise:
postdoc - he set himself the goal of turning his vision into a complete and rigorous theory of adaptation. "The belief was that if I looked at genetic adaptation as the longest-term adaptation, and the nervous system as the shortest term," he says, "then the general theoretical framework would be the same."
he was determined to crack this problem of selection based on more than one gene - and not just because Fisher's independent-gene assumption had bugged him more than anything else about that book. Moving to multiple genes was also the key to moving away from this obsession with equilibrium.
That's 2^1000, or about 10^300 - a number so vast that it makes even the number of moves in chess seem infinitesimal. "Evolution can't even begin to try out that many things," says Holland. "And no matter how good we get with computers, we can't do it." Indeed, if every elementary particle in the observable universe were a supercomputer that had been number-crunching away since the Big Bang, they still wouldn't be close. And remember, that's just for seaweed. Humans and other mammals have roughly 100 times as many genes - and most of those genes come in many more than two varieties.
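The arithmetic behind those numbers is a quick back-of-the-envelope check (the particle count and crunching speed below are generous round figures of my own, not from the book):

    # Back-of-the-envelope check of the 1000-gene example. Rough assumed
    # figures: ~1e80 particles in the observable universe, each testing an
    # absurd 1e20 genotypes per second for ~4e17 seconds (age of the universe).
    import math

    exponent = 1000 * math.log10(2)                # 2**1000 ~ 10**301
    print(f"2**1000 is roughly 10**{exponent:.0f}")

    tested = 1e80 * 1e20 * 4e17                    # ~4e117 genotypes tried
    shortfall = exponent - math.log10(tested)
    print(f"still short by a factor of about 10**{shortfall:.0f}")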
But now, says Holland, look what happens with that 1000-gene seaweed when you assume that the genes are not independent.
So once again, says Holland, you have a system exploring its way into an immense space of possibilities, with no realistic hope of ever finding the single "best" place to be. All evolution can do is look for improvements, not perfection. But that, of course, was precisely the question he had resolved to answer back in 1962: How? Understanding evolution with multiple genes obviously wasn't just a trivial matter of replacing Fisher's one-variable equations with many-variable equations. What Holland wanted to know was how evolution could explore this immense space of possibilities and find useful combinations of genes - without having to search over every square inch of territory. As it happens, a similar explosion of possibilities was already well known to mainstream artificial intelligence researchers. At Carnegie Tech (now Carnegie Mellon University) in Pittsburgh, for example, Allen Newell and Herbert Simon had been conducting a landmark study of human problem-solving since the mid-1950s. By asking experimental subjects to verbalize their thoughts as they struggled through a wide variety of puzzles and games, including chess, Newell and Simon had concluded that problem-solving always involves a step-by-step mental search through a vast "problem space" of possibilities, with each step guided by a heuristic rule of thumb: "If this is the situation, then that step is worth taking." By building their theory into a program known as General Problem Solver, and by putting that program to work on those same puzzles and games, Newell and Simon had shown that the problem-space approach could reproduce human-style reasoning remarkably well. Indeed, their concept of heuristic search was already well on its way to
the Newell-Simon approach didn't help him with biological evolution. The whole point of evolution is that there are no heuristic rules, no guidance of any sort; succeeding generations explore the space of possibilities by mutations and random reshuffling of genes among the sexes - in short, by trial and error. Furthermore, those succeeding generations don't conduct their search in a step-by-step manner. They explore it in parallel: each member of the population has a slightly different set of genes and explores a slightly different region of the space. And yet, despite these differences, evolution produces just as much creativity and surprise as mental activity does, even if it takes a little longer. To Holland, this meant that the real unifying principles in adaptation had to be found at a deeper level.
certain sets of genes worked well together, forming coherent, self-reinforcing wholes. An example might be the cluster of genes that tells a cell how to extract energy from glucose molecules, or the cluster that controls cell division, or the cluster that governs how a cell combines with other cells to form a certain kind of tissue. Holland could also see analogs in Hebb's theory of the brain, where a set of resonating cell assemblies might form a coherent concept such as "car," or a coordinated motion such as lifting your arm. But the more Holland thought about this idea of coherent, self-reinforcing clusters, the more subtle it began to seem. For one thing, you could find analogous examples almost anywhere you looked. Subroutines in a computer program. Departments in a bureaucracy. Gambits in the larger strategy of a chess game. Furthermore, you could find examples at every level of organization. If a cluster is coherent enough and stable enough, then it can usually serve as a building block for some larger cluster. Cells make tissues, tissues make organs, organs make organisms, organisms make ecosystems - on and on. Indeed, thought Holland, that's what this business of "emergence" was all about: building blocks at one level combining into new building blocks at a higher level. It seemed to be one of the fundamental organizing principles of the world. It certainly seemed to appear in every complex, adaptive system that you looked at. But why? This hierarchical, building-block structure of things is as commonplace as air. It's so widespread that we never think much about it. But when you do think about it, it cries out for an explanation: Why is the world structured this way? Well, there are actually any number of reasons. Computer programmers are taught to break things
As Holland thought about it, however, he became convinced that the most important reason lay deeper still, in the fact that a hierarchical, building-block structure utterly transforms a system's ability to learn, evolve, and adapt.
Certainly that's a much more efficient way to create something new than starting all over from scratch. And that fact, in turn, suggests a whole new mechanism for adaptation in general. Instead of moving through that immense space of possibilities step by step, so to speak, an adaptive system can reshuffle its building blocks and take giant leaps.
The idea was to divide the face up into, say, 10 building blocks: hairline, forehead, eyes, nose, and so on down to the chin. Then the artist would have strips of paper with a variety of options for each:
the artist could talk to the witness, assemble the appropriate pieces, and produce a sketch of the suspect very quickly. Of course, the artist couldn't reproduce every conceivable face that way. But he or she could almost always get pretty close: by shuffling those 100 pieces of paper, the artist could make a total of 10 billion different faces, enough to sample the space of possibilities quite widely. "So if I have a process that can discover building blocks," says Holland, "the combinatorics start working for me instead of against me. I can describe a great many complicated things with relatively few building blocks."
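The face-kit arithmetic in that passage is worth making explicit (just the numbers already given there): 10 building-block slots with 10 options each means the artist carries only 100 strips of paper yet can express ten billion distinct faces.

    # Building-block combinatorics from the face-kit example.
    slots, options = 10, 10
    pieces_to_carry = slots * options     # 100 strips of paper
    faces_expressible = options ** slots  # 10,000,000,000 distinct faces
    print(pieces_to_carry, faces_expressible)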
And that, he realized, was the key to the multiple-gene puzzle. "The cut and try of evolution isn't just to build a good animal, but to find good building blocks that can be put together to make many good animals." His challenge now was to show precisely and rigorously how that could happen. And the first step, he decided, was to make a computer model, a "genetic algorithm" that would both illustrate the process and help him clarify the issues in his own mind.
the genetic algorithm that Holland finally came up with was weird. Except in the most literal sense, in fact, it wasn't really a computer program at all. In its inner workings it was more like a simulated ecosystem: a kind of digital Serengeti in which whole populations of programs would compete and have sex and reproduce for generation after generation, always evolving their way toward the solution of whatever problem the programmer might set for them. This wasn't the way programs were usually written, to put it mildly.
The upshot is that the genetic algorithm will converge to the solution of the problem at hand quite rapidly, without ever having to know beforehand what the solution is.
the whole art of programming is to make sure that you've written precisely the right instructions in precisely the right order. And that's obviously the most effective way to do it, if you already know precisely what you want the computer to do. But suppose you don't know, said Holland. Suppose, for example, that you're trying to find the maximum value of some complicated mathematical function. The function could represent profit, or factory output, or vote counts, or almost anything else; the world is full of things that need to be maximized. Indeed, programmers have devised any number of sophisticated computer algorithms for doing so. And yet, not even the best of those algorithms is guaranteed to give you the correct maximum value in every situation. At some level, they always have to rely on old-fashioned trial and error: guessing. But if that's the case, Holland told his colleagues, if you're going to be relying on trial and error anyway, maybe it's worth seeing what you can do with nature's method of trial and error, namely, natural selection. Instead of trying to write your programs to perform a task you don't quite know how to do, evolve them. The genetic algorithm was a way of doing that. To see how it works, said Holland, forget about the FORTRAN code and go down into the guts of the computer, where the program is represented as a string of binary ones and zeros: 11010011110001100100010100111011..., et cetera. In that form the program looks a heck of a lot like a chromosome, he said, with each binary digit being a single "gene." And once you start thinking of the binary code in biological terms, then you can use that same biological analogy to make it evolve. First, said Holland, you have the computer generate a population of maybe 100 of these digital chromosomes, with lots of random variation from one to the next.
test each individual chromosome on the problem at hand by running it as a computer program, and then giving it a score that measures how well it does.
you take those individuals you've selected as being fit enough to reproduce, and create a new generation of individuals through sexual reproduction.
Reproduction and crossover provided the mechanism for building blocks of genes to emerge and evolve together, and, not incidentally, provided a mechanism for a population of individuals to explore the space of possibilities with impressive efficiency.
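The loop just described translates into code almost directly. Here is a minimal sketch, assuming a toy fitness function (count the ones in the bit string) rather than anything Holland actually used; the population size, mutation rate, and selection scheme are arbitrary choices made for illustration.

    # Minimal genetic-algorithm sketch of the loop described above. The fitness
    # function and all parameters are illustrative assumptions, not Holland's setup.
    import random

    random.seed(0)
    LENGTH, POP_SIZE, GENERATIONS = 32, 100, 60

    def fitness(chromosome):
        return sum(chromosome)                      # toy problem: maximize the number of 1s

    def crossover(mom, dad):
        cut = random.randrange(1, LENGTH)           # swap tails at a random cut point
        return mom[:cut] + dad[cut:]

    def mutate(chromosome, rate=0.01):
        return [1 - g if random.random() < rate else g for g in chromosome]

    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]      # the fitter half gets to reproduce
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children

    print(fitness(max(population, key=fitness)))    # typically reaches 32 without being told the answer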
Published in 1975, Adaptation in Natural and Artificial Systems was dense with equations and analysis. It summarized two decades of Holland's thinking about the deep interrelationships of learning, evolution, and creativity. It laid out the genetic algorithm in exquisite detail. And in the wider world of computer science outside Michigan, it was greeted with resounding silence. In a community of people who like their algorithms to be elegant, concise, and provably correct, this genetic algorithm stuff was still just too weird. The artificial intelligence community was a little more receptive, enough to keep the book selling at the rate of 100 to 200 copies per year. But even so, when there were any comments on the book at all, they were most often along the lines of, "John's a real bright guy, but ..."
he simply didn't play the game of academic self-promotion.
"I think it would have bothered me if nobody had been willing to listen," he adds. "But I've always been very lucky in having bright, interested graduate students to bounce ideas off of."
"Some of them have been really brilliant, and great fun for that reason," he says. Holland deliberately took a rather hands-off approach to guidance, having seen too many professors build up a huge publication list by publishing "joint" research papers that were in fact written entirely by their graduate students. "So they all followed their noses and did things they thought were interesting. Then we'd all meet around a table about once a week, one of them would tell where he stood on his dissertation, and we'd all critique it. That was usually a lot of fun for everybody involved."
so, he couldn't help but feel that the genetic algorithm's bare-bones version of evolution was just too bare.
The genetic algorithm was all very nice. But by itself, it simply wasn't an adaptive agent.
Nearly twenty-five years after he'd first heard about Donald Hebb's ideas, he was still convinced that adaptation in the mind and adaptation in nature were just two different aspects of the same thing. Moreover, he was still convinced that if they really were the same thing, they ought to be describable by the same theory.
what actually has to happen for game-playing agents to survive and prosper? Two things, Holland decided: prediction and feedback.
Prediction is what helps you seize an opportunity or avoid getting suckered into a trap. An agent that can think ahead has an obvious advantage over one that can't.
Very often, moreover, the models are literally inside our head,
We use these "mental models" so often, in fact, that many psychologists are convinced they are the basis of all conscious thought.
Holland, the concept of prediction and models actually ran far deeper than conscious thought, or for that matter, far deeper than the existence of a brain. "All complex, adaptive systems (economies, minds, organisms) build models that allow them to anticipate the world," he declares. Yes, even bacteria.
standard operating procedures are often taught by rote, without a lot of whys and wherefores.
as the standard operating procedure collectively unfolds, the company as a whole will behave as if it understood that model perfectly.
Instead, they just set up the production run by invoking a "standard operating procedure": a set of rules of the form, "If the situation is ABC, then take action XYZ." And just as with a bacterium or the viceroy, says Holland, those rules encode a model of the company's world and a prediction:
In the cognitive realm, says Holland, anything we call a "skill" or "expertise" is an implicit model, or more precisely, a huge, interlocking set of standard operating procedures that have been inscribed on the nervous system and refined by years of experience.
Holland's favorite example of implicit expertise is the skill of the medieval architects who created the great Gothic cathedrals. They had no way to calculate forces or load tolerances, or to do anything else that a modern architect might do. Modern physics and structural analysis didn't exist in the twelfth century. Instead, they built those high, vaulted ceilings and massive flying buttresses using standard operating procedures passed down from master to apprentice: rules of thumb that gave them a sense of which structures would stand up and which would collapse. Their model of physics was completely implicit and intuitive.
Ordinarily, for example, we think of prediction as being something that humans do consciously, based on some explicit model of the world.
DNA itself is an implicit model: "Under the conditions we expect to find," say the genes, "the creature we specify has a chance of doing well." Human culture is an implicit model, a rich complex of myths and symbols that implicitly define a people's beliefs about their world and their rules for correct behavior.
where do the models come from? How can any system, natural or artificial, learn enough about its universe to forecast future events?
Most models are quite obviously not conscious: witness the nutrient-seeking bacterium, which doesn't even have a brain.
feedback from the environment. This was Darwin's great insight, that an agent can improve its internal models without any paranormal guidance whatsoever. It simply has to try the models out, see how well their predictions work in the real world, and, if it survives the experience, adjust the models to do better the next time. In biology, of course, the agents are individual organisms, the feedback is provided by natural selection, and the steady improvement of the models is called evolution. But in cognition, the process is essentially the same: the agents are individual minds, the feedback comes from teachers and direct experience, and the improvement is called learning.
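That predict-test-adjust loop can be sketched in a few lines. The toy "world" below, with one hidden regularity and an agent whose whole internal model is a single number, is my own illustration, not anything from the book.

    # A bare-bones illustration of the feedback loop: hold a crude internal model,
    # test its predictions against the world, and adjust when experience disagrees.
    import random

    random.seed(1)
    TRUE_RATE = 0.8     # hidden regularity in the "world": a signal precedes food 80% of the time
    belief = 0.5        # the agent's entire internal model: its estimate of that rate

    hits = 0
    for trial in range(1000):
        prediction = belief > 0.5                  # predict "food" if the model says it's likely
        outcome = random.random() < TRUE_RATE      # what the world actually does
        hits += prediction == outcome
        belief += 0.05 * ((1.0 if outcome else 0.0) - belief)   # feedback nudges the model

    print(round(belief, 2), hits)   # the model drifts toward ~0.8 and the predictions track it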
there was only one way to pin the ideas down: he would have to build a computer-simulated adaptive agent, just as he had done fifteen years earlier with genetic algorithms.
Learning was as fundamental to cognition as evolution was to biology. And that meant that learning had to be built into the cognitive architecture from the beginning,
Holland's ideal was still the Hebbian neural network, where the neural impulses from every thought strengthen and reinforce the connections that make thinking possible in the first place. Thinking and learning were just two aspects of the same thing in the brain, Holland was convinced. And he wanted to capture that fundamental insight in his adaptive agent.
He was still convinced that concepts had to be understood in Hebbian terms, as emergent structures growing from some deeper neural substrate that is constantly being adjusted and readjusted by input from the environment.
if the system has been told what to do in advance, then it's a fraud to call the thing artificial intelligence: the intelligence isn't in the program but in the programmer. No, Holland wanted control to be learned. He wanted to see it emerging from the bottom up, just as it did from the neural substrate of the brain.
"In contrast to mainstream artificial intelligence, I see competition as much more essential than consistency,"
Consistency is a chimera, because in a complicated world there is no guarantee that experience will be consistent.
"despite all the work in economics and biology, we still haven't extracted what's central in competition." There's a richness there that we've only just begun to fathom. Consider the magical fact that competition can produce a very strong incentive for cooperation, as certain players spontaneously forge alliances and symbiotic relationships with each other for mutual support. It happens at every level and in every kind of complex, adaptive system, from biology to economics to politics. "Competition and cooperation may seem antithetical," he says, "but at some very deep level, they are two sides of the same coin."
When you thought about it, in fact, the Darwinian metaphor and the Adam Smith metaphor fit together quite nicely: firms evolve over time, so why shouldn't classifiers?
The upshot was that the population of rules would change and evolve over time, constantly exploring new regions of the space of possibilities. And there you would have it: by adding the genetic algorithm as a third layer on top of the bucket brigade and the basic rule-based system, Holland could make an adaptive agent that not only learned from experience but could be spontaneous and creative.
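A drastically simplified sketch of those three layers, with many liberties taken: condition-action rules with wildcards, strengths that rise when a rule's action earns a reward, and an occasional genetic step that breeds new rules from strong parents. There is no bucket-brigade chaining of credit here, and the toy environment (the correct action is simply the last bit of the message) is my own invention, nothing like Holland's full system.

    import random

    random.seed(0)

    def matches(pattern, message):
        # '#' is a wildcard; otherwise the bits must agree.
        return all(p == '#' or p == m for p, m in zip(pattern, message))

    def random_rule():
        return {'pattern': ''.join(random.choice('01#') for _ in range(4)),
                'action': random.choice('01'), 'strength': 1.0}

    def breed(a, b):
        cut = random.randrange(1, 4)               # one-point crossover on the condition part
        return {'pattern': a['pattern'][:cut] + b['pattern'][cut:],
                'action': random.choice([a['action'], b['action']]),
                'strength': (a['strength'] + b['strength']) / 2}

    rules = [random_rule() for _ in range(20)]
    for step in range(2000):
        message = ''.join(random.choice('01') for _ in range(4))
        candidates = [r for r in rules if matches(r['pattern'], message)]
        if not candidates:
            continue
        winner = max(candidates, key=lambda r: r['strength'])   # competition, not consistency
        reward = 1.0 if winner['action'] == message[-1] else -0.5
        winner['strength'] = max(0.1, winner['strength'] + reward)
        if step % 200 == 199:                      # the genetic layer: replace the weakest rules
            rules.sort(key=lambda r: r['strength'], reverse=True)
            rules[-2:] = [breed(rules[0], rules[1]), breed(rules[1], rules[2])]

    rules.sort(key=lambda r: r['strength'], reverse=True)
    for r in rules[:3]:
        print(r['pattern'], '->', r['action'], round(r['strength'], 1))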
general cognitive theory of learning, reasoning, and intellectual discovery. As they later recounted in their 1986 book, Induction, all four of them had independently come to believe that such a theory had to be founded on the three basic principles that happened to be the same three that underlay Holland's classifier system: namely, that knowledge can be expressed in terms of mental structures that behave very much like rules; that these rules are in competition, so that experience causes useful rules to grow stronger and unhelpful rules to grow weaker; and that plausible new rules are generated from combinations of old rules.
In psychology this kind of knowledge organization is known as a default hierarchy,
they argued that these three principles ought to cause the spontaneous emergence of default hierarchies as the basic organizational structure of all human knowledge, as indeed they appear to do. The cluster of rules forming a default hierarchy is essentially synonymous with what Holland calls an internal model. We use weak general rules with stronger exceptions to make predictions about how things should be assigned to categories: "If it's streamlined and has fins and lives in the water, then it's a fish," but "If it also has hair and breathes air and is big, then it's a whale."
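The fish-or-whale example maps directly onto a tiny rule system. The sketch below is only an illustration; the attribute names and strength values are invented, and "strongest matching rule wins" stands in for the competition the authors describe.

    # A tiny default-hierarchy sketch: a weak general rule plus a stronger exception.
    RULES = [
        # (condition, conclusion, strength) -- the exception carries more strength
        (lambda x: x["streamlined"] and x["fins"] and x["lives_in_water"], "fish",  1.0),
        (lambda x: x["hair"] and x["breathes_air"] and x["big"],           "whale", 2.0),
    ]

    def classify(creature):
        matching = [(strength, label) for cond, label, strength in RULES if cond(creature)]
        return max(matching)[1] if matching else "unknown"

    trout = {"streamlined": True, "fins": True, "lives_in_water": True,
             "hair": False, "breathes_air": False, "big": False}
    orca  = {"streamlined": True, "fins": True, "lives_in_water": True,
             "hair": True, "breathes_air": True, "big": True}

    print(classify(trout))  # "fish"  -- only the default rule fires
    print(classify(orca))   # "whale" -- the stronger exception overrides the default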
The classifier system had started with nothing. Its initial set of rules had been totally random, the computer equivalent of primordial chaos. And yet, here was this marvelous structure emerging out of the chaos to astonish and surprise them. "We were elated," says Holland. "It was the first case of what someone could really call an emergent model."
the two of them had rather sleepily begun to bat around an approach that might just crack this problem of rational expectations in economics: instead of assuming that your economic agents are perfectly rational, why not just model a bunch of them with Holland-style classifier systems and let them learn from experience like real economic agents?
"You know," he adds, "it's perfectly possible for a scientist to feel that he has what it takes, but that he isn't accepted in the community. John Holland went through that for decades. I certainly felt like that, until I walked into the Santa Fe Institute, and all these incredibly smart people, people I'd only read about, were giving me the impression of 'What took you so long to get here?'" For ten days, he had been talking and listening nonstop. His head was so full of ideas that it hurt. He was exhausted. He needed to catch up on about three weeks of sleep. And he felt as though he were in heaven.
artificial life was analogous to artificial intelligence. The difference was that, instead of using computers to model thought processes, you used computers to model the basic biological mechanisms of evolution and life itself.
something right out of the Vietnam-era counterculture.
the Game of Life wasn't actually a game that you played; it was more like a miniature universe that evolved as you watched. You started out with the computer screen showing a snapshot of this universe: a two-dimensional grid full of black squares that were "alive" and white squares that were "dead." The initial pattern could be anything you liked. But once you set the game going, the squares would live or die from then on according to a few simple rules. Each square in each generation would first look around at its immediate neighbors. If too many of those neighbors were already alive, then in the next generation the square would die of overcrowding. And if too few neighbors were alive, then the square would die of loneliness. But if the number of living neighbors was just right, the square would make it into the next generation: a living square with two or three living neighbors would survive, and a dead square with exactly three living neighbors would be "born." That was all. The rules were nothing but a kind of cartoon biology. But what made the Game of Life wonderful was that when you turned these simple rules into a program, they really did seem to make the screen come alive.
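Those rules fit in a few lines of code. A minimal sketch of one generation of the game follows; the wrap-around edges and the glider used as a starting pattern are my own choices for the sketch, not anything specified in the text.

    # One Game of Life step: survive with 2 or 3 living neighbors,
    # be born with exactly 3, die otherwise.
    def step(grid):
        rows, cols = len(grid), len(grid[0])
        def live_neighbors(r, c):
            return sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
        return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                      or (not grid[r][c] and live_neighbors(r, c) == 3) else 0
                 for c in range(cols)] for r in range(rows)]

    # A glider drifting across an 8x8 universe.
    grid = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r][c] = 1

    for _ in range(4):
        print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), "\n")
        grid = step(grid)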
You could start up the game with a random scattering of live squares, and watch them instantly organize themselves into all manner of coherent structures.
realized that it must have been the Game of Life. There was something alive on that screen.
It was one of those clear, frosty nights when the stars were sort of sparkling. Across the Charles River in Cambridge you could see the Science Museum and all the cars driving around. I thought about the patterns of activity, all the things going on out there. The city was sitting there, just living. And it seemed to be the same sort of thing as the Game of Life. It was certainly much more complex. But it was not necessarily different in kind."
that night of epiphany changed his life. But at the time it was little more than an intuition, a certain feeling he had. "It was one of those things where you have this flash of insight, and then it's gone. Like a thunderstorm, or a tornado, or a tidal wave that comes through and changes the landscape, and then it's past. The actual mental image itself was no longer really there, but it had set me up to feel certain ways about certain things."
among the corals and fishes, he had come to love moving in that third dimension. It was intoxicating. But once he was back in Boston, he'd soon discovered that scuba diving in the cold, brown waters of New England just wasn't the same. So as a substitute he'd tried hang-gliding. And he'd become hooked the first day. Sailing over the world, riding upward from thermal to thermal: this was the ultimate in three dimensions. He became a fanatic, buying his own hang-glider and spending every spare minute aloft. All of which explains why, at the beginning of the summer of 1975, Langton set out for Tucson along with a couple of hang-gliding buddies who were moving to San Diego and who had a truck. Their plan was to spend the next few months making their way across the country at the slowest speed possible, while they went hang-gliding off any hill that looked halfway inviting. And that's exactly what they started to do, working their way down the Appalachians until they came to Grandfather Mountain, North Carolina.
"I had this weird experience of watching my mind come back," he says. "I could see myself as this passive observer back there somewhere. And there were all these things happening in my mind that were disconnected from my consciousness. It was very reminiscent of virtual machines, or like watching the Game of Life. I could see these disconnected patterns self-organize, come together, and merge with me in some way. I don't know how to describe it in any objectively verifiable way, and maybe it was just a figment of all these funny drugs they were giving me, but it was as if you took an ant colony and tore it up, and then watched the ants come back together, reorganize, and rebuild the colony."
I did a lot of non-specific, nondirected generic thinking about biology, physical science, and ideas of the universe, and about how those ideas changed with time. Then there was this scent I talk about. Through all of this, I was always following it, but without any direction.
I very rapidly realized that I was much more interested, not in our specific, current understanding of the universe, but in how our world view had changed through time. What I was really interested in was the history of ideas.
One of Langton's favorite cartoons is a panel of Gary Larson's The Far Side, which shows a fully equipped mountaineer about to descend into an immense hole in the ground. As a reporter holds up a microphone, he proclaims, "Because it is not there!" "That's how I felt," laughs Langton. The more he studied anthropology, he says, the more he sensed that the subject had a gaping hole. "It was a fundamental dichotomy."
the time he left physics he was pursuing what eventually turned into a philosophy-anthropology double major.
So on every side, says Langton, "I was just immersed in this idea of the evolution of information. That quickly became my chief interest. It just smelled right." Indeed, the scent was overpowering. Somehow, he says, he knew he was getting very close.
If you could create a real theory of cultural evolution, as opposed to some pseudoscientific justification for the status quo, he reasoned, then you might be able to understand how cultures really worked, and among other things, actually do something about war and social inequity.
wasn't just cultural evolution, Langton realized. It was biological evolution, intellectual evolution, cultural evolution, concepts combining and recombining and leaping from mind to mind over miles and generations, all wrapped together.
There was a unity here, a common story that involved elements coming together, structures evolving, and complicated systems acquiring the capacity to grow and be alive. And if he could only learn to look at that unity in the right way, if he could only abstract its laws of operation into the right kind of computer program, then he would have captured everything that was important about evolution.
His basic argument was that biological and cultural evolution were simply two aspects of the same phenomenon, and that the "genes" of culture were beliefs, which in turn were recorded in the basic "DNA" of culture: language. In retrospect it was a pretty naive attempt, he says. But it was his manifesto, and
"I kept getting this look you get when they think you're a crackpot," he says. "It was very discouraging, especially coming as it did after the accident, when I felt unsure of what I was or who I was." Objectively, Langton had made enormous progress by this point; he could concentrate, he was strong, and he could run five miles at a stretch. But to himself, he still felt bizarre, grotesque, and mentally impaired. "I couldn't tell. Because of this neurological scrambling, I couldn't be sure of any of my thoughts anymore. So I couldn't be sure of this one. And it wasn't helping that nobody understood what I was trying to say."
And I kept seeing things out there that related to it. I didn't know anything about nonlinear dynamics at the time, but there were all these intuitions for emergent properties, the interaction of lots of parts, the kinds of things that the group could do collectively that the individual couldn't."
"By now I'd had the epiphany and I was a religious convert," he says. "This was clearly my life from now on. I knew I wanted to go on and do a Ph.D. in this general area. It's just that the path to take wasn't obvious."
Not knowing even how to begin, Langton decided it was time to go to the University of Arizona library, where he could do a computerized literature search. He tried the key words "self-reproduction." "I got zillions of things back!" he says.
"This was right. When I found all that, I said, 'Hey, I may be crazy, but these people are at least as crazy as I am!'"
was all there: evolution, the Game of Life, self-assembly, emergent reproduction, everything. Von Neumann, he discovered, had gotten interested in the issue of self-reproduction back in the late 1940s, in the aftermath of his work with Burks and Goldstine on the design of a programmable digital computer.
To get a feel for the issues, von Neumann started out with a thought experiment. Imagine a machine that floats around on the surface of a pond, he said, together with lots of machine parts. Furthermore, imagine that this machine is a universal constructor: given a description of any machine, it will paddle around the pond until it locates the proper parts, and then construct that machine. In particular, given a description of itself, it will construct a copy of itself. Now that sounds like self-reproduction, said von Neumann. But it isn't; at least, not quite. The newly created copy of the first machine will have all the right parts. But it won't have a description of itself, which means that it won't be able to make any further copies of itself. So von Neumann also postulated that the original machine should have a description copier: a device that will take the original description, duplicate it, and then attach the duplicate description to the offspring machine. Once that happens, he said, the offspring will have everything it needs to continue reproducing indefinitely. And then that will be self-reproduction. As a thought experiment, von Neumann's analysis of self-reproduction was simplicity itself. To restate it in a slightly more formal way, he was saying that the genetic material of any self-reproducing system, natural or artificial, has to play two fundamentally different roles. On the one hand, it has to serve as a program, a kind of algorithm that can be executed during the construction of the offspring. On the other hand, it has to serve as passive data, a description that can be duplicated and given to the offspring. But as a scientific prediction, that analysis turned out to be breathtaking: when Watson and Crick finally unraveled the molecular structure of DNA a few years later, in 1953, they discovered that it fulfilled von Neumann's two requirements precisely.
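Software gives a compact way to see that dual role. The quine below is a standard programming curiosity, not anything of von Neumann's: it holds one string that is both executed, to build the "offspring" program text, and copied verbatim, so the offspring carries its own description.

    # The two lines below print an exact copy of themselves (comments aside):
    # the string s is executed (via format) to construct the offspring's text,
    # and copied verbatim (via repr) so the offspring has its own description.
    s = 's = {!r}\nprint(s.format(s))'
    print(s.format(s))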
One of the highlights was von Neumann's proof that there existed at least one cellular automaton pattern that could indeed reproduce itself. The pattern he'd found was immensely complicated, requiring a huge lattice and 29 different states per cell. It was far beyond the simulation capacity of any existing computer. But the very fact of its existence settled the essential question of principle: self-reproduction, once considered to be an exclusive characteristic of living things, could indeed be achieved by machines.
Langton wasted no time. By that point he'd learned that Michigan's Computer and Communication Sciences program was famous for exactly the kind of perspective that he was after. "To them," says Langton, "information processing was writ large across all of nature. However information is processed is worthwhile understanding. So I applied under that philosophy." By and by he got a letter back from Professor Gideon Frieder, the chairman of the department. "Sorry," it said, "you don't have the proper background." Application denied. Langton was enraged. He fired back a seven-page letter, the gist of which was, What the hell!? "Here is your whole philosophy, the purpose you claim to exist, live, and breathe for. This is exactly what I've been pursuing. And you're telling me no?" A few weeks afterward Frieder wrote again, saying in effect, "Welcome aboard." As he told Langton later, "I just liked the idea of having somebody around who would say that to the chairman."
Second-order phase transitions are much less common in nature, Langton learned. (At least, they are at the temperatures and pressures humans are used to.) But they are much less abrupt, largely because the molecules in such a system don't have to make that either-or choice. They combine chaos and order. Above the transition temperature, for example, most of the molecules are tumbling over one another in a completely chaotic, fluid phase. Yet tumbling among them are myriads of submicroscopic islands of orderly, latticework solid, with molecules constantly dissolving and recrystallizing around the edges. These islands are neither very big nor very long-lasting, even on a molecular scale. So the system is still mostly chaos. But as the temperature is lowered, the largest islands start to get very big indeed, and they begin to live for a correspondingly long time. The balance between chaos and order has begun to shift. Of course, if the temperature were taken all the way past the transition, the roles would reverse: the material would go from being a sea of fluid dotted with islands of solid, to being a continent of solid dotted with lakes of fluid. But right at the transition, the balance is perfect: the ordered structures fill a volume precisely equal to that of the chaotic fluid. Order and chaos intertwine in a complex, ever-changing dance of submicroscopic arms and fractal filaments. The largest ordered structures propagate their fingers across the material for arbitrarily long distances and last for an arbitrarily long time. And nothing ever really settles down.
"It reminded me of the feelings I experienced when I learned to scuba dive in Puerto Rico," he explains. "For most of our dives we were fairly close to shore, where the water was crystal clear and you could see the bottom perfectly about 60 feet down. However, one day our instructor took us to the edge of the continental shelf, where the 60-foot bottom gave way to an 80-degree slope that disappeared into the depths; I believe at that point the transition was to about 2000 feet. It made you realize that all the diving you had been doing, which had certainly seemed adventurous and daring, was really just playing around on the beach. The continental shelves are like puddles compared to 'The Ocean.'" "Well, life emerged in the oceans," he adds, "so there you are at the edge, alive and appreciating that enormous fluid nursery. And that's why 'the edge of chaos' carries for me a very similar feeling: because I believe life also originated at the edge of chaos. So here we are at the edge, alive and appreciating the fact that physics itself should yield up such a nursery ..."
the only rules that allow you to build a universal computer are those that are in Class IV, like the Game of Life. These are the only rules that provide enough stability to store information and enough fluidity to send signals over arbitrary distances, the two things that seem essential for computation. And, of course, these are also the rules that sit right in this phase transition at the edge of chaos.
one of the interesting questions we can ask about living things is, Under what conditions do systems whose dynamics are dominated by information processing arise from things that just respond to physical forces? When and where does the processing of information and the storage of information become important?"
phase transitions, complexity, and computation were all wrapped together, Langton realized. Or, at least, they were in the von Neumann universe. But Langton was convinced that the connections held true in the real world as well, in everything from social systems to economies to living cells. Because once you got to computation, you were getting awfully close to the essence of life itself. "Life is based to an incredible degree on its ability to process information," he says. "It stores information. It maps sensory information. And it makes some complex transformations on that information to produce action."
In the von Neumann universe, likewise, the Class IV rules might eventually produce a frozen configuration, or they might not. But either way, Langton says, the phase transition at the edge of chaos would correspond to what computer scientists call "undecidable" algorithms. These are the algorithms that might halt very quickly with certain inputs (equivalent to starting off the Game of Life with a known stable structure), but they might run on forever with other inputs. And the point is, you can't always tell ahead of time which it will be, even in principle. In fact, says Langton, there's even a theorem to that effect: the "undecidability theorem" proved by the British logician Alan Turing back in the 1930s. Paraphrased, the theorem essentially says that no matter how smart you think you are, there will always be algorithms that do things you can't predict in advance. The only way to find out what they will do is to run them. And, of course, those are exactly the kind of algorithms you want for modeling life and intelligence. So it's no wonder the Game of Life and other Class IV cellular automata seem so lifelike. They exist in the only dynamical regime where complexity, computation, and life itself are possible: the edge of chaos.
Langton now had four very detailed analogies:

    Cellular automata:   Classes I & II   "Class IV"             Class III
    Dynamical systems:   Order            "Complexity"           Chaos
    Matter:              Solid            "Phase transition"     Fluid
    Computation:         Halting          "Undecidable"          Nonhalting

along with a fifth and far more hypothetical one:

    Life:                Too static       "Life/Intelligence"    Too noisy

But what did they all add up to? Just this, Langton decided: "solid" and "fluid" are not just two fundamental phases of matter, as in water versus ice. They are two fundamental classes of dynamical behavior in general, including dynamical behavior in such utterly nonmaterial realms as the space of cellular automaton rules or the space of abstract algorithms. Furthermore, he realized, the existence of these two fundamental classes of dynamical behavior implies the existence of a third fundamental class: "phase transition" behavior at the edge of chaos, where you would encounter complex computation and quite possibly life itself.
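Langton's lambda parameter, mentioned a little further on, makes this concrete for one-dimensional cellular automata: lambda is roughly the fraction of neighborhood configurations whose rule-table entry is a non-quiescent state. The sketch below is only a rough illustration of that idea; the random rule tables, grid size, and the particular lambda values are my own choices, not Langton's actual experiment.

    # Rough sketch of the lambda idea for 1-D, two-state cellular automata:
    # lambda ~ the fraction of neighborhoods mapped to the "live" state.
    # Small lambda tends to freeze (order), large lambda tends to look like
    # noise (chaos); more complex behavior tends to show up in between.
    import random

    NEIGHBORHOODS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

    def random_rule(lam):
        # Each neighborhood maps to 1 with probability lam, so the expected
        # fraction of non-quiescent entries is lam.
        return {n: (1 if random.random() < lam else 0) for n in NEIGHBORHOODS}

    def step(cells, rule):
        n = len(cells)
        return [rule[(cells[i - 1], cells[i], cells[(i + 1) % n])] for i in range(n)]

    def run(lam, width=64, steps=24, seed=0):
        random.seed(seed)
        rule = random_rule(lam)
        cells = [random.randint(0, 1) for _ in range(width)]
        print(f"lambda = {lam}")
        for _ in range(steps):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells, rule)

    run(0.10)   # usually freezes into a fixed or empty pattern
    run(0.45)   # intermediate values are where richer structures tend to appear
    run(0.90)   # usually churns like static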
he did have this irresistible vision of life as eternally trying to keep its balance on the edge of chaos, always in danger of falling off into too much order on the one side, and too much chaos on the other. Maybe that's what evolution is, he thought: just a process of life's learning how to seize control of more and more of its own parameters, so that it has a better and better chance to stay balanced on the edge.
"What are you working on?" asked Doyne Farmer. "I don't really know how to describe it," admitted Langton. "I've been calling it artificial life." "Artificial life!" exclaimed Farmer. "Wow, we gotta talk!" So they had talked. A lot. After the conference, moreover, they had kept on talking via electronic mail. And Farmer had made it a point to bring Langton out to Los Alamos on several occasions to give talks and seminars. (Indeed, it was at the "Evolution, Games, and Learning" conference in May 1985 that Langton gave his first public discussion of his lambda parameter and the phase transition work. Farmer, Wolfram, Norman Packard, and company were profoundly impressed.) This was the same period when Farmer was busy with Packard and Stuart Kauffman on the autocatalytic set simulation for the origin of life, not to mention helping to get the Santa Fe Institute up and running, and he was getting deeply involved with issues of complexity himself.
Langton had no idea what most of the speakers would say until they got up to say it. "The meeting was a very emotional experience for me," he says. "You'll never re-create that feeling. Everybody had been doing artificial life on his own, on the side, often at home. And everybody had had this feeling of, 'There must be something here.' But they didn't know who to turn to. It was a whole collection of people who'd had the same uncertainties, the same doubts, who'd wondered if they were crazy. And at the meeting we almost embraced each other. There was this real camaraderie, this sense of 'I may be crazy, but so are all these other people.'" There were no breakthroughs in any of the presentations, he says. But you could see the potential in almost all of them. The talks ranged from the collective behavior of a simulated ant colony, to the evolution of digital ecosystems made out of assembly-language computer code, to the power of sticky protein molecules to assemble themselves into a virus.
"I was so hyped up, it was like an altered state of consciousness," he says. "I have this image of a sea of gray matter, with ideas swimming around, ideas recombining, ideas leaping from mind to mind." For that space of five days, he says, "it was like being incredibly alive."
had been enormously impressed by Holland's genetic algorithms, and classifier systems, and the boids, and so on. I thought a good deal about them, and the possibilities they opened up. My instinct was that this was an answer. The problem was, What was the question in economics?
In particular, he says, he wanted to see the program take some of the classical problems in economics, the hoary old chestnuts of the field, and see how they changed when you looked at them in terms of adaptation, evolution, learning, multiple equilibria, emergence, and complexity: all the Santa Fe themes. Why, for example, are there speculative bubbles and crashes in the stock market? Or why is there money? (That is, how does one particular good such as gold or wampum become widely accepted as a medium of exchange?)
"Either someone gets glassy-eyed or the communication begins," he says. "And if it does, then you're exercising a form of power that's extremely compelling: intellectual power. If you can get a person who understands the concept somewhere down in the bowels of the brain, where that same idea's been sitting forever, then you have a grasp on that person. You don't do it by physical coercion, but by a kind of intellectual appeal that amounts to a coercion. You grab them by the brains instead of by the balls."
we would thrash out what economists could do about bounded rationality." That is, what would really happen to economic theory if they quit assuming that people could instantaneously compute their way to the solution of any economic problem, even if that problem were just as hard as chess?
the way to set the dial of rationality was to leave it alone. Let the agents set it by themselves.
there is only one way to be perfectly rational, while there are an infinity of ways to be partially rational. So which way is correct for human beings? "Where," he asked, "do you set the dial of rationality?"
If people are perfectly rational, then theorists can say exactly how they will react. But what would perfect irrationality be like? Hahn wondered.
No matter where you put them, in fact, the agents would try to do something. So unlike the neoclassical theory, which has almost nothing at all to say about dynamics and change in the economy, a model full of adaptive agents would come with dynamics already built in.
So all the agents could start off as perfectly stupid. That is, they would just make random, blundering decisions. But they would get smarter and smarter as they reacted to one another."
Instead of emphasizing decreasing returns, static equilibrium, and perfect rationality, as in the neoclassical view, the Santa Fe team would emphasize increasing returns, bounded rationality, and the dynamics of evolution and learning. Instead of basing their theory on assumptions that were mathematically convenient, they would try to make models that were psychologically realistic. Instead of viewing the economy as some kind of Newtonian machine, they would see it as something organic, adaptive, surprising, and alive.
"In fact," Arthur adds, "the key intellectual influence that whole first year was machine learning in general and John Holland in particular: not condensed-matter physics, not increasing returns, not computer science, but learning and adaptation."
"Economics, as it is usually practiced, operates in a purely deductive mode," he says. "Every economic situation is first translated into a mathematical exercise, which the economic agents are supposed to solve by rigorous, analytical reasoning. But then here were Holland, the neural net people, and the other machine-learning theorists. And they were all talking about agents that operate in an inductive mode, in which they try to reason from fragmentary data to a useful internal model." Induction is what allows us to infer the existence of a cat from the glimpse of a tail vanishing around a corner. Induction is what allows us to walk through the zoo and classify some exotic feathered creature as a bird, even though we've never seen a scarlet-crested cockatoo before. Induction is what allows us to survive in a messy, unpredictable, and often incomprehensible world.
They try to fill in the gaps on the fly by forming hypotheses, by making analogies, by drawing from past experience, by using heuristic rules of thumb. Whatever works, works, even if they don't understand why. And for that very reason, induction cannot depend upon precise, deductive logic.
Holland's answer was essentially that you learn in that environment because you have to: "Evolution doesn't care whether problems are well defined or not." Adaptive agents are just responding to a reward,
they could operate in an environment that was not well defined at all.
Moreover, because the system was always testing those hypotheses to find out which ones were useful and led to rewards, it could continue to learn even in the face of crummy, incomplete information, and even while the environment was changing in unexpected ways.
Action produces information.
"But its behavior isn't optimal!" the economists complained, having convinced themselves that a rational agent is one who optimizes his "utility function." "Optimal relative to what?" Holland replied. Talk about your ill-defined criterion: in any real environment, the space of possibilities is so huge that there is no way an agent can find the optimum, or even recognize it. And that's before you take into account the fact that the environment might be changing in unforeseen ways.
"This whole induction business fascinated me," says Arthur. "Here you could think about doing economics where the problem facing the economic agent was not even well defined, where the environment is not well defined, where the environment might be changing, where the changes were totally unknown. And, of course, you just had to think for about a tenth of a second to realize, that's what life is all about. People routinely make decisions in contexts that are not well defined, without even realizing it. You muddle through, you adapt ideas, you copy, you try what worked in the past, you try out things. And, in fact, economists had talked about this kind of behavior before. But we were finding ways to make it analytically precise, to build it into the heart of the theory."
"Arrow, Hahn, Holland, myself, maybe half a dozen of us. We had just begun to realize that if you do economics this way, if there was this Santa Fe approach, then there might be no equilibrium in the economy at all. The economy would be like the biosphere: always evolving, always changing, always exploring new territory." "Now, what worried us was that it didn't seem possible to do economics in that case," says Arthur. "Because economics had come to mean the investigation of equilibria."
Frankie Hahn said, 'If things are not repeating, if things are not in equilibrium, what can we, as economists, say? How could you predict anything? How could you have a science?'"
Look at meteorology, he told them. The weather never settles down. It never repeats itself exactly. It's essentially unpredictable more than a week or so in advance. And yet we can comprehend and explain almost everything that we see up there.
we have a real science of weather, without full prediction. And we can do it because prediction isn't the essence of science. The essence is comprehension and explanation.
"Well, Holland's answer was to me a revelation," says Arthur. "It left me almost gasping. I had been thinking for almost ten years that much of the economy would never be in equilibrium. But I couldn't see how to 'do' economics without equilibrium. John's comment cut through the knot for me. After that it seemed straightforward."
"A lot of people, including myself, had naively assumed that what we'd get from the physicists and the machine-learning people like Holland would be new algorithms, new problem-solving techniques, new technical frameworks. But what we got was quite different. What we got was very often a new attitude, a new approach, a whole new world view."
Evolution, of course, was a lot more than just random mutation and natural selection. It was also emergence and self-organization. And that, despite the best efforts of Stuart Kauffman, Chris Langton, and a great many other people, was something that no one understood very well.
Holland also realized that he was going to have to face up to the major philosophical flaw in classifier systems. In the spontaneous emergence paper, he says, the spontaneity had been real, and the emergence had been completely intrinsic. But in classifier systems, for all their learning ability and for all their power to discover emergent clusters of rules, there was still a deus ex machina; the systems still depended on the shadowy hand of the programmer. "A classifier system gets a payoff only because I assign winning or losing," says Holland. It was something that had always bugged him. Leaving aside questions of religion, he says, the real world seems to get along just fine without a cosmic referee.
Any given organism's ability to survive and reproduce depends on what niche it is filling, what other organisms are around, what resources it can gather, even what its past history has been.
Indeed, evolutionary biologists consider it so important that they've made up a special word for it: organisms in an ecosystem don't just evolve, they coevolve. Organisms don't change by climbing uphill to the highest peak of some abstract fitness landscape, the way biologists of R. A. Fisher's generation had it. (The fitness-maximizing organisms of classical population genetics actually look a lot like the utility-maximizing agents of neoclassical economics.) Real organisms constantly circle and chase one another in an infinitely complex dance of coevolution.
In the human world, moreover, the dance of coevolution has produced equally exquisite webs of economic and political dependencies: alliances, rivalries, customer-supplier relationships, and on and on. It is the dynamic that underlay Arthur's vision of an economy under glass, in which artificial economic agents would adapt to each other as you watched. It is the dynamic that underlay Arthur and Kauffman's analysis of autocatalytic technology change. It is the dynamic that underlies the affairs of nations in a world that has no central authority.
he was ever going to understand these phenomena at the deepest level, he was going to have to start by eliminating this business of outside reward.
he wanted to understand a deep paradox in evolution: the fact that the same relentless competition that gives rise to evolutionary arms races can also give rise to symbiosis and other forms of cooperation. Indeed, it was no accident that cooperation in its various guises actually underlay quite a few items on Holland's list. It was a fundamental problem in evolutionary biology, not to mention economics, political science, and all of human affairs. In a competitive world, why do organisms cooperate at all? Why do they leave themselves open to "allies" who could easily turn on them?
Or in the natural world, consider that an overly trusting creature might very well get eaten. So once again: Why should any organism ever dare to cooperate with another?
Nice guys, or more precisely, nice, forgiving, tough, and clear guys, can indeed finish first.
TIT FOR TAT would start out by cooperating on the first move, and from there on out would do exactly what the other program had done on the move before. That is, the TIT FOR TAT strategy incorporated the essence of the carrot and the stick. It was "nice" in the sense that it would never defect first. It was "forgiving" in the sense that it would reward good behavior by cooperating the next time. And yet it was "tough" in the sense that it would punish uncooperative behavior by defecting the next time. Moreover, it was "clear" in the sense that its strategy was so simple that the opposing programs could easily figure out what they were dealing with.
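The strategy is short enough to state in a line of code. Here is a minimal sketch of TIT FOR TAT in an iterated prisoner's dilemma, using the conventional payoff numbers (3 for mutual cooperation, 1 for mutual defection, 5 and 0 when a defector exploits a cooperator); the opponent and round count are arbitrary choices, not Axelrod's tournament setup.

    # 'C' = cooperate, 'D' = defect; payoffs are (my points, their points).
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(my_history, their_history):
        # Nice: cooperate first. Tough and forgiving: echo the opponent's last move.
        return 'C' if not their_history else their_history[-1]

    def always_defect(my_history, their_history):
        return 'D'

    def play(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a = strategy_a(hist_a, hist_b)
            b = strategy_b(hist_b, hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))     # mutual cooperation: (30, 30)
    print(play(tit_for_tat, always_defect))   # punished after round one: (9, 14)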
In his 1984 book, The Evolution of Cooperation, Axelrod pointed out that TIT FOR TAT interaction can lead to cooperation in a wide variety of social settings, including some of the most unpromising situations imaginable. His favorite example was the "live-and-let-live" system that spontaneously developed during World War I,
Axelrod also pointed out that TIT FOR TAT interactions lead to cooperation in the natural world even without the benefit of intelligence.
This TIT FOR TAT mechanism for the origin of cooperation was exactly the sort of thing Holland meant when he said that people at the institute ought to be looking for the analog of "weather fronts" in the social sciences.
With it he's been able to demonstrate both the evolution of cooperation and the evolution of predator-prey relationships simultaneously, in the same ecosystem. And that success has inspired him to start work on still more sophisticated variations of Echo: "There's a
Axelrod produced a computer simulation of this scenario in collaboration with Holland's then-graduate student Stephanie Forrest. The question was whether a population of individuals coevolving via the genetic algorithm could discover TIT FOR TAT. And the answer was yes: in the computer runs, either TIT FOR TAT or a strategy very much like it would appear and spread through the population very quickly.
They all had "trade," in that there were goods being exchanged in one way or another. They all had "resource transformation," such as might be produced by enzymes or production processes. And they all had "mate selection," which acted as a source of technological innovation.
physicists were telling us that there were three ways now to proceed in science: mathematical theory, laboratory experiment, and computer modeling. You have to go back and forth. You discover something with a computer model that seems out of whack, and then you go and do theory to try to understand it. And then with the theory, you go back to the computer or to the laboratory for more experiments. To many of us, it seemed as though we could do the same thing in economics with great profit. We began to realize that we'd been restricting ourselves in economics unnaturally, by exploring only problems that might yield to mathematical analysis. But now that we were getting into this inductive world, where things started to get very complicated, we could extend ourselves to problems that might only be studied by computer experiment. I saw this as a necessary development, and a liberation."
neoclassical theory finds Wall Street utterly incomprehensible. Since all economic agents are perfectly rational, goes the argument, then all investors must be perfectly rational. Moreover, since these perfectly rational investors also have exactly the same information about the expected earnings of all stocks infinitely far into the future, they will always agree about what every stock is worth: namely, the "net present value" of its future earnings when they are discounted by the interest rate. So this perfectly rational market will never get caught up in speculative bubbles and crashes; at most it will go up or down a little bit as new information becomes available about the various stocks' future earnings. But either way, the logical conclusion is that the floor of the New York Stock Exchange must be a very quiet place. In reality, of course, the floor of the New York Stock Exchange is a barely controlled riot. The place is wracked by bubbles and crashes all the time, not to mention fear, uncertainty, euphoria, and mob psychology in every conceivable combination.
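For readers who want the "net present value" phrase unpacked: it is just future earnings discounted back to today at the interest rate. The earnings stream and the 5 percent rate below are invented purely for illustration.

    # A tiny illustration of net present value: each expected future earning
    # is discounted by the interest rate for the year in which it arrives.
    def net_present_value(expected_earnings, interest_rate):
        return sum(earning / (1 + interest_rate) ** year
                   for year, earning in enumerate(expected_earnings, start=1))

    print(round(net_present_value([5.0, 5.0, 5.0, 105.0], 0.05), 2))   # 100.0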
Its credo is that life is not a property of matter per se, but the organization of that matter.
Its promise is that by exploring other possible biologies in a new medium (computers and perhaps robots), artificial life researchers can achieve what space scientists have achieved by sending probes to other planets: a new understanding of our own world through a cosmic perspective on what happened on other worlds. "Only when we are able to view life-as-we-know-it in the context of life-as-it-could-be will we really understand the nature of the beast," Langton declared. The idea of viewing life in terms of its abstract organization is perhaps the single most compelling vision to come out of the workshop, he said. And it's no accident that this vision is so closely associated with computers: they share many of the same intellectual roots. Human beings have been searching for the secret of automata, machines that can generate their own behavior, at least since the time of the Pharaohs, when Egyptian craftsmen created clocks based on the steady drip of water through a small orifice.
the first century A.D., Hero of Alexandria produced his treatise Pneumatics, in which he described (among other things) how pressurized air could generate simple movements in various gadgets shaped like animals and humans. In Europe, during the great age of clockworks more than a thousand years later, medieval and Renaissance craftsmen devised increasingly elaborate figures known as "jacks," which would emerge from the interior of the clock to strike the hours; some of their public clocks eventually grew to include large numbers of figures that acted out entire plays. And during the Industrial Revolution, the technology of clockwork automata gave rise to the still more sophisticated technology of process control, in which factory machines were guided by intricate sets of rotating cams and interlinked mechanical arms.
That effort culminated in the early decades of the twentieth century with the work of Alonzo Church, Kurt Gödel, Alan Turing, and others, who pointed out that the essence of a mechanical process, the "thing" responsible for its behavior, is not a thing at all. It is an abstract control structure, a program that can be expressed as a set of rules without regard to the material the machine is made of. Indeed, said Langton, this abstraction is what allows you to take a piece of software from one computer and run it on another computer: the "machineness" of the machine is in the software, not the hardware. And once you've accepted that, he said, echoing his own epiphany at Massachusetts General Hospital nearly eighteen years before, then it's a very small step to say that the "aliveness" of an organism is also in the software, in the organization of the molecules, not the molecules themselves.
living systems are machines, all right, but machines with a very different kind of organization from the ones we're used to. Instead of being designed from the top down, the way a human engineer would do it, living systems always seem to emerge from the bottom up, from a population of much simpler systems.
"The most surprising lesson we have learned from simulating complex physical systems on computers is that complex behavior need not have complex roots," he wrote, complete with italics. "Indeed, tremendously interesting and beguilingly complex behavior can emerge from collections of extremely simple components."
the statement applied equally well to one of the most vivid demonstrations at the artificial life workshop: Craig Reynolds' flock of "boids." Instead of writing global, top-down specifications for how the flock should behave, or telling his creatures to follow the lead of one Boss Boid, Reynolds had used only the three simple rules of local, boid-to-boid interaction. And it was precisely that locality that allowed his flock to adapt to changing conditions so organically. The rules always tended to pull the boids together, in somewhat the same way that Adam Smith's Invisible Hand tends to pull supply into balance with demand. But just as in the economy, the tendency to converge was only a tendency, the result of each boid reacting to what the other boids were doing in its immediate neighborhood. So when a flock encountered an obstacle such as a pillar, it had no trouble splitting apart and flowing to either side as each boid did its own thing.
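The three local rules are usually described as separation, alignment, and cohesion. The sketch below is a paraphrase of that idea, not Reynolds' program; the radii, weights, and update scheme are arbitrary choices made for brevity.

    # Compact boids-style sketch: each boid reacts only to neighbors within a
    # small radius, using separation, alignment, and cohesion. All constants
    # here are illustrative assumptions.
    import math
    import random

    random.seed(0)

    class Boid:
        def __init__(self):
            self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
            self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(boids, radius=15.0, crowd=5.0):
        updates = []
        for b in boids:
            near = [o for o in boids if o is not b
                    and math.hypot(o.x - b.x, o.y - b.y) < radius]
            if not near:
                updates.append((b.vx, b.vy))
                continue
            # Cohesion: steer toward the neighbors' center of mass.
            cx = sum(o.x for o in near) / len(near) - b.x
            cy = sum(o.y for o in near) / len(near) - b.y
            # Alignment: nudge velocity toward the neighbors' average heading.
            ax = sum(o.vx for o in near) / len(near) - b.vx
            ay = sum(o.vy for o in near) / len(near) - b.vy
            # Separation: push away from any neighbor that is too close.
            sx = sum(b.x - o.x for o in near if math.hypot(o.x - b.x, o.y - b.y) < crowd)
            sy = sum(b.y - o.y for o in near if math.hypot(o.x - b.x, o.y - b.y) < crowd)
            updates.append((b.vx + 0.01 * cx + 0.05 * ax + 0.05 * sx,
                            b.vy + 0.01 * cy + 0.05 * ay + 0.05 * sy))
        for b, (vx, vy) in zip(boids, updates):
            b.vx, b.vy = vx, vy
            b.x, b.y = b.x + vx, b.y + vy

    flock = [Boid() for _ in range(30)]
    for _ in range(200):
        step(flock)
    print(len(flock), "boids flocking; no boid ever saw a global plan")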
Try doing that with a single set of top-level rules, said Langton. The system would be impossibly cumbersome and complicated, with the rules telling each boid precisely what to do in every conceivable situation.
since it's effectively impossible to cover every conceivable situation, top-down systems are forever running into combinations of events they don't know how to handle. They tend to be touchy and fragile, and they all too often grind to a halt in a dither of indecision.
The theme was heard over and over again at the workshop, said Langton: the way to achieve lifelike behavior is to simulate populations of simple units instead of one big complex unit. Use local control instead of global control. Let the behavior emerge from the bottom up, instead of being specified from the top down. And while you're at it, focus on ongoing behavior instead of the final result. As Holland loved to point out, living systems never really settle down.
there was a third great idea to be distilled from the workshop presentations: the possibility that life isn't just like a computation, in the sense of being a property of the organization rather than the molecules. Life literally is a computation.
For example, Why is life quite literally full of surprises? Because, in general, it is impossible to start from a given set of GTYPE rules and predict what their PTYPE behavior will be, even in principle. This is the undecidability theorem, one of the deepest results of computer science: unless a computer program is utterly trivial, the fastest way to find out what it will do is to run it and see. There is no general-purpose procedure that can scan the code and the input and give you the answer any faster than that. That's why the old saw about computers only doing what their programmers tell them to do is both perfectly true and virtually irrelevant; any piece of code that's complex enough to be interesting will always surprise its programmers. That's why any decent software package has to be endlessly tested and debugged before it is released, and that's why the users always discover very quickly that the debugging was never quite perfect. And, most important for artificial life purposes, that's why a living system can be a biochemical machine that is completely under the control of a program, a GTYPE, and yet still have a surprising, spontaneous behavior in the PTYPE.
in the poorly defined, constantly changing environments faced by living systems, said Langton, there seems to be only one way to proceed: trial and error, also known as Darwinian natural selection. The process may seem terribly cruel and wasteful, he pointed out. In effect, nature does its programming by building a lot of different machines with a lot of randomly differing GTYPEs, and then smashing the ones that don't work very well. But, in fact, that messy, wasteful process may be the best that nature can do. And by the same token, John Holland's genetic algorithm approach may be the only realistic way of programming computers to cope with messy, ill-defined problems. "It is quite likely that this is the only efficient, general procedure that could find GTYPEs with specific PTYPE traits,"
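A minimal genetic-algorithm sketch, in the spirit of the passage rather than Holland's own code: GTYPEs are bit strings, the PTYPE trait being selected for is simply the number of 1s (an assumed toy fitness), and each generation the worse half is "smashed" and the survivors breed with crossover and mutation.

```python
# Toy genetic algorithm: bit-string GTYPEs, one-max fitness as the PTYPE trait.
import random

L, POP, GENS = 32, 60, 80

def fitness(gtype):                      # PTYPE score of a GTYPE
    return sum(gtype)

def crossover(a, b):                     # recombine two parent GTYPEs
    cut = random.randrange(1, L)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.01):                # random copying errors
    return [bit ^ (random.random() < rate) for bit in g]

population = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]    # discard the worse half
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children

print("best PTYPE score:", fitness(max(population, key=fitness)))
```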
True, computer viruses lived their lives entirely within the cyberspace of computers and computer networks. They didn't have any independent existence out in the material world. But that didn't necessarily rule them out as living things. If life was really just a question of organization, as Langton claimed, then a properly organized entity would literally be alive, no matter what it was made of.
The future effects of the changes we make now are unpredictable, he pointed out, even in principle. Yet we are responsible for the consequences, nonetheless. And that, in turn, meant that the implications of artificial life had to be debated in the open, with public input.
"By the middle of this century," he wrote, "mankind had acquired the power to extinguish life on Earth. By the middle of the next century, he will be able to create
Furthermore, he said, suppose that you could create life. Then suddenly you would be involved in something a lot bigger than some technical definition of living versus nonliving. Very quickly, in fact, you would find yourself engaged in a kind of empirical theology. Having created a living creature, for example, would you then have the right to demand that it worship you and make sacrifices to you? Would you have the right to act as its god? Would you have the right to destroy it if it didn't behave the way you wanted it to?
"Chris was definitely worth it," he says. "People like him, who have a real dream, a vision of what they want to do, are rare. Chris hadn't learned to be very efficient. But I think he had a good vision, one that was really needed. And I think he was doing a really good job carrying it out. He wasn't afraid to tackle the details."
If entropy is always increasing, he asked himself, and if atomic-scale randomness and disorder are inexorable, then why is the universe still able to bring forth stars and planets and clouds and trees? Why is matter constantly becoming more and more organized on a large scale, at the same time that it is becoming more and more disorganized on a small scale? Why hasn't everything in the universe long since dissolved into a formless miasma?
this sensitivity to initial conditions could be described by the emerging science of "chaos," more generally known as "dynamical systems theory";
"I'm of the school of thought that life and organization are inexorable," he says, "just as inexorable as the increase in entropy. They just seem more fluky because they proceed in fits and starts, and they build on themselves. Life is a reflection of a much more general phenomenon that I'd like to believe is described by some counterpart of the second law of thermodynamics, some law that would describe the tendency of matter to organize itself, and that would predict the general properties of organization we'd expect to see in the universe."
chaos theory actually had very little to say about the fundamental principles of living systems or of evolution. It didn't explain how systems starting out in a state of random nothingness could then organize themselves into complex wholes. Most important, it didn't answer his old question about the inexorable growth of order and structure in the universe.
people have recently been finding so many hints about things like emergence, adaptation, and the edge of chaos that they can begin to sketch at least a broad outline of what this hypothetical new second law might be like.
this putative law would have to give a rigorous account of emergence: What does it really mean to say that the whole is greater than the sum of its parts?
Flying boids (and real birds) adapt to the actions of their neighbors, thereby becoming a flock. Organisms cooperate and compete in a dance of coevolution, thereby becoming an exquisitely tuned ecosystem. Atoms search for a minimum energy state by forming chemical bonds with each other, thereby becoming the emergent structures known as molecules. Human beings try to satisfy their material needs by buying, selling, and trading with each other, thereby creating an emergent structure known as a market. Humans likewise interact with each other to satisfy less quantifiable goals, thereby forming families, religions, and cultures. Somehow, by constantly seeking mutual accommodation and self-consistency, groups of agents manage to transcend themselves and become something more. The trick is to figure out how, without falling back into sterile philosophizing or New Age mysticism.
connectionism: the idea of representing a population of interacting agents as a network of "nodes" linked by "connections."
Exhibit A has to be the neural network movement, in which researchers use webs of artificial neurons to model such things as perception and memory retrieval and, not incidentally, to mount a radical attack on the symbol-processing methods of mainstream artificial intelligence. But close behind are many of the models that have found a home at the Santa Fe Institute, including John Holland's classifier systems, Stuart Kauffman's genetic networks, the autocatalytic set model for the origin of life, and the immune system model that he and Packard did in the mid-1980s with Los Alamos' Alan Perelson.
In John Holland's classifier system the node-and-connection structure is considerably less obvious, says Farmer, but it's there. The set of nodes is just the set of all possible internal messages, such as 1001001110111110. And the connections are just the classifier rules, each of which looks for a certain message on the system's internal bulletin board, and then responds to it by posting another. By activating certain input nodes (that is, by posting the corresponding input messages on the bulletin board), the programmer can cause the classifiers to activate more messages, and then still more. The result will be a cascade of messages analogous to the spreading activation in a neural network. And, just as the neural net eventually settles down into a self-consistent state, the classifier system will eventually settle down into a stable set of active messages and classifiers that solves the problem at hand, or, in Holland's picture, that represents an emergent mental model.
a common framework should help the people working on these models to communicate a lot more easily than they usually do, without the babel of different jargons. "The thing I considered important in that paper was that I hammered out the actual translation machinery for going from one model to another. I could take a model of the immune system and say, 'If that were a neural net, here's what it would look like.'" But perhaps the most important reason for having a common framework, says Farmer, is that it helps you distill out the essence of the models, so that you can focus on what they actually have to say about emergence. And in this case, the lesson is clear: the power really does lie in the connections. That's what gets so many people so excited about connectionism. You can start with very, very simple nodes (linear "polymers," "messages" that are just binary numbers, "neurons" that are essentially just on-off switches) and still generate surprising and sophisticated outcomes just from the way they interact.
you can change them in two different ways. The first way is to leave the connections in place but modify their "strength." This corresponds to what Holland calls exploitation learning: improving what you already have.
The second, more radical way of adjusting the connections is to change the network's whole wiring diagram. Rip out some of the old connections and put in new ones. This corresponds to what Holland calls exploration learning: taking the risk of screwing up big in return for the chance of winning big. In Holland's classifier systems, for example, this is exactly what happens when the genetic algorithm mixes rules together through its inimitable version of sexual recombination; the new rules that result will often link messages that have never been linked before. This is also what happens in the autocatalytic set model when occasional new polymers are allowed to form spontaneously, as they do in the real world. The resulting chemical connections can give the autocatalytic set an opening to explore a whole new realm of polymer space.
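A compact sketch of those two kinds of adjustment in a generic node-and-connection model. The threshold dynamics, the network size, and the numbers are illustrative assumptions, not any specific model from Farmer's paper; the contrast between nudging strengths (exploitation) and rewiring (exploration) is the point.

```python
# Generic connectionist sketch: nodes joined by weighted connections, with
# Holland's two kinds of learning applied to the same network.
import random

N_NODES = 12
state = [random.choice([0, 1]) for _ in range(N_NODES)]
# connections: {(source, target): strength}
conns = {(random.randrange(N_NODES), random.randrange(N_NODES)): random.uniform(-1, 1)
         for _ in range(30)}

def update():
    """Spread activation: each node sums its weighted inputs and thresholds."""
    global state
    total = [0.0] * N_NODES
    for (src, dst), w in conns.items():
        total[dst] += w * state[src]
    state = [1 if t > 0 else 0 for t in total]

def exploit(learning_rate=0.1):
    """Exploitation: keep the wiring, nudge the strengths (Hebb-style here)."""
    for (src, dst) in conns:
        conns[(src, dst)] += learning_rate * state[src] * state[dst]

def explore():
    """Exploration: rip out one connection and wire up a new one at random."""
    conns.pop(random.choice(list(conns)))
    conns[(random.randrange(N_NODES), random.randrange(N_NODES))] = random.uniform(-1, 1)

for t in range(50):
    update()
    exploit()
    if t % 10 == 0:
        explore()
```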
the connectionist idea shows how the capacity for learning and evolution can emerge even if the nodes, the individual agents, are brainless and dead. More generally, by putting the power in the connections and not the nodes, it points the way toward a very precise theory of what Langton and the artificial lifers mean when they say that the essence of life is in the organization and not the molecules. And it likewise points the way toward a deeper understanding of how life and mind could have gotten started in a universe that began with neither. However, says Farmer, as beautiful as that prospect may be,
connectionist models are a long way from telling you everything you'd like to know about the new second law. To begin with, they don't tell you much about how emergence works in economies, societies, or ecosystems, where the nodes are "smart" and constantly adapting to each other. To understand systems like that, you have to understand the coevolutionary dance of cooperation and competition. And that means studying them with coevolutionary models such as Holland's Echo,
More important, says Farmer, neither connectionist models nor coevolutionary models tell you what makes life and mind possible in the first place. What is it about the universe that allows these things to happen? It isn't enough to say "emergence"; the cosmos is full of emergent structures like galaxies and clouds and snowflakes that are still just physical objects; they have no independent life whatsoever. Something more is required. And this hypothetical new second law will have to tell us what that something is. Clearly, this is a job for models that try to get at the basic physics and chemistry of the world, such as the cellular automata that Chris Langton is so fond of. And by no coincidence, says Farmer, Langton's discovery of this weird, edge-of-chaos phase transition in cellular automata seems to be a big part of the answer.
Langton is basically saying that the mysterious "something" that makes life and mind possible is a certain kind of balance between the forces of order and the forces of disorder. More precisely, he's saying that you should look at systems in terms of how they behave instead of how they're made. And when you do, he says, then what you find are the two extremes of order and chaos.
right in between the two extremes, he says, at a kind of abstract phase transition called "the edge of chaos," you also find complexity: a class of behaviors in which the components of the system never quite lock into place, yet never quite dissolve into turbulence, either. These are the systems that are both stable enough to store information, and yet evanescent enough to transmit it. These are the systems that can be organized to perform complex computations, to react to the world, to be spontaneous, adaptive, and alive.
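The flavor of the underlying experiment can be sketched with one-dimensional cellular automata whose rule tables are generated with a tunable bias, a simplified stand-in for Langton's lambda parameter (here, the probability that a neighborhood pattern maps to a "live" state). The sketch only reports a crude activity measure, and in a rule space this small the effect is suggestive at best: low values tend to freeze into the quiescent state, high values stay busy.

```python
# Random 1-D cellular automata generated with a tunable "live fraction" lam.
import random

WIDTH, STEPS = 80, 60

def random_rule(lam):
    """Map each 3-cell binary neighborhood (8 patterns) to 1 with probability lam."""
    return {n: (1 if random.random() < lam else 0) for n in range(8)}

def run(lam):
    rule = random_rule(lam)
    cells = [random.randint(0, 1) for _ in range(WIDTH)]
    history = [cells]
    for _ in range(STEPS):
        cells = [rule[4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % WIDTH]]
                 for i in range(WIDTH)]
        history.append(cells)
    return history

for lam in (0.1, 0.3, 0.5):
    history = run(lam)
    activity = sum(sum(row) for row in history[-10:]) / (10 * WIDTH)
    print(f"lambda={lam:.1f}  mean activity over last 10 steps: {activity:.2f}")
```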
in the mid-1980s, says Farmer, it was much the same story with the autocatalytic set model. The model had a number of parameters such as the catalytic strength of the reactions, and the rate at which "food" molecules are supplied. He, Packard, and Kauffman had to set all these parameters by hand, essentially by trial and error. And one of the first things they discovered was that nothing much happened in the model until they got those parameters into a certain range, whereupon the autocatalytic sets would take off and develop very quickly.
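A toy version of that kind of parameter sweep, under heavy simplifying assumptions: polymers are short strings over {a, b}, reactions glue two polymers together, and a single probability p that any given polymer catalyzes any given reaction stands in for the catalytic-strength and food-supply parameters the text mentions. Below some value of p the food set barely grows; above it, the reachable set takes off.

```python
# Crude Kauffman-style autocatalysis toy: grow the set of molecules derivable
# from a food set when reactions only fire if one of their catalysts is present.
import itertools, random

MAX_LEN = 6
food = {"a", "b", "aa", "ab", "ba", "bb"}
universe = {"".join(s) for n in range(1, MAX_LEN + 1)
            for s in itertools.product("ab", repeat=n)}

def reachable(p):
    random.seed(0)
    mols = sorted(universe)
    reactions = []                       # (substrate x, substrate y, product, catalysts)
    for x in mols:
        for y in mols:
            if len(x) + len(y) <= MAX_LEN:
                cats = {m for m in mols if random.random() < p}
                reactions.append((x, y, x + y, cats))
    present = set(food)
    changed = True
    while changed:                       # closure: fire every reaction whose
        changed = False                  # substrates and a catalyst are present
        for x, y, product, cats in reactions:
            if product not in present and x in present and y in present \
                    and cats & present:
                present.add(product)
                changed = True
    return len(present)

for p in (0.001, 0.01, 0.1):
    print(f"catalysis probability {p}: food set of {len(food)} grows to {reachable(p)} molecules")
```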
the totalitarian, centralized approach to the organization of society doesn't work very well." In the long run, the system that Stalin built was just too stagnant, too locked in, too rigidly controlled to survive. Or look at the Big Three automakers in Detroit in the 1970s. They had grown so big and so rigidly locked in to certain ways of doing things that they could barely recognize the growing challenge from Japan, much less respond to it. On the other hand, says Farmer, anarchy doesn't work very well, either, as certain parts of the former Soviet Union seemed determined to prove in the aftermath of the breakup. Nor does an unfettered laissez-faire system: witness the Dickensian horrors of the Industrial Revolution in England or, more recently, the savings and loan debacle in the United States. Common sense, not to mention recent political experience, suggests that healthy economies and healthy societies alike have to keep order and chaos in balance, and not just a wishy-washy, average, middle-of-the-road kind of balance, either. Like a living cell, they have to regulate themselves with a dense web of feedbacks and regulation, at the same time that they leave plenty of room for creativity, change, and response to new conditions. "Evolution thrives in systems with a bottom-up organization, which gives rise to flexibility," says Farmer. "But at the same time, evolution has to channel the bottom-up approach in a way that doesn't destroy the organization. There has to be a hierarchy of control, with information flowing from the bottom up as well as from the top down." The dynamics of complexity at the edge of chaos, he says, seems to be ideal for this kind of behavior.
the hypothetical new second law will still have to explain how emergent systems get there, how they keep themselves there, and what they do there.
it's easy to persuade yourself that those first two questions have already been answered by Charles Darwin (as generalized by John Holland). Since the systems that are capable of the most complex, sophisticated responses will always have the edge in a competitive world, goes the argument, then frozen systems can always do better by loosening up a bit, and turbulent systems can always do better by getting themselves a little more organized. So if a system isn't on the edge of chaos already, you'd expect learning and evolution to push it in that direction. And if it is on the edge of chaos, then you'd expect learning and evolution to pull it back if it ever starts to drift away. In other words, you'd expect learning and evolution to make the edge of chaos stable, the natural place for complex, adaptive systems to be.
In the space of all possible dynamical behaviors, the edge of chaos is like an infinitesimally thin membrane, a region of special, complex behaviors separating chaos from order. But then, the surface of the ocean is only one molecule thick, too; it's just a boundary separating water from air. And the edge of chaos region, like the surface of the ocean, is still vast beyond all imagining. It contains a near-infinity of ways for an agent to be both complex and adaptive. Indeed, when John Holland talks about "perpetual novelty," and adaptive agents exploring their way into an immense space of possibilities, he may not say it this way, but he's talking about adaptive agents moving around on this immense edge-of-chaos membrane.
at heart it will not be about mechanism so much as direction: the deceptively simple fact that evolution is constantly coming up with things that are more complicated, more sophisticated, more structured than the ones that came before.
It seems that learning and evolution don't just pull agents to the edge of chaos; slowly, haltingly, but inexorably, learning and evolution move agents along the edge of chaos in the direction of greater and greater complexity. Why? "It's a thorny question," says Farmer. "It's very hard to articulate a notion of 'progress' in biology." What does it mean for one creature to be more advanced than another? Cockroaches, for example, have been around for several hundred million years longer than human beings, and they are very, very good at being cockroaches. Are we more advanced than they are, or just different? Were our mammalian ancestors of 65 million years ago more advanced than Tyrannosaurus rex, or just luckier in surviving the impact of a marauding comet? With no objective definition of fitness, says Farmer, "survival
One of the wonderful things about autocatalysis is that you can follow emergence from the ground up, he says. The concentration of a few chemicals gets spontaneously pumped up by orders of magnitude over their equilibrium concentration because they can collectively catalyze each other's formation. And that means that the set as a whole is now like a new, emergent individual sticking up from the equilibrium background, exactly what you want for explaining the origin of life.
subjected the model autocatalytic sets to fluctuations in their "food" supply: the stream of small molecules that served as raw material for the sets. "What was really cool was that some sets were like panda bears, which can only digest bamboo," says Farmer. "If you changed their food supply they just collapsed. But others were like omnivores; they had lots of metabolic pathways that allowed them to substitute one food molecule for another. So when you played around with the food supply they were virtually unchanged." Such robust sets, presumably, would have been the kind that survived on the early Earth.
made another modification in the autocatalytic model to allow for occasional spontaneous reactions, which are known to happen in real chemical systems. These spontaneous reactions caused many of the autocatalytic sets to fall apart. But the ones that crashed paved the way for an evolutionary leap. "They triggered avalanches of novelty," he says. "Certain variations would get amplified, and then would stabilize again until the next crash. We saw a succession of autocatalytic metabolisms, each replacing the other." Maybe that's a clue, says Farmer. "It will be interesting to see if we can articulate a notion of 'progress' that would involve emergent structures having certain feedback loops [for stability] that weren't present in what went before. The key is that there would be a sequence of evolutionary events structuring the matter in the universe in the Spencerian sense, in which each emergence sets the stage and makes it easier for the emergence of the next level."
People are thrashing around trying to define things like 'complexity' and 'tendency for emergent computation.' I can only evoke vague images in your brain with words that aren't precisely defined in mathematical terms. It's like the advent of thermodynamics, but we're where they were in about 1820. They knew there was something called 'heat,' but they were talking about it in terms that would later sound ridiculous."
Only a minority thought that heat might represent some kind of microscopic motion in the poker's atoms. (The minority was right.) Moreover, no one at the time seems to have imagined that messy, complicated things like steam engines, chemical reactions, and electric batteries could all be governed by simple, general laws.
we now have a good understanding of chaos and fractals, which show us how simple systems with simple parts can generate very complex behaviors. We know quite a bit about gene regulation in the fruit fly, Drosophila. In a few very specific contexts we have some hints as to how self-organization is achieved in the brain. And in artificial life we are creating a new repertoire of "toy universes." Their behavior is a pale reflection of what actually goes on in natural systems. But we can simulate them completely, we can alter them at will, and we can understand exactly what makes them do what they do. The hope is that we will eventually be able to stand back and assemble all these fragments into a comprehensive theory of evolution and self-organization.
"But what makes it exciting is the very fact that things aren't laid in stone. It's still happening. I don't see anybody with a clear path to an answer. But there are lots of little hints flying around. Lots of little toy systems and vague ideas. So it's conceivable to me that in twenty or thirty years we will have a real theory."
What we're really looking for in the science of complexity is the general law of pattern formation in non-equilibrium systems throughout the universe.
"It's so annoying because I can almost taste it, almost see it. I'm not being a careful scientist. Nothing's finished. I've only had a first glance at a bunch of things. I feel more like a howitzer shell piercing through wall after wall, leaving a mess behind. I feel that I'm rushing through topic after topic, trying to see where the end of the arc of the howitzer shell is, without knowing how to clean up anything on the way back."
Self-organization couldn't do it all alone. After all, mutant genes can self-organize themselves just as easily as normal ones can. And when the result is, say, a fruit fly monstrosity with legs where its antennae should be, or no head, then you still need natural selection to sort out what's viable from what's hopeless.
Langton had recognized what Kauffman had not: that the edge of chaos was much more than just a simple boundary separating purely ordered systems from purely chaotic systems. Indeed, it was Langton who finally got Kauffman to understand the point after several long conversations: the edge of chaos was a special region unto itself, the place where you could find systems with lifelike, complex behaviors.
It had crossed my mind that you could get complex computation at the phase transition. But the thought that I hadn't had, which was silly, was that selection would get you there. The thought just didn't cross my mind." Now that it had crossed his mind, however, his old problem of self-organization versus natural selection took on a wonderful new clarity. Living systems are not deeply entrenched in the ordered regime, which was essentially what he'd been saying for the past twenty-five years with his claim that self-organization is the most powerful force in biology. Living systems are actually very close to this edge-of-chaos phase transition, where things are much looser and more fluid. And natural selection is not the antagonist of self-organization. It's more like a law of motion, a force that is constantly pushing emergent, self-organizing systems toward the edge of chaos.
Big avalanches are rare, and small ones are frequent. But the steadily drizzling sand triggers cascades of all sizes, a fact that manifests itself mathematically as the avalanches' "power-law" behavior: the average frequency of a given size of avalanche is inversely proportional to some power of its size. Now the point of all this, says Bak, is that power-law behavior is very common in nature. It's been seen in the activity of the sun, in the light from galaxies, in the flow of current through a resistor, and in the flow of water through a river. Large pulses are rare, small ones are common, but all sizes occur with this power-law relationship in frequency.
The sand pile metaphor suggests an answer, he says. Just as a steady trickle of sand drives a sand pile to organize itself into a critical state, a steady input of energy or water or electrons drives a great many systems in nature to organize themselves the same way. They become a mass of intricately interlocking subsystems just barely on the edge of criticality, with breakdowns of all sizes ripping through and rearranging things just often enough to keep them poised on the edge. A prime example is the distribution of earthquakes, says Bak. Anyone who lives in California knows that little earthquakes that rattle the dishes are far more common than the big earthquakes that make international headlines. In 1956, geologists Beno Gutenberg and Charles Richter (of the famous Richter scale) pointed out that these tremors in fact follow a power law: in any given area, the number of earthquakes each year that release a certain amount of energy is inversely proportional to a certain power of the energy.
The standard earthquake model says that the rocks on either side are locked together by enormous pressure and friction; they resist the motion until suddenly they slip catastrophically. In Bak and Tang's version, however, the rocks on either side bend and deform until they are just ready to slip past each other, whereupon the fault undergoes a steady cascade of little slips and bigger slips that are just sufficient to keep the tension at that critical point. So a power law for earthquakes is exactly what you would expect, they argued; it's just a statement that the earth has long since tortured all its fault zones into a state of self-organized criticality. And indeed, their simulated earthquakes follow a power law very similar to the one found by Gutenberg and Richter.
Unfortunately, he adds, self-organized criticality only tells you about the overall statistics of avalanches; it tells you nothing about any particular avalanche. This is yet another case where understanding is not the same thing as prediction.
it doesn't even make any difference if the sand pile scientists try to prevent the collapse they've predicted. They can certainly do so by putting up braces and support structures and such. But they just end up shifting the avalanche somewhere else. The global power law stays the same.
It was one thing to talk about individual agents being on the edge of chaos. That's precisely the dynamical region that allows them to think and be alive. But what about a collection of agents taken as a whole? The economy, for example: people talk as if it had moods and responses and passing fevers. Is it at the edge of chaos? Are ecosystems? Is the immune system? Is the global community of nations?
arguing by analogy, it seems reasonable to think that each new level is "alive" in the same sense, by virtue of being at or very near the edge of chaos.
how could you even test such a notion? Langton had been able to recognize a phase transition by watching for cellular automata that showed manifestly complex behavior on a computer screen. Yet it was not at all obvious how to do that for economies or ecosystems out in the real world. How are you supposed to tell what's simple and what's complex when you're looking at the behavior of Wall Street? Precisely what does it mean to say that global politics or the Brazilian rain forest is on the edge of chaos?
Bak's self-organized criticality suggested an answer, Kauffman realized. You can tell that a system is at the critical state and/or the edge of chaos if it shows waves of change and upheaval on all scales and if the size of the changes follows a power law.
For the best and most vivid metaphor, he says, imagine a pile of sand on a tabletop, with a steady drizzle of new sand grains raining down from above. (This experiment has actually been done, by the way, both in computer simulations and with real sand.) The pile grows higher and higher until it can't grow any more: old sand is cascading down the sides and off the edge of the table as fast as the new sand dribbles down.
the resulting sand pile is self-organized, in the sense that it reaches the steady state all by itself without anyone explicitly shaping it. And it's in a state of criticality, in the sense that sand grains on the surface are just barely stable.
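The sandpile described here is easy to simulate. The sketch below follows the standard Bak-Tang-Wiesenfeld rule (a site holding four or more grains topples, shedding one grain to each neighbor, with grains lost off the table edge) and tallies avalanche sizes; the distribution should come out heavily skewed toward small events, in line with the power law the text describes. Grid size and grain count are arbitrary choices.

```python
# Bak-Tang-Wiesenfeld sandpile: drop grains, topple unstable sites, record
# how many topplings each dropped grain triggers (the avalanche size).
import random
from collections import Counter

SIZE, GRAINS = 30, 20000
grid = [[0] * SIZE for _ in range(SIZE)]
avalanche_sizes = []

for _ in range(GRAINS):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    topples = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        topples += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < SIZE and 0 <= nj < SIZE:   # edge sites spill off the table
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    if topples:
        avalanche_sizes.append(topples)

# Crude look at the distribution: frequency of avalanches by order of magnitude.
counts = Counter(len(str(s)) for s in avalanche_sizes)
for digits in sorted(counts):
    print(f"avalanches with {digits}-digit size: {counts[digits]}")
```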
imagine a stable ecosystem or a mature industrial sector where all the agents have gotten themselves well adapted to each other. There is little or no evolutionary pressure to change. And yet the agents can't stay there forever, he says, because eventually one of the agents is going to suffer a mutation large enough to knock him out of equilibrium. Maybe the aging founder of a firm finally dies and a new generation takes over with new ideas. Or maybe a random genetic crossover gives a species the ability to run much faster than before. "So that agent starts changing," says Kauffman, "and then he induces changes in one of his neighbors, and you get an avalanche of changes until everything stops changing again." But then someone else mutates. Indeed, you can expect the population to be showered with a steady rain of random mutations, much as Bak's sand pile is showered with a steady drizzle of sand grains, which means that you can expect any closely interacting population of agents to get themselves into a state of self-organized criticality, with avalanches of change that follow a power law.
you can argue that these avalanches lie behind the great extinction events in the earth's past, where whole groups of species vanish from the fossil record and are replaced by totally new ones.
In terms of human organizations, it's as if the jobs are so subdivided that no one has any latitude; all they can do is learn how to perform the one job they've been hired for, and nothing else. Whatever the metaphor, however, it's clear that if each individual in the various organizations is allowed a little more freedom to march to a different drummer, then everyone will benefit. The deeply frozen system will become a little more fluid, says Kauffman, the aggregate fitness will go up, and the agents will collectively move a bit closer to the edge of chaos.
In organizational terms, it's as if the lines of command in each firm are so screwed up that nobody has the slightest idea what they're supposed to do, and half the time they are working at cross-purposes anyway. Either way, it obviously pays for individual agents to tighten up their couplings a bit, so that they can begin to adapt to what other agents are doing. The chaotic system will become a little more stable, says Kauffman, the aggregate fitness will go up, and once again, the ecosystem as a whole will move a bit closer to the edge of chaos.
the evidence consists of a sort of power law in the fossil record suggesting that the global biosphere is near the edge of chaos; a couple of computer models showing that systems can adapt their way to the edge of chaos through natural selection; and now one computer model showing that ecosystems may be able to get to the edge of chaos through coevolution.
Fontana started with one of those cosmic observations that sound so deceptively simple. When we look at the universe on size scales ranging from quarks to galaxies, he pointed out, we find the complex phenomena associated with life only at the scale of molecules. Why? Well, said Fontana, one answer is just to say "chemistry": life is clearly a chemical phenomenon, and only molecules can spontaneously undergo complex chemical reactions with one another. But again, Why? What is it that allows molecules to do what quarks and quasars can't? Two things, he said. The first source of chemistry's power is simple variety: unlike quarks, which can only combine to make protons and neutrons in groups of three, atoms can be arranged and rearranged to form a huge number of structures. The space of molecular possibilities is effectively limitless. The second source of power is reactivity: structure A can manipulate structure B to form something new, structure C.
"So when we ask questions like how life emerges, and why living systems are the way they are, these are the kind of questions that are really fundamental to understanding what we are and what makes us different from inanimate matter. The more we know about these things, the closer we're going to get to fundamental questions like, 'What is the purpose of life?' Now, in science we can never even attempt to make a frontal assault on questions like that. But by addressing a different question (like, Why is there an inexorable growth in complexity?) we may be able to learn something fundamental about life that suggests its purpose, in the same way that Einstein shed light on what space and time are by trying to understand gravity. The analogy I think of is averted vision in astronomy: if you want to see a very faint star, you should look a little to the side because your eye is more sensitive to faint light that way, and as soon as you look right at the star, it disappears." Likewise, says Farmer, understanding the inexorable growth of complexity isn't going to give us a full scientific theory of morality. But if a new second law helps us understand who and what we are, and the processes that led to us having brains and a social structure, then it might tell us a lot more about morality than we know now. "Religions try to impose rules of morality by writing them on stone tablets," he says. "We do have a real problem now, because when we abandon conventional religion, we don't know what rules to follow anymore. But when you peel it all back, religion and ethical rules provide a way of structuring human behavior in a way that allows a functioning society. My feeling is that all of morality operates at that level. It's an evolutionary process in which societies constantly perform experiments, and whether or not those experiments succeed determines which cultural ideas and moral precepts propagate into the future." If so, he says, then a theory that rigorously explains how coevolutionary systems are driven to the edge of chaos might tell us a lot about cultural dynamics, and how societies reach that elusive, ever-changing balance between freedom and control. "I draw a lot of fairly speculative conclusions about the implications of all this," says Chris Langton. "It comes from viewing the world very much through these phase transition glasses: you can apply the idea to a lot of things and it kind of fits." Witness the collapse of communism in the former Soviet Union and its Eastern European satellites, he says: the whole situation seems all too reminiscent of the power-law distribution of stability and upheaval at the edge of chaos. "When you think of it," he says, "the Cold War was one of these long periods where not much changed. And although we can find fault with the U.S. and Soviet governments for holding a gun to the world's head (the only thing that kept it from blowing up was Mutual Assured Destruction), there was a lot of stability. But now that period of stability is ending. We've seen upheaval in the Balkans and all over the place. I'm more scared about what's coming in the immediate future. Because in the models, once you get out of one of these metastable periods, you get into one of these chaotic periods where a lot of change happens. The possibilities for war are much higher, including the kind that could lead to a world war. It's much more sensitive now to initial conditions. "So what's the right course of action?" he asks.
"I don't know, except that this is like punctuated equilibrium in evolutionary history. It doesn't happen without a great deal of extinction. And it's not necessarily a step for the better. There are models where the species that dominate in the stable period after the upheaval may be less fit than the species that dominated beforehand. So these periods of evolutionary change can be pretty nasty times. This is the kind of era when the United States could disappear as a world power. Who knows what's going to come out the other end?
"The thing to do is to try to determine whether we can apply this sort of thing to history, and if so, whether we also see this kind of punctuated equilibrium. Things like the fall of Rome. Because in that case, we really are part of the evolutionary process. And if we really study that process, we may be able to incorporate this thinking into our political, social, and economic theories, where we realize that we have to be very careful and put some global agreements and treaties in place to get us through. But then the question is, do we want to gain control of our own evolution or not? If so, does that stop evolution? It's good to have evolution progress.
If single-celled things had found a way to stop evolution to maintain themselves as dominant life-forms, then we wouldn't be here. So you don't want to stop it. On the other hand, maybe you want to understand how it can keep going without the massacres and the extinctions. "So maybe the lesson to be learned is that evolution hasn't stopped," says Langton. "It's still going on, exhibiting many of the same phenomena it did in biological history, except that now it's taking place on the social-cultural plane. And we may be seeing a lot of the same kinds of extinctions and upheaval."
suppose that these models about the origin of life are correct. Then life doesn't hang in the balance. It doesn't depend on whether some warm little pond just happens to produce template-replicating molecules like DNA or RNA. Life is the natural expression of complex matter. It's a very deep property of chemistry and catalysis and being far from equilibrium. And that means that we're at home in the universe. We're to be expected.
that reminds us that we make the world we live in with one another. We're participants in the story as it unfolds. We aren't victims and we aren't outsiders. We are part of the universe, you and me, and the goldfish. We make our world with one another. "And now suppose it's really true that coevolving, complex systems get themselves to the edge of chaos," he says. "Well, that's very Gaia-like. It says that there's an attractor, a state that we collectively maintain ourselves in, an ever-changing state where species are always going extinct and new ones are coming into existence. Or if we imagine that this really carries over into economic systems, then it's a state where technologies come into existence and replace others, et cetera. But if this is true, it means that the edge of chaos is, on average, the best that we can do. The ever-open and ever-changing world that we must make for ourselves is in some sense as good as it possibly can be. "Well, that's a story about ourselves," says Kauffman. "Matter has managed to evolve as best it can. And we're at home in the universe. It's not Panglossian, because there's a lot of pain. You can go extinct, or broke. But here we are on the edge of chaos because that's where, on average, we all do the best."
"By about 1985," says Arthur, "it seems to me that all sorts of economists were getting antsy, starting to look around and sniff the air. They sensed that the conventional neoclassical framework that had dominated over the past generation had reached a high water mark. It had allowed them to explore very thoroughly the domain of problems that are treatable by static equilibrium analysis. But it had virtually ignored the problems of process, evolution, and pattern formation, problems where things were not at equilibrium, where there's a lot of happenstance, where history matters a great deal, where adaptation and evolution might go on forever.
We can deal with inductive learning rather than deductive logic, we can cut the Gordian knot of equilibrium and deal with open-ended evolution, because many of these problems have been dealt with by other disciplines. Santa Fe provided the jargon, the metaphors, and the expertise that you needed in order to get the techniques started in economics. But more than that, Santa Fe legitimized this different vision of economics.
It wasn't that the standard formulation was wrong, he said, but that we were exploring into a new way of looking at parts of the economy that are not amenable to conventional methods. So this new approach was complementary to the standard ones. He also said that we didn't know where this new sort of economics was taking us. It was the beginnings of a research program. But he found it very interesting and exciting.
He said, "I think we can safely say we have another type of economics here. One type is the standard stuff that we're all familiar with" (he was too modest to call it the Arrow-Debreu system, but he basically meant the neoclassical, general equilibrium theory) "and then this other type, the Santa Fe-style evolutionary economics."
"Nonscientists tend to think that science works by deduction," he says. "But actually science works mainly by metaphor. And what's happening is that the kinds of metaphor people have in mind are changing." To put it in perspective, he says, think of what happened to our view of the world with the advent of Sir Isaac Newton. "Before the seventeenth century," he says, "it was a world of trees, disease, human psyche, and human behavior. It was messy and organic. The heavens were also complex. The trajectories of the planets seemed arbitrary. Trying to figure out what was going on in the world was a matter of art. But then along comes Newton in the 1660s. He devises a few laws, he devises the differential calculus, and suddenly the planets are seen to be moving in simple, predictable orbits!
So in the Enlightenment, which lasted from about 1680 all through the 1700s, the era shifted to a belief in the primacy of nature: if you just left things alone, nature would see to it that everything worked out for the common good." The metaphor of the age, says Arthur, became the clockwork motion of the planets: a simple, regular, predictable Newtonian machine that would run of itself. And the model for the next two and a half centuries of reductionist science became Newtonian physics. "Reductionist science tends to say, 'Hey, the world out there is complicated and a mess, but look! Two or three laws reduce it all to an incredibly simple system!' "So all that remained was for Adam Smith, at the height of the Scottish Enlightenment around Edinburgh, to understand the machine behind the economy," says Arthur. "In 1776, in The Wealth of Nations, he made the case that if you left people alone to pursue their individual interests, the 'Invisible Hand' of supply and demand would see to it that everything worked out for the common good." Obviously, this was not the whole story: Smith himself pointed to such nagging problems as worker alienation and exploitation. But there was so much about his Newtonian view of the economy that was simple and powerful and right that it has dominated Western economic thought ever since. "Smith's idea was so brilliant that it just dazzled us," says Arthur.
"People realized that logic and philosophy are messy, that language is messy, that chemical kinetics is messy, that physics is messy, and finally that the economy is naturally messy. And it's not that this is a mess created by the dirt that's on the microscope glass. It's that this mess is inherent in the systems themselves. You can't capture any of them and confine them to a neat box of logic." The result, says Arthur, has been the revolution in complexity. "In a sense it's the opposite of reductionism. The complexity revolution began the first time someone said, 'Hey, I can start with this amazingly simple system, and look, it gives rise to these immensely complicated and unpredictable consequences.'" Instead of relying on the Newtonian metaphor of clockwork predictability, complexity seems to be based on metaphors more closely akin to the growth of a plant from a tiny seed, or the unfolding of a computer program from a few lines of code, or perhaps even the organic, self-organized flocking of simpleminded birds.
"If I had a purpose, or a vision, it was to show that the messiness and the liveliness in the economy can grow out of an incredibly simple, even elegant theory. That's why we created these simple models of the stock market where the market appears moody, shows crashes, takes off in unexpected directions, and acquires something that you could describe as a personality."
When you look at the subject through Chris Langton's phase transition glasses, for example, all of neoclassical economics is suddenly transformed into a simple assertion that the economy is deep in the ordered regime, where the market is always in equilibrium and things change slowly if at all. The Santa Fe approach is likewise transformed into a simple assertion that the economy is at the edge of chaos, where agents are constantly adapting to each other and things are always in flux. Arthur always knew which assertion he thought was more realistic.
If you think that you're a steamboat and can go up the river, you're kidding yourself. Actually, you're just the captain of a paper boat drifting down the river. If you try to resist, you're not going to get anywhere. On the other hand, if you quietly observe the flow, realizing that you're part of it, realizing that the flow is ever-changing and always leading to new complexities, then every so often you can stick an oar into the river and punt yourself from one eddy to another.
"So what's the connection with economic and political policy? Well, in a policy context, it means that you observe, and observe, and observe, and occasionally stick your oar in and improve something for the better. It means that you try to see reality for what it is, and realize that the game you are in keeps changing, so that it's up to you to figure out the current rules of the game as it's being played. It means that you observe the Japanese like hawks, you stop being naive, you stop appealing for them to play fair, you stop adhering to standard theories that are built on outmoded assumptions about the rules of play, you stop saying, 'Well, if only we could reach this equilibrium we'd be in fat city.' You just observe. And where you can make an effective move, you make a move." Notice that this is not a recipe for passivity, or fatalism, says Arthur. "This is a powerful approach that makes use of the natural nonlinear dynamics of the system. You apply available force to the maximum effect. You don't waste it. This is exactly the difference between Westmoreland's approach in South Vietnam versus the North Vietnamese approach. Westmoreland would go in with heavy forces and artillery and barbed wire and burn the villages. And the North Vietnamese would just recede like a tide. Then three days later they'd be back, and no one knew where they came from. It's also the principle that lies behind all of Oriental martial arts. You don't try to stop your opponent, you let him come at you, and then give him a tap in just the right direction as he rushes by. The idea is to observe, to act courageously, and to pick your timing extremely well." Arthur is reluctant to get into the implications of all this for policy issues. But he does remember one small workshop that Murray Gell-Mann persuaded him to cochair in the fall of 1989, shortly before he left the institute. The purpose of the workshop was to look at what complexity might have to say about the interplay of economics, environmental values, and public policy in a region such as Amazonia, where the rain forest is being cleared for roads and farms at an alarming rate. The answer Arthur gave during his own talk was that you can approach policy-making for the rain forest (or for any other subject) on three different levels. The first level, he says, is the conventional cost-benefit approach: What are the costs of each specific course of action, what are the benefits, and how do you achieve the optimum balance between the two? "There is a place for that kind of science," says Arthur. "It does force you to think through the implications of the alternatives. And certainly at that meeting we had a number of people arguing the costs and benefits of rain forests. The trouble is that this approach generally assumes that the problems are well defined, that the options are well defined, and that the political wherewithal is there, so that the analyst's job is simply to put numbers on the costs and benefits of each alternative. It's as though the world were a railroad switch yard: We're going down this one track, and we have switches we can turn to guide the train onto other tracks." Unfortunately for the standard theory, however, the real world is almost never that well defined, particularly when it comes to environmental issues. All too often, the apparent objectivity of cost-benefit analyses is the result of slapping arbitrary numbers on subjective judgments, and then assigning the value of zero to the things that nobody knows how to evaluate.
"I ridicule some of these cost-benefit analyses in my classes," he says. "The 'benefit' of having spotted owls is defined in terms of how many people visit the forest, how many will see a spotted owl, and what's it worth to them to see a spotted owl, et cetera. It's all the greatest rubbish. This type of environmental cost-benefit analysis makes it seem as though we're in front of the shop window of nature looking in, and saying, 'Yes, we want this, or this, or this,' but we're not inside, we're not part of it. So these studies have never appealed to me. By asking only what is good for human beings, they are being presumptuous and arrogant." The second level of policy-making is a full institutional-political analysis, says Arthur: figuring out who's doing what, and why. "Once you start to do that for, say, the Brazilian rain forest, you find that there are various players: landowners, settlers, squatters, politicians, rural police, road builders, indigenous peoples. They aren't out to get the environment, but they are all playing this elaborate, interactive Monopoly game, in which the environment is being deeply affected. Moreover, the political system isn't some exogenous thing that stands outside the game. The political system is actually an outcome of the game: the alliances and coalitions that form as a result of it." In short, says Arthur, you look at the system as a system, the way a Taoist in his paper boat would observe the complex, ever-changing river. Of course, a historian or a political scientist would look at the situation this way instinctively. And some beautiful studies in economics have recently started to take this approach. But at the time of the workshop in 1989, he says, the idea still seemed to be a revelation to many economists.
"If you really want to get deeply into an environmental issue, I told them, you have to ask these questions of who has what at stake, what alliances are likely to form, and basically understand the situation. Then you might find certain points at which intervention may be possible. "So all of that is leading up to the third level of analysis," says Arthur. "At this level we might look at what two different world views have to say about environmental issues. One of these is the standard equilibrium viewpoint that we've inherited from the Enlightenment: the idea that there's a duality between man and nature, and that there's a natural equilibrium between them that's optimal for man. And if you believe this view, then you can talk about 'the optimization of policy decisions concerning environmental resources,' which was a phrase I got from one of the earlier speakers at the workshop. "The other viewpoint is complexity, in which there is basically no duality between man and nature," says Arthur. "We are part of nature ourselves. We're in the middle of it. There's no division between doers and done-to because we are all part of this interlocking network. If we, as humans, try to take action in our favor without knowing how the overall system will adapt (like chopping down the rain forest), we set in motion a train of events that will likely come back and form a different pattern for us to adjust to, like global climate change. "So once you drop the duality," he says, "then the questions change. You can't then talk about optimization, because it becomes meaningless. It would be like parents trying to optimize their behavior in terms of 'us versus the kids,' which is a strange point of view if you see yourself as a family. You have to talk about accommodation and coadaptation, what would be good for the family as a whole. "Basically, what I'm saying is not at all new to Eastern philosophy. It's never seen the world as anything else but a complex system. But it's a world view that, decade by decade, is becoming more important in the West, both in science and in the culture at large. Very, very slowly, there's been a gradual shift from an exploitative view of nature (man versus nature) to an approach that stresses the mutual accommodation of man and nature. What has happened is that we're beginning to lose our innocence, or naivete, about how the world works. As we begin to understand complex systems, we begin to understand that we're part of an ever-changing, interlocking, nonlinear, kaleidoscopic world. "So the question is how you maneuver in a world like that. And the answer is that you want to keep as many options open as possible. You go for viability, something that's workable, rather than what's 'optimal.' A lot of people say to that, 'Aren't you then accepting second best?' No, you're not, because optimization isn't well defined anymore. What you're trying to do is maximize robustness, or survivability, in the face of an ill-defined future. And that, in turn, puts a premium on becoming aware of nonlinear relationships and causal pathways as best we can. You observe the world very, very carefully, and you don't expect circumstances to last." So what is the role of the Santa Fe Institute in all this? Certainly not to become another policy think tank, says Arthur, although there always seem to be a few people who expect it to. No, he says, the institute's role is to help us look at this ever-changing river and understand what we're seeing.
âIf you have a truly complex system,â he says, âthen the exact patterns are not repeatable. And yet there are themes that are recognizable. In history, for example, you can talk about ârevolutions,â even though one revolution might be quite different from another. So we assign metaphors. It turns out that an awful lot of policy-making has to do with finding the appropriate metaphor. Conversely, bad policy-making almost always involves finding inappropriate metaphors. For example, it may not be appropriate to think about a drug âwar,â with guns and assaults. âSo from this point of view, the purpose of having a Santa Fe Institute is that it, and places like it, are where the metaphors and a vocabulary are being created in complex systems.
"So I would argue that a wise use of the SFI is to let it do science," he says. "To make it into a policy shop would be a great mistake. It would cheapen the whole affair. And in the end it would be counterproductive, because what we're missing at the moment is any precise understanding of how complex systems operate. This is the next major task in science for the next 50 to 100 years."
"I think there's a personality that goes with this kind of thing," Arthur says. "It's people who like process and pattern, as opposed to people who are comfortable with stasis and order. I know that every time in my life that I've run across simple rules giving rise to emergent, complex messiness, I've just said, 'Ah, isn't that lovely!' And I think that sometimes, when other people run across it, they recoil."
The brutal fact was that there was very little research funding out there even for mainstream economics, much less for this speculative Santa Fe stuff. "It turns out that economics is very poorly supported in this country," says Cowan. "Individual economists are paid very well, but they don't get paid for doing basic research. They get paid from corporate sources, for doing programmatic things. At the same time, the field gets remarkably little in the way of research money from the National Science Foundation and other government agencies because it's a social science, and the government is not a big patron of the social sciences. It smacks of 'planning,' which is a bad word."
"adaptive computation": an effort to develop a set of mathematical and computational tools that could be applied to all the sciences of complexity, including economics.
John Holland's ideas about genetic algorithms and classifier systems had long since permeated the institute, and would presumably form the backbone of adaptive computation.
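To give a flavor of what a genetic algorithm is, here is a minimal sketch in Python. It is a toy, not Holland's actual code or anything used at the institute: the all-ones "fitness" target, the parameter values, and the function names are invented purely for illustration. What it does share with Holland's idea is the essential loop of selection, crossover, and mutation acting on a population of candidate solutions.

```python
# Toy genetic algorithm: evolve bit strings toward all ones.
# The problem, parameters, and names here are illustrative assumptions,
# not a reproduction of Holland's classifier-system software.
import random

POP_SIZE, STRING_LEN, GENERATIONS = 30, 20, 60
MUTATION_RATE = 0.01

def fitness(bits):
    """Count the ones: this toy 'environment' rewards strings nearer to all ones."""
    return sum(bits)

def select(pop):
    """Fitness-proportionate ('roulette wheel') choice of one parent."""
    total = sum(fitness(b) for b in pop)
    pick = random.uniform(0, total)
    running = 0
    for bits in pop:
        running += fitness(bits)
        if running >= pick:
            return bits
    return pop[-1]

def crossover(a, b):
    """Single-point crossover: splice the front of one parent onto the back of the other."""
    point = random.randint(1, STRING_LEN - 1)
    return a[:point] + b[point:]

def mutate(bits):
    """Flip each bit with a small probability."""
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(STRING_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
    print(gen, max(fitness(b) for b in population))
```

Run as written, the best fitness in the population tends to climb toward the maximum over the generations, which is the whole point: good partial solutions are recombined and gradually refined without anyone specifying the answer in advance.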
A lively cross-fertilization was well under way; witness Doyne Farmer's "Rosetta Stone for Connectionism" paper, in which he pointed out that neural networks, the immune system model, autocatalytic sets, and classifier systems were essentially just variations on the same underlying theme. Indeed, Mike Simmons had invented the phrase "adaptive computation" one day in 1989, as he and Cowan had been sitting in Cowan's office kicking around names that would be broad enough to cover all these ideas, but that wouldn't carry the intellectual baggage of a phrase like "artificial intelligence."
he was also hoping that the program would give economists, sociologists, political scientists, and even historians some of the same precision and rigor that Newton brought to physics when he invented calculus. "What we're still waiting for, and it may take ten or fifteen years, is a really rich, vigorous, general set of algorithmic approaches for quantifying the way complex adaptive agents interact with one another," he says. "The usual way debates are conducted now in the social sciences is that each person takes a two-dimensional slice through the problem, and then argues that theirs is the most important slice. 'My slice is more important than your slice, because I can demonstrate that fiscal policy is much more important than monetary policy,' and so forth. But you can't demonstrate that, because in the end it's all words, whereas a computer simulation provides a catalog of explicitly identified parameters and variables, so that people at least talk about the same things. And a computer lets you handle many more variables. So if a simulation has both fiscal policy and monetary policy in it, then you can start to say why one turns out to be more important than the other. The results may be right or they may be wrong. But it's a much more structured debate. Even when the models are wrong, they have the enormous advantage of structuring the discussion."
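Arthur's point about explicit parameters can be made concrete with a deliberately crude sketch. The one-equation "economy" below is invented for illustration only; it is not an SFI model, and the coefficients on the fiscal and monetary channels are arbitrary assumptions. What matters is that once the two policies exist as named parameters, the claim "fiscal policy matters more than monetary policy" becomes something you can test inside the model and argue about on shared terms.

```python
# A toy illustration of Arthur's methodological point, not a real economic model.
# The dynamics and coefficients are made up; only the structure of the
# argument (named, inspectable parameters) is the point.

def simulate(gov_spending, interest_rate, periods=40):
    """Run a made-up one-equation economy and return final output."""
    output = 100.0
    for _ in range(periods):
        fiscal_push = 0.02 * gov_spending        # assumed fiscal channel
        monetary_drag = 0.5 * interest_rate      # assumed monetary channel
        output += fiscal_push - monetary_drag + 0.01 * (100.0 - output)
    return output

# The "structured debate": hold one parameter fixed, vary the other.
# The relative importance of each channel is now a property of the model,
# open to inspection and criticism, rather than a matter of rhetoric.
baseline      = simulate(gov_spending=50, interest_rate=3.0)
more_fiscal   = simulate(gov_spending=60, interest_rate=3.0)
tighter_money = simulate(gov_spending=50, interest_rate=4.0)
print(baseline, more_fiscal - baseline, tighter_money - baseline)
```

Whether the printed differences are realistic is beside the point; the exercise shows why even a wrong model forces the disputants to name their assumptions.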
the institute might actually be much better off without a permanent faculty. "The virtue was that we were more flexible than we would have been," says Cowan. After all, he'd realized, once you hire a bunch of people full time, your research program is pretty well cast in concrete until those people leave or die. So why not just keep the institute going in its catalyst role? It had certainly worked beautifully so far. Keep going with a rotating cast of visiting academics who would stay for a while, mix it up in the intellectual donnybrook, and then go back to their home universities to continue their collaborations long distance, and, not incidentally, to spread the revolution among their stay-at-home colleagues.
I feel as though I've taken a new lease on life, at the cranium part of my body. And that, to me, is a major accomplishment. It makes everything I've ever done here worthwhile."
The issue that grabs him the hardest, he says, is adaptation, or, more precisely, adaptation under conditions of constant change and unpredictability. Certainly he considers it one of the central issues in the elusive quest for global sustainability. And, not incidentally, he finds it to be an issue that's consistently slighted in all this talk about "transitions" to a sustainable world. "Somehow," he says, "the agenda has been put into the form of talking about a set of transitions from state A, the present, to a state B that's sustainable. The problem is that there is no such state. You have to assume that the transitions are going to continue forever and ever and ever. You have to talk about systems that remain continuously dynamic, and that are embedded in environments that themselves are continuously dynamic." Stability, as John Holland says, is death; somehow, the world has to adapt itself to a condition of perpetual novelty, at the edge of chaos.
Of course, adds Cowan, it may be that concepts such as the edge of chaos and self-organized criticality are telling us that Class A catastrophes are inevitable no matter what we do. "Per Bak has shown that it's a fairly fundamental phenomenon to have upheavals and avalanches on all scales, including the largest," he says. "And I'm prepared to believe that." But he also finds reason for optimism in this mysterious, seemingly inexorable increase in complexity over time. "The systems that Per Bak looks at don't have memory or culture," he says. "And for me it's an article of faith that if you can add memory and accurate information from generation to generation, in some better way than we have in the past, then you can accumulate wisdom. I doubt very much whether the world is going to be transformed into a wonderful paradise free of trauma and tragedy. But I think it's a necessary part of a human vision to believe we can shape the future. Even if we can't shape it totally, I think that we can exercise some kind of damage control. Perhaps we can get the probability of catastrophe to decrease in each generation."
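For readers who want to see the phenomenon Cowan is describing, the sandpile model that Per Bak and his colleagues studied can be sketched in a few dozen lines of Python. The grid size and the number of grains below are arbitrary choices for illustration, but the rule itself is the standard Bak-Tang-Wiesenfeld model: any site holding four or more grains topples, shedding one grain to each neighbor. Running it produces toppling cascades, avalanches, of every size without any parameter being tuned.

```python
# Sketch of the Bak-Tang-Wiesenfeld sandpile: drop grains one at a time and
# record the size of each toppling avalanche. Grid size and grain count are
# arbitrary illustrative choices.
import random
from collections import Counter

N = 30                                        # the pile is an N x N grid
grid = [[0] * N for _ in range(N)]
avalanche_sizes = []

for _ in range(20000):
    # Drop one grain on a random site.
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1

    # Any site with 4 or more grains topples, giving one grain to each
    # neighbor; grains pushed past the edge simply fall off the table.
    size = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        size += 1
        if grid[x][y] >= 4:                   # may still be unstable
            unstable.append((x, y))
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < N and 0 <= ny < N:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    if size:
        avalanche_sizes.append(size)

# Avalanches come in every size: a glut of tiny ones, a scattering of huge ones.
histogram = Counter(avalanche_sizes)
for s in sorted(histogram)[:10]:
    print(s, histogram[s])
print("largest avalanche:", max(avalanche_sizes))
```

The tally at the end shows the signature Cowan refers to: most avalanches are small, but upheavals spanning much of the grid keep occurring, and nothing in the rules singles out any particular scale.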