Thursday, May 26, 2005

Salt of the Earth

In The Edge of Chaos, Chaos, and Heroism I said that living systems are at their most evolutionarily fecund when, recently having visited chaos itself, they are returning to the dynamical "edge of chaos" — a paradoxical fact that makes them all "heroes" in a way. For what is a hero but one who accepts the perils of chaos and death to restore life to the community? And who in the history of the West is more of a hero than Jesus Christ, who "harrowed" the depths of hell after his sacrificial death on a cross, prior to his unexpected resurrection and glorious ascension into heaven?

In this way, I said, the scientific insights of theoretical biologist Stuart Kauffman, set forth in At Home in the Universe, furnish "a point of tangency between the theory of evolution 'beyond Darwin' and the wisdom of one of our major faith traditions." Now I would like to explore at least one of the theological implications of that point of tangency.

In Matthew 5:13, just after the "Blessed are ... " beatitudes, Jesus tells his flock, "You are the salt of the earth. But if the salt loses its saltiness, how can it be made salty again? It is no longer good for anything, except to be thrown out and trampled by men" (translation: New American Bible for Catholics).

Salt was, in those days before refrigeration, used to preserve meat, to keep it from spoiling. In Jesus' era, the Jews as a nation were in desperate need of preservation. They were under the heel of the Roman emperor, and, indeed, only a few decades after the Gospel events, they would lose their war of independence with Rome and be scattered. Knowing they were in extremis as a free nation and an independent people, many Jews looked for the imminent arrival of the long-promised messiah.

The Gospels make it clear that Jesus (whom Christians believe was that messiah) thought the ancient religious practices of his fellow Jews had gone wrong. Instead of fostering unity of rich with poor, the wealthy and powerful were scapegoating the poor and afflicted, claiming their poverty and infirmity betokened God's punishment of these individual Jews — and by extension all Jews and the whole Jewish nation — for their sins. If we read between the lines, Jesus' "preferential option" for the weak and the weary was the message he sent, among other ways, by opting to take his meals with outcasts.

We, of course, find echoes of this early form of Jesus' heroism in our Robin Hood stories.


Albert Nolan's Jesus Before Christianity
Now, it is risky to associate with lepers (or so it was thought). The outcasts Jesus embraced were all, if not literal lepers, social lepers, spurned by "all the right people." (I'm getting this from an excellent study of the Gospel message, Jesus Before Christianity, by a South African Dominican priest, Father Albert Nolan.) So the "salt of the earth" were those of his flock who, if not themselves afflicted or poor, were willing to join Jesus in associating with all the wrong people.

Seen in the light of Stuart Kauffman's dynamical systems, the implications are profound. These systems contain a plethora of individuals (or individual subsystems, the equivalent of "nations"). Each individual or nation wants to live long and prosper. To do this, it needs to evolve to a "peak" on an abstract "fitness landscape" where hills and mountains represent high Darwinian fitness, and dales and valleys betoken low Darwinian fitness.

Here's the crucial thing: the fitness of the system as a whole — its survivability, as it were — depends on the individuals and nations not retreating so far up their own traditional fitness peaks as to cut off communication with those on other peaks.

Why? Because, in ways Kauffman lays out semi-mathematically, those entity-to-entity, peak-to-peak interchanges make the system stronger. For example, in a biological ecosystem the interactions of the various member species allow the ecosystem to gravitate back to the fecund edge of chaos, should a catastrophe bring the onset of chaos.

But if some members of the system completely shun other system members, eschewing dialogue and interchange, the system becomes brittle, not supple. The next catastrophe is apt to take it down the tubes.

Such member-to-member dialogue within a living dynamical system is yet more powerful than that. Kauffman shows that by virtue of it, the members "co-evolve." They learn better and better survival strategies as they adapt in response to others' adaptations.

This process of co-evolution, Kauffman reveals, even causes the fitness landscape itself to change. What were peaks can become valleys, and vice versa. Anyone who insists on not changing with the times is apt to disappear from the scene.
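To make the peak-and-valley talk concrete, here is a toy sketch in Python. It is my own illustration, not Kauffman's actual NK model: an "adaptive walk" climbs to a local peak on a random fitness landscape, then the landscape is deformed (here, simply re-randomized), and the hard-won peak may no longer be high ground. The sizes and seeds are arbitrary.

```python
import random

N = 12  # genotype: an N-bit string

def make_landscape(seed):
    """Assign a random fitness to every possible genotype."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(2 ** N)]

def as_index(bits):
    return int("".join(map(str, bits)), 2)

def adaptive_walk(landscape, bits):
    """Flip one bit at a time, always uphill, until a local peak is reached."""
    while True:
        best_fit, best_bits = landscape[as_index(bits)], None
        for i in range(N):
            trial = bits[:]
            trial[i] ^= 1
            if landscape[as_index(trial)] > best_fit:
                best_fit, best_bits = landscape[as_index(trial)], trial
        if best_bits is None:          # no uphill neighbor: a local peak
            return bits, best_fit
        bits = best_bits

rng = random.Random(7)
start = [rng.randint(0, 1) for _ in range(N)]
old_peak, old_fit = adaptive_walk(make_landscape(seed=1), start)
print("fitness atop the old peak:    ", round(old_fit, 3))

# co-evolution deforms the landscape: same genotype, new terrain
new_landscape = make_landscape(seed=2)
print("same genotype, new landscape: ", round(new_landscape[as_index(old_peak)], 3))
```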


The Jewish nation — but, thankfully, not the Jewish people — disappeared in about 70 A.D., after the siege and fall of Jerusalem, at about the time Christians were ceasing to call themselves a Jewish sect and were beginning to worship Jesus, not just as the longed-for messiah, but as true God come to earth. Yet I don't intend this as a retrospective "I told you so" story. I intend it to suggest, rather, that Jesus was a very wise man. He didn't need computer models and cyber-simulations to tell him that solidarity is powerful and dialogue is divine. He simply knew.

The theological implication is clear. If, as Christians or as members of any other religion, we entrench ourselves behind ideological fortress walls on what is, or used to be, a much-vaunted fitness peak, we risk that the co-evolution going on around us will change the landscape, level our peak, and crumble our walls.

Now, to furnish a concrete example — and to risk the scorn of theological conservatives — I refer you to two posts I made to my A World of Doubt blog: Cardinal Keeler's Boycott and Of Pedestals and Fulcrums. The point of both was to criticize my local archbishop, Cardinal William Keeler, for refusing to share the stage at Loyola College's recent commencement exercises with New York City's ex-mayor Rudy Giuliani.

Giuliani was asked to furnish the keynote address in honor of his courageous leadership in the wake of the 9/11 World Trade Center attack, which occurred just as these graduates were beginning as freshmen. But Giuliani had gone on record, as a public official, as supporting women's abortion rights, however much he privately disapproves of abortion. So the Baltimore Archdiocese announced it would send no representative to the graduation ceremony of a Catholic institution of higher learning within its jurisdiction.

Not that the festivities would have included words from Keeler or any of his minions anyway; archdiocese representatives do not customarily speak at Loyola graduations. Still, the cold shoulder was a symbolic way of saying no to dialogue with lay Catholic public servants who, like Giuliani and last year's presidential also-ran, Senator John Kerry, feel they should uphold the law of the land — which, since Roe v. Wade in 1973, permits abortions.

"No to dialogue" is not a salt-of-the-earth strategy. Dialogue and solidarity, not stiffness and self-righteousness, even in the most godly of causes, are the only way the salt which has "lost its saltiness" can be made salty again. They are the one way that systems threatened by chaos, dissolution, and death can be restored to life and health.

Tuesday, May 24, 2005

On the Need for Boundaries in Nature

I continue to read Stuart Kauffman's profound book At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. Looked at in a certain way, his chapter 6, "Noah's Vessel," offers insights into why there has to be competition, struggle, and even open warfare in nature.

The theme of the chapter is the propensity for organic molecules to broker the formation of other organic molecules by dint of what chemists call catalysis. If you bring a sufficient number of potential reactants together with a sufficient number of potential catalysts, a chain reaction of ever-increasing organic molecular diversity is apt to ensue. This is the condition Kauffman labels "supracriticality."

On the other hand, if either the number of potential reactants or the number of potential catalysts is brought below some critical value, the system returns to being "subcritical." The diversity of novel molecules will, in this case, never mushroom.
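One way to see the threshold at work is a toy branching model, my own simplification rather than Kauffman's chemistry. Each round, every pair of existing molecular species is a candidate reaction, and a candidate yields a novel product only if at least one existing species happens to catalyze it. Below some starting diversity the process fizzles; above it, diversity mushrooms. The probability figure and set sizes here are arbitrary.

```python
import random

def run(n_start, p_catalysis=1e-4, rounds=8, cap=100_000):
    """Track molecular diversity over successive reaction rounds."""
    rng = random.Random(0)
    diversity, history = n_start, [n_start]
    for _ in range(rounds):
        candidates = min(diversity * diversity, cap)   # possible reactions
        # chance that a given candidate reaction finds a catalyst
        # among the species already present:
        p_any = 1 - (1 - p_catalysis) ** diversity
        diversity += sum(rng.random() < p_any for _ in range(candidates))
        history.append(diversity)
    return history

print("subcritical:  ", run(n_start=5))    # diversity never takes off
print("supracritical:", run(n_start=50))   # diversity mushrooms
```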

Using a little back-of-the-envelope calculating, Kauffman indicates that the biosphere as a whole is supracritical. If you somehow brought all the perhaps 10,000,000 small organic molecules on the face of the earth together with all the 1 trillion or so proteins, there would be a chaotic explosion of never-before-seen organic molecules (see p. 123).

A whole lot of them would be poisonous to any particular type of cell of any living species. That's one reason why there have to be cell membranes that allow only certain kinds of "foreign" molecules to pass through.

And it's one reason why humans and other organisms eat their food rather than fusing with it (p. 123). We have to break down the organic molecules in the organisms we eat and then build them back up into possibly different molecular species that won't throw our cells into a supracritical tizzy.


The biosphere as a whole is supracritical, yet all cells are necessarily subcritical. Is there a happy medium anywhere in nature? Kauffman says there is.

He says that ecosystems — each made up of some number of interacting species, and each in contact with some number of exogenous organic molecules arriving, possibly, from other ecosystems — evolve to the line between subcritical and supracritical behavior.

For example, the diverse bacteria and other mini-critters in a human intestinal tract are an ecosystem (see p. 127). Each such single-celled organism is presumably subcritical, molecularly speaking. Even the propensity of the bacterial species to trade their various organic molecule types will not bring chaos and death to any of them.

They are also bathed in a certain number of exogenous types of organic molecules from the food we eat. As long as that number does not take the ecosystem supracritical, well and good.

If the ecosystem as a whole goes supracritical, however, some bacterial species will respond to the novel poisons that are inevitably introduced by going extinct, at least as far as the gut-internal ecosystem itself is concerned. On the other hand, if the ecosystem as a whole is well below the subcritical-supracritical line, it will tend to take on more bacterial species, either by mutation or by migration from outside the body.

In other words, the ecosystem will gravitate over time to the subcritical-supracritical edge. Again we encounter an abstract edge much like the edge of chaos to which, Kauffman says, natural selection "tunes" biological systems which need to evolve gracefully but which also need to avoid the perils of chaos proper.


Extending the analogy, we can think of the supracritical biosphere-as-a-whole as, in the absence of internal buffering mechanisms, chaotic. Each subcritical cell with its protective membrane is well into the ordered regime. And each ecosystem is poised at, or in the ordered regime near, the edge of chaos.

Playing into nature's ability to keep its balance in this way are, of course, other boundaries at other levels of organization ... not just the cell membrane at the level of the cell. There is the very real boundary between one organism and the next. And, in an ecosystem, there is the more abstract boundary between species — or, more accurately, between species' populations.

These boundaries introduce the possibility of selfishness, witting or unwitting. It can be a dog-eat-dog (or cow-eat-grass) world. At the level of an ecosystem, the introduction of a foreign species can drive existing species beyond the vanishing point to extinction.

The boundaries at various organizational levels also introduce the possibility of you-scratch-my-back-I'll-scratch-yours mutualism — again, witting or unwitting. Peeking ahead, I'm aware that Kauffman finds cooperation a much more potent evolutionary force than competition. I don't want to get ahead of him on this score, but, for now, I would like to point out certain implications of the above for religion.


In struggling to come to a belief in God, some people ask this question: if God is so good, why is nature so cruel?

If we evolved, as Darwin said, from apelike ancestors, nature's capacity for cruelty implicates us — as our "inhumane" behavior often so amply bears out. Would God have created us "in his own image," bearing such a legacy from our forebears?

Killing and eating other living things would seem, indeed, to be "the root of all cruelty" — even when we humans, as "highly evolved" as we think we are, just kill and don't eat. No animal would have to kill to eat (or for any other purpose, really) if we could just fuse with other organisms on an equal-to-equal basis. (OK, I admit that plants don't kill to eat. But plants don't have brains. What I'm saying applies to planets like ours, some of whose organisms have to eat to support brains.)

As Kauffman points out, such a meld-with-your-meat strategy would mean the end of life on this planet. As the supracriticality of the biosphere-as-a-whole invaded every living cell, poisoning all, the biosphere-as-a-whole would simply die. And so would we. (Or, OK, maybe there would just be plants.)

Nature is "red in tooth and claw," if Kauffman is right about how self-organized complexity works, because that's the only way it can be. There have to be boundaries at various levels of organization, with the attendant possibility of competition and cruelty. If God wanted it to be any other way, he'd have had to set us and our fellow creatures up as dummies in a heavenly wax museum, not in a real world like ours!

Monday, May 23, 2005

Order and Ontogeny (I.D. XXV)

I know I said in Dembski's Achilles' Heel (I.D. XXIV) that I'd shut up for a while about the intelligent design theory as spelled out in William A. Dembski's book named, appropriately, Intelligent Design. Yet the more I read Stuart Kauffman's At Home in the Universe, the more I find it works against Dembski's proposition that natural biological order vouches for design from above.

Kauffman's chapter 5, "The Mystery of Ontogeny," is a case in point. Undesigned "order for free," as Kauffman calls it, manifests itself in every living cell.

In ontogeny a fertilized egg cell, or zygote, splits. Then each daughter cell re-splits, and the resultant cells split again, for up to fifty cell generations. The result, if the embryo is that of a human, will be an organism like you or me. A mature Homo sapiens has some 250 cell types, among them skin cells and nerve cells and the cells lining the gut which produce hydrochloric acid and help break down food.

From one cell type, the zygote, come many cell types — but still a tolerably small number, considering the zillions of possibilities when there are 100,000 genes in the human genome, "and 10^30,000 possible patterns of gene expression" (p. 107).
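That figure is easy to check. With 100,000 genes, each either on or off, there are 2^100,000 possible on/off patterns; a one-liner converts that to powers of ten:

```python
import math

# 2**100_000 expressed as a power of ten:
print(100_000 * math.log10(2))   # ~30103 -- i.e., roughly 10^30,000
```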

Each cell type corresponds to an "attractor," Kauffman shows. An attractor is a set of states of a network, in this case, the network of genes and the proteins they code for in a cell. Gene A might produce, indirectly, a protein which turns off ("deactivates" or "represses") gene B, or turns on ("activates" or "promotes") gene C.

The pattern of which genes are on and which are off determines the cell type, though all cells basically have an identical copy of the genome. But, to maintain its pattern, a cell has to engage in a round-robin of molecular reactions, a series of intermediate states which form a closed loop.

That closed loop is the attractor for the type of cell it is. When the cell is exposed to oddball chemical influences from outside itself, it is perturbed temporarily away from its attractor. But, due to homeostasis, gradually it tamely returns to the state cycle which constitutes its attractor. The reason for this, Kauffman shows, is that the perturbation leaves it in the original "basin of attraction," wherein the attractor itself is bound to reassert itself.
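A tiny sketch makes the idea tangible. The three-gene wiring below is my own invention (Kauffman's networks are vastly larger), but it behaves as described: start it anywhere and it funnels into a closed loop of states, its attractor.

```python
def step(state):
    """Synchronous update of a made-up three-gene network (0 = off, 1 = on)."""
    a, b, c = state
    return (1 - b,            # gene A is repressed by B's product
            a & c,            # gene B needs both A's and C's products
            (1 - a) | b)      # gene C is on unless A represses it, or if B promotes it

def find_attractor(state):
    """Iterate until a state repeats; the repeating loop is the attractor."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state)
    return trajectory[seen[state]:]   # the closed state cycle

for start in [(0, 0, 0), (1, 1, 1), (1, 0, 0)]:
    print(start, "->", find_attractor(start))
```

This toy network turns out to have two attractors, a three-state cycle that captures nearly every starting state and a fixed point at (1, 0, 0); in Kauffman's picture, the two would correspond to two distinct cell types.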

(See Basins of Attraction for more about attractors and basins.)


In Dembski's terms each cell, of whatever type, represents complex specified information, or CSI. That a cell's very existence is an "event" whose content is "information" is obvious from the fact that there are so many possible cell types from which the 250 types we humans possess are drawn. The type of any given cell reduces our uncertainty about which possible cell configuration we are dealing with — and uncertainty reduction is, Dembski says, the mark of information.

That the cell is complex is obvious from the fact that there are so many genes and proteins involved in the business the cell transacts.

That the cell is specified has to do with its unique, independently justifiable function. An HCl-producing gut cell has a specific function, apart from whatever its molecular constitution happens to be. So does a nerve cell, though it's a different function. So, too, does a skin cell. Each cell type's function is, in Dembskiyan terms, its "specification."

Dembski says that CSI in nature implies an intelligent designer. For where else could it come from?

Well, possibly from self-organization, Kauffman implies. (Kauffman was writing in advance of the intelligent design movement and does not address Dembski's claims directly.)


Kauffman shows that any genome, human or not, is apt to manifest gene-protein activity similar to that found in abstract networks that use what he calls "canalyzing" Boolean functions (see p. 103).

Boolean networks in general are those in which network constituents receive "inputs" about the states (on or off) of some of the other constituents. Based on these inputs, each of which is either a 1-for-on or a 0-for-off, a constituent derives its own "next" state, on or off.

To do so, it uses one of a potentially large number of Boolean functions. A simple example is the OR function. If a constituent has exactly two inputs, and either is on (or both are on), that constituent will be on in the next time period. But if neither input is on, the constituent will be off.

In the case of a cell, the constituent may be a gene which codes for a protein that breaks down lactose or "milk sugar" (see pp. 100ff.). The Boolean function may be NOT IF, such that if a repressor molecule (made by another gene) is present, the gene will be inactive ... but not if there is also present a molecule called allolactose, which will be present if lactose is. In this way, a bacterium "turns on" its ability to process lactose only when lactose is actually present.

Genetic networks are, Kauffman says, essentially Boolean networks. When each gene has just two molecular inputs — in this example, the repressor and allolactose — the network is bound to be in the homeostatic, orderly regime.

But when each gene has more than two molecular inputs, the cell is prone to chaos — unless the Boolean functions used by the genes are "canalyzing." In that special case, one value of one of the several molecular inputs to a gene — either "present" or "absent" — renders all the other inputs irrelevant.

Luckily, canalyzing Boolean functions for manifold (i.e., more than two) inputs, though they are demonstrably in the minority, "are simple to build from a molecular point of view" (p. 106). So living cells are likely to "prefer" them. And even if they didn't prefer them, natural selection is apt to rule out cell types that feature non-canalyzing Boolean functions, since such cells would be prone to chaos.
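The lactose logic above can be written out as a two-input Boolean function, which also makes its canalyzing character easy to see. Here is my own rendering of it:

```python
def gene_active(repressor_present, allolactose_present):
    """The "NOT IF" function: the repressor silences the gene --
    but not if allolactose is present to disable the repressor."""
    return (not repressor_present) or allolactose_present

for r in (False, True):
    for a in (False, True):
        print(f"repressor={r!s:5}  allolactose={a!s:5}  ->  active={gene_active(r, a)}")

# Canalyzing: allolactose_present=True forces active=True,
# whatever the repressor input says.
```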

All of which means that calm, orderly, useful cell types which gravitate to a tolerably small number of homeostatic attractors are to be expected in nature.


And that finding calls Dembski's ideas seriously into question. For CSI to imply design, it has to (among other things) be unlikely. The probability of the manner in which it is organized having happened by sheer luck of the draw must be low. Only then can we leverage the fact that it is "specified" — has a comprehensible function above and beyond the manner in which it is organized — into a presumption of design.

Kauffman makes it clear that attractors, ratified by or culled by natural selection, rather easily account for various cell types and, consequently, cell functions. That there is an attractor in the human genomic network for cells that produce hydrochloric acid is not all that surprising, given self-organized "order for free" and selectional sifting.

Furthermore, the specifics of the human gut cells' chemistry are not all that crucial. There are almost certainly other attractors in other hypothetical genomes which also promote HCl production. If humans didn't have the genome-cum-attractor that they do, there'd be some other cell type in our biosphere which produces hydrochloric acid and can serve digestion. We'd use that. The biochemical details would be different, but the specification would be the same. Any candidate specification, all-important to CSI in Dembski's worldview, is so much easier to implement, by virtue of Kauffman's self-organization, than our ordinary estimates of probability would indicate.


This hurts Dembski's case in the following way. Dembski assumes that complexity and improbability are virtually synonyms. That is, if something such as a human gut cell is highly complex, then it must be highly unlikely. It must have an ultra-low probability of coming to be.

For instance, imagine a lottery in which various candidate genes — or, equivalently, the proteins which the genes code for — are drawn at random. There may be 100,000 genes/proteins in our genome. In a gut cell, as in any other cell type, a large number of these genes are "off" or inactive. But still, to draw just the ones that need to be present and active in an HCl-producing human gut cell would happen only at vanishingly small odds.

Looked at this way, complexity equals improbability.
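To put rough numbers on that lottery (the figures are illustrative, not Dembski's): suppose a tenth of the 100,000 genes must be the active ones in a given cell type. The number of ways of choosing them is so large it can only be handled in logarithms.

```python
import math

total_genes, active_genes = 100_000, 10_000

# log10 of C(100000, 10000), computed via log-gamma because the
# number itself is astronomically large:
log10_ways = (math.lgamma(total_genes + 1)
              - math.lgamma(active_genes + 1)
              - math.lgamma(total_genes - active_genes + 1)) / math.log(10)

print(f"one chance in roughly 10^{log10_ways:,.0f}")   # about 10^14,000
```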

But Kauffman's insights into self-organization and "order for free" make such complexity expectable, not improbable. In Kauffman's world, complexity does not equal improbability.

Which cuts Dembski off at the knees. If complexity such as is found in nature is not as improbable as we might have thought, then CSI is less likely to imply intelligent design!

Sunday, May 22, 2005

The Edge of Chaos, Chaos, and Heroism

Theoretical biologist Stuart Kauffman, in At Home in the Universe, shows that there is a dynamical-systems regime called the "edge of chaos" toward which life gravitates in order to evolve gracefully.

The edge of chaos may also be thought of as a “place” to which a living system — a genome, a cell, an embryo, an organism, an ecosystem, the biosphere — can return in order to be revitalized after it has, as the result of some perturbation, external or internal, been pushed over into the chaotic regime. In this sense, the edge of chaos functions as Mother Nature’s own personal health spa — or as a sort of cosmic Betty Ford rehab clinic.

Roger Lewin's Complexity: Life at the Edge of Chaos
Living entities are complex adaptive systems whose nonlinear dynamics make them candidates for chaos, however inimical chaos may be to the survival of them or their constituent parts. A hypothesis about how chaos and the edge of chaos function with respect to living systems is broached in Roger Lewin’s Complexity: Life at the Edge of Chaos (see pp. 63-69, original ppbk. edition).

Lewin was asking theoretical biologist Stuart Kauffman, author of At Home in the Universe, to explain the Cambrian explosion. Six hundred million years ago, it seems, the world’s oceans teemed with a plethora of single-celled life forms – bacteria, algae, etc. – but as yet no multicellular life. Then, boom! In a geological heartbeat there appeared, in the waters of the earth, not just a smattering of many-celled body plans — or, in biologist-speak, phyla — but a hundred or so of them, all at once.

This, the so-called Cambrian explosion, was a unique event in the earth’s biohistory. Afterward, in short order, all but thirty or so of these new phyla — the ones we still have today — went extinct. After that time, there were no more major phyla-creating episodes.

In ensuing geological eras, though, there were occasional mass extinctions of species, followed by rapid bursts of new species that were all created within existing phyla. As a result, the aggregate number of species that have appeared on the face of this planet proliferated vastly – although 99.9 percent of them would succumb to later extinctions.

One of the greatest of these mass deaths happened 250 million years ago, when the Permian extinction wiped out about 96 percent of the species that existed at the time. In rebounding from that catastrophe, the planet’s biosphere created a wealth of new species – but no additional phyla.


What, then, explains the distinction between the relatively high-level creativity in the Cambrian (i.e., at the level of phyla) versus the relatively low-level creativity of the post-Permian (at only the level of species)?

Science offers several possible explanations, any or all of which may capture part of the overall picture. The one posited by Stuart Kauffman in Lewin's Complexity interests me because it involves the edge of chaos. The basic idea is this (see p. 69). The Cambrian biosphere was “a system in the chaotic regime, a system that was moving toward, but had not yet reached, the edge of chaos.” Accordingly,

... perturbations would cause big avalanches of change [with] unusually innovative novelties. As the system coevolved to a balanced state (the edge of chaos, in Stu’s coevolution model), responses to perturbation would diminish … until a steady turnover state was reached.

Then, after a very, very long time, along came the Permian extinction. In response to some significant disturbance of the biosphere, rafts of old species died out, to be replaced by boatloads of new ones — though not this time augmented with any new phyla.

Why no new entries at higher taxonomic levels? Says Kauffman, “All you would need to explain the difference of innovation after the Permian extinction is that the system is pushed again into the chaotic regime, but not as far. Innovation would occur in the postextinction rebound, but would be less exaggerated.”


To me, this suggests a general pattern to life. Left to its own devices, life spends most of its time in a steady, balanced state at the edge of chaos — or perhaps in the ordered regime not too far from the edge of chaos. Whatever potential living systems may possess for chaos, they are also in some marvelous way biased away from it, toward order.

But, then, stuff happens. When it does, an orderly living system can be pushed deeply or shallowly into chaos by the associated perturbation. When that occurs, parts of the system — if not the whole system — may die. Providentially, though, a coevolutionary dynamic can be set up within the living system, by dint of which the perturbed system qua system can hold onto life and eventually recover its erstwhile poise at the edge of chaos.

As it is recovering, the system is, in a burst of creativity, revitalizing itself. And how profoundly creative the burst is depends on how far into the chaotic regime it had earlier been pushed!

After the whole episode has run its course, the system finds itself rejuvenated. The once and future king again sits on his throne. God’s in his heaven, and all’s right with the world.


It's as if a living system were some kind of hero. Things are going along smoothly for it in a dynamical locale near or precisely at the edge of chaos. Then, pow! Some kind of perturbation happens, and the system is over in chaos proper, where it cannot long survive.

This amounts to the canonical call heard by every hero in every hero saga. If he doesn't heed this call, he, along with all that he cares about, will be gone with the wind. Our hero is typically represented as, until his hand is forced by circumstance, reluctant to act. In this case, it is the unwarranted "perturbation" which forces our system's heroic hand.

Once motivated, the hero takes up his sword, or whatever, and at great personal risk saves the day. The hero's perils are the storybook analogue to the avalanches of creative change, the side effects of the "heroic" activity of restoring the system to life at the edge of chaos. Change, however creative, is always accompanied by peril.

And the eventual restoration of tranquility at the edge of chaos? Its storybook analogue is the peaceable kingdom that was brought to England, eventually, by King Arthur and his heroic Knights of the Round Table.


Jesus was the ultimate hero, to a Christian such as myself. Think of what he did. He freely accepted death on a Cross in order to bring into being the peaceable "kingdom of heaven," which, during his life, he always said was at hand. After his death and before his resurrection, Jesus Christ "descended into hell," says the Apostles' Creed — into, that is, the abode of the dead. This "harrowing of hell," as it has been called, is the analogue of a descent into chaos.

It is important to the standard theology of Christ, the world's ultimate hero, that his crucifixion was freely accepted — just as every hero in every hero saga must at some point agree to go on his own "hero's journey."

Perhaps this is the significance of Jesus's ability to walk on water. Drowning is a metaphor for both death and chaos. Walking on water signifies that, should he ever die, the water-walker has freely accepted his own death.

Here, then, is a point of tangency between the theory of evolution "beyond Darwin" and the wisdom of one of our major faith traditions. To imagine Jesus's "hero's journey" as an analogue of a descent into the dynamical regime of chaos furnishes a possible bridge between science and religion.


Of course, we are all the heroes of our own lives. In every life there are junctures, our own personal Cambrian explosions, when we go through "phase transitions" to maturity. We are challenged by some "perturbation" such as the onset of puberty, and if we are to live, we must accept our peril and change. (But see the post Modern Immaturity in my A World of Doubt blog for some reflections on how, in our modern Peter Pan culture, we have abandoned the rites of passage that traditionally bring maturity.)

We have to stand ready to move from the edge of chaos into chaos proper and fight our way back out again. For it is the return trip wherein lies our personal growth.

Thursday, May 19, 2005

Basins of Attraction

Stuart Kauffman introduces us to a lot of unfamiliar concepts in his At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. His purpose is to show us how the principles of self-organizing systems may have kick-started life on earth, as well as facilitated the further evolution thereof. One of these concepts, introduced in his chapter four, "Order for Free," is that of "basins of attraction."

Within a dynamical, state-changing system, basins of attraction are formed by "attractors" which guide the system's changes of state into repeating cycles. These "state cycles" may involve as few as one single state or as many as the total number of states the system can possibly take on — the full size, in other words, of the system's "state space."

Somewhere in between, a suitably low number of states in the length of the system's state cycle makes for a system that is orderly, but not too orderly. Such a system can avoid both chaos — in the guise of a seemingly neverending state cycle in which no state of affairs is ever revisited — and the type of order in which the system is frozen into a single state forever.

Such a short-state-cycle system can evolve gracefully. It exhibits a lovely order amid change.


An attractor is to a basin of attraction as a lake is to its drainage basin. Any initial configuration of the system's "macrostate" — that is, any set of "microstates" of the system's component entities — converges "downhill" toward the attractor associated with the basin of attraction which that particular initial configuration happens to be in.

Eventually, barring "perturbation" or "mutation," the macrostate of the system will reach the associated attractor and churn through its state cycle (whether long or short) endlessly. The more orderly the system, the shorter its state cycle.

A dynamical system can have any number of attractors and associated basins. At any given time, it is in just one basin — though perhaps not yet on the one attractor which forms that basin — but there will be many other attractors and basins which are not actually in play.

The system can be "perturbed" by arbitrarily changing the "activity" of any one of its component entities. In the simplest case, an "on" entity could be arbitrarily switched "off," or vice versa. That minor alteration could put the system in a different basin of attraction. Kauffman finds, though, that if the system is one whose attractors' state cycles are suitably short, chances are it will remain in the same basin of attraction and head back for the same dynamical attractor (see p. 83).

The system can also be "mutated" by changing which entities respond to which others' activity states, or how they respond to those others' states. Again, when the system has state cycles that are suitably short, simple mutations are apt to keep the system in the same basin of attraction.

Mutations will typically cause the underlying layout of basins and attractors to change for the system as a whole. But, if the system has short state cycles, only slightly. This resistance to wholesale reconfiguration means that short-state-cycle systems can evolve readily but gracefully, with an emphasis on stability and an immunity to chaos.
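Kauffman's perturbation experiment is easy to mimic in miniature. The sketch below (sizes, wiring, and random seed all my own arbitrary choices) builds a random Boolean network in which each of a dozen "genes" reads two others, finds the attractor from a random start, then flips single genes to see how often the system falls back into the same basin of attraction.

```python
import random

N, K = 12, 2
rng = random.Random(42)
inputs = [rng.sample(range(N), K) for _ in range(N)]   # who listens to whom
tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]  # random rules

def step(state):
    """Synchronously update every gene from its two inputs."""
    return tuple(tables[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
                 for i in range(N))

def attractor_of(state):
    """Follow the trajectory until it re-enters itself; return the cycle."""
    seen = set()
    while state not in seen:
        seen.add(state)
        state = step(state)
    cycle, s = [state], step(state)
    while s != state:
        cycle.append(s)
        s = step(s)
    return frozenset(cycle)

home = attractor_of(tuple(rng.randint(0, 1) for _ in range(N)))

returned, trials = 0, 100
for _ in range(trials):
    s = list(rng.choice(sorted(home)))
    s[rng.randrange(N)] ^= 1                 # perturb: flip one gene
    returned += attractor_of(tuple(s)) == home
print(f"{returned}/{trials} perturbations fell back to the same attractor")
```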


Nature's capacity for self-organization, Kauffman says, is the "handmaiden" of Darwin's natural selection. But Kauffman shows that gracefully evolving short-state-cycle systems habitually find the "edge of chaos," the dynamical province where new order for free can emerge without danger of triggering chaos per se. Or, alternatively, they gravitate to a spot in the ordered regime very near that edge — which of these two possibilities is the correct hypothesis, Kauffman cannot say.

When at (or near) the fecund edge, systems do their most creative adapting, their best fitness-generative evolving. So, how does this happen? What takes them to a regime poised stably away from chaos, but still able to evolve flexibly and gracefully, so to become more fit? Kauffman's experiments with computer models suggest that it is natural selection itself which "tunes" them to the edge of chaos.

If self-organization makes evolvability possible in the first place, by originating the initial life forms on the planet, natural selection "tunes" co-evolution to the best possible combination of stability and flexibility. Self-organization and natural selection, taken together, do the dance of life.


I find such notions irresistible. Perhaps this is because I am, in some abstruse way, in the same intellectual and spiritual basin of attraction as Stuart Kauffman.

That would put William Dembski, author of Intelligent Design, in a different basin of attraction, along with his fellow I.D. proponent Michael Behe.

In yet another basin of attraction is Richard Dawkins, author of The Blind Watchmaker and other books extolling pure Darwinism's ability to explain everything, evolution-wise, without recourse to either self-organization or intelligent design.

I would label these three basins, respectively, the theistic-evolutionist, the creationist, and the atheistic-evolutionist. I say so, even though I don't know whether the "theistic-evolutionist" Kauffman believes in God; I do know that he would like to "reinvent the sacred."

I say it even though Dembski and Behe aren't like those creationists who insist on a six-day special creation, as in the book of Genesis, occurring a mere few thousand years ago, based on counting the generations of the Old Testament.

And I say it even though I realize that not all members of the atheistic-evolutionist set are out-and-out atheists, à la Dawkins. Some are better labeled agnostics, à la the late Stephen Jay Gould.

My point in suggesting that there are three major basins of attraction in the evolution debate is this: it is rare for anyone to hop out of one basin over into another, no matter how much or how well other people may argue for it.

The best I can tell, I was somehow deposited into my particular basin from an early age, before I heard of Charles Darwin, and well before I, as a middle-aged adult, "got religion." So, when I (as a young adult) began to learn of the creation-evolution debate, I was naturally on the side of the Darwinists.

Then, when I embraced Christianity, I embraced a style of faith that was open to the notion of divine action in an evolving world. But with this proviso: no creationism per se, even of the evolution-tolerant variety broached by Dembski, Behe, and others who propose intelligent design. Why not? Creationism is in a different basin of attraction.

Wednesday, May 18, 2005

Dembski's Achilles' Heel (I.D. XXIV)

Of late I have been scrutinizing William A. Dembski's argument in Intelligent Design that nature's "complex specified information" (CSI) vouches for God's design in the results of biological evolution. The Fruits of Flow (I.D. XXIII) was my most recent post along these lines. In it, using an argument based on earlier posts in the series, I decided that CSI alone is insufficient to Dembski's purpose.

I said that an informational "event" that possesses sufficient complexity (read, improbability) and whose pattern (as its "specification") can be derived from "background information" independent of the event itself does not necessarily imply design. For example, an adaptational event that is ratified by natural selection possesses the requisite complexity, and its target pattern, Darwinian fitness itself, serves admirably as an independent specification. So such an event is CSI ... and it is not produced by direction or design.

I also said that the type of self-organization Stuart Kauffman proposes in At Home in the Universe when he talks of life's origin in collectively autocatalytic sets combines the requisite complexity with the requisite specification. In this case, the independently specifiable target pattern is catalytic closure, in which the production of new copies of every protein in the set is facilitated by the presence of one or more of the set's other proteins. Catalytic closure (plus a handful of other necessities, such as a rudimentary cell wall) turns a prebiotic soup into a protocell that can self-reproduce ... and it is a pattern that is independently specifiable based on background information.

Here again, I said, is an example of Dembskiyan CSI that is not designed.

And it is more, I now wish to add. I think self-organized complexity is the fundamental Achilles' heel of Dembski's argument to design.


Crucial to Dembski's argument is his idea that nothing in the natural world can create nature's own CSI — not natural law, not chance, not any combination of the two. He uses reasoning based on the determinism of mathematical functions to show that law (i.e., necessity) only moves CSI around; it doesn't create it. Likewise, chance cannot be reasonably ascribed as a source of CSI when it, the CSI, is vastly improbable. Furthermore, in a clever proof, Dembski shows that combinations of necessity and chance are impotent to originate CSI.

What he misses is that lawful behavior à la Kauffmanian self-organization can turn complex unspecified information into CSI. It can fabricate a previously absent specification as an emergent property of a self-organizing system.

For example, when a set of proteins in a prebiotic soup becomes sufficiently complex, it gains catalytic closure as an emergent property that then undergirds self-reproduction, heritable variation (generational changes in the roster of proteins in the set), and Darwinian evolvability.


Another way to look at it is that self-organizing systems "export entropy." Entropy is disorder, the opposite of information.

Entropy is a construct in both information theory and, in the physical sciences, thermodynamics and statistical mechanics. When Kauffman says self-organization produces "order for free," he is saying, I think, the same thing as that such systems export entropy. For they are canonically open, nonequilibrium systems that take in food and energy and expel waste. I imagine that this dissipative process can be understood both thermodynamically and in terms of the flow of information.

As I hope I succeeded in showing in Information, Order, and Entropy (I.D. XXI), the exportation of entropy "creates" information only in a local sense. That is, when you mentally place the system in question inside a tight spatial frame and also within a narrow temporal bracket, it looks like new information magically appears: order for free.

For example, I said, if a system is framed as just a gas-filled cylinder-with-piston, omitting the external weight to which the piston attaches and which conserves the entropy exported by the piston, and if you bracket the system temporally such that it is "born" at the beginning of the piston's inward counterstroke and "dies" at the end of that counterstroke, it looks as if the entropy of the system decreases over the system's "lifetime." During this span of time, order and information emerge magically within this notably open system.
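For readers who want numbers to hang on the piston picture, here is a toy ideal-gas calculation of my own (one mole, isothermal, reversible). Within the frame, the gas's entropy falls; the heat expelled carries an equal amount of entropy into the surroundings.

```python
import math

n, R, T = 1.0, 8.314, 300.0              # moles, J/(mol*K), kelvin
V1, V2 = 2.0, 1.0                        # the piston halves the volume

dS_gas = n * R * math.log(V2 / V1)       # negative: the gas gets more ordered
q_out = -n * R * T * math.log(V2 / V1)   # heat pushed out at temperature T
dS_surroundings = q_out / T              # positive, equal and opposite here

print(f"gas:          {dS_gas:+.2f} J/K")
print(f"surroundings: {dS_surroundings:+.2f} J/K  (globally, nothing is gained)")
```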

Globally, however, no new order or information is created; Dembski is right on this point. Removing the spatial frame and temporal bracket that we have placed around the physical "system" and its "lifetime," we can easily see this.

But never mind. Locally, there is within a self-organized living system an anti-entropic "order for free," and, to that system, this is what counts.


Kauffman hopes someday he can be empirically proven right when he says that collectively autocatalytic sets of proteins are very, very likely to appear in a primordial soup. If so, they would furnish nature with her first lifelike entities, which could then go on to leverage heritable variation and evolve, Darwin-style, under the aegis of blind natural selection.

If this is what happened on the early earth, then somehow, at some point in time, the evolution of these first protocells introduced within the cells a genome made of DNA, and true Darwinian evolution began. But, at first, there was (Kauffman says) no separate genome. In biologists' lingo, there was at first no distinction between the "genotype" and the "phenotype."

Possibly the inclusion of a separate genome happened in the way some biologists hypothesize for the origin of cell nuclei: the originally nucleus-free cells "ate" other cells and forevermore turned those cells into their nuclei.

But, no matter. What is crucial here is that the very first step to a biosphere that could then go on to evolve through mutation-cum-natural-selection was (if Kauffman is right) a self-organized one. By sheer chance — but with notably high probability — enough proteins got together to form a collectively autocatalytic set, and bang!


So life is, first and foremost, an entropy exporter. It staves off disorder and death by creating, locally to itself, order for free. Bounded by its own birth and death, and within its own cell walls, it does what Dembski says it cannot: create information.

Never mind that it actually "steals" this information from abroad, and someday will have to put it back. Ashes to ashes, dust to dust ... but meanwhile, life flourishes in the here and now.

This is why I say that, so viewed, life itself provides Dembski's argument to intelligent design with its Achilles' heel. True, no information within nature as a whole can be created by nature. But place part of nature within an appropriate spatial frame and apt temporal brackets, and if self-organization chances to transpire within that locality, life and evolvability emerge. Creatures learn to export their own entropy, defy all logic, and thrive.


This is not to say that there is no God. Nor is it to say that unaided nature can, evolving as Darwin proposed, produce the irreducible complexity which Dembski and his cohort Michael Behe insist organs like an eye or a wing represent.

I don't claim to know for certain whether the eye, to take one oft-cited example, is irreducibly complex. If it is, then that would very likely mean natural selection, which can account for only cumulative complexity, would be walled off from it.

But Darwin's defenders, Richard Dawkins prime among them, claim the eye is only cumulatively complex.

I can't resolve that dispute. Furthermore, I feel little motivation to try. For I happen to believe in God, even if Dawkins is right.

Even if the eye, or the wing of a bird, or sonar in bats actually developed by the ultra-slow, step-by-step accretion of very minor changes, this fact to me does not exclude God from evolution. What it does do, I think, is exclude the provability of God in evolution. And that is quite a different matter.


Before embellishing that point, at this juncture I would like to mention one other thing. As of this present moment, it is a pure hunch of mine. If I read Kauffman aright, self-organization does more than just kick-start life and its consequent evolution. Once life begins, self-organization also potentiates evolvability itself.

By that I refer to such things in Kauffman's discourse as the "evolution of co-evolution." Kauffman presents a long, involved discussion of this topic; I won't rehash it here. The gist of it is that species in ecosystems co-evolve — they do a dance in which changes in one species keep step with changes in others. For example, when finches live by harvesting nectar from blossoms whose tubes lengthen over evolutionary time, so, too, do the bills of the finches.

Or, when frogs' tongues are adapted to zotting delicious flies whose DNA, in turn, adaptively "learns" to secrete oil from the flies' feet, the frogs' DNA will "learn" to roughen the surface of froggy tongues, to neutralize the slipperiness of the oil.

Such is co-evolution — a process which itself is fraught (Kauffman says) with self-organization. The same principles which guide any self-organizing system to the orderly, but not too orderly, "edge of chaos" — the "place" where homeostasis and graceful evolvability emerge — apply to co-evolving ecosystems. Taken as a system-of-systems, an ecosystem at the edge of chaos possesses much more evolvability than would otherwise be expected. In a word, it is much more fecund a source of evolutionary change.

My hunch, then, is this: a self-organized, co-evolving ecosystem, poised at the edge of chaos, can over time and in conjunction with purely Darwinian principles produce results that seem, in retrospect, like "irreducible complexity."

For the nonce, accordingly, my answer to Michael Behe and others who point to irreducible complexity as a sign of God's intelligent design is this: first, you'd better show definitively that Stuart Kauffman's "laws of self-organization and complexity" can't account for it.


So, with this post, I intend to stop addressing Dembski's and Behe's arguments to intelligent design directly, at least for a while. I'd like to change gears and start talking about a larger view of questions raised by evolution. In this view, the key concept is not intelligent design, it is "divine action."

"Divine action" is an umbrella term used by theologians who inquire into such questions as, "Does God intervene in quantum events to turn their inherent uncertainty into the seeds of providential worldly change?" I myself am unconvinced that this is so, but some experts on science and theology say it may be.

Arthur Peacocke's Paths from Science Towards God
As for myself, I tend to go along with those, like Arthur Peacocke, who say (see his Paths from Science Towards God) that divine action is a matter of "top-down causation" or "whole-part influence." More on that in later posts, but the basic idea is that the types of complex systems which Stuart Kauffman and others deal with have bottom-up emergent properties and they also have top-down holistic effects.

In the latter, influences on the system-as-a-whole "trickle down" to become influences on the component parts of the system. Those parts in turn may be complex wholes with their own internal trickle-down effects. Down through any number of tiers of emergent complexity, causative influence which began outside the system-of-systems can spread.

Imagine God as treating the world-as-a-whole as a complex system-of-systems. Into it, God occasionally feeds information. Peacocke says it is pure information, with no concomitant energy input that would invalidate the second law of thermodynamics. This new information trickles down from level to level of the world-system-as-a-whole and, by dint of whole-part influence, causes entities within the world to experience events which otherwise wouldn't be in the cards for them.

And so on. Because the world is made of complex systems that have self-organized à la the insights of Stuart Kauffman (whose research Peacocke mentions only in passing), we can conclude that this very bottom-up/top-down structure furnishes a way for God to influence what goes on within it.

Such divine causal influences, if they exist, are subtle and manifestly beyond the ability of science to pin down. We would not expect them to qualify as creating complex specified information of the type Dembski expects.

But they might well give evolution a drift which Darwin, as a scientist, could never have explained.

Tuesday, May 17, 2005

The Fruits of Flow (I.D. XXIII)

In Is Fitness a "Specification"? (I.D. XXII), I suggested that fitness, as in Darwin's "survival of the fittest," serves admirably as a specification in William A. Dembski's sense of that word in Intelligent Design. This comes as something of a surprise, since Dembski is an anti-Darwinist.

To Dembski, information that is both complex and specified reliably implies design, and thus an intelligence responsible for the design.

Information is that which, because it somehow gets selected from a range of possibilities, reduces our uncertainty by eliminating all the other possibilities.

Complexity obtains for an informational "event" when the range of alternative possibilities is suitably large, making the selection of the single possibility that gets actualized highly improbable. Think of a royal flush in poker, as against all the other possible hands.

Specification exists for an informational possibility when, "independently of the possibility's actualization, the possibility is identifiable by means of a pattern." The pattern is a specification if and only if some part of our background knowledge can generate it in the absence of any knowledge of the event itself.

But, I said, an adaptation event during the course of Darwinian biological evolution by natural selection possesses a Dembskiyan specification! Namely, a "target" pattern which Darwin termed "fitness" serves as adaptation's specification, by virtue of the fact that we can say, independently of the actual adaptation, what adaptive changes would improve a population's fitness.

But, in this case, the real-world "target" is an unwitting one. There is no conscious archer intentionally shooting arrows at a pre-existing, visible target. Rather, blind natural selection simply removes the arrows that don't happen to strike the target. The fittest genetic variants are "selected for," and the less fit are "selected against."


Accordingly, we can look at, say, an adaptation event which famously occurred in England during the heyday of coal burning. When burning coal blackened the countryside, once light-colored moths turned dark. This was not because coal dust blackened them, but because they adapted genetically to produce more pigment and thus blend in better with their darkened surroundings, so to avoid predation.

But when England stopped its coal pollution and the countryside went back to normal shades, the moths went light again, for the very same reason.

In Dembskiyan terms, the mothly adaptation could be called a case of "respecification" — a word I made up; Dembski doesn't use it. At some earlier time, the pattern the moths adhered to was a specification for light-coloredness. Then came coal, and a new pattern/specification emerged: dark-coloredness. Finally came the abandonment of coal, and the old pattern/specification was reinstated.

The moths were unwittingly being aimed by "the survival of the fittest" — blind natural selection — at a moving target. Which they successfully hit.

The lesson here is that respecification does not necessarily imply intelligent design.


From Stuart Kauffman's work on self-organization, as reported in At Home in the Universe, we can conclude that initial specification need not imply design, either.

Kauffman shows that life may have originated with "collectively catalytic" or "autocatalytic" sets of proteins, wherein the production, out of smaller "food" molecules, of each and every protein in the set is catalyzed by some other protein in the set. When that vaunted attribute of "catalytic closure" is the case, the addition of just a few other simple structures and functions can make the autocatalytic set into a self-reproducing protocell ... and life and evolvability begin!

But catalytic closure is a target that is exceedingly likely to be struck whenever the number of interacting proteins in a set climbs into the low-to-middle double digits. It doesn't much matter which of the millions of available protein "species" are involved. For reasons having to do with network theory, Kauffman shows that if there are a moderately large number of proteins, any proteins, autocatalysis is virtually guaranteed.
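Kauffman's actual argument runs through reaction graphs, but a drastically simplified stand-in shows the flavor of the threshold. Suppose (my assumption, not his) that each protein in a set has some fixed probability p of being catalyzed by any given other member. Closure requires every member to have at least one catalyst, and that probability swings from nearly 0 to nearly 1 as the set grows:

```python
def closure_probability(n_proteins, p=0.2):
    """Chance that every protein in the set has at least one catalyst
    among the other members (an independence assumption, for simplicity)."""
    p_has_catalyst = 1 - (1 - p) ** (n_proteins - 1)
    return p_has_catalyst ** n_proteins

for n in (5, 10, 20, 30, 50):
    print(f"{n:3d} proteins -> closure probability {closure_probability(n):.3f}")
```

The p = 0.2 figure is chosen only to put the threshold in the double digits, where the text above locates it; real per-pair catalysis probabilities are far smaller, and in Kauffman's fuller treatment the combinatorial explosion of possible reactions does the compensating work.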

So catalytic closure of a protein set (call it A) whose size is, say, 30 proteins, would seem to constitute an independent, Dembski-style specification for the set. The protein set itself would be highly complex, since the probability of a set arising which comprises exactly those 30 proteins is quite low. Thus, this set would seem to constitute, per Dembski, "complex specified information," or CSI.

In Holism, Emergent Phenomena, and CSI (I.D. XVI), I think I showed that another candidate set of 30 proteins (call it B) which does not happen to attain catalytic closure contains no less, and no more, information than does this autocatalytic one. That is, the two are equally (im)probable, thus equally complex.

Now I'd like to extend that insight by noting that the major Dembskiyan difference between A and B is that A's signature pattern, i.e., that of catalytic closure, is a specification which we can confirm as such by virtue of the background knowledge that Kauffman gives us, while B seemingly has no pattern, and thus no specification. B is complex, unspecified information whose information content — i.e., its complexity or improbability — is precisely equal to A's.

We can therefore conclude that the initial specification which can turn a haphazard collection of 30 proteins (B, for instance) into a 30-protein autocatalytic set (A) merely by the chance substitution of one protein for another in the original set does not necessarily imply intelligent design.


Putting these two cases together, that of the English moths and that of the moderately large protein sets, we can conclude that neither respecification nor initial specification of complex informational events necessarily implies design. Thus we can conclude, against Dembski's primary claim, that the existence of complex specified information, or CSI, does not always betoken intelligent design.

Dembski might object that the two examples I have given involve "microevolution," in the case of the moths, and self-organization, for the proteins, and so they are both cases in which there is a flow of CSI, but not its original creation. OK, I agree. The respecification in the one case and the initial specification in the other don't actually create new information, much less new CSI.

But I would go on to say that the "creation" of CSI is obviously not the same thing as the emergence of a (new or revised) specification. Nor is it the same thing as the "creation" of (additional) complexity by virtue of, say, a protein set growing from 29 members to 30, since a 29-member set, even without catalytic closure, is, all by itself, manifestly low-probability enough to qualify as CSI. Once information crosses the threshold from insufficiently complex to sufficiently complex in Dembski's scheme, it doesn't become "more CSI-like" by becoming yet more complex.


I'd say that the attaching of a new specification to a formerly unspecified complex informational event — making its sundry parts into a complex system — does not "create" any information at all. Take, for example, the dropping of a keystone into an arch. Before the keystone's arrival, the other stones in the arch were held up by scaffolding. Afterward, the scaffolding can be pulled away, and the arch stands on its own.

Dropping a keystone into a waiting arch does not create new information. It does, however, turn complex unspecified information into complex specified information.

The arch pattern, it is true, is a target struck by artifice, not by law or chance. The arch is not the product of either natural selection or self-organization. But, like the color-changed moths or the earth's putative first autocatalytic set, it is the fruit of flow.

Some complex specified fruits of flow are designed, and some are not. The arch is, the first autocatalytic set isn't.

In fact, I'd go so far as to say that emergence is what happens when, as a fruit of self-organized information flow, a specification appears spontaneously, in the absence of direction or design, thereby turning complex unspecified information (a protein set lacking catalytic closure, for instance) into complex specified information (an autocatalytic set). No new information is created, it is quite true, but information that was not specified becomes information that indeed possesses a specification. And, if Kauffman is right, no direction or design is implicated.

Yes, "order for free" magically appears; no, there is, on net, no new information. This analysis works by virtue of what I said in Information, Order, and Entropy (I.D. XXI). The self-organized system is — has to be — an exporter of entropy, which is the opposite of order and information. It "eats information" in its environment, so to build up its internal order. But, as a result, its environment is a net loser of information. The amount of information the self-orgainizing entity gobbles up, in order to produce its own "spontaneous specification," equals the amount of information its surroundings lose.

The fact that "sponteneous specification" — self-organized emergence — is possible is one reason why Dembski's claim — that to prove design all you need is complex specified information — is not quite complete. You also need what he alludes to briefly, but not at great enough length: complexity that is irreducible.


Irreducible complexity is the cornerstone of the argument for intelligent design made by Dembski's fellow proponent Michael Behe — see The Handmaiden of Design (I.D. XIV). Behe (per Dembski) uses the example of an ordinary mousetrap, none of whose five parts can be deleted without loss of all functionality. I'd say the classic arch is likewise an irreducibly complex system, since removing any single stone leads to its collapse.
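That criterion is easy to render crudely in code. The sketch below is my own toy formalization, not Behe's or Dembski's notation: a system counts as irreducibly complex if the whole works but every single-part deletion fails:

```python
def irreducibly_complex(parts, works):
    """True if the assembled parts work but no one-part deletion does."""
    return works(parts) and all(
        not works([q for q in parts if q is not p])
        for p in parts
    )

MOUSETRAP = ["platform", "spring", "hammer", "catch", "holding bar"]
works = lambda parts: set(MOUSETRAP) <= set(parts)   # all five required

print(irreducibly_complex(MOUSETRAP, works))               # True
print(irreducibly_complex(MOUSETRAP + ["cheese"], works))  # False
```

Note that adding a removable sixth part (my hypothetical "cheese") destroys the irreducibility: the property belongs to the minimal working core, not to any old assemblage that happens to function.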

I'd say that to prove design you need irreducibly complex specified information. Not just CSI, but ICSI, as it were.

But that fact, ahem, reduces Dembski's argument to Behe's. Behe, who wrote the introduction to Dembski's book, clearly agrees with Dembski that the information content of irreducibly complex systems must be specified, so Dembski's contribution to intelligent design theory is far from negligible.

At the same time, it would seem to me that any "system" which deserves the name would have to be specified. Specification would have to be a synonym for "systemness." An arch is a system and not just a pile of stones because of the independently knowable pattern/specification/target which it hits, no?


Which means that, when it comes to implying design, irreducible complexity is really the name of the game.

As Dembski points out, it is to be distinguished from cumulative complexity. Cumulative complexity can be built up by blind Darwinian processes like mutation-cum-natural-selection one slow step at a time. Each step has to be "selected for," which presumably happens when it, all by itself, boosts the fitness of its unwitting innovator. Or, at least, when it doesn't reduce its possessor's fitness, and therefore its ability to compete with other variants for survival.

But — or so Dembski and Behe claim — mutation-cum-natural-selection cannot produce irreducible complexity. I know of no reason to dispute that claim. Darwinism's staunchest proponents (such as Richard Dawkins) apparently agree with it, after all. Dawkins's claim in books like The Blind Watchmaker is that complex — yea, even specified — biological structures like the eye are examples of cumulative complexity, not of irreducible complexity.

Dawkins shows what he thinks is a valid Darwinian path from the first slightly light-sensitive patch of skin to a full-fledged eye, complete with lens and retina. Each mincing step along the way improves its bearer's fitness in some palpable way. His conclusion is that an eye is not in fact irreducibly complex. It is (he would surely agree) a fruit of flow. What's more, it is a fruit of blind, undirected, undesigned flow.
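Dawkins dramatizes cumulative selection in The Blind Watchmaker with his famous "weasel" program, which a few lines of Python can approximate. This is my reconstruction, with an arbitrary mutation rate and brood size; Dawkins himself stresses that the fixed target makes it a teaching toy, since real selection aims at no distant goal:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(parent, rate=0.04):
    """Copy the parent phrase, randomizing each letter with low probability."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else letter
        for letter in parent
    )

def fitness(phrase):
    return sum(a == b for a, b in zip(phrase, TARGET))

phrase = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while phrase != TARGET:
    generation += 1
    # Keep the fittest of parent plus 100 mutant offspring: cumulative,
    # one-small-step-at-a-time selection, not one all-at-once lucky draw.
    phrase = max([phrase] + [mutate(phrase) for _ in range(100)], key=fitness)

print(f"Reached the target in {generation} generations")
```

A few hundred generations typically suffice, where waiting for the whole 28-character phrase to turn up in one blind draw would take on the order of 27 to the 28th power tries.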

Monday, May 16, 2005

Is Fitness a "Specification"? (I.D. XXII)

Information, Order, and Entropy (I.D. XXI) was the most recent in my continuing series of posts concerning William A. Dembski's Intelligent Design. Now, in this post, I want to ask the arcane question, is fitness a specification?

The question struck me as I was reading the paper which became Dembski's chapter 6, "Intelligent Design as a Theory of Information," available online here. I find this paper to be a clearer, more coherent explication of Dembski's ideas than the one in the book — though this may be because I've already read the book!

Dembski's ability to impute intelligent design turns on his definition of information that is not only highly complex, but specified:

The distinction between specified and unspecified information may now be defined as follows: the actualization of a possibility (i.e., information) is specified if independently of the possibility's actualization, the possibility is identifiable by means of a pattern.

If it is independent of the information — if it can be explained instead with reference to "background knowledge" — then the pattern qualifies as a specification, and the information (i.e., the actualization of a possibility that reduces our uncertainty) is specified information.

Some patterns are not specifications, though. If an archer shoots an arrow into a blank wall at random and then paints a target around the arrow, the pattern that results — the target, that is — is a fabrication, not a specification.

Sometimes, we can be sure there has been no post facto target-painting, yet the background information which undergirds the pattern we associate with some piece of information cannot be discerned at all. So the information remains unspecified, pending a requisite increase in our background knowledge. For example, a string of gibberish remains just that unless and until we happen to break the cryptosystem by which it was encoded.

An illuminating illustration of specification-discernment which Dembski gives in his paper (but not in the book) is that of a married couple whose six children each buy them, for their 50th anniversary, coordinated portions of a complete, matching set of china, such that when all six gifts are put together, the whole matching set is there. The completeness of the set and the fact that all pieces match one another constitute a pattern. The fact that "we all know about matching sets of china and how to distinguish them from unmatched sets" is the background information that makes this pattern a specification wholly independent of the actual event.


Now, what I want to know is whether fitness, in the sense spoken of by Darwinists, is a pattern which furthermore is a specification.

The "survival of the fittest" is, clearly, an event and, as such, an instance of information. It is the actualization of a possibility. According to Darwinian evolution theory, it comes about by virtue of the culling action of natural selection. Those variants within a population who chance to have mutations that help them survive, thrive, and procreate are "selected for." Those who do not are "selected against."

You might even say that there is a pattern here. Fitter variants pass their genes to the next generation with greater likelihood than less fit variants.

The question is, is this pattern one that is specified independently of the actual (survival) events themselves? Is there background information by which we can determine the pattern of events independently of the events themselves?

My inclination is to say yes.

Take, for instance, Darwin's Galapagos finches. Suppose one particular species with long bills lives by drinking nectar from long-tubed blossoms. But suppose, for independent reasons, the blossoms' tubes get still longer as the plant generations roll by, such that the finches' bills no longer reach the nectar.

We can expect the birds' bills to lengthen like Pinocchio's nose over the course of many, many finch generations. Mutations which code for longer bills will be "selected for," as the older genetic variations which used to produce shorter bills are weeded out.

So there exists a pattern: a target of increased bill length which indeed the finches hit. And I'd say this pattern is indeed a specification, since background information about the theory of natural selection specifies the pattern apart from the (survival) events themselves. That is, we can expect the bills to get longer as the blossoms do. But if the blossoms someday get shorter again, the bills will too, since making longer bills takes up more energy, energy which could be devoted instead to some other finchy function.
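A toy simulation shows the sort of pattern I mean. Everything in it is my own invented illustration (made-up numbers, and a fitness rule that simply favors bills closest to the current blossom length), yet the population's mean bill tracks the independently drifting target, generation after generation:

```python
import random

def next_generation(bills, blossom, keep=0.5, spread=0.3):
    """Survivors are the bills closest to the blossom length;
    each leaves two slightly varied offspring."""
    bills.sort(key=lambda b: abs(b - blossom))
    survivors = bills[: int(len(bills) * keep)]
    return [b + random.gauss(0, spread) for b in survivors for _ in (0, 1)]

bills = [random.gauss(10.0, 1.0) for _ in range(200)]
for year in range(60):
    blossom = 10.0 + 0.1 * year      # the blossoms lengthen on their own
    bills = next_generation(bills, blossom)

mean_bill = sum(bills) / len(bills)
print(f"blossom = {blossom:.1f}, mean bill = {mean_bill:.1f}")
```

Reverse the drift so the blossoms shorten, and the bills duly shorten too; the "target" is given by the selection rule, not by the particular survival events.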

It would accordingly seem that Darwinian fitness is itself a target or pattern which qualifies, on the basis of independent background information — the theory of "what gets selected for and what gets selected against" — as a specification.


It might be objected that fitness is not a stationary target but a moving one. After all, finch bills need to lengthen over the course of evolutionary time under certain circumstances and are better off growing shorter under others. I don't see this as a valid objection, though: targets in shooting galleries are often moving ones, and if the shooter manages to hit them, it is even greater testimony to his skill.

Nor do I think it a valid objection to say that in real life, we are unlikely to recognize all the factors which might or might not promote a gradual change in the bill lengths of birds. I liken this situation to the cipher that hasn't been broken — which, contrary to Dembski, I would say is actually specified information, though we can't prove it. Just because we can't demonstrate specification does not mean it doesn't exist.

A more robust objection might be that blossom-length changes and bill-length changes are not independent of one another. Most evolutionary "descent with modification," culled by natural selection, involves interdependencies of "co-evolution."

I think this objection can be overcome by referencing not just the finches or the flowers alone but the entire ecosystem within which they co-evolve. Our pattern or target is accordingly that the ecosystem as a whole evolves, and does so in a way such that the "total fitness" of its denizens as a group is maximized. Our specification, though its derived pattern forms a moving, slippery, co-evolutionary target, thus remains a true function of background information — say, a theory of co-evolution — that we have developed about how this real-world fitness-maximization function can be expected to play out.

So I'm wondering why Darwinian evolution by natural selection alone isn't itself a case of Dembskian "complex specified information" on the march!

Sunday, May 15, 2005

Information, Order, and Entropy (I.D. XXI)

Here is another in my series of posts inspired by Intelligent Design, William A. Dembski's book-length claim that scientific inquiry can demonstrate God's hand in the design of evolved biological systems. My most recent prior posts were A Place Saver (I.D. XX) and Questioning Specification (I.D. XIX).

John R. Pierce's An Introduction to Information Theory

This post introduces a book I will be using to educate myself about information theory, John R. Pierce's An Introduction to Information Theory: Symbols, Signals, and Noise. I obtained it in view of the fact that the crux of Dembski's argument concerns this formal branch of mathematical science, also called communication theory, which among other things seeks a suitably general way to quantify the amount of information in any "message" sent or received across any "communication channel."

Dembski argues that only a divine designer could originate the "complex specified information" (CSI) that is found in the natural world: information that only very improbably could have arisen by chance, and that also betrays a demonstrable, if hidden, pattern. This pattern reflects "side" information, a body of knowledge which — independently of the CSI "main event" — is capable of generating the pattern that lurks behind or within the CSI. Hence, the pattern can be identified as the specification of the CSI.

In this post, however, I will be concerned not with CSI but with just plain information, whether or not complex, whether or not specified. Information is, at its most abstract, that by which uncertainty can be reduced. What I am interested in is whether self-organization of the type championed by Stuart Kauffman in At Home in the Universe can create information. Dembski says it cannot.

Yet, as Pierce shows, the antonym of information is entropy. And its synonym is order. Kauffman says that the "laws of self-organization and complexity" which he describes produce "order for free." If that isn't the same thing as information creation, why not?


Pierce broaches the topic of entropy and order (pp. 21-23) with reference to two related fields of physics, thermodynamics and statistical mechanics.

Thermodynamics has to do with thermal energy — heat — in systems whose molecules are in dynamic motion — canonically, gases. If a gas is allowed to expand against a moving metal piston in a metal cylinder, and if this takes place so slowly that no heat flows between the gas and the metal, some of the erstwhile thermal energy of the gas is converted to work, as the gas cools.

But if an equal amount of work is done by an external force that pushes the piston slowly back to its original position in the cylinder, the gas compresses and heats back up to its former temperature. Since no heat has been allowed to escape, this process is the exact reversal of the first process. Because the two processes exactly reverse one another, the entropy of the gas remains constant. So, says Pierce, "entropy is an indicator of reversibility; when there is no change of entropy, the process is reversible" (p. 21).
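For an ideal gas this reversibility can even be checked with arithmetic. Assuming a monatomic ideal gas (gamma = 5/3), the quantity T times V to the power gamma minus one stays constant along a slow, heatless path:

```python
# Reversible adiabatic stroke and counterstroke for an ideal monatomic gas,
# for which T * V**(gamma - 1) is constant along the path.
gamma = 5.0 / 3.0
T1, V1 = 300.0, 1.0   # initial temperature (kelvins) and volume (arbitrary units)
V2 = 2.0              # the expanded volume

T2 = T1 * (V1 / V2) ** (gamma - 1)   # the gas cools as it does work
T3 = T2 * (V2 / V1) ** (gamma - 1)   # slow recompression restores it

print(T2, T3)   # T3 equals T1 exactly: reversible, so entropy is unchanged
```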

But what if, instead of a piston, the cylinder is simply divided into two parts by a membrane or partition, such that all the gas molecules start out on one side of the partition? This is an exceptionally low-entropy situation, since this particular arrangement of molecules — all on one side, none on the other — is the epitome of order.

Now, instead of moving the piston, let us imagine that the membrane dissolves. The gas molecules spread out to fill both halves of the cylinder. Entropy — disorder — increases. Yet the thermal energy of the gas remains the same, since no mechanical work has been done.

And, once the partition has been removed or the membrane has vanished, no work can be done. Before, work was possible, if the membrane simply became a piston. After, that option is no longer available. An increase in entropy means a decrease in "the possibility of converting thermal energy into mechanical energy" (p. 23).

But here's the key thing, with respect to information theory. When all the gas molecules were quarantined on one side of the membrane, we knew more about their positions than we did after they had spread out into both halves of the cylinder. We had greater certainty as to the position of any one molecule, call her Hermione. Before, Hermione's location was definitely confined to one half of the cylinder. After, she could be anywhere.

So an increase in entropy corresponds to greater uncertainty. Since information is that which reduces uncertainty, entropy is "negative information." The information in a system goes down when its entropy goes up. Which is simply another way of saying the obvious: the order in a system goes down when the disorder goes up.

That means that anything that increases a system's order increases its information content.
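The membrane example can be put in explicitly informational terms. Here is a minimal sketch, assuming an illustrative million molecules and counting the Shannon uncertainty, one two-way choice per molecule, about which half of the cylinder each occupies:

```python
import math

def positional_uncertainty_bits(n_molecules, p_left):
    """Shannon uncertainty, in bits, about which half of the cylinder
    each of n molecules is in, given the probability of being on the left."""
    if p_left in (0.0, 1.0):
        h = 0.0
    else:
        h = -(p_left * math.log2(p_left)
              + (1 - p_left) * math.log2(1 - p_left))
    return n_molecules * h

N = 10**6   # an illustrative number of molecules, Hermione among them
print(positional_uncertainty_bits(N, 1.0))   # all quarantined on one side: 0 bits
print(positional_uncertainty_bits(N, 0.5))   # spread out: 1,000,000 bits
```

Dissolving the membrane thus costs us a million bits of information about the molecules' whereabouts: the entropy gain and the information loss are one and the same quantity.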


But isn't that exactly what happened when, in the first scenario, the piston was forced back into the cylinder, thus moving all the gas molecules to one side of the chamber? Yet, we said there was no change in entropy — and so there couldn't have been added order, added information content, right?

The trick here is that Pierce assumes, in the first scenario, that the work done when the piston moves outward is stored by having it lift a weight, and then is fully recovered by letting the weight fall, thereby forcing the piston back to its original position. Under the idealized assumption that no heat escapes during the process, the overall system (including the weight) returns exactly to its initial state. There was neither a net gain nor a net loss in entropy, or order, or information.

Yet, in the middle of the stroke-counterstroke process, after the stroke but before the counterstroke, there was less order — less information — in the cylinder, taken as a system separate from the external weight. If we frame just the cylinder as "the system," excluding the weight, and compare its state prior to the stroke with its state prior to the counterstroke, there is a (temporary) entropy gain, and a corresponding information loss, once the stroke has been completed.

So my conclusion is this: whether or not information has been "created" or "destroyed" is a question that cannot be answered until you decide how you want to frame or matte the system in terms of its spatial inclusiveness (i.e., is the weight part of the system?) and how you want to bracket the system's behavioral history in terms of its temporal inclusiveness (i.e., is the post-counterstroke situation also to be considered?).


It is believed that, overall, the entropy of the cosmos increases as the universe "runs down" and heads ineluctably for "heat death," billions of years from now. As the world we know becomes on net more disordered, it would seem accordingly that its information content must inexorably diminish. After the universe's ultimate heat death, it will offer no means whatever for reducing the uncertainty of any post-facto observer from any hypothetical sister cosmos as to what transpired in our cosmos before it died.

Yet, here we are, alive and kicking. Stuart Kauffman's view of this fact in At Home in the Universe is clearly that we and all other living things possess less entropy and contain more order and information than we have any "right" to possess. We are the beneficiaries of "order for free." We are also its bestowers. This dual truth arises, he says, because we are self-organizing systems.

I imagine the idea at the heart of Kauffmanian self-organization is the development — nay, the evolution — of living things' ability to "export" entropy.

Visualize the piston-in-cylinder system, framed spatially without inclusion of the external weight in the frame. Bracket it temporally from the beginning of the counterstroke to the end thereof. The system goes from high entropy to low. As it gains in order, it gains in information.

This looks like magic, because of how we've framed and bracketed the situation. We've excluded the weight which stores the energy which the stroke converts to mechanical form, and we've bracketed out the time of the stroke itself, focusing exclusively on the "information-creating" counterstroke. Once we broaden the frame and remove the bracket, we see that there's nothing mysterious going on. In fact, there is no net change in entropy or information.

Still, we cannot deny that the cylinder-piston-counterstroke system, seen with its original spatial frame and its original temporal bracket, does appear to "export" its entropy. As (to borrow Kauffman's Shakespearean allusion) it "struts and frets its appointed hour upon the stage," it flourishes a seeming "order for free."

So, the question becomes, when the process of "entropy exportation" cannot be as readily demystified simply by removing an arbitrary spatial frame and an arbitrary temporal bracket, isn't "self-organization" the likely verdict?

If so, then William Dembski's claim that self-organization, like other natural processes, cannot originate information qua information seems justified. On this view, self-organization simply changes the flow of existing information; it doesn't make new information.

Yet, even though it's by such lights something of a cheat, self-organized information flow could be responsible, all by itself, for life's origin and evolution on this planet. If, over eons of time as you evolve, you as an arbitrary initial life form become very, very good at entropy exportation — staving off death — you could even become quite brainy and start writing books like At Home in the Universe and Intelligent Design.

A Place Saver (I.D. XX)

Crucial to William A. Dembski's argument in his book Intelligent Design is his chapter 6, "Intelligent Design as a Theory of Information." It in turn was derived from his earlier paper by that name, which can be accessed on the Web here. I have also found Rich Baldwin's scholarly critique of the claims made in Dembski's original paper here. (See Questioning Specification (I.D. XIX) for the most recent of my previous posts about Dembski's book.)

I have no idea who Baldwin is, except that he seems to be an expert on information theory and, with someone named Ian Musgrave, is the co-proprietor of the "Information Theory and Creationism" website here. The paper I found criticizing "Intelligent Design as a Theory of Information" is part of this website. Its conclusion, "Where Dembski Goes Wrong," can be directly linked to by clicking here.

Baldwin says he identifies at least two egregious errors (one is that "Dembski makes a fantastic leap in assuming that an information metric derived from the probability of a single event (-log2 p) and the shortness of the minimum algorithm needed to represent the event (Chaitin-Kolmogorov) are necessarily related") as well as several other weak points in Dembski's chain of reasoning.
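Without presuming to adjudicate the dispute, the gist of that first objection is easy to illustrate. In my sketch below, zlib compression stands in, very crudely (true Chaitin-Kolmogorov complexity is uncomputable), for the length of the "minimum algorithm":

```python
import math
import random
import zlib

def surprisal_bits(p):
    return -math.log2(p)     # the single-event probability metric

def crude_algorithmic_bits(s):
    # zlib output length as a rough, imperfect stand-in for the
    # Chaitin-Kolmogorov minimum-algorithm length.
    return 8 * len(zlib.compress(s.encode()))

random_seq = "".join(random.choice("HT") for _ in range(100))
patterned = "HT" * 50

# A fair coin assigns both 100-toss sequences the same probability,
# so their -log2(p) values are identical...
print(surprisal_bits(2 ** -100), surprisal_bits(2 ** -100))
# ...yet their approximate algorithmic complexities differ considerably:
print(crude_algorithmic_bits(random_seq), crude_algorithmic_bits(patterned))
```

Two equally improbable sequences, identical on the -log2 p measure, can thus differ wildly in compressibility; that, as I read him, is Baldwin's complaint that the two metrics are not "necessarily related."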

But I find this critique goes into too much information-theoretical depth for me to grasp it as yet. So I am posting this brief message as a place saver, in hopes that after I investigate information theory a bit more — maybe a lot more — I'll be able to decide who's right, Dembski or Baldwin.

Questioning Specification (I.D. XIX)

Previously in this thread about Intelligent Design, William A. Dembski's book proposing a demonstrable-if-hidden structure in nature's complexity which betokens God's handiwork — the most recent installment was Tipping the Balance (I.D. XVIII) — I tried to come to grips with "specification," one of Dembski's marks of design. I want to return to that topic now, because I find I still don't really understand it.

Dembski says that some biological systems — say, the flagellum which propels a bacterium — pass a "complexity-specification criterion" which vouches for their having arisen by design. They are too complex — too improbable — to have arisen by chance. But neither can we attribute them to the automatic outworking of natural law, since they might not have arisen at all. They are contingent, not necessary.

Yet not all contingent, sufficiently complex systems bespeak design. Only the ones that are "specified" do.

A contingent, not-absolutely-necessary biological system is, in probability theory, an event. If sufficiently complex — a flagellum requires the yoking together of some 50 proteins — its probability of occurring by sheer chance is low. But, even so, not all sufficiently improbable events have "structure" — a "suitable pattern" — the way a flagellum does. Only if events have a suitable pattern can design be inferred.

If it exists, this "suitable pattern" is the "specification" of a contingent, sufficiently complex event. The pattern qua specification needs to be susceptible of being generated, independently of the event, by recourse to "side information."

The reason that this "information" is on the side and not part of the event itself is to guarantee that it is not a "fabrication": a pseudo-specification that has been derived from knowledge of the event — kind of like an archer shooting an arrow blindly at a wall and then painting a target around wherever it strikes.

No, the target, as a specification, has to be logically (not necessarily temporally) prior to the event. Only then can we "detach" the pattern — the target, the specification — from the event itself, on the basis of the side information from which the specification can be derived.


Some of my confusion stems from the fact that Dembski says (or seems to say?) that we can determine when such a logically prior specification — such a structure or pattern, based on side information — exists for a contingent, suitably complex event even when we don't know what the pattern or specification is, or which side information can yield it.

For one thing, I'm not sure I fully see what he means when he says probability theory can assure us that the (unidentified) side information which generates the (unknown) pattern is "conditionally independent" of the event. I understand that this is what certifies that the specification is not a fabrication, but I don't really get how it all works in the absence of knowledge of what the side information is.

For another thing, I really don't follow what Dembski says with respect to "tractability." The (unidentified) side information "must provide the resources necessary for constructing the pattern in question," Dembski says (p. 139).

Note well that, as I say, the pattern we'd like to identify, but can't — the one to which the event in turn conforms — is not itself identical with the side information. Rather, the side information, should we become privy to it, would allow us to construct the pattern which could then be (in a separate step) transformed into the event in question.

Dembski gives, as an example, an as-it-turns-out fabricated series of coin flips which looks for all the world as if repeated tosses of a fair coin had generated it. Then he shows that the heads and tails in the supposed series of fair-coin tosses could be turned into a string of 1's and 0's. This putatively random sequence of bits constitutes the "pattern" for the event itself, which is, we must remember, the string of H's-for-heads and T's-for-tails.

In this simple example, the derivation of the pattern from the event itself is quite straightforward, since H's and T's practically cry out to be transformed into 1's and 0's.

When suitably chopped into apt subsequences, Dembski then shows, this seemingly random pattern of 1's and 0's would actually represent an easy-to-generate ordered list of binary numbers. This ordered list would simply recapitulate the pattern associated with the event. Our knowledge of binary arithmetic, embodied in an algorithm which generates the ordered list, would thus be the side information behind the pattern.
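As best I can reconstruct the example in code (the details below are my paraphrase, not Dembski's own presentation), the side-information algorithm simply enumerates binary numerals in order, and transliterating the concatenated result recovers the coin-toss "event":

```python
from itertools import count, islice, product

def ordered_binary_strings():
    """Yield 0, 1, 00, 01, 10, 11, 000, 001, ... : the easy-to-generate
    ordered list of binary numerals serving as side information."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# Concatenating the list yields the 0-and-1 pattern; transliterating
# 1 -> H and 0 -> T recovers the putative coin-toss event it specifies.
pattern = "".join(islice(ordered_binary_strings(), 14))
event = pattern.replace("1", "H").replace("0", "T")

print(pattern)   # 0100011011000001...
print(event)     # THTTTHHTHHTTTTTH...
```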

In this example, we easily derive the pattern from the event, then we twig to the way to restate the pattern based on the side information (i.e., our knowledge of binary arithmetic). Once we have done both of those things, it's easy to show "conditional independence": that knowledge of the side information doesn't affect the probabilities we independently assign to the event consisting of the putatively random coin tosses.

In other words, though there is side information and though from it can be derived a pattern to which the event corresponds, the same event could still have occurred as the result of tossing a fair coin the requisite number of times. The (very low) probability of that happening by blind luck is nevertheless not affected by our having twigged to the side information.

Also in this example Dembski implies that, once we identify the side information and how it generates the pattern, we can easily prove "tractability": that the algorithm which embodies the side information could indeed generate the pattern we have identified, which is the list of 1's and 0's which we transformed the original H's and T's into.


What boggles my mind about all this is Dembski's claim (if I read him aright) that we can prove "conditional independence" and "tractability" even when we have not twigged to the side information behind the event's pattern: in this example, our knowledge of binary arithmetic.

Now, I presume the sticking point in many real-world cases would be how to derive the pattern (the 1's and 0's) from the event when the nature of the event itself does not suggest how the pattern could be derived. How do you figure out the 1's and 0's of a bacterial flagellum?

Or, even if a 1's-and-0's pattern can be established as binary information, how do we always know what side information can serve to "detach" the pattern from the event and thereby show that the pattern is actually a specification?

What exactly is the sequence of steps Dembski proposes for determining that a specification, a detachable underlying pattern, exists for an event? I feel that Dembski simply does not go into enough explanatory detail at this crucial point in his argument to allow me to quench my uncertainty.

I assume that some or all of these questions may be answered in Dembski's more technical book, The Design Inference. I intend to get my hands on that book and report back.

***

Having written the above and looked it over, I have to admit a couple of things.

One is that when I began writing it, I didn't understand what I came to understand during the process of doing the writing: that, crucially, the pattern of an event is different from the side information which "detaches" that pattern from the event itself (thereby making the pattern a "specification" for the event).

The other thing I have to admit is that, in the light of that first realization, I may have been wrong to believe that Dembski believes that we can establish "detachability" if we can't identify the event's underlying pattern, or else if we can't pinpoint the side information which can generate the pattern independently of the event in question.

What Dembski actually seems to be talking about on pp. 138-139 is using probability theory and information theory to prove, respectively, conditional independence and tractability when these criteria are not obviously satisfied — but when the event's pattern and the side information which can putatively generate it are indeed in hand.

Which leads me to continue to wonder: when it comes to natural biological systems like a bacterial flagellum, how do we figure out what pattern of 1's and 0's it corresponds to? And, then, how do we figure out what side information successfully detaches that pattern from the actual "event," the flagellum itself?