Saturday, April 30, 2005

Demonstration vs. Discernment (I.D. VIII)

Herein, more of my posted comments as I read William A. Dembski's book Intelligent Design. The post Natural Theology and Beyond (I.D. VII) was, until now, the most recent in this series.

I have just begun reading Part 2 of Dembski's book, "A Theory of Design." This group of three chapters apparently presents the crux of Dembski's argument that the natural biological world bears demonstrable marks of intelligent design.

Or, more to the point, that this particular proposition has "empirical content."

For a proposition to have "empirical content," Dembski states in endnote 8 on pp. 284-285, means that "it rules out certain possible observations." This is in accord with M.I.T. philosopher Robert Stalnaker's assertion in the book Inquiry (see the same endnote and also p. 154):

Content requires contingency. To learn something, to acquire information, is to rule out possibilities. To understand the information conveyed in a communication is to know what possibilities are excluded by its truth.

According to the same endnote, a scientific theory has empirical content "if it entails or renders probable a proposition that has empirical content."

I note in passing that the phrase "or renders probable" makes me queasy. Not exactly a crisp definition of something that would seem to absolutely require one: the notion of a theory that has "empirical content."


I expect Dembski will tighten up the definition as he goes along. Meanwhile, I have to admit I am perplexed at this point. For in his third chapter, "The Demise of British Natural Theology," he makes an argument that I either don't understand or don't agree with.

In the final several sections of this chapter Dembski seems to be laying the groundwork for claiming that the "apparent contrivance" in nature is the result of intelligent design, yes, but not the result of out-and-out miracles. But neither is it the result of natural lawfulness alone, whether God-ordained or not. Rather, it is the result of something in between these two positions: "intelligent causes" which, unlike mere "physical" or "mechanistic" causes, cannot be subsumed under the laws of nature.

Natural life forms need not be actual contrivances which simply popped into full-blown existence miraculously — a fact which Dembski likes, since miracle-based biology has no "empirical content."

If nature's apparent contrivances are not in fact actual contrivances — as in the well-known analogy of each species being like a watch, and thus having a hidden watchmaker, which Darwin showed not necessarily to be correct — they still require an explanation (see n. 42, p. 287).

But positing, with Charles Babbage in the 1830s, that God made the laws of nature and the laws of nature "took over from there" does not suit Dembski either — and here is where I think I begin to part company with him, if I understand him aright, that is.

Dembski seems to feel that doing biological science based on nothing but natural law, while continuing to assert the proposition that these laws enact God's intentional design "on their own," robs the intelligent design proposition of "empirical content" just as badly as a bare appeal to miracle does.

Now, to a certain extent I comprehend the objection and agree with it. Lord knows, Stuart Kauffman in At Home in the Universe makes no overt claim that there is a God behind the laws of self-organized complexity which he describes as responsible for "order for free" in biology.

Yet he seems to feel that once these laws have been fully discovered and documented, they will be sufficient to account for the appearance of contrivance in the biological world all by themselves — that is, as "handmaidens" to Darwin's natural selection and heritable variation, but with no need to appeal to the sort of supra-legal divine activity Dembski seems to have in mind.

This supra-legal divine activity consisting of supposed "intelligent causes," I repeat, is not miracle-working, since miracles have no empirical content. Nor is it simply being content to watch natural law produce wondrous complexity, à la Stuart Kauffman — the intelligent design proposition, Dembski says, lacks empirical content in that case, too.

But who (besides Dembski) says the intelligent design proposition necessarily has to have empirical content, anyway? And if it does — Dembski seems about to embark on some sort of calculation that will show the contrary proposition, no evidence of design whatever, to be extremely dubious — who is to say that God's benign non-interference with intermediary laws of self-organized complexity is absolutely ruled out as the explanation for apparent contrivance in nature?


My point is sharpened by the opening section of chapter 4, "Naturalism and Its Cure." The section itself is called "Nature and Creation." Dembski's topic is how we use the two words nature and creation semi-synonymously. All the while, we studiously ignore that the former suggests a world wholly independent of God, while the latter implies a divine Creator.

Our science today, he says, assumes nature in that sense of the word and only speaks of creation and creatures metaphorically. But such a fully naturalistic view cannot answer the question, Dembski asserts, "Whence cometh the order of the world?" (p. 99). Dembski goes on, further down the page:

For those who cannot discern God's action in the world, the world is a self-contained, self-sufficient, self-explanatory, self-ordering system.

He identifies such people as, to use the words Paul wrote in 1 Corinthians 2:14, psychikos as opposed to pneumatikos: "natural" or "soulish" as opposed to "spiritual" (see n. 1, p. 287). In "severing the world from God," he says, they are guilty of "the essence of idolatry."

Here is one point where I think Dembski goes wrong. He confuses discerning God's action in the world with being able to demonstrate it.

By demonstration I mean the sort of thing which is possible only if that which is being demonstrated has "empirical content." Dembski is at pains to demonstrate the existence of "intelligent causes," supposedly needed to explain nature's wonders, because he seems blind to the possibility that spiritual discernment of a God behind nature does not require empirical demonstration.

I read Paul in exactly the opposite way, as suggesting that spiritual discernment is the appreciation of God's relationship to the natural world without demonstration or proof.


For some reason, I find that I want God's creative relationship to the evolving world to be one of being discernible behind it rather than being demonstrably "in the causal loop," as it were.

This puts me at odds with William Dembski at a rather deep level, I realize. It is obvious to me that he wants God's causal efficacy to be as demonstrable as I don't want it to be.

I can't really say why he wants demonstrability over mere discernibility. I suspect it may be because he is in love with logic. With mere discernibility, there is a missing link in the logic of "intelligent design."

I on the other hand am in love with that particular missing logical link. Perhaps it is because I think that it, and it alone, guarantees my (and the world's) freedom from divine determinism.

Thursday, April 28, 2005

Natural Theology and Beyond (I.D. VII)

Having laid the groundwork in Faith and Miracles (I.D. VI) and its preceding posts for what is very likely an alternative worldview to that of William A. Dembski, in his book Intelligent Design, I continue with my perusal of the book. I find that the material following Dembski's refutation of Schleiermacher's and Spinoza's critique of divine miracles leaves me unhappy as to the direction Dembski seems to be heading.

So, herein I post what are basically glorified notes to myself. My hope is to etch in my memory some of the basic anchor points of Dembski's sweeping, subtle, and nuanced argument thus far.


One anchor point is clearly the idea that modern science is (erroneously) wedded to "methodological naturalism" (see pp. 67-69). Methodological naturalism is, says Dembski, the view that nature ought to be treated as a closed causal system. As an axiomatic "inviolability thesis," it is held to guarantee the intelligibility of the world.

Put another way, if science is to come up with a grand "theory of everything," then such things as miracles must be ruled right out of court — since a world which operates at the whim of a divine intercessor is perforce unintelligible.

Methodological naturalism is, says Dembski, a "regulative principle" for how science is to be done today, where for Spinoza and Schleiermacher this view of nature as a closed causal system was a claim about the very nature of nature. In other words, the naturalism of Spinoza and Schleiermacher was metaphysical, not just methodological.

The metaphysical naturalism of Spinoza and Schleiermacher, Dembski shows, was an elaborate case of question begging: their naturalistic assumptions against the possibility of miracles are said by Spinoza and Schleiermacher to be absolutely necessary truths. Why? Because of these philosophers' own closed-system-of-nature metaphysical presumptions. The argument is circular.

But God could in fact have ordained other than a closed natural system. Accordingly (says Dembski) metaphysical naturalism fails.

Still and all, in its heyday metaphysical naturalism served as a scaffolding for the follow-on arch of methodological naturalism, which as a "regulative principle" became normative for modern science.


A second Dembskiyan anchor point: as such, methodological naturalism not only rules out miracles per se, but also cashiers "a designing intelligence whose action transcends natural laws ... simply by being irreducible to natural laws" (p. 69).

Dembski spends much time considering the status of natural laws. For instance, he says:

... my own metaphysical preference is to view creation as an interrelated set of entities, each endowed by God with certain inherent capacities to interact with other entities. In some cases these inherent capacities can be described by natural laws. Nevertheless, no logical necessity attaches to these laws, nor for that matter to the inherent capacities. (p. 66, italics mine)

God can override the laws and/or rescind the capacities, in Dembski's view. For example, Dembski says, God resurrected Jesus's dead body by miraculously endowing it with new capacities.

As for overriding natural laws, possibly what Dembski has in mind is a suspension of them in certain instances. Or, possibly — and this is what I think Dembski is really vectoring toward — natural laws, though important, are not the whole story. They don't by themselves explain everything. Instead, God fills in the blanks he intentionally left in the lawbooks so as to exert causal influences in situations which the laws alone do not cover.

It isn't really clear at this point whether Dembski is going to put great emphasis on the overriding of natural law as distinct from the rescinding of ordinary capacities — or if they are indeed separate concepts. Perhaps the one implies the other. But my guess is that the main emphasis will fall on the supposed "blanks left in the lawbooks" which allow God to take a hand in nature's unfolding.

What does seem crystal clear is that Dembski does not care for the ploy used by certain (but not all) advocates of British "natural theology" in the years just prior to Charles Darwin's Origin of Species in 1859 (see pp. 73-79). Whereas earlier natural theologians had asserted that the order in nature was ample direct evidence of a divine "watchmaker" or designer, the newer thrust of some natural theologians was to hold that the designer, God, had with perfect foreknowledge programmed nature, through its inviolable laws, to produce the creatures we see.


Anchor point number three for Dembski, in assailing such views, seems to be that "Natural laws produce their effects with an impersonal, automatic necessity" (p. 79). This is so, one of his endnotes says (endnote 16, p. 285), "even if we allow for a probabilistic component in the laws." Inviolable natural laws, probabilistic or not, may be "brute facts" that require no explanation:

... it is no longer clear what need there is for a designer since designers by definition design artifacts/contrivances, not abstracted lawlike regularities. A designer who is merely a law-giver always ends up being dispensable, for the laws of nature always have an integrity of their own and can thus just as well be treated as brute facts (as opposed to edicts of a clandestine law-giver). (p. 75)

Dembski goes so far as to assert, along these lines, "If I can't ascertain that a thing is designed, I can't ascertain that the process [i.e., the law or program] giving rise to the thing is designed" (p. 78). He seems to simply assume that "the programs that the divine designer has woven into the laws of nature [in the view of the British natural theologian, mathematician, and proto-computer scientist Charles Babbage (1791-1871)] ... are unknowable."

But it seems to me that a sequence of apparently random digits generated as the decimal expansion of π (3.14159...) constitutes a "thing" which in and of itself bears no ascertainable marks of design. Yet, once it is revealed how the digits were produced, we can be sure that the underlying expansion-of-π algorithm was designed. So I doubt that Dembski is correct in simply assuming that "if I can't ascertain that a thing is designed, I can't ascertain that the process ... is designed."
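The π example can be made concrete with a small sketch of my own (it appears nowhere in Dembski's book): a fully deterministic, obviously designed program whose output digits nonetheless bear no internal marks of design. This one uses Machin's formula, π = 16·arctan(1/5) − 4·arctan(1/239), computed with scaled integer arithmetic.

```python
# Deterministic digits of pi via Machin's formula, computed with scaled
# integer arithmetic. The resulting digit string "looks random," yet the
# process that generated it is unmistakably designed.

def arctan_inv(x, scale):
    """Return scale * arctan(1/x), truncated, using the Gregory series."""
    total = 0
    power = scale // x        # scaled (1/x)^1
    d = 1                     # odd denominators 1, 3, 5, ...
    sign = 1
    while power // d > 0:
        total += sign * (power // d)
        power //= x * x       # next odd power of 1/x
        d += 2
        sign = -sign
    return total

def pi_digits(n):
    """First n+1 digits of pi (the leading 3 plus n decimals) as a string."""
    guard = 10                # extra digits absorb integer-truncation error
    scale = 10 ** (n + guard)
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(pi_scaled // 10 ** guard)

print(pi_digits(15))          # -> 3141592653589793
```

Handed only the digit string, no statistical test would flag it as designed; handed the program, the design of the generating process is beyond doubt.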

Furthermore, Dembski posits that putatively designed laws which may instead just be "brute facts" strip natural theology of empirical content. Such an approach to natural theology rules nothing in, he says, that couldn't be explained by simply asserting that natural laws are brute facts which are just there.


So it seems reasonably clear that Dembski intends to reformulate natural theology in terms of (a) natural law not being the whole story of causality in the world order and/or (b) there also having been special miraculous interventions on the part of God.

The fact that Dembski has served notice that (a) and (b) are separate categories suggests to me that he will put more emphasis on God's cunning complementation of natural laws than on his working of specific absolute miracles.

All in all, though, I think it's a shame that Dembski establishes the need to look beyond Babbage's natural-law-based approach to natural theology by resorting to what I consider shaky arguments, such as his assertion that locating design in natural laws is more problematic than locating design in "things." In some cases, as with the decimal expansion of π, in fact it may be less problematic.


Let me close by noting that Stuart Kauffman, in At Home in the Universe, seems to take the position that natural laws — in view of the laws of self-organized, emergent complexity which he posits and sketches out, laws that make intelligent beings like us "we the expected" — are enough to underwrite our "reinventing the sacred." This, I'd say, makes Kauffman the lineal intellectual heir of Babbage.

I can't tell if Kauffman actually believes in God, though even if he does, I suspect the conspicuously theistic Dembski would bridle at Kauffman's lawfulness-based worldview as in essence natural theology redux. "Natural laws, however, proved too thin a soup to support the activity of a designer," à la the hopes of Charles Babbage, says Dembski (p. 82). Kauffman's investigations into laws of self-organization and complexity could well be construed as thickener for the soup.

Dembski's agenda is different. He says, "Indeed it's only when natural laws are viewed as incomplete, so that without the activity of an intelligent agent it is not possible to bring about a given object of nature, that natural theology can remain a valid enterprise" (p. 79). Clearly, Dembski will later on be proposing what such "activity of an intelligent agent" truly consists of, and how we may assure ourselves that it has actually taken place.

Wednesday, April 27, 2005

Faith and Miracles (I.D. VI)

Genesis by Witness (I.D. V) was my long-winded attempt to suggest that, to borrow a phrase from William A. Dembski's Intelligent Design (p. 61), "... faith needs to be presupposed before we can know that an event M is a miracle."

But my reason for saying that is not one Dembski apparently embraces. In my view, which I call "genesis by witness," faith has ontological force. Faith is what makes a miracle possible.

In Dembski's view, faith has merely epistemological force: faith is at best that cast of mind which allows us to know that a miracle is a miracle.

Dembski makes his view clear in discussing Friedrich Schleiermacher's "naturalistic critique [that] seeks to overthrow the concept of miracle" (p. 61). Schleiermacher (1768-1834) was a Prussian philosopher-theologian who sought to refine Baruch Spinoza's nearly-two-centuries-old view to the effect that miracles were wholly out of character for a rational God.

According to Dembski, Schleiermacher assumed that causality was a form of entailment (see p. 58). That is, if cause C is the cause of event E, then it is impossible for E to be false (not to happen) if C is true (does happen). So if miracle M happens instead of the expected naturalistic event E — even though C truly happens — something is rotten in Denmark. Specifically, there is a logical, not just a causal, contradiction. And logical contradictions are not only impossible, they are supposedly incoherent.

Since it putatively makes no sense to lay incoherence at the doorstep of a wholly rational God, there can accordingly be no "absolute miracles." Thus, Schleiermacher.

Now it appears to me, at least at this stage of my perusal of Intelligent Design, that Dembski wants more than anything else to refute Schleiermacher ... and thereby Spinoza. Dembski's method is roughly this:

First, disentangle "epistemological critiques" of miracles from "naturalistic" ones. Spinoza made both kinds of critiques, but Schleiermacher focused on the latter.

Second, show that epistemological critiques of miracles such as Spinoza's (and, later, David Hume's) are invalid. They:

... always remain inconclusive. ... [A]n epistemological critique can never succeed in overthrowing the concept of miracle. The problem ... is that our capacity or incapacity to know something is never determinative of a thing's status in reality ... Reality and our ability to know reality are always two separate questions. (p. 60)

Third, show that naturalistic critiques are equally ill-founded. In this vein, Dembski says that Schleiermacher's naturalistic critique simply makes Spinoza's earlier attempt more explicit. So, if Dembski can refute the latter, the former is also refuted.

To refute the latter, Dembski basically seeks to overturn the way in which Schleiermacher "collapsed causality into logical necessity," or, put differently, entailment (p. 61). This argument is quite an involved one, and rather than pick it apart here, I wish to suggest that, en route to making his case, Dembski glosses over a "third way" perspective I find more appealing than either Schleiermacher's or Dembski's.

This "third way" perspective is hinted at when Dembski says (p. 61), "Perhaps faith needs to be presupposed before we can know that an event M is a miracle." That sentence is endnoted thus:

Alan Richardson [in The Miracle Stories of the Gospels] takes this position in discussing Jesus' miracles: "Only those who came in faith understood the meaning of the acts of power. That is why any discussion of the Gospel miracles must begin, as we began, with a consideration of the biblical theology, with the faith which illuminates their character and purpose." (p. 283)

But my "third way" perspective goes one step further. It proposes that what is at issue is ontology as much as it is epistemology. "Coming in faith" makes miracles possible; it does not simply open our minds to them.

Put another way, "coming in faith" is a necessary (though perhaps not sufficient) condition for a miracle to occur. Our readiness to witness a miracle unblocks the very existence of the miracle.


Saying that miracles are in effect actualized by faith is wholly analogous to the paradoxical quantum logic of John Archibald Wheeler, which I discussed in my earlier posts.

A photon's emission along a particular path is an event that must be modeled, Wheeler's logic goes, by applying the scientific theory of quantum mechanics. The theory, taken to its logical conclusion, tells us that pointing our telescope at Path A, a trajectory by which a photon may travel to us from a distant light source such as a quasar, retroactively cancels the equiprobable possibility that Path B is traversed instead. Pointing the telescope at Path B likewise selects it retroactively as the path that is traversed by the photon.

This, despite the apparent fact that the photon's actual path would have had to be chosen millions of years ago, depending on how far the light source is from Earth. Wheeler confirms this scientific "fact" mathematically, based on quantum-mechanical presuppositions, and also by virtue of a thought experiment. It has also been confirmed in the form of a more manageable experiment performed in a laboratory.

Such logic stands Schleiermacher on his head. Schleiermacher assumed (Dembski shows) that every event E has a cause C so definite and ineluctable that if C exists, E must happen. Or, put another way, C entails E.

Likewise, if event E happens, there must be some cause C which entails E. The existence of E entails that some C caused it.

But in Wheeler's crazy quantum logic, there is no cause C which entails the use of Path A by the photon. After all, perhaps our training the telescope on Path A was done at random, by the flip of a coin. Our happenstance observational choice does not constitute a coherent cause of Path A's actuality. If the coin had come up the other way, we would have trained the telescope on Path B, turning that potentiality into the observed actuality.

If Wheeler's logic is correct in the realm of quantum mechanics, and (note this well) if I am correct in assuming that his paradoxical quantum logic transposes over to the realm of miracles, then our observational "witness" (provided that we "come in faith") has the same ontological force for miracles as our telescopic observational choice has for quantum phenomena.

So I would say that faith has to be presupposed, not just so that we can know for sure an astounding event is a miracle, but so that the miraculous event can exist as such in the first place.

Faith is active and creative, not just reactive and judging. Faith moves mountains; it doesn't just confirm that mountains have been moved. Faith creates miracles as much as miracles create faith.

Monday, April 25, 2005

Genesis by Witness (I.D. V)

Though it is but loosely related, this post is the fifth in my series on William A. Dembski's Intelligent Design: The Bridge Between Science and Theology. The previous post in the series was Grounding Faith in Reality? (I.D. IV).

In Standpoint and Fact and in Randomness by Observership?, I have been trying to extend an idea I took up in Genesis by Observership, over in my A World of Doubt blog. Namely, that there is something about the sheer act of observation that makes the world what it is. This is an idea that physicist John Archibald Wheeler has called "genesis by observership." Now I would like to modify it slightly and extend it as, first, "genesis by accordance," and later as "genesis by witness."

As a noun, accordance can mean the act of granting. As a verb, accord can mean, says the Merriam-Webster Dictionary, "to grant or give especially as appropriate, due, or earned." I apply these words in a way I can best illustrate by reference to Schrödinger's cat.

Erwin Rudolf Josef Alexander Schrödinger (see this Wikipedia article) was a pioneer quantum physicist who devised a paradoxical thought experiment about a cat hidden from view in a sealed box. In the box with the cat is a mechanism that will release a poison gas and kill the cat, but only if a particular quantum event — the decay of a radioactive nucleus — takes place.

After an amount of time elapses that makes the probability of nuclear decay equal to (say) 0.50, when we open the box, the cat is either dead or alive. But which result occurs is, according to the laws of quantum mechanics, apparently determined only when the box is opened. That is, the laws of physics model reality in such a way that the post facto act of observation is the only thing that crystallizes the prior quantum potentiality into actuality. Or, in my terms, we accord an earlier event of radioactive decay the status of reality by (and only by) observing its consequences.

Similarly, in a J. A. Wheeler thought experiment I mentioned in my earlier posts, we accord the status of actuality to the arrival of a photon from a distant quasar along one of two equiprobable paths by observing, through a telescope, one path (call it Path A) to the exclusion of the other (Path B). Photon behavior, like radioactive behavior, involves quantum potentialities. We turn such a potentiality into an actual event by — even after the event has supposedly occurred — observing the event or its consequences. In so doing, we accord the (earlier) event the status of existence!

So, at least in my view, Wheeler's "genesis by observership" might well be called "genesis by accordance." We accord actual existence to certain phenomena by observing their consequences. In so doing, we create aspects of our own reality.


There is another sense of accord/accordance which bolsters this notion that "genesis by accordance" might better describe "genesis by observership." Accord is a synonym for agreement. Now, in the experiment of Schrödinger's cat there is the implicit assumption that when the box is opened before two or more witnesses, all witnesses will agree as to whether the cat is dead or alive. If the cat is dead, all present will accord the triggering quantum event of radioactive decay the status of earlier having occurred. The universe will not suddenly split in two at the unveiling, with the decay event existing in one part's history and not existing in the other.

Likewise, if we imagine the telescope in the Wheeler experiment as recording photon arrivals photographically, it is not the case that one observer will see the record of the Path A photon and one not.

In other words, observership in these cases implies accordance, in both senses of the word. In sense one, observational "accordance" of the status of existence to a particular event establishes that event's actuality — even, oddly enough, after the fact. In sense two, all observers are "of one accord" about the ontological status of the event.


This idea of "genesis by accordance" can be moved from the quantum realm to the realm of random events. The first step is to recognize, as I mentioned in my earlier posts, that per the theories of mathematician-philosopher George Spencer-Brown, randomness is not so much a hard fact as a status accorded to an event or a sequence of events by us as observers.

Examples of so-called random events include coin flips, die rolls, and, as the basis for biological evolution, genetic mutations. Taken singly or in sequence, we accord these events the status of not being "ordinarily caused" — though they may have hidden causes which determine which way (say) the coin lands, we are not privy to those causes or influences. So we accord randomness as a status to said event or events, and content ourselves with looking for laws of probability that they can be expected to adhere to over the very long run.
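What "adhering to laws of probability over the very long run" means can be illustrated with a minimal simulation of my own (not from Spencer-Brown or Dembski): no single coin flip is predictable to us, yet the running frequency of heads obeys the law of large numbers and settles near 1/2.

```python
import random

# Simulated coin flips: each flip's outcome is unpredictable to us, but the
# long-run fraction of heads converges toward 0.5 as the number of flips grows.
def heads_fraction(n_flips, seed=2005):
    rng = random.Random(seed)     # fixed seed so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

for n in (10, 1000, 100000):
    print(n, heads_fraction(n))   # the fraction drifts ever closer to 0.5
```

The individual events remain "not ordinarily caused" so far as we can tell; only their aggregate behavior submits to law.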

In much the same way, quantum events — whether a nucleus decays, whether a photon follows Path A or Path B — are not "ordinarily caused." We cannot examine a chain of causality which explains them comprehensibly.

In fact, for all we know, coin flips, die rolls, and genetic mutations may be at the mercy of quantum uncertainties per se, and the two categories are really one.

All we really know is that we observe random events as being random, inexplicable, unpredictable, undirected, and so on.

And even though we disagree over such questions as whether randomness is an objective or purely subjective attribute of these events, we are united in calling the events random. The word accordance applies here, too, in both its senses.

The event itself — say, a coin flip — is unremarkable. But the randomness of events is an attribute that we also observe and remark upon, if (in G. Spencer-Brown's terms) we are "bamboozled" by the events themselves. That is, if we try with all our might to find a formula by which a certain sequence of events can have been generated deterministically, and we fail, then the events become random for us. Barring the discovery of an explanatory algorithm, we are furthermore of one accord about this: the events happened at random.

In my view, it is an act of "genesis by accordance" which makes random events qua random events random.

On the other hand, if we discover a hidden rule or formula which accounts for the supposedly random events, it is as if we have switched telescopes in the Wheeler thought experiment. Now, all of a sudden, that photon actually took Path B. Now, all of a sudden, that series of supposedly random events was actually deterministic.
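The "hidden formula" scenario can be sketched concretely — this is my illustration, not anything in the posts or books under discussion. A tiny deterministic generator produces coin flips that look random; but once the formula and seed are revealed, the whole sequence can be regenerated on demand, and its randomness evaporates.

```python
# A linear congruential generator (constants from Numerical Recipes) used to
# produce "coin flips." The output looks random, but anyone who knows the
# formula and the seed can reproduce the sequence exactly: it is merely
# pseudorandom — a horse of an entirely different color.
def lcg_flips(seed, n, a=1664525, c=1013904223, m=2**32):
    state = seed
    flips = []
    for _ in range(n):
        state = (a * state + c) % m
        flips.append('H' if state >= m // 2 else 'T')   # use the high bit
    return ''.join(flips)

mystery = lcg_flips(seed=2005, n=30)
print(mystery)                                  # looks like random flips
print(mystery == lcg_flips(seed=2005, n=30))    # True: fully deterministic
```

Discovering `lcg_flips` is the analogue of switching telescopes: the same observed sequence is now, all of a sudden, actually deterministic.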


It is my contention that this situation is not merely epistemological, it is ontological. It is not merely a matter of what did we know and when did we know it. It is rather a matter of our knowledge, observation, and, most importantly, accordance granting being. Recognizing this is the second step in moving "genesis by accordance" from the quantum realm to the realm of random events.

In the case of events that we consider possibly random, possibly not, there are actually two categories: random, and not random. I am suggesting that which category an event is actually in ultimately depends — quite counterintuitively, quite against everything our common sense tells us — on which status we accord it!

Just as our observational accordance grants being to the quantum event which triggers the death of Schrödinger's cat, it also grants existence to the ineffable, undetectable, unpredictable "trigger" of the random event (which may have been a quantum event). Otherwise, the event in question is at best "pseudorandom": a horse of an entirely different color.

Let's say for the sake of argument that truly random events are in fact triggered by quantum events. So if Quantum Event A happens, the coin comes up heads. But if Quantum Event B happens (or if Quantum Event A simply fails to happen), the coin comes up tails. Now, we flip the coin and it comes up heads. According to Wheeler's "genesis by observership" logic — which as I say I am renaming "genesis by accordance" — it is our observation of heads that crystallizes the existence of Quantum Event A in the history of the universe. Or, as I would put it, we accord the status of existence to Quantum Event A by observing its result.


Let me for the sake of convenience of discussion label this last claim, this second step in moving "genesis by accordance" from the quantum realm to the realm of random events, that of "strong genesis by accordance." The first step was recognizing that the attribution of randomness, but not the existence of the random event itself, was a matter of observational accord. That first-step position could be called "weak genesis by accordance."

In that scenario, there was no underlying occurrence such as a quantum event which accounts for the randomness — so we account for it as merely a product of our observational accord. The first step, "weak genesis by accordance," has no ontological force. It merely accounts for the epistemology by which we know an event to be random.

"Strong genesis by accordance," on the other hand, actually establishes in existence an underlying event (or negates one) that leads to the coin flip (or whatever) being random. In my example, the underlying event is a quantum event, but perhaps it could be something else to which "strong genesis by observership/accordance" equally applies. Using Wheeler-style logic, I'm claiming that our observation of, say, heads accords existence per se to the underlying event which accounts for the observable result of heads.

I do not wish, in making such a strong second-step claim, to gloss over the fact that (even if I have managed to convey the claim to any degree comprehensibly) it is nonetheless a radically paradoxical, counterintuitive, non-commonsensical one. If anyone feels he or she is being "bamboozled," the feeling is quite understandable. Yet I think "strong genesis by accordance" may indeed be a foundational feature of our reality, no matter how much it goes against our common sense.


Now I want to extend the concept of "strong genesis by accordance" to another realm, that of miracles.

Apparently miraculous events, à la those in the Bible, have a lot in common with apparently random events. For one thing, the event in question is observed, or witnessed; it's not like a tree falling in a forest with no one to hear. For another, it cannot be shown to be "ordinarily caused," for if it is so shown — or if it is negated and shown never to have happened — it ceases to be apparently miraculous.

Baruch Spinoza
William A. Dembski spends much time in Intelligent Design discussing miracles. His chapter 2 is, in fact, called "The Critique of Miracles." He says (pp. 51-55) that for the 17th-century rationalist philosopher Baruch Spinoza, miracles could simply never be. Spinoza had two reasons for saying so: they supposedly contradict God's rational nature, and anyway, even if they don't violate God's essential rationality, they can never be known.

I'm going to ignore that first argument as beyond the scope of the present discussion. The second argument, Dembski says, is basically epistemological:

Even if there were such a thing as a miracle (i.e., an event within nature not caused according to universal laws of natural causation), how could we ever recognize it? Alternatively, how could we definitively exclude the possibility that an event was after all naturally caused? (pp. 53-54)

How could we ever know for sure that an apparently miraculous event is indeed a miracle? It is a question entirely analogous to asking how we could ever know that an apparently random event is indeed random.

Accordingly, if we entertain the "strong genesis by accordance" position as regards the realm of randomness, are we not entitled to entertain it in the realm of miracle?

Inflected into the realm of miracle, it would go like this: There is a non-ordinarily-caused trigger or source event beneath each seeming miracle, just as in the case of a coin flip there is a non-ordinarily-caused event (which we have assumed for want of a better choice to be a quantum event) which is the trigger or source of "heads" or "tails." This non-ordinarily-caused trigger or source event — whether or not it is a quantum event — is accorded existence per se by our act of observing its fruit or result.

In other words, "strong genesis by accordance" applies to seeming miracles in roughly (or exactly) the same way it applies to seemingly random outcomes. In both cases, through directly observing result-events we accord existence itself to their source-events.

Furthermore, this can happen temporally after the fact. Time is factored out. Our observation of the result can be delayed as long as we like, and the supposedly "prior" source event is still established in existence only by our "later" act of (potentially joint) observation.

Jesus Performing a Miracle
The source events of miracles might be acts of God mediated by quantum events. Or they might be direct acts of God. Either way, in observing a true miracle we accord the event at its root its very existence by our act of witness. Hence we could call "genesis by accordance" instead "genesis by witness," at least when it applies to miracles.

I intend "genesis by witness" as it applies to miracles to imply the equivalent of "strong genesis by accordance." For instance, if we assume that at the root of a miracle is (whether or not it is mediated by a quantum event or some such naturalistic thing) an act of God, our witnessing to the miracle is crucial to the very existence of the act behind the miracle. This is one case where a tree falling in a forest with no one there to hear it makes no sound.

At Holy Eucharist
An analogy comes from the Catholic belief about the nature of sacraments such as that of Holy Eucharist. In the Catholic-style communion/Lord's Supper liturgy, the consecrated bread and wine become — after consecration they are, in a deeply ontological sense — the body and blood of Jesus Christ. By definition as a sacrament, the consecrated elements are together said to be a visible sign of God's invisible grace.

A priest when he celebrates Mass represents Christ and therefore God at the altar. That's why he cannot accomplish Holy Eucharist alone. There must be at least one other person there — possibly lay, or possibly another priest or "religious" person, such as a nun, deacon, brother, etc. Consecration of the elements by a priest all by himself is invalid.

If we think of a sacrament as a close cousin to a miracle, and if we assume the validity of "genesis by witness," this all makes (some) logical sense. If the priest represents God the Son at the altar, then he is the one person present who is ineligible to witness to the miraculous transformation of the eucharistic elements. A witness to an act is by definition not the actor himself.

So unless there is someone else present, there is no witness ... and no ontologically forceful "genesis by witness." The elements remain unchanged.

Also note that the act of witness to the holiness of the consecrated elements can be shared among those present at the consecration and those making their witness later on. In particular, consecrated bread not consumed at the Mass becomes the "reserve sacrament" and is moved to a tabernacle. At a later date, this reserve sacrament can be witnessed to and consumed by those who were absent at the original consecration.

This is in fact what happens on Good Friday each year. On the anniversary of Christ's death on the cross, no consecration is done. But the reserve sacrament from the Holy Thursday celebration the previous day is distributed in the stead of newly consecrated elements. Ontologically, it is the body of Christ, even if none of the priests and congregants who are present were there the previous day.

In a sense, this is another case of witness to a miracle conferring existence on the miracle's source-event post facto, or after the fact.


For a miracle is much like the sacrament of Holy Eucharist in being a testament to faith. I accordingly suggest that the "genesis by witness" supposition is applicable to both. If we have faith in a miracle or a sacrament, it becomes part of our world. We are inalterably changed by our witnessing to either, and so is our universe.

Words with Power
In a real sense, such a "genesis by witness" worldview stands in confirmation of the principle that "truth is an act." This was a discovery of the eighteenth-century Neapolitan philosopher Giambattista Vico (see my earlier post Verum Factum). As Northrop Frye wrote in his study of the Bible, Words with Power (p. 82):

Vico's axiom was verum factum: what is true for us is what we have made ... a creation in which we have participated, whether we have been in on the making of it or on the responding to it.
We co-create our world, in other words; if not literally, then simply by means of how we respond to "objective" reality. (I am now modifying my own text from that earlier post.) Put another way, our response is itself a world-creative act, a personal participation in the very act of creation which in our more simpleminded moments, if we are religious, we attribute solely to God.

In another passage keyed to Vico's verum factum axiom (p. 135), Frye says the Incarnation of God in the man Jesus of Nazareth "presents God and man as indissolubly locked together in a common enterprise." That enterprise concerns establishing "the reality of God," which at the end of the Bible
... is manifested in a new creation in which man is a participant [by virtue of his] being redeemed, or separated from the predatory and destructive elements acquired from his origin in nature.
The idea of a "new creation" made real at the end of the Bible's narrative arc, in the Book of Revelation, in which God and man are "indissolubly locked together in a common enterprise," is entirely consistent with a "genesis by witness" worldview.

It is as if our participation, our witness, is ultimately needed in ontologically establishing, or re-establishing, "the reality of God." That manifestly wasn't so at the beginning of the Bible, in the Book of Genesis, when God created the world alone. It is only the "new creation" which requires the help of — again, in an absolutely ontological sense — our co-creative power through faith and through genesis by witness.

Sunday, April 24, 2005

Standpoint and Fact

"From where we stand the rain seems random. If we could stand somewhere else, we would see the order in it." — Tony Hillerman, Coyote Waits

Previously, in Randomness by Observership?, I elaborated that idea, which comes down to randomness being not so much a matter of fact as a matter of standpoint. I pointed out that experts cannot agree whether sequences of events such as coin flips are random by their very nature, by the nature of the procedure used to generate them, or simply by virtue of our own ignorance as to their deterministic causes.

Then I mentioned the position suggested by probability theorist G. Spencer Brown, to the effect that we as observers of candidate sequences determine their randomness. If we are completely "bamboozled" by them, unable to predict what comes next no matter how hard we try to find a formula by which they can have been generated, then they become random for us.

In other words, our standpoint as bamboozled observers is what makes the sequence random. Randomness is not so much a matter of fact as one of standpoint.

At the end of my previous post, I hinted that perhaps "random" events such as genetic mutations, the raw material of evolution, aren't unpredictable at all from a certain standpoint: that of God.

Perhaps God knows what's about to happen in any particular sequence of "random" events. That would seem an epistemological claim — epistemology being the philosophical study of the possibility of knowledge. The epistemological claim would be that sequences of events which are opaque to our ability to know what's coming next are not opaque to God's.

But in that same previous post I also suggested there may be a profound kinship between randomness and quantum events. John Archibald Wheeler, the physicist who coined the term "black hole," has proposed that at a quite deep level the universe undergoes "genesis by observership" (see also the earlier post Genesis by Observership in my A World of Doubt blog).

Wheeler offers a thought experiment in which photons of light traverse multiple possible paths from far-off sources in the cosmos. Two curved trajectories, arising from the gravitation of intermediate galaxies and converging here at our tiny planet, send photons to us probabilistically. Either trajectory is equally likely — until we look at the originating quasar through a telescope. When that happens, Path A is selected over Path B, if Path A is the one we point the telescope toward. If on the other hand we point at Path B, it is the one which the photon uses.

This thought experiment, slightly altered for obvious practical reasons, has been confirmed in the lab. Even though the choice of which way to "point" the observing equipment occurs after the photon begins its journey, our post facto choice is what determines which path the photon traverses.

The sheer act of observing crystallizes actuality out of a panoply of (in this case, two) statistically equiprobable quantum potentialities. This is what "genesis by observership" means.

Notice that we are not talking here just about epistemology: what it is possible to know, and how we can know it. We are talking about ontology, the philosophical study of being. When we say Path A is the one the photon uses, we are making an ontological choice. Namely, the photon using Path A exists, whereas the one using Path B does not.

Our observership creates being — thus the title of the provocative article in the June 2002 issue of the science magazine Discover from which I learned of Wheeler's views: "Does the Universe Exist if We're Not Looking?"

Another way to look at it is that the commonsense line between epistemology and ontology — between what we can know and what is — is really a dotted one. If the line exists at all, that is.

Importing that notion into our consideration of randomness, we can hypothesize that the very existence of randomness (as opposed to a deterministic situation which nonetheless "bamboozles" us) is a matter of standpoint: how we "point the telescope" of our observation and understanding. If we "turn a blind eye" to the possibility that there is an underlying formula or algorithm generating seemingly random sequences of events such as genetic mutations, then indeed there is nothing that determines a priori what the next die roll will be.

If on the other hand we "open our eyes" to the possibility of an invisible determiner of die rolls and the like, then there is, in our universe, such a determiner.

Again, randomness is not so much a matter of fact as a matter of standpoint. Looked at from one standpoint, we find that sheer randomness — indeterminate, undirected, and unpredictable — is radically part of our world. That's like pointing our telescope at Path A. From the other standpoint, there is order hidden in the rain, an order which we can't see but God can. That's like pointing our telescope at Path B.

Either direction, the one toward Path A or the one toward Path B, is valid.

We must choose.

The choice we opt for makes the associated reality so, in that it crystallizes an actuality out of what would otherwise be a mere potentiality.

How counterintuitive!

Saturday, April 23, 2005

Randomness by Observership?

Not long ago I posted Genesis by Observership in my A World of Doubt blog. It pointed out that everything we see in the material world depends on strange-acting quantum phenomena that take place at subatomic levels. One of the strange things about quantum phenomena is that we cannot know them fully. For instance, if we observe the position of an electron, we forsake ever learning its speed or momentum.

Another, even stranger thing is that quantum "events" such as the emission of a photon of light or a radioactive particle are probabilistic, rather than certain — until, that is, the event is observed. Only then does it crystallize into certainty, to the definite exclusion of a host of equiprobable events that could have happened, but didn't.

And it gets weirder. Observations in the present crystallize events that happened in the past, pruning all other possibilities away from the "tree" of the history of the universe seemingly after the fact.

Quantum physicist emeritus John Archibald Wheeler calls this creation of cosmic history "genesis by observership." There is some controversy over whether the observer has to be intelligent — as opposed to some inanimate recording device — but the general idea is that the act of observation, even when it occurs post facto, is absolutely necessary if a plenitude of quantum potentialities is to be turned into crisp, factual history.

I find an echo of this bizarre sort of thinking about observership in Deborah J. Bennett's survey of Randomness. In order to describe it, I'll first have to sneak up on it.

There is a huge problem in specifying exactly what randomness is. We say that certain events such as the flip of a coin are random, and we think we know exactly what we mean by it. But our intuitions of what randomness means quickly break down in the face of certain significant questions.

For example, is the flip of a coin random merely because we don't know all the facts? Is it actually deterministic, based on factors of causation which we simply cannot adequately detect or measure? Or is the outcome of a coin flip radically uncaused, such that no amount of information about causational factors could tell us what the outcome will be?

Scientists and mathematicians are split over how to answer questions like that. Some say the attribution of randomness is either a subjective mental response to our own ignorance of real-world causative factors or a sign of psychological indifference as to what the causative factors may be (see p. 154).

Others claim that randomness truly exists as an objective fact in the world outside the mind. Those who make the latter claim, in that they are not "subjectivists," tend to be "frequentists." Frequentists cite the observable fact that (for example) a coin flipped a large enough number of times will tend to come up heads and tails equally often — i.e., with equal frequency.

This means, frequentists say, that randomness is an objective fact. In turn, this (as I understand it) suggests that the outcome of an individual coin flip is radically uncaused. Accordingly, not all events in the objective world are deterministically caused.
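The frequentist observation is easy to check with a short simulation. Here is a minimal sketch in Python (my own illustration, not anything from Bennett's book); fittingly for this whole discussion, Python's built-in random generator is itself a deterministic, pseudorandom algorithm:

```python
import random

def heads_frequency(n_flips, seed=42):
    """Flip a simulated fair coin n_flips times; return the fraction of heads."""
    rng = random.Random(seed)  # deterministic, pseudorandom source
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# The relative frequency of heads settles toward 1/2 as the flips accumulate.
for n in (100, 10_000, 1_000_000):
    print(n, heads_frequency(n))
```

With a hundred flips the frequency can wander noticeably; by a million flips it hugs 1/2, which is exactly the regularity the frequentists point to.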

I have to admit that I am personally biased in the direction of the frequentists who assert that randomness is a radical, objective fact about the real world. Put another way, I am biased against the strict determinism which the subjectivist position would seem to entail. The main reason I am thus biased is that I strongly prefer to believe that human beings have free will ... and it is hard to see how they could if all events in the world are deterministically caused.

Be that as it may, the questions I have just discussed are hard to settle to the satisfaction of one and all. So are questions such as this one: is randomness an attribute of whatever generates a sequence of events, or of the sequence itself? As Bennett puts it (p. 165), "Is it the outcome or the process which should determine randomness?"

Take a sequence of coin flips. If the coin is a perfectly "fair" one — a condition that apparently cannot be fully met in the real world — the process of flipping it again and again to generate a string of 1's (for heads) and 0's (for tails) would seem to qualify as random. Such a procedure might well generate, say, 0111010010110001 ... etc.

The sequence of binary numbers looks random. But what if we extended it out to a billion places, and along the way we just happened to generate a string of 1,000 0's in a row? Unlikely, yes. Impossible, no. Yet some would reject this particular outcome as manifestly not random.
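The intuition that long runs "cannot" be random is worth testing against the mathematics of runs: in n fair flips, the longest run of identical outcomes typically grows like log2(n), so surprisingly long runs are routine in long sequences. A quick sketch of my own (not Bennett's):

```python
import random

def longest_run(seq):
    """Length of the longest run of identical symbols in a nonempty sequence."""
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

rng = random.Random(0)
flips = [rng.randint(0, 1) for _ in range(1_000_000)]
# For a million fair flips, the longest run is typically near log2(1e6),
# i.e., around 20, which is far longer than most people's intuition expects.
print(longest_run(flips))
```

A run of 1,000 0's in only a billion flips would indeed be astronomically unlikely, but the general lesson stands: runs long enough to look "manifestly not random" are an expected feature of genuinely random sequences.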

It's also possible for a questionable process to generate an apparently random outcome. For instance, using a computer algorithm to expand the decimal representation of π outward from the well-known 3.14159 ... generates a sequence of digits that cannot be distinguished from the "truly" random. All the digits from 0 through 9 show up roughly equally often. The decimal expansion never repeats itself, never becoming cyclic. It shows no discernible pattern. Yet many minds balk at calling any sequence "random" that is generated by a deterministic — and, in this case, rather compact — algorithm.

We have prior knowledge of the algorithm that is used in an expansion-of-π random number generator, so the criticism has a basis in known fact. But suppose we were presented with another long sequence of digits that passed the same tests of equal frequency, patternlessness, etc.? How would we know whether it was generated by a deterministic algorithm?

For it has been posited by Kolmogorov and others that, by definition, "a sequence is random if the shortest formula which computes it is extremely long" (see p. 163). By this definition, randomness inheres in a quantitative measure of "information." The sequence in question contains very little information if the formula or computer algorithm which generates it is short. But if the minimum-length generating algorithm is quite lengthy, the resulting sequence is by definition highly complex; it contains a lot of information. At some point, an arbitrary line is crossed into randomness.

Such criteria are actually, Bennett points out (see pp. 164-165), measures of unpredictability. In the Kolmogorov scheme, if we apply "a small set of simple rules" (p. 163) to the candidate sequence of digits and find that the sequence does not conform to any of them, we may declare the sequence random, even though there may be more elaborately complex algorithms among which might be found one that generates the same sequence. Because pinpointing such an algorithm by a process of reverse-engineering is next to impossible, each succeeding digit in the sequence comes as a total surprise.
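Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a rough, upper-bound stand-in for "the shortest formula which computes it": highly patterned data compresses to almost nothing, while patternless data barely compresses at all. A hedged sketch using Python's zlib (my illustration, not from the book):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Crude stand-in for Kolmogorov complexity: length after compression."""
    return len(zlib.compress(data, level=9))

orderly = b"01" * 5_000  # 10,000 bytes generated by a very short rule
rng = random.Random(1)
noisy = bytes(rng.randrange(256) for _ in range(10_000))  # no short rule known

print(compressed_size(orderly))  # tiny: the compressor finds the pattern
print(compressed_size(noisy))    # close to 10,000: nothing to exploit
```

Compression length only bounds the true complexity from above, which mirrors the reverse-engineering problem Bennett describes: a sequence that resists every compressor we try may still have a short generator we simply have not found.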

Taking that outlook a step further are theorists such as G. Spencer Brown (see pp. 167ff.). Spencer Brown tends to assume that cases like 1,000 0's appearing in a row in a nominally random sequence are ample justification for removing the apparently nonrandom segments from the sequence. More importantly:

He further claims it is neither the sequence nor the process which defines randomness but rather the observer's psychology of disorderliness: we cannot know what is coming based on what we have already seen. Spencer Brown asserts, "Our concept of randomness is merely an attempt to characterize and distinguish the sort of series which bamboozles the most people ... It is thus irrelevant whether a series has been made up by a penny, a calculating machine, a Geiger counter or a practical joker. What matters is the effect on those who see it, not how it was produced." (pp. 167-168)

In other words, some series or sequences of events leave us "bamboozled": uncertain as to how to predict what is coming next. If we have no clue, we as observers perceive "uncertainty" and "disorderliness."

That is, of course, what tends to happen with expansions of π. Then we may be told how the trick was done, and the uncertainty and disorderliness vanish.

Bennett uses as one of her chapter epigraphs this quote from Tony Hillerman's mystery novel Coyote Waits:

From where we stand the rain seems random. If we could stand somewhere else, we would see the order in it.

So it would seem that there is such a thing as randomness by observership. Also, determinism by observership. It all depends on where we stand, and what we know.

Perhaps the "random" mutations that fuel evolution aren't so random, from the standpoint of God.

Thursday, April 21, 2005

Emergent Expectedness, Emergent Evolvability

Normal Curve or Bell Curve
Not surprisingly, the normal curve or "bell curve" is a constant topic in Deborah J. Bennett's book on Randomness. For instance, as I mentioned earlier in Yet More on Randomness, if you throw four six-sided dice over and over again and record the frequencies with which each possible total of pips on all four dice turns up, the resultant graph looks like a bell curve every time (see p. 103).

The normal distribution which the bell curve depicts also serves to represent many other types of random events, taken collectively in large enough numbers. For example (see p. 99), when scientists make measurements of a fixed quantity such as the exact position of a star in the sky, there are small random errors in the results. To compensate, the measurements are taken over and over again and averaged ... but there are still bound to be variations of the derived averages from the "true" value of the measurement. Those variations, according to the well-known Central Limit Theorem posited in 1810 by Pierre-Simon, Marquis de Laplace, are approximately normally distributed and conform very closely to the bell curve.

In other words, out of randomness comes lawful order. Any one die roll or observational measurement will be anywhere from slightly to nearly totally random, in terms of its expected outcome. A die roll is almost wholly a matter of chance, though no real-world die is perfectly "fair." A star measurement is only a little bit unpredictable; the expected deviation from the "true" measurement will be small, but still random. Yet when you aggregate the rolls of the dice or the observational errors, they conform to a law. When graphed, the data invariably produce a bell shape that can be foretold in advance.
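For four dice the bell shape need not even be simulated: all 6^4 = 1296 equally likely outcomes can be enumerated exactly. A short Python sketch (mine, not from Bennett's book) that prints a text histogram of the totals:

```python
from collections import Counter
from itertools import product

# Enumerate every one of the 6**4 = 1296 equally likely four-dice outcomes
# and tally the total shown on the four faces.
counts = Counter(sum(dice) for dice in product(range(1, 7), repeat=4))

# Totals run from 4 to 24; the tallies rise to a peak and fall away again.
for total in range(4, 25):
    print(f"{total:2d} {'#' * (counts[total] // 4)}")
```

The tallies peak at a total of 14 (146 of the 1296 outcomes) and fall symmetrically toward a single outcome each at 4 and 24: the bell shape that Bennett's repeated simulated throws approximate.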

Out of raw unpredictability, expectedness emerges.

Importantly, most of the randomness which Bennett talks about has to do with collections of events in which each event is not only random, but, crucially, it is independent of the other events. If you flip a coin a million times, each single flip's result, heads or tails, depends not one whit on the outcomes of the other flips. It's exactly as if you tossed a million coins all at once. They would each come down heads or tails without regard to what the other coins are doing.

Now, it is this type of independent random event which a genetic mutation is thought to be. That is, when a cell divides to make an egg or a sperm, whether a particular mutation happens or not is purely a matter of chance, independent of any other mutations which may or may not be occurring. The probability of each possible mutation is very, very low. What's more, every possible mutation has the very same probability as every other possible mutation. In this way does standard Darwinian evolution theory provide natural selection with the hereditary variation it needs to produce life's complexity and diversity.


Because independent random events are, when collected in very large numbers, lawful enough to adhere to rules like the Central Limit Theorem and produce bell-shaped curves, there is at least a modicum of expectability in supposedly unpredictable evolutionary history. It is this idea that theoretical biologist Stuart Kauffman builds on and enlarges in his search for laws of self-organization and complexity, as documented in books like At Home in the Universe.

In essence, Kauffman asks what happens if we relax the assumption that the events we are investigating must be independent. If we instead think of each event as carried out by an autonomous entity that is in some form of communication with other such autonomous entities, can we derive yet richer laws?

Autonomous agents in networks of various sorts "transact business" with other agents in the same network, each acting as if seeking to maximize something it covets. For example, Kauffman shows that proteins brought together in a network of shared mutuality can directly or indirectly catalyze one another's reproduction. Each protein, of course, "wants" very much to be reproduced. So the operation of the autocatalytic network makes the chances of successful reproduction of any given protein anything but random.

The protein reproduction "events" will accordingly not produce a bell curve, I assume. But there are other knowable-in-advance curves that such business-transacting networks are apt to generate. One of them is that of a power-law distribution (see Kauffman, p. 29).

A model for this power-law behavior is the "self-organized criticality" of grains of sand dropped one at a time onto a pile, with the small and large avalanches they occasionally provoke being measured and their sizes plotted. The smaller the avalanche, the more often one of that size can be expected to occur. By contrast, larger avalanches happen less often. When the avalanche frequency is plotted against the avalanche size, the result is a smooth curve that, plotted logarithmically, is a straight line! The size of the "next" avalanche, however, remains always wholly unpredictable.
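That sandpile can be simulated in a few dozen lines. Below is a minimal sketch of the Bak-Tang-Wiesenfeld model, the standard formalization of sandpile self-organized criticality; the grid size, drop count, and toppling threshold are my illustrative choices, not Kauffman's:

```python
import random

def drop_grains(size=20, n_drops=5000, threshold=4, seed=2):
    """Bak-Tang-Wiesenfeld sandpile on a size x size grid.

    Grains are dropped one at a time on random cells. A cell holding
    `threshold` or more grains topples, sending one grain to each of its
    four neighbors; grains falling off the edge are lost. Returns the
    avalanche size (number of topplings) triggered by each drop.
    """
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(n_drops):
        r, c = rng.randrange(size), rng.randrange(size)
        grid[r][c] += 1
        topplings = 0
        unstable = [(r, c)] if grid[r][c] >= threshold else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < threshold:
                continue  # already stabilized by an earlier pop
            grid[i][j] -= threshold
            topplings += 1
            if grid[i][j] >= threshold:
                unstable.append((i, j))  # still unstable after one toppling
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= threshold:
                        unstable.append((ni, nj))
        avalanche_sizes.append(topplings)
    return avalanche_sizes

sizes = [s for s in drop_grains() if s > 0]
# Small avalanches vastly outnumber large ones, the qualitative signature
# of the power-law distribution described above.
print(sum(1 for s in sizes if s <= 5), sum(1 for s in sizes if s > 50))
```

Plotting frequency against avalanche size on log-log axes would show the roughly straight line described above; the size of the next avalanche remains unpredictable throughout.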

Out of raw unpredictability, again, expectedness emerges.


The big difference here is that, unlike with dice and other such generators of unpredictability, the grains of sand in a sandpile are, in effect, a loosely coupled network, such that a disturbance to one grain may or may not produce a disturbance to others. Stuart Kauffman's biochemical and biological models are similar to this purely physical model in being loosely coupled networks. When the degree of interactivity of the agents in his network is neither too high nor too low, interesting, lifelike things happen. His agents and their networks evolve.

Out of unpredictability in interdependent, loosely coupled networks, evolvability emerges.

If the degree of coupling is optimal, evolvability is at its most productive. Kauffman finds that the optimal degree of interdependence is one at which the network poises itself at the "edge of chaos."

This edge of chaos is a locus or domain along a mathematical continuum between frozen order and rank chaos. If a networked system has too much frozen order, it can't change and therefore can't evolve at all. If it has too much chaos, it changes too readily and can't evolve gracefully.

Kauffman hypothesizes that the Earth's biosphere and the various ecosystems that make it up are all networked systems that poise themselves at the edge of chaos. He says the "self-organized criticality" of sandpile avalanches suggests their system — the sandpile — is also at the edge of chaos. Accordingly, he posits that the record of such things as sizes of "extinction events" in the Earth's biohistory probably produces a power-law distribution, just as do the sizes of avalanches in a sandpile.

So, if Kauffman is right, unpredictable events that are loosely coupled rather than isolated, as with independent die rolls, again yield emergent expectedness. The big difference is that the emergent behavior is more richly complex than with independent chance events. It may be that this rich emergent complexity is as characteristic of life on Earth, and as lawful, as the lawfulness associated with random mutations culled by natural selection. Self-organization may be natural selection's "handmaiden" in turning emergent expectedness into emergent evolvability.


The lawfulness of independent, purely random events is entirely bottom up. That is, there is no central authority that in any top-down fashion commands the dice to produce a bell curve.

When you switch to networks whose agents are interdependent rather than independent, bottom-up lawfulness likewise emerges. There is no top-down dictator, for instance, that forces a system to produce a power-law distribution of its large-scale events by poising itself at the edge of chaos.

Yet in such a network there is a form of top-down constraint. The sandpile as a whole "decides" whether a dropped grain of sand produces an avalanche, and if so how big. The grains in the avalanche that may thus be produced then have no choice but to slide down the pile.

Likewise in Kauffman's biochemical/biological models, basically autonomous agents are at the mercy of global, collective behavior in the system as a whole.

So systems capable of evolving meld bottom-up behavior with top-down behavior. This may be a precondition for emergent evolvability.


In such a network, if there is too much independence on the part of individual agents, evolvability suffers. Since agents don't take other agents' circumstances sufficiently into account, system-wide evolvability cannot emerge. Though there may be minor change in local areas, it cannot spread. The network as a whole is effectively frozen. There is too much order.

Evolvability also suffers if the agents are too sensitively tuned in to what's going on with too many of their fellows. Instead of being loosely coupled, such a network is tightly interrelated. That can be a recipe for chaos, letting small "earthquakes" in localized areas of the network spread tsunami-like to swamp the network as a whole.

Tyrannical control from on high, with every agent made to march in lockstep with every other agent to a single drumbeat, could solve this chaos problem. If the emphasis shifts too far in that direction, though, there is again too much order, and evolvability suffers.

There would seem to be a lesson here. If evolvability suffers from too much order or too much chaos, the system as a whole will fail to adjust to changed circumstances, and will consequently die. Then where will the would-be autocrat be? The overly individualistic agent?
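Kauffman explores this order-versus-chaos balance with random Boolean networks: N on/off "genes," each wired to K others and governed by a random rule. Networks with K = 2 tend to settle into short, stable cycles near the edge of chaos, while larger K tends toward chaotically long cycles. Here is a small illustrative sketch (the network size, K, and seeds are my own choices); because the state space is finite, iterating the network must eventually revisit a state, revealing the attractor cycle it fell into.

```python
import random

def random_boolean_network(n=12, k=2, seed=3):
    """One of Kauffman's random Boolean networks: each of n on/off
    nodes reads k randomly chosen nodes and applies a random Boolean
    function, stored as a lookup table over the 2**k input patterns."""
    rng = random.Random(seed)
    wiring = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return wiring, tables

def step(state, wiring, tables):
    """Update every node at once from its inputs' current values."""
    new_state = []
    for ins, table in zip(wiring, tables):
        index = 0
        for src in ins:
            index = (index << 1) | state[src]
        new_state.append(table[index])
    return tuple(new_state)

def attractor_length(n=12, k=2, seed=3):
    """Run the network until a state repeats; the gap between the two
    visits is the length of the attractor cycle."""
    wiring, tables = random_boolean_network(n, k, seed)
    rng = random.Random(seed + 1)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, wiring, tables)
        t += 1
    return t - seen[state]

cycle = attractor_length()
```

Rerunning with larger k (say, 5) typically yields much longer, more chaotic cycles, which is the frozen-versus-chaotic contrast the paragraphs above describe.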

Hence, out of unpredictability can emerge not only lawfulness, but life itself ... and life lessons as well.

Wednesday, April 20, 2005

Yet More on Randomness

Deborah
J. Bennett's
Randomness
As I continue to peruse Randomness, a fine book by Deborah J. Bennett on a topic that lies at the heart not only of probability theory but also of the theory of evolution, I do so with a sense that I am not really cut out to understand it. I have to admit, I find it rather abstruse.

But I had a crazy idea, as I was reading along. Maybe God does play dice with the universe.

Einstein said just the opposite, of course: that God does not play dice. But maybe this was only Einstein's way of affirming that the Creator does not treat the world as cruelly as the gaming tables treat the high roller on a losing streak.

For one of the things that comes across to me as I read Bennett is that it is next to impossible for humans or their machines to generate truly random results. There is no such thing as a truly "fair" pair of dice, or roulette wheel, or coin to flip — meaning that no matter how many millions of times you toss it, spin it, or flip it, there will always be pesky patterns that show up, if you look hard enough, in the supposedly patternless sequence of results.

An inadvertent illustration of this reality occurs in the figure on p. 140. It's intended to show how you can estimate the area under an irregular curve by inscribing the curve in a rectangle of known area and generating a plethora of randomly positioned points within the rectangle. The proportion of points that fall under the curve can be used to estimate the fraction of the rectangle's area that lies under the curve.
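The figure's method is easy to reproduce. Here is a hedged sketch in which the curve, rectangle, point count, and seed are my own stand-ins for whatever the book's illustrator used: scatter random points in a bounding rectangle and scale the rectangle's area by the fraction that land under the curve. For y = x² on [0, 1], the true area is exactly 1/3, so the quality of the estimate is easy to check.

```python
import random

def area_under_curve(f, x_max, y_max, n=100_000, seed=0):
    """Scatter n random points in the rectangle [0, x_max] x [0, y_max];
    the rectangle's area times the fraction of points falling on or
    under y = f(x) estimates the area under the curve."""
    rng = random.Random(seed)
    under = 0
    for _ in range(n):
        x = rng.uniform(0, x_max)
        y = rng.uniform(0, y_max)
        if y <= f(x):
            under += 1
    return (under / n) * (x_max * y_max)

# Area under y = x**2 between 0 and 1; the exact answer is 1/3.
estimate = area_under_curve(lambda x: x * x, 1.0, 1.0)
```

With 100,000 points the estimate typically lands within a few thousandths of 1/3; the whole point of the technique is that the scatter must be genuinely patternless for that convergence to be trustworthy.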

But the illustrator has obviously taken a shortcut, making a smaller square of scattershot dots and replicating it several times within the larger rectangle. The edges of the squares are decipherable as a faint pattern in the dots — giving a visual analogue of hidden patterns in all seemingly random data.

Another thing that leaps out at even a non-probabilist like myself is that Bennett feels no compulsion to answer the question of whether randomness is real or a byproduct of our clumsiness and ignorance. That is, are events in this world ever non-deterministic, or do they just seem to happen by chance?

Bennett begins her chapter 6 with a discourse on different philosophers' attitudes toward the chance-vs.-necessity question. For example, the ancient Greek Atomists and Stoics both favored determinism, while Epicureans held out for chance as a requisite basis for human free will.

But before Bennett lets us in on which side was right, she instead slides over into a lengthy discussion which takes more than one chapter, and which can be summarized this way: there are laws of chance. For example, if you throw four six-sided dice over and over again and record the frequencies with which each possible total of pips showing turns up, the resultant graph looks like a bell curve every time (see p. 103).
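The bell curve for four dice is no empirical accident; it can be derived by brute-force enumeration of all 1,296 equally likely outcomes, a quick check on the figure Bennett gives on p. 103:

```python
from collections import Counter
from itertools import product

# All 6**4 = 1,296 equally likely ways four dice can land,
# tallied by the total number of pips showing.
totals = Counter(sum(roll) for roll in product(range(1, 7), repeat=4))

# The counts rise smoothly from 1 way (total 4) to a peak of 146 ways
# (total 14), then fall symmetrically back to 1 way (total 24).
# Plotted as frequencies, that staircase is the bell-like curve.
```

This is why the graph looks the same "every time": the shape is fixed by counting, and repeated throws merely sample it.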

Thus there is always order that emerges from seeming disorder. And it doesn't come from the fact that the dice are not perfect and secretly generate hidden patterns, à la my first point. Rather, it comes from a mathematical law of probability.

So here's my crazy idea. Maybe there is true randomness in the real world, not just the ersatz kind which is the best we can produce with our games and machines. Maybe it is a gift from God, who alone is clever and powerful enough to generate truly random numbers as the basis for truly free processes. Maybe God is content to play dice because He knows that there are laws — which He also made — shaping and constraining whatever emerges from random worldly processes.

To those three points let me add one other: Maybe God sets things up this way because He does not want a wholly deterministic world that cannot be home to creatures who possess free will.

My crazy, half-baked idea would seem to go against the more common intuition that God, if He exists, would insist on a fully deterministic world which He could reliably program to do His will. Most people who object to the theory of evolution do so, in fact, on the basis that it leaves the eventual emergence of intelligent life on Earth to sheer chance, something God would "never" do.

If my crazy, half-baked idea is right, those people, however well-intentioned, are missing something important. Evolution heavily beholden to happenstance may be absolutely necessary in a universe whose intelligent creatures, once they emerge, are to have free will.

Monday, April 18, 2005

The Law of Very Large Numbers

I was touted onto Deborah Bennett's excellent book Randomness (see The Random and the Deliberate) by an endnote in William Dembski's Intelligent Design. The note pointed to Bennett's citation of a fifteenth-century rabbi, Isaac ben Mosheh Aramah. In the Bible, said the Rabbi, Jonah's being found guilty of bringing God's wrath upon his shipmates was legitimate since many lots had been cast in sequence, all falling upon Jonah. Had just one lot been cast, though, the adverse result would have been a matter of chance.

Casting lots was an ancient way of flipping a coin or rolling a die. Other randomizers used in antiquity included the oddly shaped, four-sided, bonelike talus (Latin) or astragalus (Greek) — see Bennett, pp. 19-20. In the first century B.C., the Roman orator-statesman Cicero is on record as having disagreed with the thinking that would be endorsed by Isaac ben Mosheh Aramah some sixteen centuries later. Of the "Venus-throw" — four thrown tali, wherein each talus displays a different one of the four faces — Cicero said multiple successive Venus-throws, however improbable, would be just as much a matter of chance as a single Venus-throw (see Bennett, p. 74).
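Just for illustration, suppose (contrary to fact, since real tali were irregular and their four faces were not equally likely) that each face of a talus had probability 1/4. Under that simplifying assumption of mine, the chance of a single Venus-throw, and of several in succession, can be computed exactly:

```python
from fractions import Fraction
from itertools import product

# Four tali, four faces each, all faces assumed equally likely.
# A Venus-throw shows a different face on every talus.
faces = range(4)
venus = sum(1 for throw in product(faces, repeat=4) if len(set(throw)) == 4)

p_one = Fraction(venus, 4 ** 4)   # 24 favorable throws out of 256
p_three_in_a_row = p_one ** 3     # far less likely, but still possible
```

Cicero's point survives the arithmetic: three successive Venus-throws are vastly less probable than one, but a nonzero probability is still just a probability, not a message from the gods.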

Modern probability theory agrees with Cicero. The reason, as Bennett points out, is the law of very large numbers. It is not impossible for, say, a tossed coin to come up heads 100 times in a row — just very unlikely. The odds are one in two to the 100th power, and thus vanishingly small, but still greater than zero.

Let's call two to the 100th power a "kazillion." If you tossed the coin a kazillion times, you would expect to get 100 heads in a row about once, on average, somewhere along the line. Failing that, if you tossed the coin several kazillion times, the chances of getting a 100-heads sequence at some point would approach certainty. This is the power of the law of very large numbers (sometimes quite erroneously called the "law of averages"):

... with many trials the number of times a particular outcome will occur is very close to mathematical conjecture, or mathematical expectation. And this applies to even the most unlikely events; if it is possible for them to happen, given enough opportunities, eventually they will happen, in accordance with the laws of probability. (Bennett, p. 76)
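The arithmetic behind the "kazillion" claim can be checked exactly with rational numbers. One hedge on method: counting every possible starting position for a run of 100 heads (the positions overlap) and applying linearity of expectation gives the expected number of runs, which comes out just shy of one per kazillion tosses.

```python
from fractions import Fraction

# Chance that a fair coin comes up heads 100 times in a row.
p_run = Fraction(1, 2) ** 100

kazillion = 2 ** 100  # the post's nickname for 2 to the 100th power

# A sequence of `kazillion` tosses has (kazillion - 99) places where a
# run of 100 could start; by linearity of expectation, the expected
# number of (possibly overlapping) runs is (kazillion - 99) * p_run,
# which is 1 minus a vanishingly small correction.
expected_runs = (kazillion - 99) * p_run
```

So the probability is tiny but strictly positive, and given a kazillion opportunities the "impossible" becomes expected, exactly as Bennett says.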


In comparing Cicero's view of chance with that of his contemporaries — the latter being closer to Rabbi Isaac ben Mosheh Aramah's in feeling that "these things happen at the direction of the gods" — Bennett calls Cicero's understanding "more mature" (p. 74).

She also notes that children in modern times don't grasp the law of very large numbers:

At very young ages children do not understand this concept. Part of the problem is that young children do not accept the notion of randomness, which is at the heart of any understanding of probability. Piaget and Inhelder [child psychology researchers] found that young children conceive of random results as displaying regulated but hidden rules. (p. 78)
Bennett also links this (shall we call it) "pre-randomness mentality" with notions of fairness. When a game is "fair" by the standards of probability theory, all that is meant is that there is no bias against any player's hopes. A "fair" six-sided die has an equal chance of coming up 1, 2, 3, 4, 5, or 6. In the very, very long run, it will bear that expectation out. However, this does not guarantee that a player at dice will not lose all his money before that happens — a seemingly "unfair" result.

There is nothing in the laws of probability that guarantees that this second definition of fairness will be adhered to. But children, as well as many adults, are prone to the "gambler's fallacy" of thinking otherwise:
... the heart of the gambler's fallacy lies in a misconception about the fairness of the laws of chance. We believe chance to be a self-correcting process — in which deviations in one direction [say, "too many heads" in a row] will soon be countered with a deviation in the other direction [a long run of tails]. But in fact, deviations in the short run are not corrected; they are merely diluted over the long run ... . (p. 79)
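Bennett's "diluted, not corrected" point can be made quantitative. For n fair tosses, the typical deviation of the head count is 0.5·√n, which grows with n, while the typical deviation of the head proportion is 0.5/√n, which shrinks; nothing ever pushes the count back toward even. (The 0.5 factors come from the binomial standard deviation √(np(1−p)) with p = 1/2.)

```python
import math

def typical_spread(n):
    """For n fair-coin tosses, return one standard deviation of the
    head count and of the head proportion (binomial, p = 1/2)."""
    sd_count = 0.5 * math.sqrt(n)       # grows like sqrt(n)
    sd_proportion = 0.5 / math.sqrt(n)  # shrinks like 1/sqrt(n)
    return sd_count, sd_proportion

spreads = [typical_spread(n) for n in (100, 10_000, 1_000_000)]
# The count's typical spread grows (5 -> 50 -> 500 heads) even as the
# proportion's typical spread shrinks (0.05 -> 0.005 -> 0.0005):
# deviations are diluted by the growing denominator, never corrected.
```

An early excess of heads doesn't get cancelled by a compensating run of tails; it simply becomes negligible next to the ever-larger total.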
There's a pattern here. The more "modern" and "mature" our views of chance, the more we reject the notion of God's determining hand on the dice ... and the more we also reject belief in any personal or impersonal force that very soon will make events come "fair" or "just" by this second, "childish" definition.

Note how much the emphasis is on the word "soon." Events will definitely come fair and just if there can be several kazillion coin tosses or die throws. So says the law of very large numbers.


I have to note, before closing, that the Stuart Kauffman view of evolution which I favor (see Welcome to Beyond Darwin), and which forms the core of his book At Home in the Universe, could well be grounded in a so-called "childish" view "of random results as displaying regulated but hidden rules."

After all, the basic idea Kauffman advances is that there is a previously unsuspected "lawfulness" to evolutionary history. Though at the level of fine detail, the history of life on Earth is indeed unpredictable and "incompressible" — it cannot be reduced to any simpler, more-quickly-run computer algorithm — at a macro level, it might well produce outcomes that are "robust" and eminently predictable.

Among the results that are "robust" and predictable may well be creatures that have our signal qualities, such as intelligence, consciousness, and self-awareness.

Let's say for the sake of argument that in terms of Kauffman's personal psychology — and mine — there is an underlying need not to abandon a "childish" belief in ... how shall I put it? Human fairness over dice fairness? Goodness and justice over blindness? Meaning over meaninglessness?

However we may put it, we must also remind ourselves that science is science because, over time, it factors out personal psychology. Kauffman in his book gives us many "mature" grounds for believing that his vaunted laws of "self-organization" may complement blind-chance mutations, winnowed by natural selection, in guiding evolutionary history. Other researchers — whatever their own biases — can test his bold hypotheses, although some of that will have to wait until science gains more experimental control of the requisite bio-molecules in the lab.


However "immature" some of our biases may be against believing in blind chance and the law of very large numbers, there is also great value in "growing up," probability-wise. I do not necessarily mean that the only "mature" stance is an atheistic one. Rather, I think it vital for the grown-up theist to recognize that we work with God to "make our own luck."

Consider the victim of the "gambler's fallacy" who loses all his or her money out of a conviction that Lady Luck is "fair" and will soon reward the bettor's persistence. Put differently, this equates to the assumption that the bettor can passively rely on an equalizing tendency built into the grand scheme of things. The bettor doesn't actually have to do anything to "change his luck." Which, if worst comes to worst and the bettor loses all, makes him or her a victim, not of chance, but of rank injustice.

Thus does the childish "gambler's fallacy" go along with a victim mentality, when things don't work out.

A more mature mentality is to assume that there is a way for us to work with God to change our outcome into a more just one.

It's not all up to us, as the atheist believes ... but neither is it all up to God, or Lady Luck, or Fortuna, the Roman goddess who gave us the word "fortune" (see Bennett, p. 31). Instead, there is an active, dynamic, world-creative partnership between God and ourselves. In faith, and with God's help, we can create our own luck as we create our own world. We do not have to passively accept all the "bad stuff" that happens to us. This I would call a "mature" faith in God.

Sunday, April 17, 2005

The Random and the Deliberate

Deborah
J. Bennett's
Randomness
As I indicated in On Randomness and Chance, I have a lot of questions about the whole concept of blind chance, so important to Darwin's theory of evolution: in evolutionary history, crucially, genetic mutations are assumed to happen at random. So I am reading an excellent book on the subject, mathematician Deborah J. Bennett's Randomness. Bennett surveys, among other things, the history of belief and practice as concerns games of chance using randomizers such as dice and their precursors.

That includes the casting of lots. In her chapter entitled "When the Gods Played Dice," Bennett provides a quick overview of lot casting and other oracular practices among ancient peoples. She says that "a strong belief existed that the gods controlled the outcome" (p. 28), though the technique used in divination had to be what we would call random "to eliminate the possibility of human manipulation and thereby to give the gods a clear channel through which to express their divine will."

But, she says, the ancient Chinese oracle called the I Ching or Book of Changes did divination with an important difference. It may have started out like other cultures' oracles, assuming "the divine infallibility of a single drawn lot" (p. 41). But as early as the sixth century B.C., a Confucian treatise ...

... called the Shu Ching (Book of History or Book of Documents ...) suggests that maybe one should use one's own judgment in deciding whether to follow an oracle ... . (p. 40)

To wit, ask not just one diviner but three, and "follow the answer of at least the two who are in agreement" (p. 41). This admission of the possibility of error or inconsistent results in supposedly god-directed divination sanctioned individual skepticism and personal deliberation:

Even after the [three] diviners had arrived at their predictions, the treatise continues to allow for doubt and to encourage judgment: "But if you have great doubt, then deliberate upon it, turning to your own heart; deliberate, turning to your retinue; deliberate, turning to the common people; deliberate, turning to the people who predict and divine." (p. 41)

In other words, I Ching divination has long had a built-in "moral obligation to deliberate on the results of the oracle," a feature which "gradually changed the document from a divinatory to a philosophical text" (p. 41). According to one expert, per Bennett:

[T]he first time a man refused to let the matter of his fate rest but asked, "What can I do to change it?" the book became a book of wisdom. (p. 42)
How like modern times, I'd say, this discussion of the "usual" ancient oracles versus the I Ching is. For example, today we find "conservative" Catholics dead set on the pope's being, let's face it, the infallible diviner of God's will — no second or third opinions allowed.

We also find, at the other extreme, pure atheists who say God has no influence whatsoever on random processes, for there is no God.

And we find in-between religionists like me, a "liberal" Catholic who believes in the primacy of individual conscience on matters like birth control and abortion. We don't really know, or for that matter care, whether God preempts chance. What's really important to us is the dynamic, creative partnership between God and the human person.

For we feel that as humans we are at our most highly evolved when we claim our power, in faith, to turn the random into the deliberate!

Accordingly, "turning to your own heart," as the Chinese sage put it, is not only an option, but a necessity in a world in which both submission to God's law and personal deliberation factor into our morality. Exercising spiritual discernment often requires admitting that the line between the two — between unquestioning acceptance of divine dictum and reliance on personal judgment — can be thinly drawn and very blurry indeed.

Odd, then, how a simple discussion like Bennett's about ancient oracular beliefs on randomness can provide us with a template for understanding modern attitudes about religion.