The Chance of the Gaps

William A. Dembski

1. Probabilistic Resources

Statistical reasoning must be capable of eliminating chance when the probability of events gets too small. If not, chance can be invoked to explain anything. Scientists rightly resist invoking the supernatural in scientific explanations for fear of committing a god-of-the-gaps fallacy (the fallacy of using God as a stop-gap for ignorance). Yet without some restriction on the use of chance, scientists are in danger of committing a logically equivalent fallacy—one we may call the chance-of-the-gaps fallacy. Chance, like God, can become a stop-gap for ignorance.
For instance, in the movie This Is Spinal Tap, one of the lead characters remarks that a former drummer in his band died by spontaneously combusting. Any one of us could this instant spontaneously combust if all the fast-moving air molecules in our vicinity suddenly converged on us. Such an event, however, is highly improbable, and we do not give it a second thought.


Even so, high improbability by itself is not enough to preclude chance. After all, highly improbable events happen all the time. Flip a coin a thousand times, and you will participate in a highly improbable event. Indeed, just about anything that happens is highly improbable once we factor in all the other ways things might have turned out. Mere improbability therefore fails to rule out chance. In addition, improbability needs to be conjoined with an independently given pattern. An arrow shot randomly at a large blank wall will be highly unlikely to land at any one
place on the wall. Yet land it must, and so some highly improbable event will be realized. But now fix a target on that wall and shoot the arrow. If the arrow lands in the target and the target is sufficiently small, then chance is no longer a reasonable explanation of the arrow’s trajectory.
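The coin-flip point can be made concrete with a short sketch in Python (illustrative only, not part of the original argument): any particular sequence of a thousand fair flips has probability 2^-1000, yet each run of the experiment is guaranteed to realize one such sequence, just as the arrow must land somewhere on the wall.

```python
import random

FLIPS = 1000

# Probability of any one particular sequence of 1000 fair coin flips.
p_particular = 0.5 ** FLIPS
print(f"P(one specific sequence) = 2^-{FLIPS} ~ {p_particular:.2e}")

# Yet flipping the coin is guaranteed to realize some such sequence.
sequence = "".join(random.choice("HT") for _ in range(FLIPS))
print("This run realized the (highly improbable) sequence beginning:",
      sequence[:20], "...")
```

Every run prints a different sequence, and each of them had the same vanishing probability; what calls for explanation is never bare improbability but improbability conjoined with a pattern given in advance.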


Highly improbable, independently patterned events are said to exhibit specified complexity. The term specified complexity has been around since 1973, when Leslie Orgel introduced it in connection with origins-of-life research: “Living organisms are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.”1
More recently, Paul Davies has also used the term in connection with the origin of life: “Living organisms are
mysterious not for their complexity per se, but for their tightly specified complexity.”2 Events are specified if they exhibit an independently given pattern (cf. the target fixed on the wall). Events are complex to the degree that they are improbable. The identification of complexity with improbability here is straightforward. Imagine a combination lock. The more possibilities on the lock, the more complex the mechanism, and correspondingly the more improbable that it can be opened by chance. Note that the “complexity” in “specified complexity” has a particular
probabilistic meaning and is not meant to exhaust the concept of complexity (Seth Lloyd, for instance, records dozens of types of complexity3).
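The lock analogy translates directly into arithmetic. Here is a minimal sketch in Python (the dial counts are illustrative assumptions, not from the text): with exactly one opening combination, the probability of opening the lock by a single random guess is the reciprocal of the number of possible combinations, so added complexity means added improbability.

```python
def chance_of_opening(dials: int, positions: int) -> float:
    """Probability that one random guess opens a lock whose dials each
    have `positions` settings and which has exactly one opening combination."""
    return 1 / positions ** dials

# More dials -> a more complex mechanism -> a smaller chance of opening it.
for dials in (1, 2, 3, 4):
    print(f"{dials} dial(s) of 40 positions: {chance_of_opening(dials, 40):.2e}")
```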


The most controversial claim in my writings is that specified complexity is a reliable empirical marker of intelligent agency.4 There are several places to criticize this claim. Elliott Sober criticizes it for failing to meet Bayesian standards of probabilistic coherence.5 Robin Collins criticizes it for hinging on an ill-defined conception of specification.6 Taner Edis criticizes it for admitting a crucial counterexample—the Darwinian mechanism of natural
selection and random variation is supposed to provide a naturalistic mechanism for generating specified complexity.7 None of these criticisms holds up under scrutiny.8 Nevertheless, a persistent worry about small probability arguments remains: Given an independently given pattern, or specification, what level of improbability must be attained before chance can legitimately be precluded? A wall so large that it cannot be missed and a target so large that it covers half the wall, for instance, are hardly sufficient to preclude chance (or “beginner’s luck”) as the reason for an archer’s success in hitting the target. The target needs to be small relative to the wall to preclude hitting it by chance.


But how small is small enough? To answer this question we need the concept of a probabilistic resource. A probability is never small in isolation but only in relation to a set of probabilistic resources, which describe the number of relevant ways an event might occur or be specified. There are thus two types of probabilistic resources: replicational and specificational. To see what is at stake, consider a wall so large that an archer cannot help but hit it. Next, let us say we learn that the archer hit some target fixed to the wall. We want to know whether the archer could reasonably have been expected to hit the target by chance. To determine this we need to know what other targets on the wall the archer might have been aiming at. We also need to know how many arrows were in the archer’s quiver and might have been shot at the wall. The targets on the wall constitute the archer’s specificational resources. The arrows in the quiver constitute the archer’s replicational resources.


Note that to determine the probability of hitting some target with some arrow by chance, specificational and replicational resources multiply: Suppose the probability of hitting any given target with any one arrow is no more than p. Suppose further that there are N such targets and M arrows in the quiver. Then the probability of hitting any one of these N targets, taken collectively, with a single arrow by chance is bounded by Np, and the probability of hitting any of these N targets with at least one of the M arrows by chance is bounded by MNp. Thus to preclude chance for an event of probability p means precluding chance for a probability of MNp once M replicational and N specificational resources have been factored in. In practice it is enough that MNp < 1/2, or equivalently p < 1/(2MN). The rationale is this: if factoring in all relevant probabilistic resources still leaves an event of probability less than 1/2, that event is less probable than not, and consequently we should favor the opposite event, which is more probable than not and excludes it.9
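The multiplication of resources here is an instance of the union bound from probability theory. The following Monte Carlo sketch in Python checks the bound numerically; the particular values of p, M, and N are illustrative assumptions, not taken from the text.

```python
import random

# Illustrative values (not from the text): chance p that any one arrow
# hits any given target, N targets (specificational resources), and
# M arrows (replicational resources).
p = 0.0001
N = 20
M = 50
TRIALS = 20_000

# Model each arrow-target pair as an independent hit with probability p,
# and count the trials in which at least one of the M*N pairs is a hit.
successes = sum(
    any(random.random() < p for _ in range(M * N))
    for _ in range(TRIALS)
)

estimate = successes / TRIALS   # ~ 1 - (1 - p)**(M * N) ~ 0.095
bound = M * N * p               # union bound: 0.1
print(f"Estimated P(at least one hit): {estimate:.3f}")
print(f"Union bound MNp:               {bound:.3f}")
print(f"MNp < 1/2 (event less probable than not even with all "
      f"resources factored in): {bound < 0.5}")
```

The estimate stays below MNp, as the union bound guarantees regardless of how the individual hits depend on one another; independence is assumed here only to keep the simulation simple.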


To recap, probabilistic resources comprise the relevant ways an event can occur (replicational resources) and be specified (specificational resources). The important question therefore is not “What is the probability of the event in question?” but rather “What does its probability become after all the relevant probabilistic resources have been factored in?” Probabilities can never be considered in isolation, but must always be referred to a relevant reference class of possible replications and specifications. A seemingly improbable event can become quite probable when placed within the appropriate reference class of probabilistic resources. On the other hand, it may remain improbable even after all the relevant probabilistic resources have been factored in. If it remains improbable (and therefore complex) and if the event is also specified, then it exhibits specified complexity.

