
Robin Collins and atheist Peter Millican discuss the fine-tuning of the universe for life

British Spitfire and German Messerschmitt Me 109 locked in a dogfight

You might remember Peter Millican from the debate he had with William Lane Craig. I ranked that debate as one of the 3 best I have ever seen, along with the first Craig vs Dacey debate and the second Craig vs Sinnott-Armstrong debate.

Details:

Science has revealed that the fundamental constants and forces of the cosmos appear to be exquisitely fine-tuned to allow a universe in which life can develop. Is God the best explanation of the incredibly improbable odds of the universe we live in being a life-permitting one?

Robin Collins is a Christian philosopher and a leading advocate of the argument for God from cosmic design. Peter Millican is an atheist philosopher at Oxford University. They debate the issues.

From ‘Unbelievable?’ on ‘Premier Christian Radio’, Saturday 19th March 2016.

The debate:

MP3 file is available on the Unbelievable web site.

As usual when the atheist is an expert, there is no snark or paraphrasing in the summary.

Summary

Brierley: What is the fine-tuning argument?

Collins: The fine-tuning means that the structure of the universe is extremely precisely set to allow the existence of conscious, embodied agents who are capable of moral behavior. There are 3 kinds of fine-tuning: 1) the laws of nature (mathematical formulas), 2) the constants of physics (numbers that are plugged into the equations), 3) the initial conditions of the universe. The fine-tuning exists not just because there are lots of possibilities, but because there is something special about the actual state of affairs that we see. Every set of laws, parameters and initial conditions is equally improbable, but the vast majority of permutations do not permit life. The possible explanations: theism or the multiverse.

Brierley: How improbable are the numbers?

Collins: One case is the cosmological constant (dark energy density), which is fine-tuned to 1 part in 10^120. If larger, the universe expands too rapidly for galaxies and stars to form after the Big Bang. If smaller, the universe collapses in on itself before life could form. Another case is the initial distribution of mass-energy, which gives us the low entropy that is necessary for life. The fine-tuning there is 1 part in 10^(10^123).
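To get a sense of the scale of the numbers Collins quotes, here is a purely illustrative sketch (my own arithmetic, not from the debate). It compares exponents directly, since the odds themselves are far too small for any floating-point type:

```python
# Illustrative arithmetic only (my own, not from the debate): the quoted odds
# are far too small for ordinary floats, so compare exponents (log10) instead.
exp_cosmological_constant = 120   # 1 part in 10^120
exp_initial_entropy = 10 ** 123   # 1 part in 10^(10^123)
exp_atoms_in_universe = 80        # ~10^80 atoms, a common rough estimate

# The cosmological constant figure already exceeds the odds of picking one
# particular atom at random from the observable universe; the entropy
# figure dwarfs it entirely.
assert exp_cosmological_constant > exp_atoms_in_universe
assert exp_initial_entropy > exp_cosmological_constant
```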

Brierley: What do you think of the argument?

Millican: The argument is worth taking very seriously. I am a fan of the argument. The other arguments for God’s existence, such as the ontological and cosmological arguments, are very weak. But the fine-tuning argument has the right structure to deliver the conclusion that theists want. And it is different from the traditional design argument, which tended to focus on biological nature and is not a strong argument. The fine-tuning argument is strong because it precedes any sort of biological evolution. Although the design is present at the beginning of the universe, it is not visible until much later. The argument points to at least deism, and possibly theism. The argument is not based on ignorance; it is rooted in “the latest results from the frontiers of science” (his phrase).

Brierley: Is this the best argument from natural theology?

Collins: The cosmological argument makes theism viable intuitively, but there are some things that are puzzling, like the concept of the necessary being. But the fine-tuning argument is decisive.

Brierley: What are some objections to the fine-tuning argument?

Millican: The argument is based on recent physics, so we should be cautious, because maybe we will discover a natural explanation.

Brierley: Respond to that.

Collins: The cosmological constant has been around since 1980. But the direction that physics is moving in is that there are more constants and quantities being discovered that need to be fine-tuned, not fewer. Even if you had a grand unified theory, that theory would itself have to have the fine-tuning pushed into it.

(BREAK)

Millican: Since we have no experience of other laws and values from other universes, we don’t know whether these values can be other than they are. Psychologically, humans are prone to seeing purpose and patterns where there is none, so maybe that’s happening here.

Brierley: Respond to that.

Collins: It is possible to determine probabilities in a single-universe case. For example, multiple independent ways of calculating Avogadro’s number all converging on the same value makes that value more probable.

Millican: Yes, I am willing to accept that these constants can take on other values (the “principle of indifference”). But can this principle still be applied if the improbability were pushed up into the theory?

Collins: Even if you had a grand theory, selecting the grand theory from others would retain the improbability.

Brierley: What about the multiverse?

Millican: What if there are many, many different universes, and we happen to be in the one that is finely-tuned? Then we should not be surprised to observe fine-tuning. Maybe a multiverse theory will be discovered in the future that would allow us to have these many universes with randomized constants and quantities. “I do think that it is a little bit of a promissory note”. I don’t think physics is pointing to this right now.

Brierley: Respond to that.

Collins: I agree it’s a promissory note. This is the strongest objection to the fine-tuning argument. But there are objections to the multiverse: 1) the fine-tuning is kicked back up a level, because the multiverse generator has to be set just right to produce universes with different constants, 2) a multiverse is more likely to produce a small universe with Boltzmann brains that pop into existence and then out again, rather than a universe that contains conscious, embodied intelligent agents. I am working on a third response now that would show that the same constants that allow complex, embodied life ALSO allow the universe to be discoverable. This would negate the observer-selection effect required by the multiverse objection.

Brierley: Respond to that.

Millican: I don’t see why the multiverse generator has to be fine-tuned, since we don’t know what the multiverse generator is. I’m not impressed by the Boltzmann brains objection, but I won’t discuss it here. We should be cautious about inferring design, because maybe this is a case where we are seeing purpose and design where there is none.

Brierley: Can you negate the discoverability of the universe by saying that it might be psychological?

Collins: These things are not psychological. The selected value for the cosmic microwave background radiation is fine-tuned both for life and for discoverability. It’s not merely a discoverability selection effect; it’s optimal for discoverability. If the baryon-photon ratio were much smaller, we would have known that it was not optimal. So that judgment cannot be explained by a mere selection effect.

Millican: That’s a very interesting new twist.

Brierley: Give us your best objection.

Millican: I have two. 1) Even if you admit the fine-tuning, this doesn’t show a being who is omnipotent and omniscient. What the fine-tuning shows is that the designer is doing the best it can given the constraints from nature. If I were God, I would not have made the universe so big, and I wouldn’t have made it last 14 billion years, just to make one small area that supports life. An all-powerful God would have made the universe much smaller, and much younger. 2) The fine-tuning allows life to exist in other solar systems in other galaxies. What does this alien life elsewhere mean for traditional Christian theology? The existence of other alien civilizations argues against the truth of any one religion.

Brierley: Respond to those.

Collins: First objection: with a finite Creator, you run into the problem of having to push the design of that creature up one level, so you don’t really solve the fine-tuning problem. An unlimited being (non-material, not composed of parts) does not require fine-tuning. The fine-tuning is more compatible with theism than atheism. Second objection: I actually do think it is likely that there are other universes, and life in other galaxies and stars, and the doctrine of the Incarnation is easily adaptable to that, because God can take on multiple natures to appear to different alien civilizations.

Other resources (from WK)

If you liked this discussion, be sure and check out a full-length lecture by Robin Collins on the fine-tuning, and a shorter lecture on his very latest work. And also the Common Sense Atheism podcast featuring cosmologist Luke Barnes, who answers about a dozen objections to the fine-tuning argument.

John C. Sanford’s genetic entropy hypothesis

Christianity and the progress of science

JoeCoder sent me a recent peer-reviewed paper by John C. Sanford, so I’ve been trying to find something written by him at a layman’s level so I could understand what he is talking about.

Dr. Sanford’s CV is posted at the Cornell University web page.

I found this 20-minute video of an interview with him, in which he explains his thesis:

The most important part of that video is Sanford’s assertion that natural selection cannot remove deleterious mutations from a population faster than they arrive.

And I also found a review of a book that he wrote that explains his ideas at the layman level.

It says:

Dr. John Sanford is a plant geneticist and inventor who conducted research at Cornell University for more than 25 years. He is best known for significant contributions to the field of transgenic crops, including the invention of the biolistic process (“gene gun”).

[…]Sanford argues that, based upon modern scientific evidence and the calculations of population geneticists (who are almost exclusively evolutionists), mutations are occurring at an alarmingly high rate in our genome and that the vast majority of all mutations are either harmful or “nearly-neutral” (meaning a loss for the organism or having no discernible fitness gain). Importantly, Sanford also establishes the extreme rarity of any type of beneficial mutations in comparison with harmful or “nearly-neutral” mutations. Indeed, “beneficial” mutations are so exceedingly rare as to not contribute in any meaningful way. [NOTE: “Beneficial” mutations do not necessarily result from a gain in information, but instead, these changes predominantly involve a net loss of function to the organism, which is also not helpful to [Darwinism]; see Behe, 2010, pp. 419-445.] Sanford concludes that the frequency and generally harmful or neutral nature of mutations prevents them from being useful to any scheme of random evolution.

[…]In the next section of the book, Sanford examines natural selection and asks whether “nature” can “select” in favor of the exceedingly rare “beneficial” mutations and against the deleterious mutations. The concept of natural selection is generally that the organisms that are best adapted to their environment will survive and reproduce, while the less fit will not. Sanford points out that this may be the case with some organisms, but more commonly, selection involves chance and luck. But could this process select against harmful mutations and allow less harmful or even beneficial mutations to thrive? According to Sanford, there are significant challenges to this notion.

Sanford is also a co-author of an academic book on these issues, along with Dembski and Behe.

Now, I do have to post something more complicated about this, which you can skip – it’s an abstract of a paper he co-authored from that book:

Most deleterious mutations have very slight effects on total fitness, and it has become clear that below a certain fitness effect threshold, such low-impact mutations fail to respond to natural selection. The existence of such a selection threshold suggests that many low-impact deleterious mutations should accumulate continuously, resulting in relentless erosion of genetic information. In this paper, we use numerical simulation to examine this problem of selection threshold.

The objective of this research was to investigate the effect of various biological factors individually and jointly on mutation accumulation in a model human population. For this purpose, we used a recently-developed, biologically-realistic numerical simulation program, Mendel’s Accountant. This program introduces new mutations into the population every generation and tracks each mutation through the processes of recombination, gamete formation, mating, and transmission to the new offspring. This method tracks which individuals survive to reproduce after selection, and records the transmission of each surviving mutation every generation. This allows a detailed mechanistic accounting of each mutation that enters and leaves the population over the course of many generations. We term this type of analysis genetic accounting.

Across all reasonable parameters settings, we observed that high impact mutations were selected away with very high efficiency, while very low impact mutations accumulated just as if there was no selection operating. There was always a large transitional zone, wherein mutations with intermediate fitness effects accumulated continuously, but at a lower rate than would occur in the absence of selection. To characterize the accumulation of mutations of different fitness effect we developed a new statistic, selection threshold (STd), which is an empirically determined value for a given population. A population’s selection threshold is defined as that fitness effect wherein deleterious mutations are accumulating at exactly half the rate expected in the absence of selection. This threshold is mid-way between entirely selectable, and entirely unselectable, mutation effects.

Our investigations reveal that under a very wide range of parameter values, selection thresholds for deleterious mutations are surprisingly high. Our analyses of the selection threshold problem indicate that given even modest levels of noise affecting either the genotype-phenotype relationship or the genotypic fitness-survival-reproduction relationship, accumulation of low-impact mutations continually degrades fitness, and this degradation is far more serious than has been previously acknowledged. Simulations based on recently published values for mutation rate and effect-distribution in humans show a steady decline in fitness that is not even halted by extremely intense selection pressure (12 offspring per female, 10 selectively removed). Indeed, we find that under most realistic circumstances, the large majority of harmful mutations are essentially unaffected by natural selection and continue to accumulate unhindered. This finding has major theoretical implications and raises the question, “What mechanism can preserve the many low-impact nucleotide positions that constitute most of the information within a genome?”
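The selection-threshold idea from the abstract can be illustrated with a toy simulation (a minimal sketch of my own, not Mendel's Accountant; every parameter value is invented): high-impact mutations are purged almost perfectly, while near-neutral ones accumulate at close to the no-selection rate.

```python
import random

def toy_accumulation(effect, noise=0.05, mu=0.5, pop=200, gens=200, seed=1):
    """Toy truncation-selection sketch (my own construction, NOT Mendel's
    Accountant; all parameter values invented). Each individual carries a
    count of deleterious mutations, each costing `effect` fitness; survival
    ranking is perturbed by Gaussian `noise`. Returns the mean mutation
    count per individual after `gens` generations."""
    rng = random.Random(seed)
    counts = [0] * pop
    for _ in range(gens):
        # new deleterious mutations arrive at rate `mu` per individual
        counts = [c + (1 if rng.random() < mu else 0) for c in counts]
        # noisy truncation selection: keep the quarter with the lowest
        # (noise-perturbed) mutation load; each survivor leaves 4 offspring
        counts.sort(key=lambda c: c * effect + rng.gauss(0, noise))
        counts = counts[: pop // 4] * 4
    return sum(counts) / pop

# High-impact mutations are purged; mutations whose fitness cost is buried
# in the noise accumulate toward the no-selection expectation of mu * gens.
high_impact = toy_accumulation(effect=0.5)
near_neutral = toy_accumulation(effect=0.0001)
```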

Now I have been told by JoeCoder that there are many critical responses to his hypothesis, most of which have to do with whether natural selection can overcome the difficulty he is laying out. But since this is not my area of expertise, there is not much I can say to adjudicate here. Take it for what it is.

Positive arguments for Christian theism

Stephen C. Meyer and Marcus Ross lecture on the Cambrian explosion

Cambrian Explosion

Access Research Network is a group that produces recordings of lectures and debates related to intelligent design. I noticed that on their YouTube channel they are releasing some of their older lectures and debates for FREE. So I decided to write a summary of one that I really like on the Cambrian explosion. This lecture features Dr. Stephen C. Meyer and Dr. Marcus Ross.

The lecture is about two hours. There are really nice slides with lots of illustrations to help you understand what the speakers are saying, even if you are not a scientist.

Here is a summary of the lecture from ARN:

The Cambrian explosion is a term often heard in origins debates, but seldom completely understood by the non-specialist. This lecture by Meyer and Ross is one of the best overviews available on the topic and clearly presents in verbal and pictorial summary the latest fossil data (including the recent finds from Chengjiang China). This lecture is based on a paper recently published by Meyer, Ross, Nelson and Chien “The Cambrian Explosion: Biology’s Big Bang” in Darwinism, Design and Public Education (2003, Michigan State University Press). This 80-page article includes 127 references and the book includes two additional appendices with 63 references documenting the current state of knowledge on the Cambrian explosion data.

The term Cambrian explosion describes the geologically sudden appearance of animals in the fossil record during the Cambrian period of geologic time. During this event, at least nineteen, and perhaps as many as thirty-five (of forty total) phyla made their first appearance on earth. Phyla constitute the highest biological categories in the animal kingdom, with each phylum exhibiting a unique architecture, blueprint, or structural body plan. The word explosion is used to communicate the fact that these life forms appear in an exceedingly narrow window of geologic time (no more than 5 million years). If the standard earth’s history is represented as a 100 yard football field, the Cambrian explosion would represent a four inch section of that field.
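As a quick sanity check on that football-field analogy (my own arithmetic, not part of the ARN summary):

```python
# Checking the football-field analogy (my own arithmetic): a ~5 million year
# window out of ~4.5 billion years of Earth history, mapped onto 100 yards.
window_myr = 5          # Cambrian explosion window, millions of years (from the text)
earth_age_myr = 4500    # conventional age of the Earth, millions of years

yards = window_myr / earth_age_myr * 100
inches = yards * 36     # 36 inches per yard
# comes out to 4.0 inches, matching the "four inch section" in the summary
```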

For a majority of earth’s life forms to appear so abruptly is completely contrary to the predictions of Neo-Darwinian and Punctuated Equilibrium evolutionary theory, including:

  • the gradual emergence of biological complexity and the existence of numerous transitional forms leading to new phylum-level body plans;
  • small-scale morphological diversity preceding the emergence of large-scale morphological disparity; and
  • a steady increase in the morphological distance between organic forms over time and, consequently, an overall steady increase in the number of phyla over time (taking into account factors such as extinction).

After reviewing how the evidence is completely contrary to evolutionary predictions, Meyer and Ross address three common objections: 1) the artifact hypothesis: Is the Cambrian explosion real?; 2) The Vendian Radiation (a late pre-Cambrian multicellular organism); and 3) the deep divergence hypothesis.

Finally Meyer and Ross argue why design is a better scientific explanation for the Cambrian explosion. They argue that this is not an argument from ignorance, but rather the best explanation of the evidence from our knowledge base of the world. We find in the fossil record distinctive features or hallmarks of designed systems, including:

  • a quantum or discontinuous increase in specified complexity or information
  • a top-down pattern of scale diversity
  • the persistence of structural (or “morphological”) disparities between separate organizational systems; and
  • the discrete or novel organizational body plans

When we encounter objects that manifest any of these several features and we know how they arose, we invariably find that a purposeful agent or intelligent designer played a causal role in their origin.

Recorded April 24, 2004. Approximately 2 hours including audience Q&A.

You can get a DVD of the lecture and other great lectures from Access Research Network. I recommend their origin of life lectures – I have watched the ones with Dean Kenyon and Charles Thaxton probably a dozen times each. Speaking as an engineer, you never get tired of seeing engineering principles applied to questions like the origin of life.

If you’d like to see Dr. Meyer defend his views in a debate with someone who reviewed his book about the Cambrian explosion, you can find that in this previous post.

Further study

The Cambrian explosion lecture above is a great intermediate-level lecture and will prepare you to be able to understand Dr. Meyer’s new book “Darwin’s Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design“. The Michigan State University book that Dr. Meyer mentions is called “Darwin, Design and Public Education“. That book is one of the two good collections on intelligent design published by academic university presses, the other one being from Cambridge University Press, and titled “Debating Design: From Darwin to DNA“. If you think this lecture is above your level of understanding, then be sure and check out the shorter and more up-to-date DVD “Darwin’s Dilemma“.

The media reported that TRAPPIST-1 planets were “Earth-like”, but were they?

Christianity and the progress of science

My assumption whenever I read these headlines from the naturalist mainstream media is that they are just scientific illiterates pushing a science fiction agenda. Naturalists believe that no intelligent designer was required in order to create a planet, a solar system and a galaxy fine-tuned for complex embodied life. The mainstream media tries to help naturalists by trumpeting stories that make planets that support life look common, so that no designer is needed.

Recently, there was a story about some planets that the mainstream media called “Earth-like”. But were they really Earth-like?

Evolution News reports: (links removed)

Do you recall the hubbub only one month ago about TRAPPIST-1, a dim red dwarf star some 40 light years from Earth? This star has seven planets, three of which, roughly Earth-sized, were announced as being potentially habitable. This led to excited speculation about alien evolution:

  • “Scientists find three new planets where life could have evolved” (Sky News)
  • “Nasa discovers new solar system where life may have evolved on three planets” (The Telegraph)
  • “Nasa’s ‘holy grail’: Entire new solar system that could support alien life discovered” (The Independent)
  • “Seven Alien ‘Earths’ Found Orbiting Nearby Star” (National Geographic)

Well, not so fast. Much of the breathlessness about the system stemmed from a thoroughly imaginative artist’s rendering courtesy of NASA. The planets are designated by letters, b through h. The middle three planets are depicted as rather inviting, with what appear to be pleasing Earth-like oceans.

Today, the TRAPPIST-1 bubble looks to have popped, with 3D computer climate modeling showing major problems with the system. According to Eric T. Wolf of the University of Colorado’s Laboratory for Atmospheric and Space Physics, the inner three planets would be barren, the outer three frozen. And the middle, planet e? In NASA’s rendering, it looks the most Earth-like. However, in a system like this centering on a dim red dwarf, planet e would need to have been stocked, to start, with seven times the volume of Earth’s oceans.


Let’s review what’s needed for a planet to support life, so that when these stories come out, we can recognize how many “Earth-like” qualities required for life are not mentioned.

Previously, I blogged about a few of the minimum requirements that a planet must satisfy in order to support complex life.

Here they are:

  • a solar system with a single massive Sun that can serve as a long-lived, stable source of energy
  • a terrestrial planet (non-gaseous)
  • the planet must be the right distance from the sun in order to preserve liquid water at the surface – if it’s too close, the water is burnt off in a runaway greenhouse effect, if it’s too far, the water is permanently frozen in a runaway glaciation
  • the planet has to be far enough from the star to avoid tidal locking and solar flares
  • the solar system must be placed at the right place in the galaxy – not too near dangerous radiation, but close enough to other stars to be able to absorb heavy elements after neighboring stars die
  • a moon of sufficient mass to stabilize the tilt of the planet’s rotation
  • plate tectonics
  • an oxygen-rich atmosphere
  • a sweeper planet to deflect comets, etc.
  • planetary neighbors must have non-eccentric orbits
  • planet mass must be enough to retain an atmosphere, but not so massive as to cause a greenhouse effect

Now what happens if we disregard all of those characteristics, and just classify an Earth-like planet as one which is the same size and receives the same amount of radiation from its star? Well, then you end up labeling a whole bunch of planets as “Earth-like” that really don’t permit life.
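To make the contrast concrete, here is a toy classifier sketch (all planet records and thresholds below are invented for illustration; this is not real survey data):

```python
# Hypothetical planet records, invented for illustration:
# (radius in Earth radii, stellar flux relative to Earth,
#  has_large_moon, has_plate_tectonics)
planets = [
    (1.0, 1.0, True,  True),   # an Earth twin
    (1.1, 0.9, False, False),  # "Earth-like" by size and flux only
    (0.9, 1.1, False, True),
    (1.0, 1.0, True,  False),
]

def earth_like_loose(p):
    # the media definition: right size, right amount of stellar radiation
    r, flux, _, _ = p
    return 0.8 <= r <= 1.2 and 0.75 <= flux <= 1.25

def earth_like_strict(p):
    # adds just two of the many habitability requirements listed above
    return earth_like_loose(p) and p[2] and p[3]

loose = sum(map(earth_like_loose, planets))   # all 4 pass the loose test
strict = sum(map(earth_like_strict, planets)) # only the Earth twin passes
```

Even with only two extra criteria, the count of “Earth-like” planets drops from four to one; adding the full list above would thin it further.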

Peer-reviewed paper: Michael Behe’s “First Rule of Adaptive Evolution”

Christianity and the progress of science

Let’s take a look at Mike Behe’s first rule of adaptive evolution, which states that most examples of adaptation in evolutionary experiments involve a loss of function or a modification of an existing function, not new functionality.

The paper was published in the Quarterly Review of Biology. I found it on PubMed.

Abstract:

Adaptive evolution can cause a species to gain, lose, or modify a function; therefore, it is of basic interest to determine whether any of these modes dominates the evolutionary process under particular circumstances. Because mutation occurs at the molecular level, it is necessary to examine the molecular changes produced by the underlying mutation in order to assess whether a given adaptation is best considered as a gain, loss, or modification of function. Although that was once impossible, the advance of molecular biology in the past half century has made it feasible. In this paper, I review molecular changes underlying some adaptations, with a particular emphasis on evolutionary experiments with microbes conducted over the past four decades. I show that by far the most common adaptive changes seen in those examples are due to the loss or modification of a pre-existing molecular function, and I discuss the possible reasons for the prominence of such mutations.

By far the most common adaptive changes in the examples we have are due to loss of function or modification of pre-existing function?

Evolution News has a post up about the paper.

Excerpt:

After reviewing the effects of mutations upon Functional Coding ElemenTs (FCTs), Michael Behe’s recent review article in Quarterly Review of Biology, “Experimental Evolution, Loss-of-Function Mutations and ‘The First Rule of Adaptive Evolution’,” offers some conclusions. In particular, as the title suggests, Behe introduces a rule of thumb he calls the “The First Rule of Adaptive Evolution”: “Break or blunt any functional coded element whose loss would yield a net fitness gain.” In essence, what Behe means is that mutations that cause loss-of-FCT are going to be far more likely and thus far more common than those which gain a functional coding element. In fact, he writes: “the rate of appearance of an adaptive mutation that would arise from the diminishment or elimination of the activity of a protein is expected to be 100-1000 times the rate of appearance of an adaptive mutation that requires specific changes to a gene.” Since organisms will tend to evolve along the most likely pathway, they will tend to break or lose an FCT before gaining a new one. He explains:

It is called the “first” rule because the rate of mutations that diminish the function of a feature is expected to be much higher than the rate of appearance of a new feature, so adaptive loss-of-FCT or modification-of-function mutations that decrease activity are expected to appear first, by far, in a population under selective pressure. (Michael J. Behe, “Experimental Evolution, Loss-of-Function Mutations and ‘The First Rule of Adaptive Evolution’,” Quarterly Review of Biology, Vol. 85(4) (December, 2010).)
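To see what that rate difference implies, here is a back-of-envelope sketch (my own arithmetic, not from the paper), using the 100x-1000x ratio quoted above:

```python
def p_loss_arrives_first(ratio):
    # For two independent Poisson processes, the probability that the
    # faster one fires first is rate_fast / (rate_fast + rate_slow),
    # which for a rate ratio r over a baseline of 1 is r / (r + 1).
    return ratio / (ratio + 1)

# Using the quoted 100x-1000x rate advantage of adaptive loss/diminishment
# mutations over adaptive gain mutations (illustrative arithmetic only):
#   100x  -> ~99.0% chance the first adaptive mutation breaks or blunts
#   1000x -> ~99.9% chance
odds = {r: p_loss_arrives_first(r) for r in (100, 1000)}
```

This is why the rule says loss or modification arrives “first, by far”: under selective pressure, the breaking mutation is almost always available before the building one.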

Behe argues that this point is empirically supported by the research reviews in the paper. He writes:

As seen in Tables 2 through 4, the large majority of experimental adaptive mutations are loss-of-FCT or modification-of-function mutations. In fact, leaving out those experiments with viruses in which specific genetic elements were intentionally deleted and then restored by subsequent evolution, only two gain-of-FCT events have been reported

After asking “Why is this the case?” Behe states, “One important factor is undoubtedly that the rate of appearance of loss-of-FCT mutations is much greater than the rate of construction of new functional coded elements.” He draws sound and defensible conclusions from the observed data:

Leaving aside gain-of-FCT for the moment, the work reviewed here shows that organisms do indeed adapt quickly in the laboratory–by loss-of-FCT and modification-of-function mutations. If such adaptive mutations also arrive first in the wild, as they of course would be expected to, then those will also be the kinds of mutations that are first available to selection in nature. … In general, if a sequence of genomic DNA is initially only one nucleotide removed from coding for an adaptive functional element, then a single simple point mutation could yield a gain-of-FCT. As seen in Table 5, several laboratory studies have achieved thousand to million-fold saturations of their test organisms with point mutations, and most of the studies reviewed here have at least single-fold saturation. Thus, one would expect to have observed simple gain-of-FCT adaptive mutations that had sufficient selective value to outcompete more numerous loss-of-FCT or modification-of-function mutations in most experimental evolutionary studies, if they had indeed been available.

But this stark lack of examples of gain-of-functional coding elements can have important implications:

A tentative conclusion suggested by these results is that the complex genetic systems that are cells will often be able to adapt to selective pressure by effectively removing or diminishing one or more of their many functional coded elements.

Behe doesn’t claim that gain-of-function mutations will never occur, but the clear implication is that neo-Darwinists cannot forever rely on examples of loss or modification-of-FCT mutations to explain molecular evolution. At some point, there must be gain of function.

Now, there was a response to this paper from Jerry Coyne on his blog, and then a rebuttal from Mike Behe in a separate article on Evolution News.