Tag Archives: Intelligent Design

John C. Sanford’s genetic entropy hypothesis

Christianity and the progress of science

JoeCoder sent me a recent peer-reviewed paper by John C. Sanford, so I’ve been trying to find something written by him at a layman’s level, in order to understand what he is talking about.

Dr. Sanford’s CV is posted at the Cornell University web page.

I found this 20-minute video of an interview with him, in which he explains his thesis:

The most important part of that video is Sanford’s assertion that natural selection cannot remove deleterious mutations from a population faster than they arrive.

And I also found a review of a book he wrote that explains his ideas at the layman level.

It says:

Dr. John Sanford is a plant geneticist and inventor who conducted research at Cornell University for more than 25 years. He is best known for significant contributions to the field of transgenic crops, including the invention of the biolistic process (“gene gun”).

[…]Sanford argues that, based upon modern scientific evidence and the calculations of population geneticists (who are almost exclusively evolutionists), mutations are occurring at an alarmingly high rate in our genome and that the vast majority of all mutations are either harmful or “nearly-neutral” (meaning a loss for the organism or having no discernible fitness gain). Importantly, Sanford also establishes the extreme rarity of any type of beneficial mutations in comparison with harmful or “nearly-neutral” mutations. Indeed, “beneficial” mutations are so exceedingly rare as to not contribute in any meaningful way. [NOTE: “Beneficial” mutations do not necessarily result from a gain in information, but instead, these changes predominantly involve a net loss of function to the organism, which is also not helpful to [Darwinism]; see Behe, 2010, pp. 419-445.] Sanford concludes that the frequency and generally harmful or neutral nature of mutations prevents them from being useful to any scheme of random evolution.

[…]In the next section of the book, Sanford examines natural selection and asks whether “nature” can “select” in favor of the exceedingly rare “beneficial” mutations and against the deleterious mutations. The concept of natural selection is generally that the organisms that are best adapted to their environment will survive and reproduce, while the less fit will not. Sanford points out that this may be the case with some organisms, but more commonly, selection involves chance and luck. But could this process select against harmful mutations and allow less harmful or even beneficial mutations to thrive? According to Sanford, there are significant challenges to this notion.

Sanford is a co-author of an academic book on these issues, along with Dembski and Behe.

Now, I do have to post something more complicated about this, which you can skip – it’s an abstract of a paper he co-authored from that book:

Most deleterious mutations have very slight effects on total fitness, and it has become clear that below a certain fitness effect threshold, such low-impact mutations fail to respond to natural selection. The existence of such a selection threshold suggests that many low-impact deleterious mutations should accumulate continuously, resulting in relentless erosion of genetic information. In this paper, we use numerical simulation to examine this problem of selection threshold.

The objective of this research was to investigate the effect of various biological factors individually and jointly on mutation accumulation in a model human population. For this purpose, we used a recently-developed, biologically-realistic numerical simulation program, Mendel’s Accountant. This program introduces new mutations into the population every generation and tracks each mutation through the processes of recombination, gamete formation, mating, and transmission to the new offspring. This method tracks which individuals survive to reproduce after selection, and records the transmission of each surviving mutation every generation. This allows a detailed mechanistic accounting of each mutation that enters and leaves the population over the course of many generations. We term this type of analysis genetic accounting.

Across all reasonable parameter settings, we observed that high-impact mutations were selected away with very high efficiency, while very low-impact mutations accumulated just as if there were no selection operating. There was always a large transitional zone, wherein mutations with intermediate fitness effects accumulated continuously, but at a lower rate than would occur in the absence of selection. To characterize the accumulation of mutations of different fitness effect we developed a new statistic, selection threshold (STd), which is an empirically determined value for a given population. A population’s selection threshold is defined as that fitness effect wherein deleterious mutations are accumulating at exactly half the rate expected in the absence of selection. This threshold is mid-way between entirely selectable, and entirely unselectable, mutation effects.

Our investigations reveal that under a very wide range of parameter values, selection thresholds for deleterious mutations are surprisingly high. Our analyses of the selection threshold problem indicate that given even modest levels of noise affecting either the genotype-phenotype relationship or the genotypic fitness-survival-reproduction relationship, accumulation of low-impact mutations continually degrades fitness, and this degradation is far more serious than has been previously acknowledged. Simulations based on recently published values for mutation rate and effect-distribution in humans show a steady decline in fitness that is not even halted by extremely intense selection pressure (12 offspring per female, 10 selectively removed). Indeed, we find that under most realistic circumstances, the large majority of harmful mutations are essentially unaffected by natural selection and continue to accumulate unhindered. This finding has major theoretical implications and raises the question, “What mechanism can preserve the many low-impact nucleotide positions that constitute most of the information within a genome?”
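The core idea in that abstract is that selection acts on apparent fitness, and when environmental noise swamps a mutation’s tiny fitness cost, selection can no longer see it. Here is a toy simulation sketch in Python that illustrates the effect. To be clear, this is not Mendel’s Accountant, and every parameter (population size, mutation rate, noise level) is invented for illustration; it only shows that truncation selection purges large-effect mutations while letting tiny-effect ones accumulate at roughly the no-selection rate.

```python
import random

def mean_mutations(effect, pop_size=200, generations=100,
                   noise_sd=0.05, seed=1):
    """Mean deleterious-mutation count per individual after truncation
    selection. `effect` is the fitness cost per mutation; `noise_sd`
    models environmental noise in survival and reproduction."""
    rng = random.Random(seed)
    pop = [0] * pop_size  # each entry: one individual's mutation count
    for _ in range(generations):
        # roughly one new deleterious mutation per two individuals
        pop = [n + (1 if rng.random() < 0.5 else 0) for n in pop]
        # apparent fitness cost = genetic cost plus environmental noise
        ranked = sorted(pop, key=lambda n: n * effect + rng.gauss(0, noise_sd))
        survivors = ranked[: pop_size // 2]  # keep the apparently fittest half
        pop = [rng.choice(survivors) for _ in range(pop_size)]
    return sum(pop) / pop_size

# Large-effect mutations are purged; tiny-effect ones pile up near the
# no-selection rate (~0.5 per generation, so ~50 after 100 generations).
print(mean_mutations(0.1), mean_mutations(1e-5))
```

Running this, the large-effect case stays far below the small-effect case, which is the "selection threshold" behavior the paper describes, in miniature.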

Now I have been told by JoeCoder that there are many critical responses to his hypothesis, most of which have to do with whether natural selection can overcome the difficulty he is laying out. But since this is not my area of expertise, there is not much I can say to adjudicate here. Take it for what it is.

Positive arguments for Christian theism

Stephen C. Meyer and Marcus Ross lecture on the Cambrian explosion

Cambrian Explosion

Access Research Network is a group that produces recordings of lectures and debates related to intelligent design. I noticed that on their YouTube channel they are releasing some of their older lectures and debates for FREE. So I decided to write a summary of one that I really like on the Cambrian explosion. This lecture features Dr. Stephen C. Meyer and Dr. Marcus Ross.

The lecture is about two hours. There are really nice slides with lots of illustrations to help you understand what the speakers are saying, even if you are not a scientist.

Here is a summary of the lecture from ARN:

The Cambrian explosion is a term often heard in origins debates, but seldom completely understood by the non-specialist. This lecture by Meyer and Ross is one of the best overviews available on the topic and clearly presents in verbal and pictorial summary the latest fossil data (including the recent finds from Chengjiang China). This lecture is based on a paper recently published by Meyer, Ross, Nelson and Chien “The Cambrian Explosion: Biology’s Big Bang” in Darwinism, Design and Public Education (2003, Michigan State University Press). This 80-page article includes 127 references and the book includes two additional appendices with 63 references documenting the current state of knowledge on the Cambrian explosion data.

The term Cambrian explosion describes the geologically sudden appearance of animals in the fossil record during the Cambrian period of geologic time. During this event, at least nineteen, and perhaps as many as thirty-five (of forty total) phyla made their first appearance on earth. Phyla constitute the highest biological categories in the animal kingdom, with each phylum exhibiting a unique architecture, blueprint, or structural body plan. The word explosion is used to communicate the fact that these life forms appear in an exceedingly narrow window of geologic time (no more than 5 million years). If standard earth history is represented as a 100-yard football field, the Cambrian explosion would represent a four inch section of that field.

For a majority of earth’s life forms to appear so abruptly is completely contrary to the predictions of Neo-Darwinian and Punctuated Equilibrium evolutionary theory, including:

  • the gradual emergence of biological complexity and the existence of numerous transitional forms leading to new phylum-level body plans;
  • small-scale morphological diversity preceding the emergence of large-scale morphological disparity; and
  • a steady increase in the morphological distance between organic forms over time and, consequently, an overall steady increase in the number of phyla over time (taking into account factors such as extinction).

After reviewing how the evidence is completely contrary to evolutionary predictions, Meyer and Ross address three common objections: 1) the artifact hypothesis: is the Cambrian explosion real?; 2) the Vendian radiation (late pre-Cambrian multicellular organisms); and 3) the deep divergence hypothesis.

Finally Meyer and Ross argue why design is a better scientific explanation for the Cambrian explosion. They argue that this is not an argument from ignorance, but rather the best explanation of the evidence from our knowledge base of the world. We find in the fossil record distinctive features or hallmarks of designed systems, including:

  • a quantum or discontinuous increase in specified complexity or information
  • a top-down pattern of scale diversity
  • the persistence of structural (or “morphological”) disparities between separate organizational systems; and
  • discrete or novel organizational body plans

When we encounter objects that manifest any of these several features and we know how they arose, we invariably find that a purposeful agent or intelligent designer played a causal role in their origin.

Recorded April 24, 2004. Approximately 2 hours including audience Q&A.
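As an aside, the “four inch section” analogy in the summary checks out arithmetically. Assuming a conventional earth history of roughly 4.5 billion years (my figure; the summary doesn’t state one) and the 5-million-year window quoted above:

```python
earth_history_years = 4.5e9   # assumed conventional age of the earth
window_years = 5e6            # the Cambrian explosion window quoted above
field_inches = 100 * 36       # a 100-yard football field, in inches

section = window_years / earth_history_years * field_inches
print(section)  # → 4.0 (inches), matching the "four inch section" claim
```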

You can get a DVD of the lecture and other great lectures from Access Research Network. I recommend their origin of life lectures – I have watched the ones with Dean Kenyon and Charles Thaxton probably a dozen times each. Speaking as an engineer, you never get tired of seeing engineering principles applied to questions like the origin of life.

If you’d like to see Dr. Meyer defend his views in a debate with someone who reviewed his book about the Cambrian explosion, you can find that in this previous post.

Further study

The Cambrian explosion lecture above is a great intermediate-level lecture and will prepare you to be able to understand Dr. Meyer’s new book “Darwin’s Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design”. The Michigan State University book that Dr. Meyer mentions is called “Darwinism, Design and Public Education”. That book is one of the two good collections on intelligent design published by academic university presses, the other one being from Cambridge University Press, and titled “Debating Design: From Darwin to DNA”. If you think this lecture is above your level of understanding, then be sure to check out the shorter and more up-to-date DVD “Darwin’s Dilemma”.

The media reported that TRAPPIST-1 planets were “Earth-like”, but were they?

Christianity and the progress of science

My assumption whenever I read these headlines from the naturalist mainstream media is that they are just scientific illiterates pushing a science fiction agenda. Naturalists believe that no intelligent designer was required in order to create a planet, a solar system and a galaxy fine-tuned for complex embodied life. The mainstream media tries to help naturalists by trumpeting stories that make planets that support life look common, so that no designer seems necessary.

Recently, there was a story about some planets that the mainstream media called “Earth-like”. But were they really Earth-like?

Evolution News reports: (links removed)

Do you recall the hubbub only one month ago about TRAPPIST-1, a dim red dwarf star some 40 light years from Earth? This star has seven planets, three of which, roughly Earth-sized, were announced as being potentially habitable. This led to excited speculation about alien evolution:

  • “Scientists find three new planets where life could have evolved” (Sky News)
  • “Nasa discovers new solar system where life may have evolved on three planets” (The Telegraph)
  • “Nasa’s ‘holy grail’: Entire new solar system that could support alien life discovered” (The Independent)
  • “Seven Alien ‘Earths’ Found Orbiting Nearby Star” (National Geographic)

Well, not so fast. Much of the breathlessness about the system stemmed from a thoroughly imaginative artist’s rendering courtesy of NASA. The planets are designated by letters, b through h. The middle three planets are depicted as rather inviting, with what appear to be pleasing Earth-like oceans.

Today, the TRAPPIST-1 bubble looks to have popped, with 3D computer climate modeling showing major problems with the system. According to Eric T. Wolf of the University of Colorado’s Laboratory for Atmospheric and Space Physics, the inner three planets would be barren, the outer three frozen. And the middle, planet e? In NASA’s rendering, it looks the most Earth-like. However, in a system like this, centering on a dim red dwarf, planet e would need to have been stocked, to start, with seven times the volume of Earth’s oceans.

Let’s review what’s needed for a planet to support life, so that when these stories come out, we can recognize how many “Earth-like” qualities required for life are not mentioned.

Previously, I blogged about a few of the minimum requirements that a planet must satisfy in order to support complex life.

Here they are:

  • a solar system with a single massive Sun that can serve as a long-lived, stable source of energy
  • a terrestrial planet (non-gaseous)
  • the planet must be the right distance from the sun in order to preserve liquid water at the surface; if it’s too close, the water is burnt off in a runaway greenhouse effect, and if it’s too far, the water is permanently frozen in a runaway glaciation
  • the planet has to be far enough from the star to avoid tidal locking and solar flares
  • the solar system must be placed at the right place in the galaxy – not too near dangerous radiation, but close enough to other stars to be able to absorb heavy elements after neighboring stars die
  • a moon of sufficient mass to stabilize the tilt of the planet’s rotation
  • plate tectonics
  • an oxygen-rich atmosphere
  • a sweeper planet to deflect comets, etc.
  • planetary neighbors must have non-eccentric orbits
  • planet mass must be large enough to retain an atmosphere, but not so massive as to cause a greenhouse effect

Now what happens if we disregard all of those characteristics, and just classify an Earth-like planet as one which is the same size and receives the same amount of radiation from its star? Well, then you end up labeling a whole bunch of planets as “Earth-like” that really don’t permit life.
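The difference between the two definitions can be made concrete in code. Below is a sketch in Python where the requirement names and the candidate planet are my own invented shorthand: a classifier that only checks size and radiation happily passes a planet that the full checklist rejects.

```python
# One flag per requirement from the list above (names are invented shorthand).
REQUIRED = [
    "stable_single_star", "terrestrial", "liquid_water_zone",
    "no_tidal_locking", "safe_galactic_location", "stabilizing_moon",
    "plate_tectonics", "oxygen_atmosphere", "sweeper_planet",
    "stable_neighbor_orbits", "atmosphere_retaining_mass",
]

def earth_like_full(planet):
    """Earth-like only if every requirement is known to hold."""
    return all(planet.get(flag, False) for flag in REQUIRED)

def earth_like_media(planet):
    """The loose media definition: right size, right amount of radiation."""
    return planet.get("terrestrial", False) and planet.get("liquid_water_zone", False)

# A TRAPPIST-1-style candidate: Earth-sized, right distance, little else known.
candidate = {"terrestrial": True, "liquid_water_zone": True}
print(earth_like_media(candidate), earth_like_full(candidate))  # → True False
```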

Peer-reviewed paper: Michael Behe’s “First Rule of Adaptive Evolution”

Christianity and the progress of science

Let’s take a look at Mike Behe’s first rule of adaptive evolution, which states that most examples of adaptation in evolutionary experiments involve a loss of function or a modification of an existing function, not new functionality.

The paper was published in the Quarterly Review of Biology. I found it on PubMed.

Abstract:

Adaptive evolution can cause a species to gain, lose, or modify a function; therefore, it is of basic interest to determine whether any of these modes dominates the evolutionary process under particular circumstances. Because mutation occurs at the molecular level, it is necessary to examine the molecular changes produced by the underlying mutation in order to assess whether a given adaptation is best considered as a gain, loss, or modification of function. Although that was once impossible, the advance of molecular biology in the past half century has made it feasible. In this paper, I review molecular changes underlying some adaptations, with a particular emphasis on evolutionary experiments with microbes conducted over the past four decades. I show that by far the most common adaptive changes seen in those examples are due to the loss or modification of a pre-existing molecular function, and I discuss the possible reasons for the prominence of such mutations.

By far the most common adaptive changes in the examples we have are due to loss of function or modification of pre-existing function?

Evolution News has a post up about the paper.

Excerpt:

After reviewing the effects of mutations upon Functional Coding ElemenTs (FCTs), Michael Behe’s recent review article in Quarterly Review of Biology, “Experimental Evolution, Loss-of-Function Mutations and ‘The First Rule of Adaptive Evolution’,” offers some conclusions. In particular, as the title suggests, Behe introduces a rule of thumb he calls “The First Rule of Adaptive Evolution”: “Break or blunt any functional coded element whose loss would yield a net fitness gain.” In essence, what Behe means is that mutations that cause loss-of-FCT are going to be far more likely and thus far more common than those which gain a functional coding element. In fact, he writes: “the rate of appearance of an adaptive mutation that would arise from the diminishment or elimination of the activity of a protein is expected to be 100-1000 times the rate of appearance of an adaptive mutation that requires specific changes to a gene.” Since organisms will tend to evolve along the most likely pathway, they will tend to break or lose an FCT before gaining a new one. He explains:

It is called the “first” rule because the rate of mutations that diminish the function of a feature is expected to be much higher than the rate of appearance of a new feature, so adaptive loss-of-FCT or modification-of-function mutations that decrease activity are expected to appear first, by far, in a population under selective pressure. (Michael J. Behe, “Experimental Evolution, Loss-of-Function Mutations and ‘The First Rule of Adaptive Evolution’,” Quarterly Review of Biology, Vol. 85(4) (December, 2010).)
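A quick back-of-the-envelope sketch shows what that rate difference implies. The unit rates below are arbitrary; only the 100-1000x ratio comes from Behe. If breaking mutations arrive 100 to 1000 times as often as constructive ones, then roughly 99% to 99.9% of the first adaptive mutations available to selection are loss or diminishment mutations:

```python
gain_rate = 1.0  # arbitrary unit rate for gain-of-FCT adaptive mutations
for advantage in (100, 1000):  # Behe's 100-1000x range
    loss_rate = advantage * gain_rate
    # fraction of first-arriving adaptive mutations that break/blunt an FCT
    fraction = loss_rate / (loss_rate + gain_rate)
    print(advantage, round(fraction, 4))
```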

Behe argues that this point is empirically supported by the research reviews in the paper. He writes:

As seen in Tables 2 through 4, the large majority of experimental adaptive mutations are loss-of-FCT or modification-of-function mutations. In fact, leaving out those experiments with viruses in which specific genetic elements were intentionally deleted and then restored by subsequent evolution, only two gain-of-FCT events have been reported.

After asking “Why is this the case?” Behe states, “One important factor is undoubtedly that the rate of appearance of loss-of-FCT mutations is much greater than the rate of construction of new functional coded elements.” He draws sound and defensible conclusions from the observed data:

Leaving aside gain-of-FCT for the moment, the work reviewed here shows that organisms do indeed adapt quickly in the laboratory, by loss-of-FCT and modification-of-function mutations. If such adaptive mutations also arrive first in the wild, as they of course would be expected to, then those will also be the kinds of mutations that are first available to selection in nature. … In general, if a sequence of genomic DNA is initially only one nucleotide removed from coding for an adaptive functional element, then a single simple point mutation could yield a gain-of-FCT. As seen in Table 5, several laboratory studies have achieved thousand to million-fold saturations of their test organisms with point mutations, and most of the studies reviewed here have at least single-fold saturation. Thus, one would expect to have observed simple gain-of-FCT adaptive mutations that had sufficient selective value to outcompete more numerous loss-of-FCT or modification-of-function mutations in most experimental evolutionary studies, if they had indeed been available.

But this stark lack of examples of gain-of-functional coding elements can have important implications:

A tentative conclusion suggested by these results is that the complex genetic systems that are cells will often be able to adapt to selective pressure by effectively removing or diminishing one or more of their many functional coded elements.

Behe doesn’t claim that gain-of-function mutations will never occur, but the clear implication is that neo-Darwinists cannot forever rely on examples of loss or modification-of-FCT mutations to explain molecular evolution. At some point, there must be gain of function.

Now, there was a response to this paper from Jerry Coyne on his blog, and then a rebuttal from Mike Behe in a separate article on Evolution News.

New study: first life pushed back earlier, leaving less time for naturalist magic

Christianity and the progress of science

Whenever you discuss origins with naturalists, it’s very important to get them to explain how the first living organism emerged without any help from an intelligent agent. The origin of life is an information problem: a certain minimum amount of biological information required for basic life functions has to be assembled by chance. No evolutionary mechanism can operate until replication is already in place.

Evolution News reports on a new study that makes the window for naturalistic forces to create the first self-replicating organism even smaller.

Excerpt:

A paper in Nature reports the discovery of fossil microbes possibly older, even much older, than any found previously. The lead author is biogeochemist Matthew Dodd, a PhD student at University College London. If the paper is right, these Canadian fossils could be 3.77 billion years old, or even as old as — hold onto your hat, in case you’re wearing one — 4.28 billion years.

From the Abstract:

Although it is not known when or where life on Earth began, some of the earliest habitable environments may have been submarine-hydrothermal vents. Here we describe putative fossilized microorganisms that are at least 3,770 million and possibly 4,280 million years old in ferruginous sedimentary rocks, interpreted as seafloor-hydrothermal vent-related precipitates, from the Nuvvuagittuq belt in Quebec, Canada. These structures occur as micrometre-scale haematite tubes and filaments with morphologies and mineral assemblages similar to those of filamentous microorganisms from modern hydrothermal vent precipitates and analogous microfossils in younger rocks. The Nuvvuagittuq rocks contain isotopically light carbon in carbonate and carbonaceous material, which occurs as graphitic inclusions in diagenetic carbonate rosettes, apatite blades intergrown among carbonate rosettes and magnetite–haematite granules, and is associated with carbonate in direct contact with the putative microfossils.

This new paper is interesting to compare with a paper from last year, Nutman et al., “Rapid emergence of life shown by discovery of 3,700-million-year-old microbial structures,” also in Nature, which found microbial structures that are a bit younger.

But the “microbial structures” from Nutman et al. 2016 are different from these new “microfossils” presented by Dodd et al. 2017. In Nutman et al., they only found stromatolite-type structures rather than actual microfossils. Some stromatolite experts were a bit skeptical that what they found were really stromatolites.

But the new paper by Dodd and his colleagues, “Evidence for early life in Earth’s oldest hydrothermal vent precipitates,” seems to offer potential bacteria-like microfossils. They are tiny black carbonaceous spheres and “hematite tubes” which the authors think are biogenically created. We’ve seen more convincing ancient microfossils, but these aren’t bad.

According to Dodd et al., these new finds would be the oldest known microfossils, if that is in fact what they are. Very interesting. If so, that just keeps pushing unquestionable evidence of life’s existence on Earth further and further back, which leaves less and less time for the origin of life to have occurred by unguided chemical evolution after Earth became habitable.

If they are in fact 4.28 billion years old, then that would mean there was life very, very early in Earth’s history — as Cyril Ponnamperuma said, it’s like “instant life.”

Instant life is “rational” for naturalistic fideists, but for evidence-driven people who understand the long odds on generating even a simple protein by chance, it’s irrational.

Let’s recall exactly how hard it is to make even a simple protein without intelligent agency to select the elements of the sequence.

The odds of creating even a single functional protein

I’ve talked about Doug Axe before when I described how to calculate the odds of getting functional proteins by chance.

Let’s calculate the odds of building a protein composed of a functional chain of 100 amino acids, by chance. (Think of a meaningful English sentence built with 100 Scrabble letters, held together with glue.)

Sub-problems:

  • BONDING: You need 99 peptide bonds between the 100 amino acids. The odds of getting a peptide bond are 50%. The probability of building a chain of one hundred amino acids in which all linkages involve peptide bonds is roughly (1/2)^99, or 1 chance in 10^30.
  • CHIRALITY: You need 100 left-handed amino acids. The odds of getting a left-handed amino acid are 50%. The probability of attaining at random only L-amino acids in a hypothetical peptide chain one hundred amino acids long is (1/2)^100, or again roughly 1 chance in 10^30.
  • SEQUENCE: You need to choose a suitable amino acid for each of the 100 links. The odds of getting the right one are 1 in 20, so a specific sequence has odds of (1/20)^100, or about 1 in 10^130. Even allowing for some variation at each position, the odds of getting a functional sequence are still about 1 in 10^65.

The final probability of getting a functional protein composed of 100 amino acids is 1 in 10^125. Even if you fill the universe with pre-biotic soup, and react amino acids at Planck time (very fast!) for 14 billion years, you are probably not going to get even 1 such protein. And you need at least 100 of them for minimal life functions, plus DNA and RNA.
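Multiplying the three sub-problem probabilities together reproduces that combined figure. A quick check in Python:

```python
from math import log10

p_bonding   = 0.5 ** 99   # 99 all-peptide bonds at 50% each
p_chirality = 0.5 ** 100  # 100 left-handed amino acids at 50% each
p_sequence  = 1e-65       # functional sequence, allowing some variation

total = p_bonding * p_chirality * p_sequence
print(round(-log10(total)))  # → 125, i.e. roughly 1 chance in 10^125
```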

Research performed by Doug Axe at Cambridge University, and published in the peer-reviewed Journal of Molecular Biology, has shown that the number of functional amino acid sequences is tiny:

Doug Axe’s research likewise studies genes that, it turns out, show great evidence of design. Axe studied the sensitivities of protein function to mutations. In these “mutational sensitivity” tests, Dr. Axe mutated certain amino acids in various proteins, or studied the differences between similar proteins, to see how mutations or changes affected their ability to function properly. He found that protein function was highly sensitive to mutation, and that proteins are not very tolerant to changes in their amino acid sequences. In other words, when you mutate, tweak, or change these proteins slightly, they stop working. In one of his papers, he thus concludes that “functional folds require highly extraordinary sequences,” and that the prevalence of functional protein folds “may be as low as 1 in 10^77.”

The problem of forming DNA by sequencing nucleotides faces similar difficulties. And remember, mutation and selection cannot explain the origin of the first sequence, because mutation and selection require replication, which does not exist until that first living cell is already in place.