Tag Archives: Specified Complexity

Pro-ID scientist Ann Gauger interviewed on Mike Behe’s latest paper

This is all written up at Evolution News.

First, remember that Behe’s peer-reviewed paper (PDF) was about whether evolutionary mechanisms are capable of creating any new information that supports new functionality and confers an evolutionary advantage.

Excerpt:

Losing information is one thing — like accidentally erasing a computer file (say, an embarrassing diplomatic cable) where, it turns out in retrospect, you’re better off now that it’s not there anymore. Gaining information, building it up slowly from nothing, is quite another and more impressive feat. Yet it’s not the loss of function, and the required underlying information, but its gain that Darwinian evolution is primarily challenged to account for.

That’s the paradox highlighted in Michael Behe’s new review essay in Quarterly Review of Biology (“Experimental Evolution, Loss-of-Function Mutations, and ‘The First Rule of Adaptive Evolution’”). It’s one of those peer-reviewed, Darwin-doubting biology journal essays that, as we’re confidently assured by the likes of the aforesaid Jerry Coyne, don’t actually exist. Casey Luskin has been doing an excellent job in this space of detailing Michael Behe’s conclusions. Reviewing the expansive literature dealing with investigations of viral and bacterial evolution, Dr. Behe shows that adaptive instances of the “diminishment or elimination” of Functional Coding ElemenTs (FCTs) in the genome overwhelmingly outnumber “gain-of-FCT events.” Seemingly, under Darwinian assumptions, even as functionality is being painstakingly built up that’s of use to an organism in promoting survival, the same creature should, much faster, be impoverished of function to the point of being driven out of existence.

And then the Evolution News post has an interview with Ann Gauger (whose peer-reviewed publications have been featured before on this blog).

Here’s one of the questions:

… In your own research with Dr. Seelke, you found that cells chose to “reduce or eliminate function.” But with vastly bigger populations and vastly more time, wouldn’t we be justified in expecting gene fixes too, even if far fewer in number?

And her reply in part:

For most organisms in the wild, the environment is constantly changing. Organisms rarely encounter prolonged and uniform selection in one direction. In turn, changing selection prevents most genetic variants from getting fixed in the population. In addition, most mutations that accumulate in populations are neutral or weakly deleterious, and most beneficial mutations are only weakly beneficial. This means that it takes a very long time, if ever, for a weakly beneficial mutation to spread throughout the population, or for harmful mutations to be eliminated. If more than one mutation is required to get a new function, the problem quickly moves beyond reach. Evolutionary biologists have begun to recognize the problem of getting complex adaptations, and are trying to find answers.

The problem is the level of complexity that is required, from the earliest stages of life. For example, just to modify one protein to perform a new function or interact with a new partner can require multiple mutations. Yet many specialized proteins, adapted to work together with specialized RNAs, are required to build a ribosome. And until you have ribosomes, you cannot translate genes into proteins. We haven’t a clue how this ability evolved.
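
Her point about how slowly weakly beneficial mutations spread can be made concrete with a standard population-genetics result (Kimura’s fixation probability). What follows is a minimal sketch of my own, not something from the interview, assuming a single new mutant in a diploid population of constant size:

```python
# Standard fixation-probability estimate (Kimura 1962) -- an
# illustrative aside, not taken from the interview itself.
from math import exp

def fixation_probability(s: float, N: int) -> float:
    """Chance that a single new mutant with selective advantage s
    eventually spreads through a diploid population of size N."""
    if s == 0:
        return 1 / (2 * N)  # neutral mutation: 1 chance in 2N
    return (1 - exp(-2 * s)) / (1 - exp(-4 * N * s))

# A weakly beneficial mutation (s = 0.001) in a large population:
print(fixation_probability(0.001, 1_000_000))  # ~0.002, i.e. ~0.2%
# About 99.8% of such mutations are lost to random drift even
# though they are beneficial, which is why fixation is slow and rare.
```

In other words, even an unambiguously helpful mutation is usually lost by chance before selection can act on it, which is the force behind her “very long time, if ever” remark.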

It sounds like getting beneficial mutations and keeping them around is an intractable problem, at least on a naturalistic worldview. It will be interesting to see how the naturalists respond to the peer-reviewed work by Behe and Gauger. The only way to know whether Behe and Gauger are right is to let the naturalists talk back. It would be nice to see a formal debate on this evidence, wouldn’t it? I’m sure that the ID people would favor a debate, but the evolutionists probably wouldn’t, since they prefer to silence and expel anyone who disagrees with them.

In addition to the new papers by Michael Behe and Ann Gauger I mentioned above, I wrote about Doug Axe’s recent research paper here. He is the Director of the Biologic Institute, where Ann works.


How can you tell whether something is designed or not?

In honor of William Dembski’s debate tonight at 8 PM Eastern time, I present this article from Access Research Network.

Excerpt:

Instead of looking for such vague properties as “purpose” or “perfection”—which may be construed in a subjective sense—[intelligent design] looks for the presence of what it calls specified complexity, an unambiguously objective standard.

That term sounds like a mouthful, but it’s something we can all recognize without effort. Let’s take an example.

Imagine that a friend hands you a sheet of paper with part of Lincoln’s Gettysburg address written on it:

FOURSCOREANDSEVENYEARSAGOOURFATHERSBROUGHTFORTHONTHISCONTINENTANEWNATIONCONCEIVEDINLIBERTY …

Your friend tells you that he wrote the sentence by pulling Scrabble pieces out of a bag at random.

Would you believe him? Probably not. But why?

One reason is that the odds against it are just too high. There are so many other ways the results could have turned out—so many possible sequences of letters—that the probability of getting that particular sentence is almost nil.
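
To put a number on “almost nil”: with a 26-letter alphabet and 143 independent draws, a quick back-of-the-envelope sketch (assuming uniform, independent draws; a real Scrabble bag is not uniform, but the order of magnitude is the point):

```python
# Odds of drawing one specific 143-letter sequence at random
# from a 26-letter alphabet, assuming each letter is equally likely.
from math import log10

possible = 26 ** 143
print(f"about 10^{log10(possible):.0f} possible sequences")  # about 10^202

probability = 1 / possible  # roughly 5e-203 -- "almost nil" indeed
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, so 10^202 possible sequences dwarfs any physical resource for searching them.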

But there’s more to it than that. If our friend had shown us the letters below, we would probably believe his story.

ZOEFFNPBINNGQZAMZQPEGOXSYFMRTEXRNYGRRGNNFVGUMLMTYQXTXWORNBWIGBBCVHPUZMWLONHATQUGOTFJKZXFHP …

Why? Because of the kind of sequence we see. The first string fits a recognizable pattern: It’s a sentence written in English, minus spaces and punctuation. The second string fits no such pattern.

Now we can understand specified complexity. When a design theorist says that a string of letters is specified, he’s saying that it fits a recognizable pattern. And when he says it’s complex, he’s saying there are so many different ways the object could have turned out that the chance of getting any particular outcome by accident is hopelessly small.

Thus, we see design in our Gettysburg sentence because it is both specified and complex. We see no such design in the second string. Although it is complex, it fits no recognizable pattern. And if our friend had shown us a string of letters like “BLUE”, we would have said that it was specified but not complex. It fits a pattern, but because the string is so short, the likelihood of getting it by chance is relatively high. Four slots don’t give you nearly as many possible letter combinations as 143, which is the length of our Gettysburg sentence.

So that’s the basic notion of specified complexity.
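
To make the two tests concrete, here is a toy sketch in Python. It is only an illustration of the idea above, not Dembski’s formal apparatus; the mini-lexicon, the probability threshold, and the function names are all invented for the demo:

```python
# Toy illustration of "specified" and "complex" -- invented for
# this post, not Dembski's formal definitions.
from math import log10

# Hypothetical mini-lexicon standing in for "recognizable English".
KNOWN_WORDS = {"FOUR", "SCORE", "AND", "SEVEN", "YEARS", "AGO", "BLUE"}

def is_complex(s: str, min_digits: int = 30) -> bool:
    """There are 26**len(s) equally likely strings, so 'complex' here
    means the chance of hitting this exact one at random is below
    10**-min_digits (an arbitrary demo threshold)."""
    return len(s) * log10(26) > min_digits

def is_specified(s: str) -> bool:
    """'Specified' here means the string segments entirely into
    lexicon words (recursive search; fine for demo-sized inputs)."""
    if not s:
        return True
    return any(s.startswith(w) and is_specified(s[len(w):])
               for w in KNOWN_WORDS)

for text in ("FOURSCOREANDSEVENYEARSAGO", "ZOEFFNPBINNGQZAMZQPEGOXSY", "BLUE"):
    print(text, "specified:", is_specified(text), "complex:", is_complex(text))
# The first string is specified and complex; the random string is
# complex but unspecified; "BLUE" is specified but not complex.
# Only the first shows the combination the article treats as design.
```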

This is something you really need to understand in order to understand the arguments from biological building blocks and biological information in DNA.

Michael Behe and Stephen Barr debate intelligent design

Michael Behe is Catholic and Stephen Barr seems to be a theistic evolutionist (naturalist). (H/T Evolution News via ECM)

The main page is here, and it has the video.

There is an MP3 file here, 71 minutes long.

Michael Behe goes first, then Stephen Barr.

Keep in mind that the dividing line in the debate on intelligent design vs. Darwinism is between open-minded scientists, who think there might be objective evidence that material cause-and-effect cannot account for specific kinds of complexity in nature (specified complexity), and philosophers, who believe that it is never permissible to overturn the philosophical assumption of materialism, regardless of what the scientific evidence shows.

So the pro-ID side says “let’s look at the evidence and see what naturalism can and can’t do,” while the anti-ID side says “the presupposition of materialism is absolute, for we cannot allow a Divine Foot in the door.” It’s ID scientists vs. those who presuppose naturalistic materialism. Reason vs. faith. Inquiry vs. dogmatism.

UPDATE:

Upcoming conference features pro-ID scholars and theistic evolutionists in Austin, Texas, in October.