Tag Archives: Peer-Review

Pro-ID scientist Ann Gauger interviewed on Mike Behe’s latest paper

This is all written up at Evolution News.

First, remember that Behe’s peer-reviewed paper (PDF) was about whether evolutionary mechanisms are capable of creating new information that supports new functionality and confers an evolutionary advantage.

Excerpt:

Losing information is one thing — like accidentally erasing a computer file (say, an embarrassing diplomatic cable) where, it turns out in retrospect, you’re better off now that it’s not there anymore. Gaining information, building it up slowly from nothing, is quite another and more impressive feat. Yet it’s not the loss of function, and the required underlying information, but its gain that Darwinian evolution is primarily challenged to account for.

That’s the paradox highlighted in Michael Behe’s new review essay in Quarterly Review of Biology (“Experimental Evolution, Loss-of-Function Mutations, and ‘The First Rule of Adaptive Evolution’”). It’s one of those peer-reviewed, Darwin-doubting biology journal essays that, as we’re confidently assured by the likes of the aforesaid Jerry Coyne, don’t actually exist. Casey Luskin has been doing an excellent job in this space of detailing Michael Behe’s conclusions. Reviewing the expansive literature dealing with investigations of viral and bacterial evolution, Dr. Behe shows that adaptive instances of the “diminishment or elimination” of Functional Coding ElemenTs (FCTs) in the genome overwhelmingly outnumber “gain-of-FCT events.” Seemingly, under Darwinian assumptions, even as functionality is being painstakingly built up that’s of use to an organism in promoting survival, the same creature should, much faster, be impoverished of function to the point of being driven out of existence.

And then the Evolution News post has an interview with Ann Gauger (whose peer-reviewed publications have been featured before on this blog).

Here’s one of the questions:

… In your own research with Dr. Seelke, you found that cells chose to “reduce or eliminate function.” But with vastly bigger populations and vastly more time, wouldn’t we be justified in expecting gene fixes too, even if far fewer in number?

And her reply in part:

For most organisms in the wild, the environment is constantly changing. Organisms rarely encounter prolonged and uniform selection in one direction. In turn, changing selection prevents most genetic variants from getting fixed in the population. In addition, most mutations that accumulate in populations are neutral or weakly deleterious, and most beneficial mutations are only weakly beneficial. This means that it takes a very long time, if ever, for a weakly beneficial mutation to spread throughout the population, or for harmful mutations to be eliminated. If more than one mutation is required to get a new function, the problem quickly becomes beyond reach. Evolutionary biologists have begun to realize the problem of getting complex adaptations, and are trying to find answers.

The problem is the level of complexity that is required, from the earliest stages of life. For example, just to modify one protein to perform a new function or interact with a new partner can require multiple mutations. Yet many specialized proteins, adapted to work together with specialized RNAs, are required to build a ribosome. And until you have ribosomes, you cannot translate genes into proteins. We haven’t a clue how this ability evolved.
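To get a rough feel for the numbers Gauger is alluding to, here is a back-of-the-envelope sketch (my own illustration, not from her paper) using two textbook population-genetics approximations: a new mutation with selective advantage s reaches fixation with probability of roughly 2s, and when it does fix it needs on the order of (2/s)·ln(2N) generations to spread through a population of effective size N.

import math

# Back-of-the-envelope sketch (illustration only, not from Gauger's paper):
# how slowly a weakly beneficial mutation spreads, using two textbook
# approximations from population genetics:
#   fixation probability of a new beneficial mutation  ~ 2s
#   generations to fixation, given that it does fix    ~ (2/s) * ln(2N)

def fixation_probability(s):
    """Approximate chance that a single new mutation with advantage s ever fixes."""
    return 2 * s

def generations_to_fix(s, N):
    """Approximate generations for a destined-to-fix mutation to sweep a
    population of effective size N."""
    return (2 / s) * math.log(2 * N)

for s in (0.001, 0.01, 0.1):          # weakly to strongly beneficial
    for N in (10_000, 1_000_000):     # illustrative population sizes
        p = fixation_probability(s)
        t = generations_to_fix(s, N)
        print(f"s={s:<6} N={N:<9,} P(fix) ~ {p:.3f}, "
              f"~{1/p:,.0f} independent arisings needed, "
              f"~{t:,.0f} generations to fix")

On these assumptions, a mutation conferring a 0.1% advantage typically has to arise hundreds of times before one copy escapes loss by drift, and the surviving copy still needs thousands of generations to fix; the approximations say nothing at all about cases where several mutations must be present together before any advantage appears, which is the situation Gauger and Behe are pressing on.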

It sounds like the problem of getting beneficial mutations and keeping them around is intractable, at least on a naturalist worldview. It will be interesting to see how the naturalists respond to the peer-reviewed work by Behe and Gauger; the only way to know whether they are right is to let the naturalists talk back. It would be nice to see a formal debate on this evidence, wouldn’t it? I’m sure that the ID people would favor a debate, but the evolutionists probably wouldn’t, since they prefer to silence and expel anyone who disagrees with them.

In addition to the new papers by Michael Behe and Ann Gauger I mentioned above, I wrote about Doug Axe’s recent research paper here. He is the Director of the Biologic Institute, where Ann works.


What’s more complex? Your brain or the Internet?

Check it out… a new peer-reviewed paper discussed at CNET News. (H/T Darwin’s God via ECM)

Excerpt:

The human brain is truly awesome.

A typical, healthy one houses some 200 billion nerve cells, which are connected to one another via hundreds of trillions of synapses. Each synapse functions like a microprocessor, and tens of thousands of them can connect a single neuron to other nerve cells. In the cerebral cortex alone, there are roughly 125 trillion synapses, which is about how many stars fill 1,500 Milky Way galaxies.

These synapses are, of course, so tiny (less than a thousandth of a millimeter in diameter) that humans haven’t been able to see with great clarity what exactly they do and how, beyond knowing that their numbers vary over time. That is until now.

Researchers at the Stanford University School of Medicine have spent the past few years engineering a new imaging model, which they call array tomography, in conjunction with novel computational software, to stitch together image slices into a three-dimensional image that can be rotated, penetrated and navigated. Their work appears in the journal Neuron this week.

To test their model, the team took tissue samples from a mouse whose brain had been bioengineered to make larger neurons in the cerebral cortex express a fluorescent protein (found in jellyfish), making them glow yellow-green. Because of this glow, the researchers were able to see synapses against the background of neurons.

They found that the brain’s complexity is beyond anything they’d imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study:

One synapse, by itself, is more like a microprocessor–with both memory-storage and information-processing elements–than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.

Smith adds that this gives us a glimpse into brain tissue at a level of detail never before attained: “The entire anatomical context of the synapses is preserved. You know right where each one is, and what kind it is.”
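Out of curiosity, the figures in the excerpt can be sanity-checked with a little arithmetic. The sketch below just multiplies the quoted numbers together; the synapse count, the galaxy comparison and the 1,000-switches-per-synapse estimate are all the article’s, not independent measurements.

# Order-of-magnitude check on the figures quoted in the CNET piece.

synapses_cortex = 125e12        # "roughly 125 trillion synapses" in the cerebral cortex (quoted)
galaxies = 1_500                # "stars [in] 1,500 Milky Way galaxies" (quoted)
stars_per_galaxy = synapses_cortex / galaxies
print(f"Implied stars per Milky Way: {stars_per_galaxy:.1e}")   # ~8e10, same order as the
                                                                # ~10^11 stars usually cited

switches_per_synapse = 1_000    # "on the order of 1,000 molecular-scale switches" (quoted)
total_switches = synapses_cortex * switches_per_synapse
print(f"Implied molecular switches in the cortex alone: {total_switches:.1e}")   # ~1.2e17

The last figure is simply the article’s numbers multiplied out; whether 10^17 molecular switches really exceeds “all the computers and routers and Internet connections on Earth” is Smith’s claim, not something this arithmetic establishes.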

I wonder why we consider evolution to be a “done deal” when we don’t have testable Darwinian accounts for how things like this can evolve. They are making the claim that it evolved. I am asking for some evidence.

NYT reports on new study showing no link between disasters and AGW


Story here in the radically leftist New York Times.

Excerpt:

A new analysis of nearly two dozen papers assessing trends in disaster losses in light of climate change finds no convincing link. The author concludes that, so far, the rise in disaster losses is mainly a function of more investments getting in harm’s way as communities in places vulnerable to natural hazards grow.

The paper — “Have disaster losses increased due to anthropogenic climate change?” — is in press in the Bulletin of the American Meteorological Society. It was written by Laurens M. Bouwer, a researcher at Vrije University in Amsterdam focused on climate and water resources (and a lead author of a chapter in the 2001 assessment from the Intergovernmental Panel on Climate Change). You can read more about the paper at the blog of Roger Pielke, Jr., which drew my attention to this work.

Here’s more from Roger Pielke’s blog post.

The Bulletin of the American Meteorological Society has just put online a review paper (peer reviewed) by Laurens Bouwer, of the Institute for Environmental Studies at Vrije Universiteit in Amsterdam, titled, “Have disaster losses increased due to anthropogenic climate change?“.

Readers of this blog already know the answer to this question, and here is Bouwer’s conclusion:

The analysis of twenty-two disaster loss studies shows that economic losses from various weather related natural hazards, such as storms, tropical cyclones, floods, and small-scale weather events such as wildfires and hailstorms, have increased around the globe. The studies show no trends in losses, corrected for changes (increases) in population and capital at risk, that could be attributed to anthropogenic climate change. Therefore it can be concluded that anthropogenic climate change so far has not had a significant impact on losses from natural disasters.

Bouwer rightly acknowledges that there are uncertainties in such studies, and in particular, there will be a need to refine efforts to evaluate changing vulnerability and exposure in future such work, especially as the signal of greenhouse gas driven climate change is expected to become larger. However, such uncertainties are not presently so large as to undercut Bouwer’s conclusion, e.g.,

A rigorous check on the potential introduction of bias from a failure to consider vulnerability reduction in normalization methods is to compare trends in geophysical variables with those in the normalized data. Normalized hurricane losses for instance match with variability in hurricane landfalls (Pielke et al. 2008). If vulnerability reduction would have resulted in a bias, it would show itself as a divergence between the geophysical and normalized loss data. In this case, the effects of vulnerability reduction apparently are not so large as to introduce a bias.
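For readers wondering what “corrected for changes (increases) in population and capital at risk” means in practice, here is a minimal sketch of the kind of normalization these studies use. It is loosely modeled on the Pielke et al. 2008 hurricane-loss approach; the function name and the input values are mine, for illustration only.

# Minimal sketch of disaster-loss normalization (illustrative only; loosely modeled
# on Pielke et al. 2008). A historical loss is scaled up by inflation, growth in
# wealth per capita, and population growth in the affected area, so every year is
# expressed as "what the same event would cost if it struck today".

def normalize_loss(loss, year_cpi, base_cpi,
                   year_wealth_pc, base_wealth_pc,
                   year_population, base_population):
    """Scale a historical loss to the base year's prices, wealth and exposure."""
    inflation  = base_cpi / year_cpi
    wealth     = base_wealth_pc / year_wealth_pc
    population = base_population / year_population
    return loss * inflation * wealth * population

# Hypothetical example: a $1B loss in 1970 in a coastal county whose wealth and
# population have grown substantially since (all values made up).
normalized = normalize_loss(loss=1.0e9,
                            year_cpi=39.0, base_cpi=218.0,
                            year_wealth_pc=20_000, base_wealth_pc=50_000,
                            year_population=100_000, base_population=300_000)
print(f"Normalized loss: ${normalized / 1e9:.1f}B")

The bias check Bouwer describes then amounts to comparing the trend in a normalized loss series like this against a trend in a purely geophysical series (hurricane landfall counts, for example); if the normalization were hiding real changes in vulnerability, the two series would diverge.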

A pre-publication version of the paper is available here in PDF.

I hope this means that we can finally drill in Alaska, because I am tired of sending money and jobs overseas to people who really may not like us very much. We’re not going to explode the planet, and if we make our own energy here, not only do we get the jobs, but we can do it cleaner than they can.