The Economist: some problems with the peer-review process

From The Economist, of all places.

Excerpt:

The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.

It is tempting to see the priming fracas as an isolated case in an area of science—psychology—easily marginalised as soft and wayward. But irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

Let’s take a look at some of the problems from the article.

Problems with researcher bias:

Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”, says Sandy Pentland, a computer scientist at the Massachusetts Institute of Technology.
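
To make the “overfitting” point concrete, here is a minimal sketch (Python with scikit-learn; the data, model, and tuning loop are all invented for illustration and are not from the article): a flexible model tuned many different ways and scored on the same data it was fit to will “discover” a strong pattern even in pure noise.

```python
# Illustrative only: tuning a flexible model on pure noise and scoring it
# on the same data it was fit to produces an impressive-looking "pattern".
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))        # 100 samples, 20 purely random features
y = rng.integers(0, 2, size=100)      # labels are coin flips: no real signal

best_score = 0.0
for depth in range(1, 15):            # try many "tunings" of the model
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    model.fit(X, y)
    best_score = max(best_score, model.score(X, y))   # scored in-sample

print(f"Best in-sample accuracy on pure noise: {best_score:.2f}")
# Prints a value near 1.0 even though there is nothing to learn; held-out
# data (a train/test split or cross-validation) would expose the illusion.
```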

Problems with journal referees:

Another experiment at the BMJ showed that reviewers did no better when more clearly instructed on the problems they might encounter. They also seem to get worse with experience. Charles McCulloch and Michael Callaham, of the University of California, San Francisco, looked at how 1,500 referees were rated by editors at leading journals over a 14-year period and found that 92% showed a slow but steady drop in their scores.

As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Problems with fraud:

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. Dr Fanelli has looked at 21 different surveys of academics (mostly in the biomedical sciences but also in civil engineering, chemistry and economics) carried out between 1987 and 2008. Only 2% of respondents admitted falsifying or fabricating data, but 28% of respondents claimed to know of colleagues who engaged in questionable research practices.

Problems releasing data:

Reproducing research done by others often requires access to their original methods and data. A study published last month in PeerJ by Melissa Haendel, of the Oregon Health and Science University, and colleagues found that more than half of 238 biomedical papers published in 84 journals failed to identify all the resources (such as chemical reagents) necessary to reproduce the results. On data, Christine Laine, the editor of the Annals of Internal Medicine, told the peer-review congress in Chicago that five years ago about 60% of researchers said they would share their raw data if asked; now just 45% do. Journals’ growing insistence that at least some raw data be made available seems to count for little: a recent review by Dr Ioannidis showed that only 143 of 351 randomly selected papers published in the world’s 50 leading journals and covered by some data-sharing policy actually complied.

Critics of global warming have had problems getting at data before, as Nature reported here:

Since 2002, McIntyre has repeatedly asked Phil Jones, director of CRU, for access to the HadCRU data. Although the data are made available in a processed gridded format that shows the global temperature trend, the raw station data are currently restricted to academics. While Jones has made data available to some academics, he has refused to supply McIntyre with the data. Between 24 July and 29 July of this year, CRU received 58 freedom of information act requests from McIntyre and people affiliated with Climate Audit. In the past month, the UK Met Office, which receives a cleaned-up version of the raw data from CRU, has received ten requests of its own.

Why would scientists hide their data? Well, recall that the Climategate scandal resulted from the unauthorized release of the code used to generate the data behind global warming alarmism. The leaked code showed that the scientists had been generating faked data using a “fudge factor”.
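
For readers who have not seen it described, here is a generic sketch of what applying a “fudge factor” to a data series amounts to (a minimal Python/NumPy illustration with invented numbers; it is not the actual leaked code, only the general technique being alleged):

```python
# Illustrative only: NOT the leaked CRU code. A generic example of what
# adding a hand-picked "fudge factor" to a measured series looks like.
import numpy as np

years = np.arange(1950, 2000)
rng = np.random.default_rng(1)
raw = rng.normal(0.0, 0.1, size=years.size)       # flat, noisy measurements

# A hand-chosen adjustment that ramps up over time; nothing in the raw
# measurements justifies it, it is simply added before the series is used.
fudge = np.linspace(0.0, 0.5, num=years.size)
adjusted = raw + fudge

print(f"Raw trend:      {np.polyfit(years, raw, 1)[0]:+.4f} per year")
print(f"Adjusted trend: {np.polyfit(years, adjusted, 1)[0]:+.4f} per year")
# The adjusted series shows a clear upward trend that the raw numbers lack.
```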

Elsewhere, leaked e-mails from global warmists revealed that they do indeed suppress articles that are critical of global warming alarmism:

As noted previously, the Climategate letters and documents show Jones and the Team using the peer review process to prevent publication of adverse papers, while giving softball reviews to friends and associates in situations fraught with conflict of interest. Today I’ll report on the spectacle of Jones reviewing a submission by Mann et al.

Let’s recall some of the reviews of articles daring to criticize CRU or dendro:

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting (Briffa to Cook)

If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, (Cook to Briffa)

Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully. (Jones to Mann)

One last quote from the Economist article. One researcher submitted a completely bogus paper to many journals, and many of them accepted it:

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication.

Dr Bohannon’s sting was directed at the lower tier of academic journals. But in a classic 1998 study Fiona Godlee, editor of the prestigious British Medical Journal, sent an article containing eight deliberate mistakes in study design, analysis and interpretation to more than 200 of the BMJ’s regular reviewers. Not one picked out all the mistakes. On average, they reported fewer than two; some did not spot any.

The Economist article did not go into the problem of bias due to worldview presuppositions, though. So let me say something about that.

A while back Casey Luskin posted a list of problems with peer review.

Here was one that stuck out to me:

Point 5: The peer-review system is often biased against non-majority viewpoints.
The peer-review system is largely devoted to maintaining the status quo. As a new scientific theory that challenges much conventional wisdom, intelligent design faces political opposition that has nothing to do with the evidence. In one case, pro-ID biochemist Michael Behe submitted an article for publication in a scientific journal but was told it could not be published because “your unorthodox theory would have to displace something that would be extending the current paradigm.” Denyse O’Leary puts it this way: “The overwhelming flaw in the traditional peer review system is that it listed so heavily toward consensus that it showed little tolerance for genuinely new findings and interpretations.”

Recently, I summarized a podcast on the reviewer bias problem featuring physicist Frank Tipler. His concern in that podcast was that peer review would suppress new ideas, even if they were correct. He gave examples of this happening. Even a paper by Albert Einstein was rejected by a peer-reviewed journal. Elsewhere, Tipler was explicitly told to remove positive references to intelligent design in order to get his papers published. Tipler’s advice was for people with new ideas to bypass the peer-reviewed journal system entirely.

Speaking of the need to bypass peer review, you might remember that the Darwinian hierarchy is not afraid to have people sanctioned if they criticize Darwinism in the peer-reviewed literature.

Recall the case of Richard Sternberg.

Excerpt:

In 2004, in my capacity as editor of The Proceedings of the Biological Society of Washington, I authorized “The Origin of Biological Information and the Higher Taxonomic Categories” by Dr. Stephen Meyer to be published in the journal after passing peer-review. Because Dr. Meyer’s article presented scientific evidence for intelligent design in biology, I faced retaliation, defamation, harassment, and a hostile work environment at the Smithsonian’s National Museum of Natural History that was designed to force me out as a Research Associate there. These actions were taken by federal government employees acting in concert with an outside advocacy group, the National Center for Science Education. Efforts were also made to get me fired from my job as a staff scientist at the National Center for Biotechnology Information.

So those are some of the issues to consider when thinking about the peer-review process. My view is that peer-reviewed evidence does count for something in a debate situation, but as you can see from the Economist article, it may not count for as much as it used to. I think my view of science in general has been harmed by what I saw from physicist Lawrence Krauss in his third debate with William Lane Craig. If a scientist can misrepresent another scientist and not get fired by his employer, then I think we really need to be careful about the level of honesty in the academy.

One thought on “The Economist: some problems with the peer-review process”

  1. I would go even further, WK: I would argue that, just as we are in a post-modern world, so we are in a post-science world – and probably for the very same reasons. Really, is there anyone out there today who could have even shined the shoes of a Newton, Faraday, or Maxwell?
