
Formation of life-permitting elements carbon and oxygen is fine-tuned

First, let’s review the structure of the fine-tuning argument.

The argument goes like this:

  1. The fine-tuning of the universe to support life is either due to law, chance or design
  2. It is not due to law or chance
  3. Therefore, the fine-tuning is due to design

Here are the facts on the fine-tuning:

  • Life has certain minimal requirements: a long-term stable source of energy, a large number of different chemical elements, an element that can serve as a hub for joining other elements into compounds, a universal solvent, etc.
  • In order to meet these minimal requirements, the physical constants (such as the gravitational constant), and the ratios between those constants, need to be within a narrow range of values.
  • Slight changes to any of the physical constants, or to the ratios between the constants, will result in a universe inhospitable to life.
  • The range of possible values for these constants spans 70 orders of magnitude.
  • The constants are selected by whoever creates the universe. They are not determined by physical laws. And the extreme improbabilities involved put the fine-tuning beyond the reach of chance.
  • Although each individual selection of constants and ratios is as unlikely as any other selection, the vast majority of these possibilities do not support the minimal requirements of life of any kind. (Just as any 5-card hand that is dealt is as likely as any other, yet you are overwhelmingly likely NOT to be dealt a royal flush. In our case, a royal flush is a life-permitting universe. See the short calculation below.)
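
To make the card analogy concrete, here is a minimal Python sketch (my own illustration, not from any of the sources cited here) showing that while every specific hand is equally likely, the royal-flush class is vanishingly rare:

```python
# Minimal sketch of the royal flush analogy: every specific 5-card hand
# is equally likely, but royal flushes make up a tiny fraction of hands.
from math import comb

total_hands = comb(52, 5)  # number of possible 5-card hands: 2,598,960
royal_flushes = 4          # exactly one royal flush per suit

probability = royal_flushes / total_hands
print(f"P(royal flush) = {royal_flushes}/{total_hands:,} = {probability:.2e}")
# P(royal flush) = 4/2,598,960 = 1.54e-06
```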

Carbon is the element that can serve as that hub, and oxygen is also a vital element, since it is a component of water, which is required for life. So both carbon (the hub of large molecules) and oxygen (a building block of water) are required for complex life of any imaginable kind.

Now for the study.

Mysterious Jen, who blogs at Victory Rolls and V8s, sent me this amazing article on Science Daily about a new peer-reviewed study that supports the fine-tuning argument.

Here’s an excerpt:

Life as we know it is based upon the elements of carbon and oxygen. Now a team of physicists, including one from North Carolina State University, is looking at the conditions necessary to the formation of those two elements in the universe. They’ve found that when it comes to supporting life, the universe leaves very little margin for error.

Both carbon and oxygen are produced when helium burns inside of giant red stars. Carbon-12, an essential element we’re all made of, can only form when three alpha particles, or helium-4 nuclei, combine in a very specific way. The key to formation is an excited state of carbon-12 known as the Hoyle state, and it has a very specific energy — measured at 379 keV (or 379,000 electron volts) above the energy of three alpha particles. Oxygen is produced by the combination of another alpha particle and carbon.

NC State physicist Dean Lee and German colleagues Evgeny Epelbaum, Hermann Krebs, Timo Laehde and Ulf-G. Meissner had previously confirmed the existence and structure of the Hoyle state with a numerical lattice that allowed the researchers to simulate how protons and neutrons interact. These protons and neutrons are made up of elementary particles called quarks. The light quark mass is one of the fundamental parameters of nature, and this mass affects particles’ energies.

In new lattice calculations done at the Juelich Supercomputer Centre the physicists found that just a slight variation in the light quark mass will change the energy of the Hoyle state, and this in turn would affect the production of carbon and oxygen in such a way that life as we know it wouldn’t exist.

[…]The researchers’ findings appear in Physical Review Letters.
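
To put the quoted 379 keV figure in context, here is a back-of-envelope check of the energy bookkeeping (my own illustration using standard atomic masses; the paper itself relies on lattice simulations, not this arithmetic):

```python
# Back-of-envelope check of the 379 keV figure quoted above, using
# standard atomic masses. Illustration only; not the paper's method.
M_HE4 = 4.002603254  # helium-4 atomic mass, in unified mass units (u)
M_C12 = 12.0         # carbon-12 atomic mass (exactly 12 u by definition)
U_TO_MEV = 931.494   # energy equivalent of 1 u, in MeV

# Energy of three free alpha particles, relative to the C-12 ground state
three_alpha_threshold = (3 * M_HE4 - M_C12) * U_TO_MEV  # ~7.275 MeV

hoyle_excitation = 7.654  # MeV, measured excitation energy of the Hoyle state

print(f"3-alpha threshold: {three_alpha_threshold:.3f} MeV")
print(f"Hoyle state above threshold: "
      f"{(hoyle_excitation - three_alpha_threshold) * 1000:.0f} keV")  # ~379
```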

So that’s the latest research supporting the fine-tuning argument. But how effective is this argument, really? Is it acknowledged only by theists, or do atheists accept the fine-tuning as well?

Is the fine-tuning real?

Yes, it’s real, and it is conceded by top-rank atheist physicists. Let me give you a citation from the best one of all, Martin Rees. Martin Rees is an atheist and a qualified astronomer. He wrote a book called “Just Six Numbers: The Deep Forces That Shape The Universe” (Basic Books, 2001). In it, he discusses six numbers that need to be fine-tuned in order to have a life-permitting universe.

Rees writes here:

These six numbers constitute a ‘recipe’ for a universe. Moreover, the outcome is sensitive to their values: if any one of them were to be ‘untuned’, there would be no stars and no life. Is this tuning just a brute fact, a coincidence? Or is it the providence of a benign Creator?

There are some atheists who deny the fine-tuning, but these atheists are in firm opposition to the progress of science. The more science has progressed, the more constants, ratios and quantities we have discovered that need to be fine-tuned. Science is going in a theistic direction. Next, let’s see how atheists try to account for the fine-tuning, on atheism.

Atheistic responses to the fine-tuning argument

There are two common responses among atheists to this argument.

The first is to speculate that there are actually an infinite number of other universes that are not fine-tuned (i.e., the gambler’s fallacy). All these other universes don’t support life; we just happen to be in the one universe that is fine-tuned for life. The problem is that there is no way of directly observing these other universes, and no independent evidence that they exist.

Here is an excerpt from an article in Discover magazine (which is hostile to theism and Christianity).

Short of invoking a benevolent creator, many physicists see only one possible explanation: Our universe may be but one of perhaps infinitely many universes in an inconceivably vast multiverse. Most of those universes are barren, but some, like ours, have conditions suitable for life.

The idea is controversial. Critics say it doesn’t even qualify as a scientific theory because the existence of other universes cannot be proved or disproved. Advocates argue that, like it or not, the multiverse may well be the only viable nonreligious explanation for what is often called the “fine-tuning problem”—the baffling observation that the laws of the universe seem custom-tailored to favor the emergence of life.

The second response by atheists is that the human observers who exist today, 14 billion years after the universe was created out of nothing, actually caused the fine-tuning. This solution would mean that although humans did not exist at the time of the big bang, they are going to be able to reach back in time at some point in the future and manually fine-tune the universe.

Here is an excerpt from an article in the New Scientist (which is hostile to theism and Christianity).

…maybe we should approach cosmic fine-tuning not as a problem but as a clue. Perhaps it is evidence that we somehow endow the universe with certain features by the mere act of observation… observers are creating the universe and its entire history right now. If we in some sense create the universe, it is not surprising that the universe is well suited to us.

So, there are two choices for atheists: either an infinite number of unobservable universes that are not fine-tuned, or humans going back in time at some future point to fine-tune the beginning of the universe, billions of years in the past. I think I prefer the design explanation to those alternatives.

What makes a planet suitable for supporting complex life?

The Circumstellar Habitable Zone (CHZ)

What do you need in order to have a planet that supports complex life? First, you need liquid water at the surface of the planet. But there is only a narrow range of temperatures that can support liquid water, and the size of the star your planet orbits has a lot to do with whether you get it. The zone where a planet can have liquid water at the surface is called the circumstellar habitable zone (CHZ). A heavy, metal-rich star like our Sun is very massive, which moves the habitable zone further out, allowing a habitable planet to be far enough from the star to support liquid water on its surface while still being able to spin on its axis. If our star were smaller, we would have to orbit much closer to the star in order to have liquid water at the surface. Unfortunately, if you go too close to the star, then your planet becomes tidally locked, like the moon is tidally locked to the Earth. Tidally locked planets are inhospitable to life.
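
To make the dependence on the star concrete, here is a rough Python sketch (my own illustration, not from the film). It uses the common approximation that habitable-zone boundaries scale with the square root of the star’s luminosity, anchored to commonly cited Sun-based inner and outer estimates of roughly 0.95 and 1.37 AU:

```python
import math

def habitable_zone_au(luminosity_solar: float) -> tuple[float, float]:
    """Rough inner/outer circumstellar habitable zone boundaries in AU,
    for a star of the given luminosity (in units of the Sun's).

    Assumes the boundaries scale as sqrt(L), anchored to Sun-based
    estimates of ~0.95 AU (runaway greenhouse inside this) and
    ~1.37 AU (runaway glaciation beyond this).
    """
    scale = math.sqrt(luminosity_solar)
    return 0.95 * scale, 1.37 * scale

print(habitable_zone_au(1.0))   # Sun-like star: ~(0.95, 1.37) AU
print(habitable_zone_au(0.02))  # dim red dwarf: ~(0.13, 0.19) AU; tidal-locking risk
```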

Circumstellar Habitable Zone

Here, watch a clip from The Privileged Planet: (Clip 4 of 12, full playlist here)

But there’s more.

The Galactic Habitable Zone (GHZ)

So, where do you get the heavy elements you need for your heavy, metal-rich star?

You have to get the heavy elements for your star from supernova explosions – explosions that occur when certain types of stars die. That’s where heavy elements come from. But you can’t be TOO CLOSE to the dying stars, because you will get hit by nasty radiation and explosions. So to get the heavy elements from the dying stars, your solar system needs to be in the galactic habitable zone (GHZ) – the zone where you can pick up the heavy elements you need without getting hit by radiation and explosions. The GHZ lies between the spiral arms of a spiral galaxy. Not only do you have to be in between the arms of the spiral galaxy, but you also cannot be too close to the center of the galaxy. The center of the galaxy is so dense that you will get hit with massive radiation that breaks down your life chemistry. But you also can’t be too far from the center, because there are fewer dying stars the further out you go, so you won’t get enough heavy elements. You need to be in between the spiral arms, a medium distance from the center of the galaxy.

Like this:

Galactic Habitable Zone and Solar Habitable Zone

Here, watch a clip from The Privileged Planet: (Clip 10 of 12, full playlist here)

The GHZ is based on a discovery made by astronomer Guillermo Gonzalez, which made the front cover of Scientific American in 2001. That’s right, the cover of Scientific American. I actually stole the image above of the GHZ and CHZ (aka solar habitable zone) from his Scientific American article (linked above).

These are just a few of the things you need in order to get a planet that supports life.

Here are a few of the more well-known ones (a short sketch of how these requirements compound follows the list):

  • a solar system with a single massive star that can serve as a long-lived, stable source of energy
  • a terrestrial planet (non-gaseous)
  • the planet must be the right distance from the sun in order to preserve liquid water at the surface – if it’s too close, the water is burnt off in a runaway greenhouse effect; if it’s too far, the water is permanently frozen in a runaway glaciation
  • the solar system must be placed at the right place in the galaxy – not too near dangerous radiation, but close enough to other stars to be able to absorb heavy elements after neighboring stars die
  • a moon of sufficient mass to stabilize the tilt of the planet’s rotation
  • plate tectonics
  • an oxygen-rich atmosphere
  • a sweeper planet to deflect comets, etc.
  • planetary neighbors must have non-eccentric orbits
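
One way to see how such requirements compound is to multiply their probabilities together, Drake-equation style. Here is a minimal sketch with purely made-up placeholder values (none of these numbers come from Gonzalez or the film; the point is only that many modest filters multiply into a very small joint probability):

```python
# Illustration only: independent habitability requirements compound
# multiplicatively. Every probability below is a made-up placeholder.
factors = {
    "terrestrial (non-gaseous) planet": 0.1,
    "right distance from the star (CHZ)": 0.1,
    "right place in the galaxy (GHZ)": 0.1,
    "large moon stabilizing the tilt": 0.05,
    "plate tectonics": 0.1,
    "oxygen-rich atmosphere": 0.1,
    "sweeper planet deflecting comets": 0.2,
    "non-eccentric planetary neighbors": 0.3,
}

joint = 1.0
for requirement, probability in factors.items():
    joint *= probability

print(f"Joint probability (if independent): {joint:.0e}")  # 3e-08 here
```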

By the way, you can watch a lecture in which Guillermo Gonzalez explains his ideas further. The lecture was delivered at UC Davis in 2007. That link includes the playlist of the lecture, a bio of the speaker, and a summary of all the topics he discussed. It’s an excellent place to learn the requirements for a suitable habitat for life.

The Economist: some problems with the peer-review process

From The Economist, of all places.

Excerpt:

The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.

It is tempting to see the priming fracas as an isolated case in an area of science—psychology—easily marginalised as soft and wayward. But irreproducibility is much more widespread. A few years ago scientists at Amgen, an American drug company, tried to replicate 53 studies that they considered landmarks in the basic science of cancer, often co-operating closely with the original researchers to ensure that their experimental technique matched the one used first time round. According to a piece they wrote last year in Nature, a leading scientific journal, they were able to reproduce the original results in just six. Months earlier Florian Prinz and his colleagues at Bayer HealthCare, a German pharmaceutical giant, reported in Nature Reviews Drug Discovery, a sister journal, that they had successfully reproduced the published results in just a quarter of 67 seminal studies.

Let’s take a look at some of the problems from the article.

Problems with researcher bias:

Other data-heavy disciplines face similar challenges. Models which can be “tuned” in many different ways give researchers more scope to perceive a pattern where none exists. According to some estimates, three-quarters of published scientific papers in the field of machine learning are bunk because of this “overfitting”, says Sandy Pentland, a computer scientist at the Massachusetts Institute of Technology.

Problems with journal referees:

Another experiment at the BMJ showed that reviewers did no better when more clearly instructed on the problems they might encounter. They also seem to get worse with experience. Charles McCulloch and Michael Callaham, of the University of California, San Francisco, looked at how 1,500 referees were rated by editors at leading journals over a 14-year period and found that 92% showed a slow but steady drop in their scores.

As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyse the data presented from scratch, contenting themselves with a sense that the authors’ analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.

Problems with fraud:

Fraud is very likely second to incompetence in generating erroneous results, though it is hard to tell for certain. Dr Fanelli has looked at 21 different surveys of academics (mostly in the biomedical sciences but also in civil engineering, chemistry and economics) carried out between 1987 and 2008. Only 2% of respondents admitted falsifying or fabricating data, but 28% of respondents claimed to know of colleagues who engaged in questionable research practices.

Problems releasing data:

Reproducing research done by others often requires access to their original methods and data. A study published last month in PeerJ by Melissa Haendel, of the Oregon Health and Science University, and colleagues found that more than half of 238 biomedical papers published in 84 journals failed to identify all the resources (such as chemical reagents) necessary to reproduce the results. On data, Christine Laine, the editor of the Annals of Internal Medicine, told the peer-review congress in Chicago that five years ago about 60% of researchers said they would share their raw data if asked; now just 45% do. Journals’ growing insistence that at least some raw data be made available seems to count for little: a recent review by Dr Ioannidis showed that only 143 of 351 randomly selected papers published in the world’s 50 leading journals and covered by some data-sharing policy actually complied.

Critics of global warming have had problems getting at data before, as Nature reported here:

Since 2002, McIntyre has repeatedly asked Phil Jones, director of CRU, for access to the HadCRU data. Although the data are made available in a processed gridded format that shows the global temperature trend, the raw station data are currently restricted to academics. While Jones has made data available to some academics, he has refused to supply McIntyre with the data. Between 24 July and 29 July of this year, CRU received 58 freedom of information act requests from McIntyre and people affiliated with Climate Audit. In the past month, the UK Met Office, which receives a cleaned-up version of the raw data from CRU, has received ten requests of its own.

Why would scientists hide their data? Well, recall that the Climategate scandal resulted from the unauthorized release of the code used to generate the data behind global warming alarmism. The leaked code showed that the scientists had been generating faked data using a “fudge factor”.

Elsewhere, leaked e-mails from global warmists revealed that they do indeed suppress articles that are critical of global warming alarmism:

As noted previously, the Climategate letters and documents show Jones and the Team using the peer review process to prevent publication of adverse papers, while giving softball reviews to friends and associates in situations fraught with conflict of interest. Today I’ll report on the spectacle of Jones reviewing a submission by Mann et al.

Let’s recall some of the reviews of articles daring to criticize CRU or dendro:

I am really sorry but I have to nag about that review – Confidentially I now need a hard and if required extensive case for rejecting (Briffa to Cook)

If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won’t be easy to dismiss out of hand as the math appears to be correct theoretically, (Cook to Briffa)

Recently rejected two papers (one for JGR and for GRL) from people saying CRU has it wrong over Siberia. Went to town in both reviews, hopefully successfully. (Jones to Mann)

One last quote from the Economist article. One researcher submitted a completely bogus paper to many journals, and many of them accepted it:

John Bohannon, a biologist at Harvard, recently submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. An unusual move; but it was an unusual paper, concocted wholesale and stuffed with clangers in study design, analysis and interpretation of results. Receiving this dog’s dinner from a fictitious researcher at a made up university, 157 of the journals accepted it for publication.

Dr Bohannon’s sting was directed at the lower tier of academic journals. But in a classic 1998 study Fiona Godlee, editor of the prestigious British Medical Journal, sent an article containing eight deliberate mistakes in study design, analysis and interpretation to more than 200 of the BMJ’s regular reviewers. Not one picked out all the mistakes. On average, they reported fewer than two; some did not spot any.

The Economist article did not go into the problem of bias due to worldview presuppositions, though. So let me say something about that.

A while back, Casey Luskin posted a list of problems with peer review.

Here was one that stuck out to me:

Point 5: The peer-review system is often biased against non-majority viewpoints.
The peer-review system is largely devoted to maintaining the status quo. As a new scientific theory that challenges much conventional wisdom, intelligent design faces political opposition that has nothing to do with the evidence. In one case, pro-ID biochemist Michael Behe submitted an article for publication in a scientific journal but was told it could not be published because “your unorthodox theory would have to displace something that would be extending the current paradigm.” Denyse O’Leary puts it this way: “The overwhelming flaw in the traditional peer review system is that it listed so heavily toward consensus that it showed little tolerance for genuinely new findings and interpretations.”

Recently, I summarized a podcast on the reviewer bias problem featuring physicist Frank Tipler. His concern in that podcast was that peer review would suppress new ideas, even if they were correct. He gave examples of this happening. Even a paper by Albert Einstein was rejected by a peer-reviewed journal. Elsewhere, Tipler was explicitly told to remove positive references to intelligent design in order to get his papers published. Tipler’s advice was for people with new ideas to bypass the peer-reviewed journal system entirely.

Speaking of the need to bypass peer review, you might remember that the Darwinian hierarchy is not afraid to have people sanctioned if they criticize Darwinism in peer-reviewed literature.

Recall the case of Richard Sternberg.

Excerpt:

In 2004, in my capacity as editor of The Proceedings of the Biological Society of Washington, I authorized “The Origin of Biological Information and the Higher Taxonomic Categories” by Dr. Stephen Meyer to be published in the journal after passing peer-review. Because Dr. Meyer’s article presented scientific evidence for intelligent design in biology, I faced retaliation, defamation, harassment, and a hostile work environment at the Smithsonian’s National Museum of Natural History that was designed to force me out as a Research Associate there. These actions were taken by federal government employees acting in concert with an outside advocacy group, the National Center for Science Education. Efforts were also made to get me fired from my job as a staff scientist at the National Center for Biotechnology Information.

So those are some of the issues to consider when thinking about the peer-review process. My view is that peer-reviewed evidence does count for something in a debate situation, but as you can see from the Economist article, it may not count for as much as it used to. I think my view of science in general has been harmed by what I saw from physicist Lawrence Krauss in his third debate with William Lane Craig. If a scientist can misrepresent another scientist and not get fired by his employer, then I think we really need to be careful about the level of honesty in the academy.