Tag Archives: Fine Tuning

Five reasons why the multiverse is not a good explanation for cosmic fine-tuning

Apologetics and the progress of science

This post by J. Warner Wallace appeared at his Cold Case Christianity blog. It features five reasons why the multiverse hypothesis is not a good explanation for the astonishing degree of fine-tuning we find in the cosmic constants and quantities of our universe – fine-tuning that allows for complex, embodied intelligent life of any conceivable kind.

Here is the list:

  1. This Explanation Lacks Evidential Confirmation
  2. This Explanation Requires Fine-Tuning
  3. This Explanation Relies on Speculative Notions of Time
  4. This Explanation Results in Absurdities Common to “Infinites”
  5. This Explanation Acknowledges an “External” Creative Cause

Let’s take a closer look at numbers two and three:

This Explanation Requires Fine-Tuning
If there is a multiverse vacuum capable of such creative activity, it would be reasonable for us to askhow the physics of such an environment could be so fine-tuned to create a life-permitting universe. As Oxford philosopher Richard Swinburne observes, any proposed multiverse mechanism “needs to have a certain form rather than innumerable possible other forms, and probably constants too that need fine-tuning in the narrow sense . . . if that diversity of universes is to result.” Theoretical physicist, Stephen Hawking, when assessing “eternal inflation” models as a source for the multiverse, admits the same problem of fine-tuning: “The problem is, for our theoretical models of inflation to work, the initial state of the universe had to be set up in a very special and highly improbable way. Thus traditional inflation theory resolves one set of issues but creates another—the need for a very special initial state.”

This Explanation Relies on Speculative Notions of Time
Theorists who propose a pre-existing vacuum must account for the nature of time in this setting. All descriptions of this vacuum describe it as temporal (with bubble universes emerging or quantum events occurring over time). But the Standard Cosmological Model indicates time, as we know it, began with our universe. Physicist Alexander Vilenkin describes the dilemma this way: “There is no matter and no space in this very peculiar state. Also, there is no time . . . In the absence of space and matter, time is impossible to define. And yet, the state of ‘nothing’ cannot be identified with absolute nothingness.” Multiverse explanations must provide an account for the temporal nature of the vacuum lying at the core of their theory.

Regarding Wallace’s first point, here is MIT physicist Alan Lightman discussing the multiverse’s evidential problems in Harper’s Magazine.

He writes:

The… conjecture that there are many other worlds… [T]here is no way they can prove this conjecture. That same uncertainty disturbs many physicists who are adjusting to the idea of the multiverse. Not only must we accept that basic properties of our universe are accidental and uncalculable. In addition, we must believe in the existence of many other universes. But we have no conceivable way of observing these other universes and cannot prove their existence. Thus, to explain what we see in the world and in our mental deductions, we must believe in what we cannot prove.

Sound familiar? Theologians are accustomed to taking some beliefs on faith. Scientists are not. All we can do is hope that the same theories that predict the multiverse also produce many other predictions that we can test here in our own universe. But the other universes themselves will almost certainly remain a conjecture.

It’s not a good explanation of the data; it’s just desperate speculation. Don’t be one of those people who finds a way to believe what they want to believe. Look through the telescope for yourself. Believe what you can see with your own eyes – that’s the right way to get to the truth.

What are Boltzmann brains, and what challenge do they pose to the multiverse hypothesis?

Apologetics and the progress of science

I thought I would turn to the atheist theoretical physicist Sean Carroll, who has previously debated William Lane Craig, to explain to us what a Boltzmann brain is, and what threat it poses to the multiverse hypothesis.

Here is Sean Carroll, quoted by About.com:

Ludwig Boltzmann was one of the founders of the field of thermodynamics in the nineteenth century.

One of the key concepts was the second law of thermodynamics, which says that the entropy of a closed system always increases. Since the universe is a closed system, we would expect the entropy to increase over time. This means that, given enough time, the most likely state of the universe is one where everything is in thermodynamic equilibrium … but we clearly don’t exist in a universe of this type since, after all, there is order all around us in various forms, not the least of which is the fact that we exist.

With this in mind, we can apply the anthropic principle to inform our reasoning by taking into account that we do, in fact, exist.

Here the logic gets a little confusing, so I’m going to borrow the words from a couple of more detailed looks at the situation. As described by cosmologist Sean Carroll in From Eternity to Here:

Boltzmann invoked the anthropic principle (although he didn’t call it that) to explain why we wouldn’t find ourselves in one of the very common equilibrium phases: In equilibrium, life cannot exist. Clearly, what we want to do is find the most common conditions within such a universe that are hospitable to life. Or, if we want to be more careful, perhaps we should look for conditions that are not only hospitable to life, but hospitable to the particular kind of intelligent and self-aware life that we like to think we are….

We can take this logic to its ultimate conclusion. If what we want is a single planet, we certainly don’t need a hundred billion galaxies with a hundred billion stars each. And if what we want is a single person, we certainly don’t need an entire planet. But if in fact what we want is a single intelligence, able to think about the world, we don’t even need an entire person–we just need his or her brain.

So the reductio ad absurdum of this scenario is that the overwhelming majority of intelligences in this multiverse will be lonely, disembodied brains, who fluctuate gradually out of the surrounding chaos and then gradually dissolve back into it. Such sad creatures have been dubbed “Boltzmann brains” by Andreas Albrecht and Lorenzo Sorbo….

In a 2004 paper, Albrecht and Sorbo discussed “Boltzmann brains” directly:

A century ago Boltzmann considered a “cosmology” where the observed universe should be regarded as a rare fluctuation out of some equilibrium state. The prediction of this point of view, quite generically, is that we live in a universe which maximizes the total entropy of the system consistent with existing observations. Other universes simply occur as much more rare fluctuations. This means as much as possible of the system should be found in equilibrium as often as possible.

From this point of view, it is very surprising that we find the universe around us in such a low entropy state. In fact, the logical conclusion of this line of reasoning is utterly solipsistic. The most likely fluctuation consistent with everything you know is simply your brain (complete with “memories” of the Hubble Deep fields, WMAP data, etc) fluctuating briefly out of chaos and then immediately equilibrating back into chaos again. This is sometimes called the “Boltzmann’s Brain” paradox.

[…] Now that you understand Boltzmann brains as a concept, though, you have to proceed a bit to understanding the “Boltzmann brain paradox” that is caused by applying this thinking to this absurd degree. Again, as formulated by Carroll:

Why do we find ourselves in a universe evolving gradually from a state of incredibly low entropy, rather than being isolated creatures that recently fluctuated from the surrounding chaos?

Unfortunately, there is no clear explanation to resolve this … thus why it’s still classified as a paradox.

Naturalists like to propose the multiverse as a way of explaining away the fine-tuning that we see, and explaining why complex, embodied intelligent beings like ourselves exist. But even if the multiverse hypothesis were true, we still would not expect to observe stars, planets, and conscious embodied intelligent beings. It is far more likely on a multiverse scenario that any observers we had would be “Boltzmann” brains in an empty universe. The multiverse hypothesis doesn’t explain the universe we have, which contains “a hundred billion galaxies with a hundred billion stars each” – not to mention our bodies which are composed of heavy elements, all of which require fine-tuning piled on fine-tuning piled on fine-tuning.

William Lane Craig answered a question about Boltzmann brains a while back, so let’s look at his answer since we saw what his debate opponent said above.

He writes:

Incredible as it may sound, today the principal–almost the only–alternative to a Cosmic Designer to explain the incomprehensibly precise fine tuning of nature’s constants and fundamental quantities is the postulate of a World Ensemble of (a preferably infinite number of) randomly ordered universes. By thus multiplying one’s probabilistic resources, one ensures that by chance alone somewhere in this infinite ensemble finely tuned universes like ours will appear.

Now comes the key move: since observers can exist only in worlds fine-tuned for their existence, OF COURSE we observe our world to be fine-tuned! The worlds which aren’t finely tuned have no observers in them and so cannot be observed! Hence, our observing the universe to be fine-tuned for our existence is no surprise: if it weren’t, we wouldn’t be here to be surprised. So this explanation of fine tuning relies on (i) the hypothesis of a World Ensemble and (ii) an observer self-selection effect.

Now apart from objections to (i) of a direct sort, this alternative faces a very formidable objection to (ii), namely, if we were just a random member of a World Ensemble, then we ought to be observing a very different universe. Roger Penrose has calculated that the odds of our solar system’s forming instantaneously through the random collision of particles is incomprehensibly more probable than the universe’s being fine-tuned, as it is. So if we were a random member of a World Ensemble, we should be observing a patch of order no larger than our solar system in a sea of chaos. Worlds like that are simply incomprehensibly more plentiful in the World Ensemble than worlds like ours and so ought to be observed by us if we were but a random member of such an ensemble.

Here’s where the Boltzmann Brains come into the picture. In order to be observable the patch of order needn’t be even as large as the solar system. The most probable observable world would be one in which a single brain fluctuates into existence out of the quantum vacuum and observes its otherwise empty world. The idea isn’t that the brain is the whole universe, but just a patch of order in the midst of disorder. Don’t worry that the brain couldn’t persist long: it just has to exist long enough to have an observation, and the improbability of the quantum fluctuations necessary for it to exist that long will be trivial in comparison to the improbability of fine tuning.

In other words, the observer self-selection effect is explanatorily vacuous. It does not suffice to show that only finely tuned worlds are observable. As Robin Collins has noted, what needs to be explained is not just intelligent life, but embodied, interactive, intelligent agents like ourselves. Appeal to an observer self-selection effect accomplishes nothing because there is no reason whatever to think that most observable worlds are worlds in which that kind of observer exists. Indeed, the opposite appears to be true: most observable worlds will be Boltzmann Brain worlds.

Allen Hainline explained some of the OTHER problems with the multiverse in a post on Cross Examined’s blog. I recommend taking a look at those as well, because I feel funny even talking about Boltzmann brains. I would rather just say that there is no experimental evidence for the multiverse hypothesis, as I blogged before, and leave it at that. But if the person you are talking to fights you on it, you can counter the multiverse with the Boltzmann brain argument.

Why is the universe so big, and why is so much of it hostile to life?

Chris Kyle, Navy SEAL, can hit a very small target from a mile away – very improbable

Review: In case you need a refresher on the cosmological and fine-tuning arguments, as presented by a professor of particle physics at Stanford University, click this link and watch the lecture.

If you already know about the standard arguments for theism from cosmology, then take a look at this post on Uncommon Descent.

Summary:

In my previous post, I highlighted three common atheistic objections to the cosmological fine-tuning argument. In that post, I made no attempt to answer these objections. My aim was simply to show that the objections were weak and inconclusive.

Let’s go back to the original three objections:

1. If the universe was designed to support life, then why does it have to be so BIG, and why is it nearly everywhere hostile to life? Why are there so many stars, and why are so few orbited by life-bearing planets? (Let’s call this the size problem.)

2. If the universe was designed to support life, then why does it have to be so OLD, and why was it devoid of life throughout most of its history? For instance, why did life on Earth only appear after 70% of the cosmos’s 13.7-billion-year history had already elapsed? And why did human beings (genus Homo) only appear after 99.98% of the cosmos’s 13.7-billion-year history had already elapsed? (Let’s call this the age problem.)

3. If the universe was designed to support life, then why does Nature have to be so CRUEL? Why did so many animals have to die – and why did so many species of animals have to go extinct (99% is the commonly quoted figure), in order to generate the world as we see it today? What a waste! And what about predation, parasitism, and animals that engage in practices such as serial murder and infant cannibalism? (Let’s call this the death and suffering problem.)
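The percentages quoted in the age objection above are easy to sanity-check. A quick sketch, assuming the quoted 13.7-billion-year age of the universe, earliest life on Earth at roughly 4.1 billion years ago, and genus Homo at roughly 2.8 million years ago (the last two figures are my own assumed round numbers, not from the post):

```python
# Sanity check on the quoted "70%" and "99.98%" figures (assumed inputs).
AGE_UNIVERSE_YR = 13.7e9   # quoted age of the cosmos
EARTH_LIFE_YR = 4.1e9      # assumed age of earliest life on Earth
GENUS_HOMO_YR = 2.8e6      # assumed age of genus Homo

# Fraction of cosmic history elapsed before each milestone
frac_before_life = (AGE_UNIVERSE_YR - EARTH_LIFE_YR) / AGE_UNIVERSE_YR
frac_before_homo = (AGE_UNIVERSE_YR - GENUS_HOMO_YR) / AGE_UNIVERSE_YR

print(f"before first life: {frac_before_life:.0%}")   # ~70%
print(f"before genus Homo: {frac_before_homo:.2%}")   # ~99.98%
```

With those assumed dates, the arithmetic reproduces the post’s figures almost exactly.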

In today’s post, I’m going to try to provide some positive answers to the first two questions: the size problem and the age problem.

Here’s an excerpt for the size argument:

(a) The main reason why the universe is as big as it currently is, is that, in the first place, the universe had to contain sufficient matter to form galaxies and stars, without which life would not have appeared; and in the second place, the density of matter in the cosmos is incredibly fine-tuned, due to the fine-tuning of gravity. To appreciate this point, let’s go back to the earliest time in the history of the cosmos that we can meaningfully talk about: the Planck time, when the universe was 10^-43 seconds old. If the density of matter at the Planck time had differed from the critical density by as little as one part in 10^60, the universe would have either exploded so rapidly that galaxies wouldn’t have formed, or collapsed so quickly that life would never have appeared. In practical terms: if our universe, which contains 10^80 protons and neutrons, had even one more grain of sand in it – or one grain less – we wouldn’t be here.

If you mess with the size of the universe, you screw up the mass density fine-tuning. We need that to have a universe that expands at the right speed in order to form galaxies, stars and planets. You need planets to have a place to form life – a place with liquid water at the surface.
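The “one grain of sand” illustration quoted above can be sanity-checked with a quick back-of-the-envelope calculation. The nucleon count (10^80) and the one-part-in-10^60 tolerance come from the excerpt; the grain size and quartz density are my own assumed figures:

```python
import math

# Total mass of ordinary matter, from the excerpt's quoted nucleon count
PROTON_MASS_KG = 1.67e-27          # mass of one nucleon, in kg
NUCLEONS = 1e80                    # quoted number of protons and neutrons
total_mass = NUCLEONS * PROTON_MASS_KG          # ~1.7e53 kg

# One part in 10^60 of that total
one_part_in_1e60 = total_mass / 1e60            # ~1.7e-7 kg

# Mass of a ~0.5 mm quartz sand grain (assumed density 2650 kg/m^3)
grain_radius_m = 0.25e-3
grain_mass = (4 / 3) * math.pi * grain_radius_m**3 * 2650

print(f"one part in 10^60: {one_part_in_1e60:.1e} kg")
print(f"fine sand grain:   {grain_mass:.1e} kg")
```

Both come out around 1.7 × 10^-7 kg, so the illustration holds up: one part in 10^60 of the universe’s ordinary matter really is about the mass of a single fine grain of sand.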

And an excerpt for the age argument:

(a) One reason why we need an old universe is that billions of years were required for Population I stars (such as our sun) to evolve. These stars are more likely to harbor planets such as our Earth, because they contain lots of “metals” (astronomer-speak for elements heavier than helium), produced by the supernovae of the previous generation of Population II stars. According to currently accepted models of Big Bang nucleosynthesis, this whole process was absolutely vital, because the Big Bang doesn’t make enough “metals”, including those necessary for life: carbon, nitrogen, oxygen, phosphorus and so on.

Basically, you need heavy elements to make stars that burn slow and steady, as well as to make PEOPLE! And heavy elements have to be built up slowly through several iterations of the stellar lifecycle, including the right kinds of stellar death: supernovae.

Read the rest! These arguments come up all the time in debates with village atheists like Christopher Hitchens and Richard Dawkins. It’s a smokescreen they put up, but you’ve got to be able to answer it using the scientific evidence we have today. They always want to dismiss God with their personal preferences about what God should or should not do. But the real issue is the design of the cosmological constants that allow life to exist anywhere. That’s the part that’s designed. And that’s not a matter of personal preference, it’s a matter of mathematics and experimental science.

One last parting shot. If God made the universe have life everywhere, the first thing atheists would say is “See? Life evolves fine by itself without any God!” The only way to recognize a marksman is when he hits a narrow target (not hostile to life) from a wide range of possibilities that have no value (hostile to life). We don’t credit Chris Kyle for hitting the wall above an Islamic terrorist from a mile away, we credit Chris Kyle for hitting an Islamic terrorist a mile away. The design is not how much of the universe is hospitable to life versus how much is hostile to life. The design is in the cosmological constants – where we are in the narrow band that is hospitable to life and not in the huge regions that are hostile to life.

You can read the best explanation of the design argument in this lecture featuring Robin Collins. That link goes to my post which has a summary of the lecture. He has a new lecture that I also blogged about where he extends the fine-tuning argument down to the level of particle physics. I have a summary of that one as well.