
Physicist George Ellis interviewed by science writer John Horgan in Scientific American

I’ve mentioned George Ellis on this blog before, specifically his doubts about speculative multiverse cosmology.

Here he is interviewed in Scientific American. (H/T Think Apologetics)

Intro:

Biologist Rupert Sheldrake, whom I interviewed in my last post, wasn’t the only fascinating scientist I hung out with recently at Howthelightgetsin, a festival hosted by the Institute of Arts & Ideas. I also befriended George F. R. Ellis, the physicist-mathematician-cosmologist, an authority on the Big Bang and other cosmic mysteries. Ellis and I hit it off initially because we share some—how shall I put it?—concerns about the direction of physics, but I soon discovered that his interests range far beyond physics. He has published papers and books not only on physics and cosmology (including the 1973 classic The Large-Scale Structure of Space-Time, co-authored with Stephen Hawking) but also on philosophy, complexity theory, neuroscience, education and even low-income housing. (See his website, and his terrific 2011 critique of multiverse theories in Scientific American.) A native of South Africa, Ellis is professor emeritus at the University of Cape Town, where he taught for decades, and has also held positions at Cambridge, the University of Texas, the Fermi Institute and other institutions around the globe.

Awesome:

Horgan: Lawrence Krauss, in A Universe from Nothing, claims that physics has basically solved the mystery of why there is something rather than nothing. Do you agree?

Ellis: Certainly not.  He is presenting untested speculative theories of how things came into existence out of a pre-existing complex of entities, including variational principles, quantum field theory, specific symmetry groups, a bubbling vacuum, all the components of the standard model of particle physics, and so on. He does not explain in what way these entities could have pre-existed the coming into being of the universe, why they should have existed at all, or why they should have had the form they did.  And he gives no experimental or observational process whereby we could test these vivid speculations of the supposed universe-generation mechanism. How indeed can you test what existed before the universe existed? You can’t.

Thus what he is presenting is not tested science. It’s a philosophical speculation, which he apparently believes is so compelling he does not have to give any specification of evidence that would confirm it is true. Well, you can’t get any evidence about what existed before space and time came into being.  Above all he believes that these mathematically based speculations solve thousand year old philosophical conundrums, without seriously engaging those philosophical issues. The belief that all of reality can be fully comprehended in terms of physics and the equations of physics is a fantasy. As pointed out so well by Eddington in his Gifford lectures, they are partial and incomplete representations of physical, biological, psychological, and social reality.

And above all Krauss does not address why the laws of physics exist, why they have the form they have, or in what kind of manifestation they existed before the universe existed  (which he must believe if he believes they brought the universe into existence). Who or what dreamt up symmetry principles, Lagrangians, specific symmetry groups, gauge theories, and so on? He does not begin to answer these questions.

It’s very ironic when he says philosophy is bunk and then himself engages in this kind of attempt at philosophy. It seems that science education should include some basic modules on Plato, Aristotle, Kant, Hume, and the other great philosophers, as well as writings of more recent philosophers such as Tim Maudlin and David Albert.

He likes fine-tuning, but not multiverse and string theory:

Horgan: Are you a fan of multiverse theories? String theory? The anthropic principle?

Ellis: No (may be true but unproveable, much too much untestable speculation about existence of infinities of entities, ill defined and untestable probability measures), no (too much speculative introduction of very complex unseeable entities, treats gravity just like any other force), yes (however one responds to it, it’s a real question that deserves consideration).  Fine tuning of fundamental physics parameters is required in order that we can exist. Examining this issue has led to many very interesting studies.

Horgan: Physicist Sean Carroll has argued that falsifiability is overrated as a criterion for judging whether theories should be taken seriously. Do you agree?

Ellis: This is a major step backwards to before the evidence-based scientific revolution initiated by Galileo and Newton.  The basic idea is that our speculative theories, extrapolating into the unknown and into untestable areas from well-tested areas of physics, are so good they have to be true. History proves that is the path to delusion: just because you have a good theory does not prove it is true. The other defence is that there is no other game in town. But there may not be any such game.

Scientists should strongly resist such an attack on the very foundations of its own success. Luckily it is a very small subset of scientists who are making this proposal.

Free will is real:

Horgan: In some of your writings, you warn against excessive determinism in physics, and science. Could you summarize your concerns?

Ellis: Many scientists are strong reductionists who believe that physics alone determines outcomes in the real world. This is demonstrably untrue – for example the computer on which I am writing this could not possibly have come into being through the agency of physics alone.

The issue is that these scientists are focusing on some strands in the web of causation that actually exist, and ignoring others that are demonstrably there – such as ideas in our minds, or algorithms embodied in computer programs. These demonstrably act in a top-down way to cause physical effects in the real world. All these processes and actual outcomes are contextually dependent, and this allows the effectiveness of processes such as adaptive selection that are the key to the emergence of genuine complexity.

As I stated above, mathematical equations only represent part of reality, and should not be confused with reality.  A specific related issue: there is a group of people out there writing papers based on the idea that physics is a computational process.  But a physical law is not an algorithm. So who chooses the computational strategy and the algorithms that realise a specific physical law? (Finite elements perhaps?) What language is it written in? (Does Nature use Java or C++? What machine code is used?) Where is the CPU? What is used for memory, and in what way are read and write commands executed? Additionally if it’s a computation, how does Nature avoid the halting problem? It’s all a very bad analogy that does not work.

Horgan: Einstein, in the following quote, seemed to doubt free will: “If the moon, in the act of completing its eternal way around the Earth, were gifted with self-consciousness, it would feel thoroughly convinced that it was traveling its way of its own accord…. So would a Being, endowed with higher insight and more perfect intelligence, watching man and his doings, smile about man’s illusion that he was acting according to his own free will.” Do you believe in free will?

Ellis: Yes. Einstein is perpetuating the belief that all causation is bottom up. This simply is not the case, as I can demonstrate with many examples from sociology, neuroscience, physiology, epigenetics, engineering, and physics.  Furthermore if Einstein did not have free will in some meaningful sense, then he could not have been responsible for the theory of relativity – it would have been a product of lower level processes but not of an intelligent mind choosing between possible options.

I find it very hard to believe this to be the case – indeed it does not seem to make any sense. Physicists should pay attention to Aristotle’s four forms of causation – if they have the free will to decide what they are doing. If they don’t, then why waste time talking to them? They are then not responsible for what they say.

Sometimes, you have to just point out that a speculation is a speculation, and that we do not prefer speculations over experimental results. The data we have is consistent with an origin of the universe out of nothing, and the best explanation of this effect is a supernatural cause. Period.

What does the new Guzzo study tell us about the instability of cohabitation?

I blogged about a new study on cohabitation earlier in the month, but I only had the abstract. Now more details are out, from Family-Studies.org.

First, some context:

In a new paper, Bowling Green State University sociologist Karen Guzzo analyzes how the odds of cohabitation leading to either getting married or breaking up have changed over the years. Before getting to her findings, let’s review some of the cohabitation trends she highlights in her report (based on prior studies).

  1. The majority of people in their 30s have lived with someone outside of marriage.
  2. Cohabitation, rather than marriage, is now the more common form of first union.
  3. Fewer marriages than in the past start out with the couple having intentions to marry.
  4. People are more likely than ever to cohabit with multiple partners in succession—what I have called “CohabiDating.”
  5. More children than ever before are born to cohabiting couples, and this explains most of the rise in the number of children being born out of wedlock.

Guzzo notes, as have others, that cohabiting has become a normative experience in the romantic and sexual lives of young adults. As young adults put off marriage until later in life, cohabitation has inhabited much of the space that used to be made up of married couples. I think this dramatic change in how relationships form matters for at least two reasons. First, many cohabiting couples have children, but they are less likely than married couples to have planned to have children and they are much less likely to remain together after having children… Second, most people want lasting love in life, and most people still intend to accomplish that in marriage.

Here is the main finding of the new paper:

To simplify and summarize, what Guzzo found is that the increasing diversity in the types of cohabitation and cohabiters does not explain much about why things are so different from the past when it comes to increased odds that cohabiting couples will break up or not marry. Rather, on average, all types of cohabiting couples have become more likely than in the past to break up or not transition into marriage.

Here’s a quote from her paper (pg. 834):

Relative to cohabitations formed between 1990 and 1994, cohabitations formed from 1995–1999, 2000–2004, and 2005 and later were 13%, 49%, and 87%, respectively, more likely to dissolve than remain intact. The lower risk of marriage over remaining intact occurred only for the last two cohabitation cohorts (2000–2004 and 2005 and later), which were about 18% and 31% less likely to marry than remain intact, respectively.

Moving in together is becoming less and less likely to lead to having a future together. That’s not to say that all cohabiters are in the same boat regarding their destination. Those who are engaged (or have clear plans to marry) before moving in together are far more likely to eventually marry—but as Guzzo shows, even they are becoming less likely to do so. Related to this, my colleagues and I have shown, in numerous studies, that couples with clear plans to marry before cohabiting, along with those who marry without cohabiting, tend to have happier marriages and lower odds of divorce than those who move in together before having a clearly settled commitment to the future in marriage. (We believe this is largely because, while cohabiting unions obviously break up often, they are harder to break off than dating relationships because it becomes harder to move out and move on. So some people get stuck in a relationship they would otherwise have not remained in.)

[…]Cohabitation is fundamentally ambiguous. In fact, that is part—but just part—of why I believe it has become so popular. Sure, there are many cohabiting couples for whom living together was understood as a step-up in commitment, but, on average, research shows it is not associated with an increase in dedication to one’s partner.

So those are the findings from the latest study. You can find more studies on cohabitation linked here in my previous post on this topic.

A must-read series of posts on cosmic fine-tuning by Allen Hainline

There are four posts in the series, so far. I think Allen might be done, so I’m going to link to all four and snip something I like from each one.

The first post is on whether the fine-tuning is real, and whether a multiverse explains the fine-tuning so that there is no need for a cosmic Designer.

I just have to choose this quote from the atheist Stephen Hawking on the fine-tuning:

The remarkable fact is that the values of these numbers [i.e. the constants of physics] seem to have been very finely adjusted to make possible the development of life. For example, if the electric charge of the electron had been only slightly different, stars would have been unable to burn hydrogen and helium, or else they would not have exploded. It seems clear that there are relatively few ranges of values for the numbers [i.e. the constants of nature] that would allow for development of any form of intelligent life.

And from Luke Barnes, whom I’ve mentioned before on this blog:

In my years of researching this topic, I’m amazed at how few scientists who have studied the fine-tuning details disagree with this core claim that the subset of life-permitting physics is a tiny fraction among possibilities. Since Luke Barnes is a top researcher on this topic, consider his input on the level of acceptance of the fine-tuning claim: “I’ve published a review of the scientific literature, 200+ papers, and I can only think of a handful that oppose this conclusion, and piles and piles that support it.”[3]

And on the multiverse as a way to escape the fine-tuning:

The key issue though is that for the multiverse to be an adequate explanation for the fine-tuning it requires the conjunction of several hypotheses for which we lack any empirical evidence:

  1. A universe-generating mechanism that generates a plethora of universes
  2. That this mechanism doesn’t itself require fine-tuning
  3. The many-worlds interpretation of quantum physics
  4. The ability to widely vary constants in those universes. If you think that it’s a foregone conclusion that String Theory/M-Theory[8] will come to the rescue in this area, you should watch this video clip by Oxford physicist Roger Penrose where he exclaims that “it’s not even a theory … it’s a collection of hopes”.

Occam’s razor therefore does seem to favor design over the multiverse. When one accounts for the extensive problems in affirming premise 2 and how these multiverse theories make predictions incompatible with our universe, the hypothesis that God designed the physics of the universe to bring about life is more plausible.

Here’s the second post, where he explains the fine-tuning argument philosophically, and gives an example of one of the constants that has to be fine-tuned in order to support complex, embodied intelligence of any kind.

The cosmological constant:

The inference to design will be more easily recognized if we shed some light as to the specialness of the required values. Consider the size of the bull’s eye and wall based on just 1 parameter – the cosmological constant. There is a natural range for possible values for this constant because there are known contributions that are 10^120 times larger than the overall net value. (There is a near perfect but inexact cancellation of contributions accurate to 120 decimal places). Let’s use the most conservative numbers in the physics literature that indicate a fine-tuning to 1 part in 10^53. If the cosmological constant, which governs the expansion rate of the universe, had been larger than its current value by this tiny fraction, then the universe would have expanded so fast that no stars or planets would have formed and therefore no life. If the value were smaller by this amount then the universe would have rapidly collapsed before the universe cooled sufficiently to allow for stable information storage which is required by any self-replicating system such as life.

In the third post, he responds to objections to the fine-tuning argument. One objection you hear from atheists who don’t understand the science is that any selection of constants and quantities is as likely as any other, so our life-permitting set is just random. Now, first off, there are only 10 to the 80 atoms in the visible universe, so if the cosmological constant is fine-tuned to 1 in 10 to the 120, it’s not rational to say “it just happened randomly”.
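Here is a quick back-of-the-envelope sketch of that scale gap. The exponents are just the round numbers above (10^80 atoms, fine-tuning of 1 in 10^120), nothing more precise:

```python
# Rough comparison of the fine-tuning odds to the number of atoms
# in the visible universe. Exponents are the round figures quoted
# above, used only to illustrate the scale gap.
atoms_in_universe = 10 ** 80
cc_tuning_odds = 10 ** 120   # 1 part in 10^120 for the cosmological constant

# How many times bigger the tuning odds are than the atom count:
factor = cc_tuning_odds // atoms_in_universe
print(f"Tuning odds exceed the atom count by a factor of 10^{len(str(factor)) - 1}")
# → Tuning odds exceed the atom count by a factor of 10^40
```

In other words, even if every atom in the visible universe were a separate “try,” you would still be short by forty orders of magnitude.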

But here is Allen’s response:

However, the assumption that any set of constants is just as likely as any other is the very thing that we want to know. Starting off with that as an assumption begs the question against design. As Luke Barnes articulates in this excellent podcast dealing with responses to the fine-tuning claim, suppose we’re playing poker and every time I deal I get a royal flush. If this continues to happen, you become increasingly convinced that I’m likely to be cheating. If I responded to an accusation of cheating by just saying “well any set of 5 cards is just as likely as any other so you can’t accuse me of cheating” you would be rational to reject this explanation. The question is not “how likely is any set of 5 cards?” but rather “how likely is it I’m cheating if I just dealt myself 10 straight royal flushes?” This question accounts for the possibility that I’m cheating which would almost certainly be true in this scenario. So the right fine-tuning question is “given the fine-tuning evidence, how likely is it that the constants were set at random?” The values for physical constants conform to a very particular pattern – that which supports life. The fact that we have so many finely-tuned constants makes it unlikely that they were all set at random (at least in the single universe scenario, and I’ve already shown some of the problems/challenges in multiverse explanations).

Every 5-card hand you draw is equally unlikely, but the royal flush is the highest hand in the game and always wins. Whatever hand you actually draw, it is overwhelmingly likely not to be a royal flush.
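The poker analogy can be made concrete with a quick Bayes-style computation. The prior odds of cheating below are an illustrative assumption of mine, not a figure from Barnes’s podcast; the point is only that ten straight royal flushes swamp any reasonable prior:

```python
from math import comb

# Probability of a royal flush on a fair 5-card deal:
# 4 suits out of C(52, 5) possible hands.
p_flush = 4 / comb(52, 5)

# Probability of ten royal flushes in a row, given fair dealing:
p_fair = p_flush ** 10

# Illustrative prior: assume cheating is extremely rare a priori,
# and that a cheater can deal himself a royal flush at will.
prior_cheating = 1e-6
p_given_cheating = 1.0

# Posterior odds of cheating vs. fair dealing after the evidence:
posterior_odds = (prior_cheating * p_given_cheating) / ((1 - prior_cheating) * p_fair)
print(f"P(royal flush on one deal) = {p_flush:.3e}")
print(f"P(10 in a row | fair)      = {p_fair:.3e}")
print(f"Posterior odds of cheating = {posterior_odds:.3e}")
```

Even starting from a one-in-a-million prior that the dealer is cheating, the posterior odds come out astronomically in favor of cheating, which is the structure of the fine-tuning inference: the question is not how likely any one hand is, but which hypothesis best explains the hand we actually got.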

Finally, the fourth post deals with the objection that the constants and quantities could not have been other than they are.

He quotes physicist John Barrow giving 5 reasons why the constants can vary, and then this:

Even if the constants and laws of physics couldn’t vary, there is even more reason to think that there were many physically possible sets of initial conditions. Paul Davies states this emphatically:

“Even if the laws of physics were unique, it doesn’t follow that the physical universe itself is unique…the laws of physics must be augmented by cosmic initial conditions…there is nothing in present ideas about ‘laws of initial conditions’ remotely to suggest that their consistency with the laws of physics would imply uniqueness. Far from it…it seems, then, that the physical universe does not have to be the way it is: it could have been otherwise.[4]”

John A. Wheeler agrees: “Never has physics come up with a way to tell with what initial conditions the universe was started off. On nothing is physics clearer than what is not physics.”

The constants and quantities are not determined by physics. They were selected by whoever created nature in the first place.

So that’s the series. I noticed that he kept linking to this Common Sense Atheism podcast featuring famous cosmologist Luke Barnes. I grabbed it to listen to this weekend, and you might want to get it, too. It’s over an hour. It seems like one-stop shopping to understand the common objections to the fine-tuning argument, and how strong each one is.