Will computers and robots ever become self-aware?

Let's take a closer look at a puzzle

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article.

In that article, Searle is writing about the IBM computer that was programmed to play Jeopardy. Can a robot that wins on Jeopardy be “human”? Searle says no. And his famous Chinese room example (discussed in the article) explains why.

Excerpt:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.

Here is a link to the full article by John Searle on the Chinese room illustration.
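Searle’s rule book can be sketched in a few lines of code: a pure lookup table that emits fluent-looking answers while representing nothing about meaning. This is only an illustrative sketch; the phrases and replies below are placeholders I chose, not Searle’s examples.

```python
# A minimal sketch of Searle's Chinese room: the "program" is just a
# rule book mapping input symbol strings to output symbol strings.
# The entries below are illustrative placeholders.

RULE_BOOK = {
    "你好吗": "我很好",       # "How are you?" -> "I am fine"
    "你会说中文吗": "会",     # "Do you speak Chinese?" -> "Yes"
}

def chinese_room(question: str) -> str:
    """Answer by pure symbol lookup -- no understanding anywhere."""
    return RULE_BOOK.get(question, "请再说一遍")  # default: "Please say that again"

# The answers can look indistinguishable from a speaker's, yet nothing
# in this function grasps what any of the symbols mean.
print(chinese_room("你好吗"))
```

That is Searle’s point in miniature: adding more entries to the rule book makes the output more convincing, but it never adds understanding.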

By the way, Searle is a naturalist – not a theist, not a Christian. Now, let’s hear from a Christian scholar who can make more sense of this for us.

Here’s a related article on “strong AI” by Christian philosopher Jay Richards.

Excerpt:

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion, acting on its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not on the radar. You can’t get there from here.

Alexander Vilenkin: “All the evidence we have says that the universe had a beginning”

In this post, I’ve decided to explain why physicists believe that there was a creation event. That is to say, I’ve decided to let famous cosmologist Alexander Vilenkin do it.

From Uncommon Descent.

Excerpt:

Did the cosmos have a beginning? The Big Bang theory seems to suggest it did, but in recent decades, cosmologists have concocted elaborate theories – for example, an eternally inflating universe or a cyclic universe – which claim to avoid the need for a beginning of the cosmos. Now it appears that the universe really had a beginning after all, even if it wasn’t necessarily the Big Bang.

At a meeting of scientists – titled “State of the Universe” – convened last week at Cambridge University to honor Stephen Hawking’s 70th birthday, cosmologist Alexander Vilenkin of Tufts University in Boston presented evidence that the universe is not eternal after all, leaving scientists at a loss to explain how the cosmos got started without a supernatural creator. The meeting was reported in New Scientist magazine (Why physicists can’t avoid a creation event, 11 January 2012).

[…]In his presentation, Professor Vilenkin discussed three theories which claim to avoid the need for a beginning of the cosmos.

The three theories are the chaotic inflationary model, the oscillating model, and the quantum gravity model. Regular readers will know that all three have been addressed in William Lane Craig’s peer-reviewed paper that evaluates alternatives to the standard Big Bang cosmology.

But let’s see what Vilenkin said.

More:

One popular theory is eternal inflation. Most readers will be familiar with the theory of inflation, which says that the universe increased in volume by a factor of at least 10^78 in its very early stages (from 10^−36 seconds after the Big Bang to sometime between 10^−33 and 10^−32 seconds), before settling into the slower rate of expansion that we see today. The theory of eternal inflation goes further, and holds that the universe is constantly giving birth to smaller “bubble” universes within an ever-expanding multiverse. Each bubble universe undergoes its own initial period of inflation. In some versions of the theory, the bubbles go both backwards and forwards in time, allowing the possibility of an infinite past. Trouble is, the value of one particular cosmic parameter rules out that possibility:

But in 2003, a team including Vilenkin and Guth considered what eternal inflation would mean for the Hubble constant, which describes mathematically the expansion of the universe. They found that the equations didn’t work (Physical Review Letters, DOI: 10.1103/physrevlett.90.151301). “You can’t construct a space-time with this property,” says Vilenkin. It turns out that the constant has a lower limit that prevents inflation in both time directions. “It can’t possibly be eternal in the past,” says Vilenkin. “There must be some kind of boundary.”
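As a quick numerical check of the expansion figure quoted above: a volume increase of at least 10^78 corresponds to a linear expansion of 10^26, which is roughly 60 e-folds – the standard ballpark for inflation. A small sketch of the arithmetic:

```python
import math

# Volume scales as the cube of the linear scale factor, so a 10^78
# increase in volume is a 10^26 increase in linear size.
volume_factor = 1e78
linear_factor = volume_factor ** (1 / 3)   # about 1e26

# Cosmologists count inflation in "e-folds": the number of factors of e
# by which the scale factor grew.
e_folds = math.log(linear_factor)          # ln(10^26) ~ 60

print(f"linear expansion ~ 10^{math.log10(linear_factor):.0f}")
print(f"e-folds of inflation ~ {e_folds:.0f}")
```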

A second option explored by Vilenkin was that of a cyclic universe, where the universe goes through an infinite series of big bangs and crunches, with no specific beginning. It was even claimed that a cyclic universe could explain the low observed value of the cosmological constant. But as Vilenkin found, there’s a problem if you look at the disorder in the universe:

Disorder increases with time. So following each cycle, the universe must get more and more disordered. But if there has already been an infinite number of cycles, the universe we inhabit now should be in a state of maximum disorder. Such a universe would be uniformly lukewarm and featureless, and definitely lacking such complicated beings as stars, planets and physicists – nothing like the one we see around us.

One way around that is to propose that the universe just gets bigger with every cycle. Then the amount of disorder per volume doesn’t increase, so needn’t reach the maximum. But Vilenkin found that this scenario falls prey to the same mathematical argument as eternal inflation: if your universe keeps getting bigger, it must have started somewhere.

However, Vilenkin’s options were not exhausted yet. There was another possibility: that the universe had sprung from an eternal cosmic egg:

Vilenkin’s final strike is an attack on a third, lesser-known proposal that the cosmos existed eternally in a static state called the cosmic egg. This finally “cracked” to create the big bang, leading to the expanding universe we see today. Late last year Vilenkin and graduate student Audrey Mithani showed that the egg could not have existed forever after all, as quantum instabilities would force it to collapse after a finite amount of time (arxiv.org/abs/1110.4096). If it cracked instead, leading to the big bang, then this must have happened before it collapsed – and therefore also after a finite amount of time.

“This is also not a good candidate for a beginningless universe,” Vilenkin concludes.

So at the end of the day, what is Vilenkin’s verdict?

“All the evidence we have says that the universe had a beginning.”

This is consistent with the Borde-Guth-Vilenkin Theorem, which I blogged about before, and which William Lane Craig leveraged to his advantage in his debate with Peter Millican.

The Borde-Guth-Vilenkin (BGV) theorem shows that any universe which has, on average, been expanding throughout its history must have a space-time boundary in the past. That means that no such expanding universe, no matter what the model, can be eternal into the past. No one denies the expansion of space in our universe, and so we are left with a cosmic beginning. Even speculative alternative cosmologies do not escape the need for a beginning.
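The condition in the BGV theorem is compact enough to state. Paraphrasing the 2003 result as a sketch (not quoting it):

```latex
% Borde-Guth-Vilenkin (2003), paraphrased sketch:
% if the Hubble expansion rate H, averaged along a timelike or null
% geodesic, is positive,
H_{\mathrm{avg}} \;=\; \frac{1}{\tau}\int_0^{\tau} H \, d\tau' \;>\; 0,
% then that geodesic is past-incomplete: it cannot be extended to
% infinite proper time into the past, so the spacetime has a past boundary.
```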

Conclusion

If the universe came into being out of nothing, which seems to be the case from science, then the universe has a cause. Things do not pop into being, uncaused, out of nothing. The cause of the universe must be transcendent and supernatural. It must be uncaused, because there cannot be an infinite regress of causes. It must be eternal, because it created time. It must be non-physical, because it created space. There are only two possibilities for such a cause. It could be an abstract object or an agent. Abstract objects cannot cause effects. Therefore, the cause is an agent.

Now, let’s have a discussion about this science in our churches, and see if we can’t train Christians to engage with non-Christians about the evidence so that everyone accepts what science tells us about the origin of the universe.

What are Boltzmann brains, and what challenge do they pose to the multiverse hypothesis?

Christianity and the progress of science

I thought I would turn to the atheist theoretical physicist Sean Carroll, who has previously debated William Lane Craig, to explain to us what a Boltzmann brain is, and what threat it poses to the multiverse hypothesis.

Here is Sean Carroll, quoted by About.com:

Ludwig Boltzmann was one of the founders of the field of thermodynamics in the nineteenth century.

One of the key concepts was the second law of thermodynamics, which says that the entropy of a closed system always increases. Since the universe is a closed system, we would expect the entropy to increase over time. This means that, given enough time, the most likely state of the universe is one where everything is in thermodynamic equilibrium … but we clearly don’t exist in a universe of this type since, after all, there is order all around us in various forms, not the least of which is the fact that we exist.

With this in mind, we can apply the anthropic principle to inform our reasoning by taking into account that we do, in fact, exist.

Here the logic gets a little confusing, so I’m going to borrow the words from a couple of more detailed looks at the situation. As described by cosmologist Sean Carroll in From Eternity to Here:

Boltzmann invoked the anthropic principle (although he didn’t call it that) to explain why we wouldn’t find ourselves in one of the very common equilibrium phases: In equilibrium, life cannot exist. Clearly, what we want to do is find the most common conditions within such a universe that are hospitable to life. Or, if we want to be more careful, perhaps we should look for conditions that are not only hospitable to life, but hospitable to the particular kind of intelligent and self-aware life that we like to think we are….

We can take this logic to its ultimate conclusion. If what we want is a single planet, we certainly don’t need a hundred billion galaxies with a hundred billion stars each. And if what we want is a single person, we certainly don’t need an entire planet. But if in fact what we want is a single intelligence, able to think about the world, we don’t even need an entire person–we just need his or her brain.

So the reductio ad absurdum of this scenario is that the overwhelming majority of intelligences in this multiverse will be lonely, disembodied brains, who fluctuate gradually out of the surrounding chaos and then gradually dissolve back into it. Such sad creatures have been dubbed “Boltzmann brains” by Andreas Albrecht and Lorenzo Sorbo….

In a 2004 paper, Albrecht and Sorbo discussed “Boltzmann brains” directly:

A century ago Boltzmann considered a “cosmology” where the observed universe should be regarded as a rare fluctuation out of some equilibrium state. The prediction of this point of view, quite generically, is that we live in a universe which maximizes the total entropy of the system consistent with existing observations. Other universes simply occur as much more rare fluctuations. This means as much as possible of the system should be found in equilibrium as often as possible.

From this point of view, it is very surprising that we find the universe around us in such a low entropy state. In fact, the logical conclusion of this line of reasoning is utterly solipsistic. The most likely fluctuation consistent with everything you know is simply your brain (complete with “memories” of the Hubble Deep fields, WMAP data, etc) fluctuating briefly out of chaos and then immediately equilibrating back into chaos again. This is sometimes called the “Boltzmann’s Brain” paradox.

[…]Now that you understand Boltzmann brains as a concept, though, you have to proceed a bit to understanding the “Boltzmann brain paradox” that is caused by applying this thinking to this absurd degree. Again, as formulated by Carroll:

Why do we find ourselves in a universe evolving gradually from a state of incredibly low entropy, rather than being isolated creatures that recently fluctuated from the surrounding chaos?

Unfortunately, there is no clear explanation to resolve this … thus why it’s still classified as a paradox.
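The statistical picture behind that excerpt can be made concrete with a toy model. The classic Ehrenfest urn has N particles shared between two boxes; at each step a random particle hops to the other box. Entropy (the log of the number of microstates) climbs rapidly to its maximum and then only jitters near it – and a spontaneous return to the ordered starting state, while technically possible, is astronomically improbable. This is a sketch I wrote for illustration, not anything from Carroll’s book:

```python
import math
import random

random.seed(1)

N = 100          # particles shared between two boxes
n_left = N       # start far from equilibrium: everything in the left box

def entropy(n):
    """Boltzmann entropy S = ln(number of microstates) with n particles on the left."""
    return math.log(math.comb(N, n))

history = []
for step in range(2000):
    # Ehrenfest urn dynamics: pick a particle uniformly at random
    # and move it to the other box.
    if random.random() < n_left / N:
        n_left -= 1
    else:
        n_left += 1
    history.append(entropy(n_left))

# Entropy starts at zero (one microstate), rises toward its maximum
# near n_left = N/2, and thereafter only fluctuates close to it.
print(f"initial entropy:   {entropy(N):.2f}")
print(f"maximum possible:  {entropy(N // 2):.2f}")
print(f"peak reached:      {max(history):.2f}")
```

Large downward fluctuations in this model are suppressed exponentially in their size – which is exactly why, in Boltzmann’s picture, small pockets of order are overwhelmingly favored over large ones.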

Naturalists like to propose the multiverse as a way of explaining away the fine-tuning that we see, and explaining why complex, embodied intelligent beings like ourselves exist. But even if the multiverse hypothesis were true, we still would not expect to observe stars, planets, and conscious embodied intelligent beings. It is far more likely on a multiverse scenario that any observers we had would be “Boltzmann” brains in an empty universe. The multiverse hypothesis doesn’t explain the universe we have, which contains “a hundred billion galaxies with a hundred billion stars each” – not to mention our bodies which are composed of heavy elements, all of which require fine-tuning piled on fine-tuning piled on fine-tuning.
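The “far more likely” claim has a standard quantitative basis. On Boltzmann’s account, the probability of a thermal fluctuation that lowers entropy by ΔS is exponentially suppressed in ΔS, so a tiny fluctuation (a brain) beats an enormous one (a whole low-entropy universe) by a staggering factor. A sketch, with Boltzmann’s constant set to 1:

```latex
% Probability of a spontaneous fluctuation that lowers entropy by \Delta S:
P \;\sim\; e^{-\Delta S}
% A lone brain requires vastly less entropy reduction than an entire
% low-entropy universe, \Delta S_{\text{brain}} \ll \Delta S_{\text{universe}}, hence
\frac{P(\text{brain})}{P(\text{universe})}
  \;\sim\; e^{\,\Delta S_{\text{universe}} - \Delta S_{\text{brain}}} \;\gg\; 1 .
```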

William Lane Craig answered a question about Boltzmann brains a while back, so let’s look at his answer since we saw what his debate opponent said above.

He writes:

Incredible as it may sound, today the principal–almost the only–alternative to a Cosmic Designer to explain the incomprehensibly precise fine tuning of nature’s constants and fundamental quantities is the postulate of a World Ensemble of (a preferably infinite number of) randomly ordered universes. By thus multiplying one’s probabilistic resources, one ensures that by chance alone somewhere in this infinite ensemble finely tuned universes like ours will appear.

Now comes the key move: since observers can exist only in worlds fine-tuned for their existence, OF COURSE we observe our world to be fine-tuned! The worlds which aren’t finely tuned have no observers in them and so cannot be observed! Hence, our observing the universe to be fine-tuned for our existence is no surprise: if it weren’t, we wouldn’t be here to be surprised. So this explanation of fine tuning relies on (i) the hypothesis of a World Ensemble and (ii) an observer self-selection effect.

Now apart from objections to (i) of a direct sort, this alternative faces a very formidable objection to (ii), namely, if we were just a random member of a World Ensemble, then we ought to be observing a very different universe. Roger Penrose has calculated that the odds of our solar system’s forming instantaneously through the random collision of particles is incomprehensibly more probable than the universe’s being fine-tuned, as it is. So if we were a random member of a World Ensemble, we should be observing a patch of order no larger than our solar system in a sea of chaos. Worlds like that are simply incomprehensibly more plentiful in the World Ensemble than worlds like ours and so ought to be observed by us if we were but a random member of such an ensemble.

Here’s where the Boltzmann Brains come into the picture. In order to be observable the patch of order needn’t be even as large as the solar system. The most probable observable world would be one in which a single brain fluctuates into existence out of the quantum vacuum and observes its otherwise empty world. The idea isn’t that the brain is the whole universe, but just a patch of order in the midst of disorder. Don’t worry that the brain couldn’t persist long: it just has to exist long enough to have an observation, and the improbability of the quantum fluctuations necessary for it to exist that long will be trivial in comparison to the improbability of fine tuning.

In other words, the observer self-selection effect is explanatorily vacuous. It does not suffice to show that only finely tuned worlds are observable. As Robin Collins has noted, what needs to be explained is not just intelligent life, but embodied, interactive, intelligent agents like ourselves. Appeal to an observer self-selection effect accomplishes nothing because there is no reason whatever to think that most observable worlds are worlds in which that kind of observer exists. Indeed, the opposite appears to be true: most observable worlds will be Boltzmann Brain worlds.

Allen Hainline explained some of the OTHER problems with the multiverse in a post on Cross Examined’s blog. I recommend taking a look at those as well, because I feel funny even talking about Boltzmann brains. I would rather just say that there is no experimental evidence for the multiverse hypothesis, as I blogged before, and leave it at that. But if the person you are talking to fights you on it, you can disprove the multiverse with the Boltzmann brain argument.