Can computers become conscious by increasing processing power?

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article. (H/T Sarah)

Searle is writing about the IBM computer that was programmed to play Jeopardy. His Chinese room example shows why no one should be concerned about computers acting like humans. There is no thinking computer. There never will be a thinking computer. And you cannot build up to a thinking computer by adding more hardware and software.


Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.
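Searle's room can be caricatured in a few lines of code. This is only an illustrative toy (the rule table and replies are invented for the example, not taken from Searle): a program that maps Chinese questions to Chinese answers by rote lookup, producing fluent output while containing nothing that understands it.

```python
# A toy "Chinese room": the "database" is a lookup table of symbols,
# the "program" is the rule for matching them. Nothing here understands
# what any of the symbols mean.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我叫约翰.",     # "What's your name?" -> "My name is John."
}

def room(question: str) -> str:
    # The "computer" just follows the instruction book; an unrecognized
    # string of symbols gets a stock evasion, as a chatbot might give.
    return RULE_BOOK.get(question, "请再说一遍.")  # "Please say that again."

print(room("你好吗?"))  # syntactically fluent, semantically empty
```

However elaborate the rule book gets, the program is still manipulating uninterpreted symbols, which is Searle's point: syntax alone does not produce semantics.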

By the way, Searle is a naturalist – not a theist, not a Christian. But he does oppose postmodernism. So he isn’t all bad. But let’s hear from a Christian scholar who can make more sense of this for us.

UPDATE: Drew sent me a link to the full article by Searle.

Here’s an article by Christian philosopher Jay Richards.


Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

Jay Richards is my all-round favorite Christian scholar. He has the Ph.D in philosophy from Princeton.

When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion, acting on its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have the BS and MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not on the radar. You can’t get there from here.

10 thoughts on “Can computers become conscious by increasing processing power?”

  1. I began reading AI books around 1980 – by big names, who sadly, are still selling their drivel. The predictions that were made back then made the AI technology of the best sci-fi movies of our time look like Model T’s. The certainty of these predictions exceeded the predictions themselves.

    On the other hand, I do like it when my cell phone corrects my spelling. Isn’t that what we are talking about here when it comes to consciousness? :-)

    You are correct when you say that the estimates for successful AI are worldview driven.


  2. I love this argument, and have explained it to many people now. The few refutations I’ve seen for it dwell very heavily on the distinction between “syntax” and “symbols,” and tend to have very little actual explanatory power. There may be some good counterarguments, but I’ve never encountered any.

    My favorite part about this argument is how easy it is for anyone (computer science types like me, or non-computer science types like my wife) to understand it.


  3. True AI would seem to be a logical impossibility anyway – you can’t tell something to think for itself.


  4. Thanks, Wintery Knight. Good post.

    However, I just wanted to please make a quick correction on something you said in your post:

    “Jay Richards is my all-round favorite Christian scholar. He has the Ph.D in philosophy from Princeton.”

    Unfortunately, this is mistaken. Richards has a PhD from Princeton Theological Seminary, not Princeton University. PTS is not the same as Princeton University.

    Here is the relevant section from PTS’s FAQs page:

    1. Is Princeton Seminary part of Princeton University?

    No, Princeton Seminary is a free-standing graduate school of the Presbyterian Church (USA). It was established in 1812 as a post-graduate professional school of theology in the interest of advancing and extending the theological curricula and educating ministers of the church to serve in the expanding western frontier of the new nation. The College of New Jersey, chartered in 1746 also by Presbyterians to educate ministers, later expanded its mission beyond training clergy and grew to become Princeton University.

    The Seminary and the University have a collegial and supportive relationship, with some exchange privileges in course enrollment as approved by faculty. The University stands at the center of the Princeton community, but it is not affiliated with the Seminary.

    Of course, I’m only pointing this out as clarification. It’s not at all to denigrate any of Richards’ fine achievements as a Christian scholar. In fact, I respect Richards quite a bit, and appreciate much of his work.


  5. I totally agree that “consciousness is qualitatively different from the type of computation that we have developed in computers.” This doesn’t mean strong AI is impossible. It just means we need a different kind of computer.

    The AI will not be an algorithmic program, but it will be a neural network. It won’t be coded in a top-down manner, but it will evolve, just as we did.

    Certainly it’s true that AI advocates have been too enthusiastic and unrealistic, but we should not err in the opposite direction either. There’s no reason to think strong AI is impossible.


    1. One of the interesting results in computer science / mathematics is that all standard models of computation are equivalent in power to Turing machines, and all current algorithmic programming is Turing-machine-equivalent. For a neural network to fall outside of the current paradigm, it would have to be shown that the code generated is somehow not Turing-computable. For this to be true, it would have to start off as something other than a Turing machine, which means we’re talking about something outside of the current computer science paradigm.

      As far as I know, bottom-up coding would never break out of the Turing machine set, because it is essentially recombinations of existing code. In other words, if you consider the class of Turing machines, and your neural network starts off in this class, any evolved networks are still part of the class of Turing machines and therefore equivalent to a standalone algorithm that could have been used instead.
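The commenter’s equivalence point can be made concrete. Here is a minimal sketch (the weights and names are invented for illustration): a small neural network’s forward pass written out as plain deterministic code. However the weights were obtained – hand-coded, trained, or “evolved” – evaluating the network is just fixed arithmetic on inputs, i.e., an ordinary Turing-computable algorithm.

```python
import math

# Forward pass of a tiny one-hidden-layer network. The weights below are
# arbitrary illustrative values; the point is that the evaluation itself
# is nothing but deterministic arithmetic.
W1 = [[0.5, -1.0], [1.5, 0.25]]   # input -> hidden weights
W2 = [2.0, -0.5]                  # hidden -> output weights

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    # Each layer is a weighted sum passed through a fixed function --
    # an algorithm like any other, regardless of how the weights arose.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

y = forward([1.0, 0.0])
print(0.0 < y < 1.0)  # True: a number computed by fixed rules
```

Running the “evolved” network on a digital computer therefore never leaves the class of Turing-equivalent computations, which is exactly the commenter’s claim.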


  6. Consciousness is a huge problem for science as well as AI research. Quantum mechanics, the bedrock of physics, is based on the idea of consciousness – it’s dressed up as ‘the observer’, but when you get right down to it, the only way you can collapse wave functions is by invoking a concept that is semantically indistinguishable from consciousness.

    This is extremely problematic for any scientist that looks into it – it implies that there really is a ghost in the machine, and that ghost cannot (currently?) be explained in physical or materialist terms.

    AI research that posits that a sufficiently advanced algorithm will become conscious is wishful thinking, as it would ‘prove’ materialism for its adherents. I suspect, though, that many of them know it’s simply not possible.

    It’s why I think evangelical atheists like Dawkins are hypocrites and liars – they are intelligent enough to know that there is a deep problem with the foundations of science, but for their own reasons choose not to be entirely truthful about its nature and the logical consequences.


  7. Again I agree that an algorithm will not become strong AI, but a neural network is different from an algorithmic system. Of course it’s possible to describe a neural network in terms of algorithm, but the key point is whether the algorithm comes first and guides the function, or whether the algorithm comes after and simply describes the function. That’s the whole difference between a sentient being and an unconscious robot.


  8. An animal walks across the sand leaving tracks behind. A locomotive goes on tracks previously laid by workers. No one knows which way the animal will turn, but his tracks betray him later. Everyone knows which way the train will turn. The tracks determine the train’s path.

    This applies to quantum theory as well. When you look at a tiny particle, you’re seeing its tracks in the sand. At that scale, there’s no way to see the train’s tracks out in front of the moving train.

