Will robots and machines ever have consciousness like humans?

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article.

In that article, Searle is writing about Watson, the IBM computer that was programmed to play Jeopardy. Can a robot that wins on Jeopardy be “human”? Searle says no. And his famous Chinese room example (discussed in the article) explains why.

Excerpt:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.
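To see how literal the analogy is, here is a minimal sketch of the room as a program (my illustration, not Searle’s). The Chinese entries and replies are hypothetical placeholders; the point is that the entire “conversation” reduces to a lookup table.

```python
# A minimal sketch (my illustration, not Searle's) of the Chinese room
# as a program. The "instruction book" is just a lookup table mapping
# input symbols to output symbols; the entries are hypothetical.
INSTRUCTION_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(question: str) -> str:
    """Return a fluent-looking answer by pure symbol matching.

    Nothing here understands Chinese: the function only compares
    strings and copies out whatever the book says to copy.
    """
    return INSTRUCTION_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent output, zero comprehension
```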

Here is a link to the full article by John Searle on the Chinese room illustration.

By the way, Searle is a naturalist – not a theist, not a Christian. But he does oppose postmodernism. So he isn’t all bad. But let’s hear from a Christian scholar who can make more sense of this for us.

Here’s a related article on “strong AI” by Christian philosopher Jay Richards.

Excerpt:

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion, acting on its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not on the radar. You can’t get there from here.
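Since search problems came up, here is a minimal sketch, assuming a hypothetical toy graph, of what that kind of AI actually does: breadth-first search, which is mechanical rule-following over symbols from start to finish.

```python
from collections import deque

# A minimal sketch of a classic AI "search problem": breadth-first search
# over a hypothetical toy graph. Every step is mechanical rule-following
# over symbols; nothing in the loop knows what the nodes mean.
def bfs_path(graph: dict, start: str, goal: str):
    """Return a shortest path from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Hypothetical toy graph for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # -> ['A', 'B', 'D', 'E']
```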

9 thoughts on “Will robots and machines ever have consciousness like humans?”

  1. I have a problem with the Chinese room analogy. While the analogy is apt as given, what if the human isn’t actually in total isolation? The human being doesn’t know Chinese, and without further input we can reasonably assume he could never learn Chinese. Realistically, however, people can’t be kept in total isolation. Who brings him food? Who removes his waste? Is he allowed exercise? Who fills his cavities or puts a cast on his broken limb?

    The idea that a computer can “never” interact with (i.e., receive input from) unexpected sources and somehow intuitively bridge gaps in understanding is, I think, mistaken. Right now the idea of an intelligent machine is absurd. Absolutely so. But when I was a kid, the idea of somebody flipping open a communicator and talking to somebody in space (Star Trek) was equally absurd, so who’s to say what could or couldn’t happen in the future? The Moon? Done. Mars? I think in our lifetimes. Mechanical replacements for our organs?

    http://www.dailymail.co.uk/sciencetech/article-2637158/Humans-fitted-kidneys-3D-printers.html

    We live in the most interesting of times, don’t you think?

    1. Hey Jack,

      I think the point of the analogy is that, even with the appearance of understanding, neither anything in the system nor the system itself understands Chinese. The person in the box could easily take a break to head down to Starbucks, hop back in the box later, and the analogy would still hold.

      A computer that speaks Chinese, composes music, or empathetically gives timely and helpful advice will ultimately always be like this man/box system. The idea is that we will get closer to computers that behave as if they were conscious, but an entirely different driver (binary computation) produces that same behavior.
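      As an illustration of “behaving as if conscious,” here is a minimal ELIZA-style sketch (my illustration; the pattern and reflection table are invented). The reply can feel empathetic, yet the driver is nothing but string manipulation.

      ```python
      import re

      # A sketch in the spirit of Weizenbaum's ELIZA: reflect the user's words
      # back with simple pattern substitution. The reply can feel empathetic,
      # but the "driver" is plain string manipulation.
      REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

      def reflect(text: str) -> str:
          # Swap first-person words for second-person ones, word by word.
          return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

      def respond(user_input: str) -> str:
          # One hard-coded rule: "I feel X" -> "Why do you feel X'?"
          match = re.match(r"i feel (.*)", user_input.lower())
          if match:
              return f"Why do you feel {reflect(match.group(1))}?"
          return "Tell me more."

      print(respond("I feel anxious about my job"))
      # -> Why do you feel anxious about your job?
      ```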

  2. I want to believe that it will never happen, but I’m afraid that, much like trying to create life in a test tube, we are going to eventually figure out how to put consciousness in a computer. Oh, it won’t be “life” so to speak, but it will be artificial intelligence, and it will probably become aware and start controlling us. Look at us now: computers already tell us what to do. As they become more sophisticated and learn more from interacting with us, we’ll start answering to them without even being aware of it.

  3. In your last paragraph it sounds like you’re trying to bully people with your authority. You’ve got your BS and MS in computer science! Wow. So you assure us that consciousness is not on the radar and that computers will never do anything but follow instructions.

    OK, Mr. Expert Man, I guess I should just trust you and not try to learn anything on my own about artificial intelligence. We’ll keep that whole field under lock and key, with you standing guard over it. AI couldn’t possibly be a threat to Christian theology – no sir!

      1. Not really, because if we create a true AI machine that is fully conscious, then Christians have two easy options: (1) They can deny that the machine really is conscious despite what anyone else thinks, or (2) they can claim that God bestowed an immaterial soul on the machine.

        I don’t think either of these options is refutable, and therefore Christianity is safe from AI. Wintery can relax.

  4. Reflecting on the nature of consciousness, and the logical implications of atheism regarding consciousness, really put the nail in the coffin of a naturalistic worldview for me when I was coming to believe in God.
