There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article. (Full text available at archive.md)
In that article, Searle is writing about Watson, the IBM computer that was programmed to play Jeopardy. Can a robot that wins on Jeopardy be “human”? Searle says no. And his famous Chinese room example (discussed in the article) explains why.
Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”
People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.
Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.
And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.
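Searle’s point — “stocks of symbols, rules for manipulating symbols… that is it” — can be caricatured in a few lines of code. This is only an illustrative sketch: the symbols and the “rule book” below are invented for the example, and a real system would be vastly larger, but the principle is the same. The program pairs input shapes with output shapes; nowhere does meaning enter.

```python
# A toy "Chinese room": the rule book is just a lookup table.
# The phrases and rules here are invented purely for illustration.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字?": "我叫王先生.",    # "What is your name?" -> "My name is Mr. Wang."
}

def the_room(symbols: str) -> str:
    """Return whatever answer the rule book dictates for this input."""
    # The "computer" matches shapes, not meanings: it has no idea what
    # the symbols say, only which output the rules pair with the input.
    return RULE_BOOK.get(symbols, "对不起, 我不明白.")  # "Sorry, I don't understand."

print(the_room("你好吗?"))
```

To the person outside the room, the answers look fluent; inside, there is only lookup. Scaling the table up changes how convincing the answers are, not what kind of process produces them.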
Here is a link to the full article by John Searle on the Chinese room illustration.
By the way, Searle is a naturalist – not a theist, not a Christian. Now, let’s hear from a Christian scholar who can make more sense of this for us.
Here’s a related article on “strong AI” by Christian philosopher Jay Richards.
Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.
The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.
This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.
The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.
We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.
In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.
Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.
AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.
When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion. The computer’s behavior is just the determined result of its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not on the radar. You can’t get there from here.
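The determinism point can be seen in miniature. The toy “game” scoring rule below is my own invented example, but it shows the general fact: a program’s output is fixed entirely by its code plus its inputs, so the same inputs always produce the same outputs.

```python
# A program's output is fully determined by its code and its inputs.
def game_score(moves):
    """A toy 'game' scoring rule: a deterministic function of the input."""
    return sum(10 if m == "hit" else -5 for m in moves)

moves = ["hit", "miss", "hit"]
# Run it twice: identical inputs, identical outputs, every time.
assert game_score(moves) == game_score(moves)
print(game_score(moves))  # 15
```

Nothing in that process leaves room for the program to experience the game as fun, boring, or anything else; it only computes.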
4 thoughts on “Will computers and robots ever become self-aware?”
I always tell people the same thing: a computer has no ability to ask the true questions, and we have no idea how to program true abstract thought into a computer. Even with animals, it is hard to prove whether they have abstract thought. Deeper thinking requires abstract thought and a true ability to synthesize ideas.
This discussion always reminds me of the movie “Transcendence” starring Johnny Depp. The premise of the movie is that while dying of cancer, Depp’s character (Will Caster) and his team of computer scientists perform numerous scans of his brain that allow them to recreate his mind as a computer program following his death.
When Caster dies, the program begins to run, and to the initial delight of his partner, Evelyn Hall, it appears that they’ve successfully “saved” Caster. The program works, and it has all the thoughts and memories of Caster. She believes (though she later comes to question it) that this program is Caster brought back from the dead.
Later on in the film, as Evelyn brings in several other scientists to evaluate what’s going on, one of Caster’s former partners, Joseph Tagger (played by Morgan Freeman) asks the program, “Can you prove you’re self-aware?”
The program (displaying an image of Caster at the time) laughs and answers, “That’s an interesting question, Dr. Tagger. Can you prove that you are?”
This raises, I think, a greater philosophical conundrum that most materialists have no answer for. Consciousness itself is not something that can be empirically verified through science. It can only be experienced.
Even if AI advances to a point that interacting with an AI becomes indistinguishable from interaction with another human (and it’s getting scarily close), one will never be able to verify that the AI actually experiences consciousness any more than you can verify that any other human experiences it. Sure, you might rightly assume that since you experience consciousness, every other human being experiences it as well, but this is not something you can verify through empirical science. No amount of brain scans and physical studies of another person’s physical body can possibly demonstrate that there exists within them a “self” that experiences private mental states.
As such, this definitively proves to the materialist that there is at least one non-physical thing that exists, namely their own consciousness. It is something that only they can verify through their own experience (and not through empirical experience, mind you, since one does not use any of their five senses to experience their own private mental states). Thus, unless you’re going to argue that only physical things exist, you’d have to deny your own consciousness, in which case, why are we having this conversation?
I’m surprised sometimes by how many people don’t see the bigger problem with scenarios like this. Even if the computer were conscious, it wouldn’t be the same person as the one who died. It would be a replica. So they weren’t saving anybody. They were creating a new person and giving it all the memories and personality of the original person. After all, having scanned his brain, they could’ve started the computer BEFORE the original actually died.
Jehovah’s Witnesses face this same problem because of their denial of an immaterial soul that survives the death of the body combined with their belief that the resurrected body has nothing to do with the body that died. Essentially, on their view, Jehovah creates a perfect replica of the person from his perfect memory of them. If the Jehovah’s Witnesses are right, it would mean nobody is actually raised from the dead. Once you die, you’re gone for good. In the future, God will create perfect replicas.
Star Trek transporter technology suffers from the same problem of identity.
One issue I have with AI propagandists is that many of them are promoting the idea that AI programs like ChatGPT will soon be “sentient,” vaguely implying that an AI program could have a soul. Such assertions should be seen as nothing more than PROPAGANDA. Thus, talk of “AI consciousness” should be seen as an attempt to put AI on a pedestal and give it an importance and (especially) an authority as a propaganda tool that it should not have.