Will computers and robots ever become self-aware?

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article. (Full text available at archive.md)

In that article, Searle is writing about Watson, the IBM computer that was programmed to play Jeopardy. Can a computer that wins on Jeopardy be “human”? Searle says no. And his famous Chinese room example (discussed in the article) explains why.

Excerpt:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.
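For programmers, the room is easy to caricature in code. Below is a minimal sketch of my own (the rule entries are illustrative stand-ins, not Searle’s or IBM’s): the whole “program” is a lookup table, and no step of its execution involves understanding Chinese.

```python
# A toy sketch of Searle's Chinese Room: the "program" is a rule
# book that maps input symbols to output symbols. The entries here
# are illustrative stand-ins, not a real system.

RULE_BOOK = {  # "the database" and "the program" as one lookup table
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然，我说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def chinese_room(symbols: str) -> str:
    """Shuffle symbols per the rule book; no step requires
    understanding what any symbol means."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please repeat that."

print(chinese_room("你会说中文吗？"))  # convincing output, zero comprehension
```

Watson is vastly more sophisticated than a dictionary lookup, but Searle’s point is that scaling up the rules and the speed adds more symbol manipulation, not understanding.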

Here is a link to the full article by John Searle on the Chinese room illustration.

By the way, Searle is a naturalist – not a theist, not a Christian. Now, let’s hear from a Christian scholar who can make more sense of this for us.

Here’s a related article on “strong AI” by Christian philosopher Jay Richards.

Excerpt:

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion. The computer’s behavior is just the determined result of its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not on the radar. You can’t get there from here.
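To make the determinism point concrete, here is a minimal sketch (a toy of my own, not anything from the podcast or articles above): a tiny “game” whose next state is fully fixed by its rules and the player’s inputs.

```python
# Illustrative toy: a program's output is a fixed function of its
# code and its inputs. Run it twice with the same inputs and you
# get the same result; nothing in the run depends on awareness.

def game_step(state: int, player_input: str) -> int:
    """Toy game rule: these branches fully determine the next state."""
    if player_input == "up":
        return state + 1
    if player_input == "down":
        return state - 1
    return state  # unrecognized input leaves the state unchanged

moves = ["up", "up", "down"]
state = 0
for move in moves:
    state = game_step(state, move)

print(state)  # always prints 1, on every machine, on every run
```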

Six reasons why you should believe in non-physical souls

This podcast is a must-listen. Please take the time to download it and listen to it. I guarantee that you will love it. I even recommended it to my Dad, and I almost never do that.

Details:

In this podcast, J. Warner examines the evidence for the existence of the mind (and, inferentially, the soul) as he looks at six classic philosophical arguments. Jim also briefly discusses Thomas Nagel’s book Mind and Cosmos and the limitations of physicalism.

The MP3 file is here. (67 MB, 72 minutes)

Topics:

  • Atheist Thomas Nagel’s latest book “Mind and Cosmos” makes the case that materialism cannot account for the evidence of mental phenomena
  • Nagel writes in this recent New York Times article that materialism cannot account for the reality of consciousness, meaning, intention and purpose
  • Quote from the Nagel article:

Even though the theistic outlook, in some versions, is consistent with the available scientific evidence, I don’t believe it, and am drawn instead to a naturalistic, though non-materialist, alternative. Mind, I suspect, is not an inexplicable accident or a divine and anomalous gift but a basic aspect of nature that we will not understand until we transcend the built-in limits of contemporary scientific orthodoxy.

  • When looking at this question, it’s important not to have our conclusions pre-determined by presupposing materialism or atheism
  • If your mind/soul doesn’t exist and you are a purely physical being, then that is a defeater for Christianity, so we need to respond
  • Traditionally, Christians have been committed to a view of human nature called “dualism” – human beings are souls who have bodies
  • The best way* to argue for the existence of the soul is using philosophical arguments

The case:

  • The law of identity says that A = B only if A and B have the exact same properties
  • If A = the mind and B = the brain, then is A identical to B?
  • Wallace will present six arguments to show that A is not identical to B, because they have different properties (the law is formalized in the sketch after this list)
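For readers who want the underlying logic spelled out, here is one standard rendering (my gloss, using Leibniz’s law; the notation is mine, not from the podcast):

```latex
% Indiscernibility of identicals (Leibniz's law):
% if A and B are identical, they share every property P.
A = B \;\rightarrow\; \forall P \,\bigl(P(A) \leftrightarrow P(B)\bigr)

% Contrapositive, the form the six arguments below use:
% if some property holds of the mind but not of the brain,
% then the mind is not identical to the brain.
\exists P \,\bigl(P(\text{mind}) \wedge \neg P(\text{brain})\bigr)
\;\rightarrow\; \text{mind} \neq \text{brain}
```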

Not every one of the arguments below may make sense to you, but you will probably find one or two that strike you as correct. Some of the points are more illustrative than persuasive, like #2. However, I do find #3, #5 and #6 persuasive.

1) First-person access to mental properties

  • Thought experiment: Imagine your dream car, and picture it clearly in your mind
  • If we invited an artist to come and sketch out your dream car, then we could see your dream car’s shape on paper
  • This concept of your dream car is not something that people can see by looking at your brain structure
  • Physical properties can be physically accessed, but the properties of your dream car are privately accessed

2) Our experience of consciousness implies that we are not our bodies

  • The common-sense notion of personhood is that we own our bodies, but we are not our bodies

3) Persistent self-identity through time

  • Thought experiment: replacing the parts of an old car with new parts, one piece at a time
  • When you change even the smallest part of a physical object, it changes the identity of that object
  • Similarly, your body is undergoing changes constantly over time
  • Every cell in your body is different from the ones in the body you had 10 years ago
  • Even your brain cells undergo changes (see this from New Scientist – WK)
  • If you are the same person you were 10 years ago, then you are not your physical body

4) Mental properties cannot be measured like physical objects

  • Physical objects can be measured (e.g., their weight and size can be expressed in physical units)
  • Mental properties cannot be measured

5) Intentionality or About-ness

  • Mental entities can be about realities outside of themselves, including physical things
  • A tree is not about anything; it is just a physical object
  • But you can have a thought about the tree out in the garden that needs water

6) Free will and personal responsibility

  • If humans are purely physical, then all our actions are determined by sensory inputs and genetic programming
  • Biological determinism is not compatible with free will, and free will is required for personal responsibility
  • Our experience of moral choices and moral responsibility requires free will, and free will requires minds/souls

He spends the last 10 minutes of the podcast responding to naturalistic objections to the mind/soul hypothesis.

*Now in the podcast, Wallace does say that scientific evidence is not the best kind of evidence to use when discussing this issue of body/soul and mind/brain. But I did blog before about two pieces of evidence that I think are relevant to this discussion: corroborated near-death experiences and mental effort.

You might remember that Dr. Craig brought up the issue of substance dualism, and the argument from intentionality (“aboutness”), in his debate with the naturalist philosopher Alex Rosenberg, so this argument about dualism is battle-ready. You can add it to your list of arguments for Christian theism along with all the other arguments like the Big Bang, the fine-tuning, the origin of life, stellar habitability, galactic habitability, irreducible complexity, molecular machines, the Cambrian explosion, the moral argument, the resurrection, biological convergence, and so on.