
Will computers and robots ever become self-aware?

Let’s take a closer look at a puzzle

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article.

In that article, Searle is writing about Watson, the IBM computer that was programmed to play Jeopardy. Can a computer that wins on Jeopardy be “human”? Searle says no, and his famous Chinese room thought experiment (discussed in the article) explains why.

Excerpt:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.

Here is a link to the full article by John Searle on the Chinese room illustration.
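For programmers, the mechanics of the room can be sketched as a lookup program. This is a toy sketch of my own – the phrases and rulebook entries are invented for illustration – but it shows how correct-looking answers can be produced with no understanding anywhere in the system:

```python
# A minimal sketch of Searle's Chinese room as a lookup program.
# The "rulebook" is a hypothetical table mapping input symbols to output
# symbols; nothing in this program understands what the symbols mean.

RULEBOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",  # "What is your name?" -> "My name is Xiao Ming"
}

def chinese_room(symbols: str) -> str:
    """Follow the rulebook mechanically, exactly as the person in the room does."""
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # the room answers correctly without understanding
```

The person in the room, the CPU, and this function are all doing the same thing: syntax without semantics.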

By the way, Searle is a naturalist – not a theist, not a Christian. Now, let’s hear from a Christian scholar who can make more sense of this for us.

Here’s a related article on “strong AI” by Christian philosopher Jay Richards.

Excerpt:

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

When I am playing a game on the computer, I know exactly why it is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion, acting on its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field: I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not even on the radar. You can’t get there from here.
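For the record, here is the kind of “search problem” that AI research actually solves – a breadth-first search finding a path through a graph. The maze graph below is my own toy example; the point is that the program explores states mechanically, and at no step is anything aware of anything:

```python
# Breadth-first search: a classic AI "search problem".
# The algorithm expands states in order of distance from the start;
# it is pure bookkeeping over a queue and a visited set.
from collections import deque

def bfs_path(graph, start, goal):
    """Return a shortest path from start to goal as a list of nodes, or None."""
    queue = deque([[start]])  # queue of partial paths
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(maze, "A", "E"))  # ['A', 'B', 'D', 'E']
```

That is useful, impressive engineering – and it is symbol shuffling all the way down.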

Study: the early Earth’s atmosphere contained oxygen

Christianity and the progress of science

Here’s a paper published in the prestigious peer-reviewed science journal Nature, entitled “The oxidation state of Hadean magmas and implications for early Earth’s atmosphere”. This paper is significant because it undermines naturalistic scenarios for the origin of life.

Evolution News explains what the paper is about.

Excerpt:

A recent Nature publication reports a new technique for measuring the oxygen levels in Earth’s atmosphere some 4.4 billion years ago. The authors found that by studying cerium oxidation states in zircon, a compound formed from volcanic magma, they could ascertain the oxidation levels in the early earth. Their findings suggest that the early Earth’s oxygen levels were very close to current levels.

[…]Miller and Urey conducted experiments to show that under certain atmospheric conditions and with the right kind of electrical charge, several amino acids could form from inorganic compounds such as methane, ammonia, and water. Several experiments have been done using various inorganic starting materials, all yielding a few amino acids; however, one key aspect of all of these experiments was the lack of oxygen.

If the atmosphere has oxygen (or other oxidants) in it, then it is an oxidizing atmosphere. If the atmosphere lacks oxygen, then it is either inert or a reducing atmosphere. Think of a metal that has been left outside, maybe a piece of iron. That metal will eventually rust. Rusting is the result of the metal being oxidized. With organic reactions, such as the ones that produce amino acids, it is very important that no oxygen be present, or it will quench the reaction. Scientists, therefore, concluded that the early Earth must have been a reducing environment when life first formed (or the building blocks of life first formed) because that was the best environment for producing amino acids. The atmosphere eventually accumulated oxygen, but life did not form in an oxidative environment.

The problem with this hypothesis is that it is based on the assumption that organic life must have formed from inorganic materials. That is why the early Earth must have been a reducing atmosphere. Research has been accumulating for more than thirty years, however, suggesting that the early Earth likely did have oxygen present.

[…]Their findings not only showed that oxygen was present in the early Earth atmosphere, something that has been shown in other studies, but that oxygen was present as early as 4.4 billion years ago. This takes the window of time available for life to have begun, by an origin-of-life scenario like the RNA-first world, and reduces it to an incredibly short amount of time. Several factors need to coincide in order for nucleotides or amino acids to form from purely naturalistic circumstances (chance and chemistry). The specific conditions required already made purely naturalist origin-of-life scenarios highly unlikely. Drastically reducing the amount of time available, adding that to the other conditions needing to be fulfilled, makes the RNA world hypothesis or a Miller-Urey-like synthesis of amino acids simply impossible.

So here’s where we stand. If you are a materialist, then you need a reducing atmosphere on the early Earth in order to get organic building blocks (amino acids) from inorganic materials, because producing those building blocks requires that the atmosphere be oxygen-free. The problem with this new research, which confirms previous research, is that the early Earth’s atmosphere contained abundant oxygen – levels close to what we have today. This is lethal to naturalistic scenarios for creating the building blocks of life on the Earth’s surface.

Other problems

If you would like to read a helpful overview of the problems with a naturalistic scenario for the origin of life, check out this article by Casey Luskin.

Excerpt:

The “origin of life” (OOL) is best described as the chemical and physical processes that brought into existence the first self-replicating molecule. It differs from the “evolution of life” because Darwinian evolution employs mutation and natural selection to change organisms, which requires reproduction. Since there was no reproduction before the first life, no “mutation – selection” mechanism was operating to build complexity. Hence, OOL theories cannot rely upon natural selection to increase complexity and must create the first life using only the laws of chemistry and physics.

There are so many problems with purely natural explanations for the chemical origin of life on earth that many scientists have already abandoned all hopes that life had a natural origin on earth. Skeptical scientists include Francis Crick (solved the 3-dimensional structure of DNA) and Fred Hoyle (famous British cosmologist and mathematician), who, in an attempt to retain their atheistic worldviews, then propose outrageously untestable cosmological models or easily falsifiable extra-terrestrial-origin-of-life / panspermia scenarios which still do not account for the natural origin of life. So drastic is the evidence that Scientific American editor John Horgan wrote, “[i]f I were a creationist, I would cease attacking the theory of evolution … and focus instead on the origin of life. This is by far the weakest strut of the chassis of modern biology.”

The article goes over the standard problems with naturalistic scenarios of the origin of life: wrong atmosphere, harmful UV radiation, interfering cross-reactions, oxygen levels, meteorite impacts, chirality, etc.
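Luskin’s opening point – that selection can only build complexity once there are replicators – can be illustrated with a toy simulation. Everything here (the target string, the fitness function, the population sizes) is an illustrative assumption of mine, not a model of prebiotic chemistry; the point is only that selection accumulates improvements precisely because offspring inherit their parent’s sequence:

```python
# Toy illustration: mutation + selection works only WITH reproduction.
import random

random.seed(0)           # deterministic run for the sketch
TARGET = "ATGC"          # hypothetical "functional" sequence (an assumption)
ALPHABET = "ATGC"

def fitness(s):
    """Count positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Copy the string with one randomly chosen position rewritten."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# WITH reproduction: offspring inherit the parent's sequence, so selection
# can accumulate improvements generation after generation.
population = ["".join(random.choice(ALPHABET) for _ in range(4)) for _ in range(20)]
start_best = fitness(max(population, key=fitness))
for _ in range(50):
    best = max(population, key=fitness)
    population = [best] + [mutate(best) for _ in range(19)]  # elitism: keep the best
end_best = fitness(max(population, key=fitness))
print(start_best, end_best)  # fitness never decreases and climbs toward 4

# WITHOUT reproduction there is no inheritance: each trial would be an
# independent random draw, and "selection" would have nothing to accumulate.
```

Before the first replicator exists, the loop above has nothing to run on – which is exactly why origin-of-life theories must rely on chemistry and physics alone.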

Most people who talk about intelligent design at the origin of life focus on the information problem – how do you get amino acids to form proteins, and how do you get nucleotide bases to code for amino acids? But the starting point for the sequencing problem is the construction of the amino acids themselves – there has to be a plausible naturalistic scenario for forming them in the first place.
