Tag Archives: Watson

Will computers and robots ever become self-aware?

Let’s take a closer look at a puzzle

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article.

In that article, Searle is writing about Watson, the IBM computer that was programmed to play Jeopardy. Can a computer that wins on Jeopardy be “human”? Searle says no. And his famous Chinese room example (discussed in the article) explains why.

Excerpt:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.
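To see how purely syntactic the room is, here is a minimal sketch of it as a program (my illustration, not Searle’s; the symbols and rules are invented). The “rule book” is just a lookup table, and nothing in it represents meaning:

```python
# A toy version of Searle's Chinese Room: the rule book is a lookup
# table from input symbols to output symbols. The "computer" never
# interprets the symbols; it only matches shapes and copies answers.
RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会",      # "Do you speak Chinese?" -> "Yes"
}

def the_room(symbols):
    """Apply the rule book to the incoming symbols."""
    return RULE_BOOK.get(symbols, "请再说一遍")  # fallback: "Please say that again"

print(the_room("你好吗"))  # prints 我很好 -- fluent output, zero understanding
```

From the outside the answers can look fluent; on the inside there is only pattern matching, which is exactly Searle’s point.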

Here is a link to the full article by John Searle on the Chinese room illustration.

By the way, Searle is a naturalist – not a theist, not a Christian. Now, let’s hear from a Christian scholar who can make more sense of this for us.

Here’s a related article on “strong AI” by Christian philosopher Jay Richards.

Excerpt:

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion, acting on its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems (the sketch below shows the sort of mechanical bookkeeping a search algorithm does), but consciousness is not on the radar. You can’t get there from here.
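To make that concrete, here is a minimal sketch (my illustration, with an invented graph) of breadth-first search, the kind of state-space exploration classic AI is built on. Every step is blind bookkeeping over symbols; nothing in the loop understands what the states mean.

```python
from collections import deque

# Minimal breadth-first search over an invented graph. The names
# "start" and "goal" mean nothing to the program; they are just keys.
GRAPH = {
    "start": ["a", "b"],
    "a": ["goal"],
    "b": ["a"],
    "goal": [],
}

def bfs(start, goal):
    """Return a shortest path from start to goal, or None.
    Pure mechanical bookkeeping: enqueue, dequeue, compare, repeat."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in GRAPH[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

print(bfs("start", "goal"))  # ['start', 'a', 'goal']
```

This is genuinely useful engineering, and it is also obviously not consciousness: swap the labels for gibberish and the program behaves identically.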

Study: the early Earth’s atmosphere contained oxygen

Apologetics and the progress of science

Here’s a paper published in the prestigious peer-reviewed science journal Nature, entitled “The oxidation state of Hadean magmas and implications for early Earth’s atmosphere”. This paper is significant because it undermines naturalistic scenarios for the origin of life.

Evolution News explains what the paper is about.

Excerpt:

A recent Nature publication reports a new technique for measuring the oxygen levels in Earth’s atmosphere some 4.4 billion years ago. The authors found that by studying cerium oxidation states in zircon, a mineral that forms in volcanic magma, they could ascertain the oxidation levels in the early earth. Their findings suggest that the early Earth’s oxygen levels were very close to current levels.

[…]Miller and Urey conducted experiments to show that under certain atmospheric conditions and with the right kind of electrical charge, several amino acids could form from inorganic compounds such as methane, ammonia, and water. Several experiments have been done using various inorganic starting materials, all yielding a few amino acids; however, one key aspect of all of these experiments was the lack of oxygen.

If the atmosphere has oxygen (or other oxidants) in it, then it is an oxidizing atmosphere. If the atmosphere lacks oxygen, then it is either inert or a reducing atmosphere. Think of a metal that has been left outside, maybe a piece of iron. That metal will eventually rust. Rusting is the result of the metal being oxidized. With organic reactions, such as the ones that produce amino acids, it is very important that no oxygen be present, or it will quench the reaction. Scientists, therefore, concluded that the early Earth must have been a reducing environment when life first formed (or the building blocks of life first formed) because that was the best environment for producing amino acids. The atmosphere eventually accumulated oxygen, but life did not form in an oxidative environment.

The problem with this hypothesis is that it is based on the assumption that organic life must have formed from inorganic materials. That is why the early Earth must have been a reducing atmosphere. Research has been accumulating for more than thirty years, however, suggesting that the early Earth likely did have oxygen present.

[…]Their findings not only showed that oxygen was present in the early Earth atmosphere, something that has been shown in other studies, but that oxygen was present as early as 4.4 billion years ago. This takes the window of time available for life to have begun, by an origin-of-life scenario like the RNA-first world, and reduces it to an incredibly short amount of time. Several factors need to coincide in order for nucleotides or amino acids to form from purely naturalistic circumstances (chance and chemistry). The specific conditions required already made purely naturalist origin-of-life scenarios highly unlikely. Drastically reducing the amount of time available, adding that to the other conditions needing to be fulfilled, makes the RNA world hypothesis or a Miller-Urey-like synthesis of amino acids simply impossible.

So here’s where we stand. If you are a materialist, then you need a reducing environment on the early Earth in order to get organic building blocks (amino acids) from inorganic materials. However, the production of these organic building blocks (amino acids) requires that the early Earth atmosphere be oxygen-free. And the problem with this new research, which confirms previous research, is that the early Earth contained huge amounts of oxygen – the same amount of oxygen as we have today. This is lethal to naturalistic scenarios for creating the building blocks of life on the Earth’s surface.

Other problems

If you would like to read a helpful overview of the problems with a naturalistic scenario for the origin of life, check out this article by Casey Luskin.

Excerpt:

The “origin of life” (OOL) is best described as the chemical and physical processes that brought into existence the first self-replicating molecule. It differs from the “evolution of life” because Darwinian evolution employs mutation and natural selection to change organisms, which requires reproduction. Since there was no reproduction before the first life, no “mutation – selection” mechanism was operating to build complexity. Hence, OOL theories cannot rely upon natural selection to increase complexity and must create the first life using only the laws of chemistry and physics.

There are so many problems with purely natural explanations for the chemical origin of life on earth that many scientists have already abandoned all hopes that life had a natural origin on earth. Skeptical scientists include Francis Crick (solved the 3-dimensional structure of DNA) and Fred Hoyle (famous British cosmologist and mathematician), who, in an attempt to retain their atheistic worldviews, then propose outrageously untestable cosmological models or easily falsifiable extra-terrestrial-origin-of-life / panspermia scenarios which still do not account for the natural origin of life. So drastic is the evidence that Scientific American editor John Horgan wrote, “[i]f I were a creationist, I would cease attacking the theory of evolution … and focus instead on the origin of life. This is by far the weakest strut of the chassis of modern biology.”3

The article goes over the standard problems with naturalistic scenarios of the origin of life: wrong atmosphere, harmful UV radiation, interfering cross-reactions, oxygen levels, meteorite impacts, chirality, etc.

Most people who are talking about intelligent design at the origin of life talk about the information problem – how do you get the amino acids to form proteins and how do you get nucleotide bases to code for amino acids? But the starting point for solving the sequencing problem is the construction of the amino acids – there has to be a plausible naturalistic scenario to form them.

Can naturalism account for the origin of the 20 amino acids in living systems?

Do the Miller-Urey experiments simulate the early Earth?

The origin of life

There are two problems related to the origin of the first living cell, on naturalism:

  1. The problem of getting the building blocks needed to create life – i.e. the amino acids
  2. The problem of creating the functional sequences of amino acids and proteins that can support the minimal operations of a simple living cell

Normally, I concede the first problem and grant the naturalist all the building blocks he needs. This is because step 2 is impossible. There is no way, on naturalism, to form the sequences of amino acids that will fold up into proteins, and then to form the sequences of proteins that can be used to form everything else in the cell, including the DNA itself. But that’s a topic for a separate post.

Today, let’s take a look at the problems with step 1.

The problem of getting the building blocks of life

Now you may have heard that some scientists managed to spark some gases to generate most of the 20 amino acids found in living systems. These experiments are called the “Miller-Urey” experiments.

The IDEA center has a nice summary of origin-of-life research that explains a few of the main problems with step 1.

Miller and Urey used the wrong gases:

Miller’s experiment requires a reducing methane and ammonia atmosphere;11, 12 however, geochemical evidence says the atmosphere was hydrogen, water, and carbon dioxide (non-reducing).15, 16 The only amino acid produced in such an atmosphere is glycine (and only when the hydrogen content is unreasonably high), which could not form the necessary building blocks of life.11

Miller and Urey didn’t account for UV or molecular instability:

Not only would UV radiation destroy any molecules that were made, but their own short lifespans would also greatly limit their numbers. For example, at 100ºC (the boiling point of water), the half-lives of the nucleobases adenine and guanine are 1 year, uracil is 12 years, and cytosine is 19 days20 (nucleic acids and other important biomolecules such as chlorophyll and hemoglobin have never been synthesized in origin-of-life type experiments19).
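A quick back-of-the-envelope sketch shows what those half-lives mean. Assuming simple first-order decay, the surviving fraction after time t is (1/2)^(t / half-life); plugging in the figures quoted above (a rough illustration, not a kinetic model):

```python
# Surviving fraction of each base after one year at 100 C, assuming
# simple first-order decay with the half-lives quoted above.
HALF_LIVES_YEARS = {
    "adenine": 1.0,
    "guanine": 1.0,
    "uracil": 12.0,
    "cytosine": 19 / 365,  # 19 days, expressed in years
}

def surviving_fraction(half_life_years, years):
    """First-order decay: N(t)/N0 = (1/2) ** (t / half_life)."""
    return 0.5 ** (years / half_life_years)

for base, half_life in HALF_LIVES_YEARS.items():
    frac = surviving_fraction(half_life, 1.0)
    print(f"{base}: {frac:.2e} of the original amount left after 1 year")
# cytosine: ~1.6e-06 -- essentially all of it is gone within a single year
```

At those rates the bases decompose far faster than any plausible prebiotic process could stockpile them.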

Miller and Urey didn’t account for molecular oxygen:

We all know that ozone in the upper atmosphere protects life from harmful UV radiation. However, ozone is composed of oxygen, which is the very gas that Stanley Miller-type experiments avoided, for it prevents the synthesis of organic molecules like the ones obtained from the experiments! Pre-biotic synthesis is in a “damned if you do, damned if you don’t” scenario. The chemistry does not work if there is oxygen because the atmosphere would be non-reducing, but if there is no UV-light-blocking oxygen (i.e. ozone – O3) in the atmosphere, the amino acids would be quickly destroyed by extremely high amounts of UV light (which would have been 100 times stronger on the early earth than it is today).20, 21, 22 This radiation could destroy methane within a few tens of years,23 and atmospheric ammonia within 30,000 years.15

And there were three other problems too:

At best the processes would likely create a dilute “thin soup,”24 destroyed by meteorite impacts every 10 million years.20, 25 This severely limits the time available to create pre-biotic chemicals and allow for the OOL.

Chemically speaking, life uses only “left-handed” (“L”) amino acids and “right-handed” (“R”) genetic molecules. This is called “chirality,” and any account of the origin of life must somehow explain the origin of chirality. Nearly all chemical reactions produce “racemic” mixtures – mixtures with products that are 50% L and 50% R.

Two more problems are not mentioned in the article. First, a non-peptide bond anywhere in the chain will ruin it: a working protein needs around 200 amino acids, and if even one of the bonds joining them is not a peptide bond, the chain will not function in a living system. Second, the experimenter has to intervene to prevent interfering cross-reactions that would stop the amino acids from forming.
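Those 50/50 requirements compound multiplicatively. Treating the handedness of each of 200 residues and the type of each of the 199 bonds between them as independent coin flips (a deliberate simplification, for illustration only), the joint probability of an all-left-handed, all-peptide-bond chain is roughly 1 in 10^120:

```python
import math

# Illustrative arithmetic only: model each residue's handedness and
# each link's bond type as an independent 50/50 event.
residues = 200
all_left_handed = 0.5 ** residues          # chirality at every position
all_peptide_bonds = 0.5 ** (residues - 1)  # a peptide bond at every link

joint = all_left_handed * all_peptide_bonds
print(f"joint probability ~ 10^{math.log10(joint):.0f}")  # ~ 10^-120
```

Real prebiotic chemistry is messier than a coin-flip model, but the arithmetic shows why chirality and bonding are counted as separate hurdles rather than one.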

Usually when you hear the origin of life debated, the participants skirt the problem of where the amino acids come from, but there is no reason not to make that an issue. The naturalist has to explain how the first living cell could come about naturalistically.



How likely is it for blind forces to sequence a functional protein by chance?

How likely is it that you could swish together amino acids randomly and come up with a sequence that would fold up into a functional protein?

Evolution News reports on research performed by Doug Axe at Cambridge University, and published in the peer-reviewed Journal of Molecular Biology.

Excerpt:

Doug Axe’s research likewise studies genes that, it turns out, show great evidence of design. Axe studied the sensitivities of protein function to mutations. In these “mutational sensitivity” tests, Dr. Axe mutated certain amino acids in various proteins, or studied the differences between similar proteins, to see how mutations or changes affected their ability to function properly.10 He found that protein function was highly sensitive to mutation, and that proteins are not very tolerant to changes in their amino acid sequences. In other words, when you mutate, tweak, or change these proteins slightly, they stop working. In one of his papers, he thus concludes that “functional folds require highly extraordinary sequences,” and that the prevalence of functional protein folds “may be as low as 1 in 10^77.”11 The extreme unlikelihood of finding functional proteins has important implications for intelligent design.

Just so you know, those footnotes say this:

[10.] Douglas D. Axe, “Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds,” Journal of Molecular Biology, Vol. 341:1295-1315 (2004); Douglas D. Axe, “Extreme Functional Sensitivity to Conservative Amino Acid Changes on Enzyme Exteriors,” Journal of Molecular Biology, Vol. 301:585-595 (2000).

[11.] Douglas D. Axe, “Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds,” Journal of Molecular Biology, Vol. 341:1295-1315 (2004).

And remember, you need a lot more than just one protein in order to create even the simplest living system. Can you generate that many proteins in the short time between when the Earth cooled and the first living cells appeared? Even if we spot the naturalist a prebiotic soup as big as the universe, churning out sequences as fast as possible, it’s unlikely that even one functional protein would appear in the time before first life.
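For a sense of scale, here is a rough sketch with my own round numbers (they are not from the article): give the naturalist one trial chain for every water molecule in Earth’s oceans, re-randomized every second for 500 million years, and compare the trial count to Axe’s 1-in-10^77 estimate:

```python
# Sense-of-scale sketch with hypothetical round numbers (mine, not the
# article's): generous trial counts vs. Axe's estimated prevalence.
P_FUNCTIONAL = 1e-77               # Axe (2004): functional-fold prevalence

chains_in_parallel = 5e46          # ~ one chain per ocean water molecule
seconds_available = 5e8 * 3.15e7   # ~500 million years, in seconds
retries_per_second = 1             # every chain re-randomized each second

total_trials = chains_in_parallel * seconds_available * retries_per_second
expected_hits = total_trials * P_FUNCTIONAL

print(f"total trials:  {total_trials:.1e}")   # ~ 7.9e62
print(f"expected hits: {expected_hits:.1e}")  # ~ 7.9e-15, effectively zero
```

Adjust the assumptions as you like; the point is that 1 in 10^77 is the price any undirected scenario has to pay for each functional fold.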


If you are building a protein for the FIRST TIME, you have to get it right all at once – not by building up to it gradually using supposed Darwinian mechanisms. That’s because there is no replication before you have the first replicator. The first replicator cannot rely on explanations that require replication to already be in place.