Tag Archives: Machine

Will computers and robots ever become self-aware?

Let’s take a closer look at a puzzle

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article.

In that article, Searle is writing about Watson, the IBM computer that was programmed to play Jeopardy. Can a computer that wins on Jeopardy be “human”? Searle says no. And his famous Chinese room example (discussed in the article) explains why.

Excerpt:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.

Here is a link to the full article by John Searle on the Chinese room illustration.
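The room’s procedure is nothing but rule-following over symbols, and that is easy to make concrete. Here is a minimal sketch in Python – the symbols and rulebook entries are invented for illustration:

```python
# A toy "Chinese room": the operator follows rules mapping input
# symbols to output symbols, with no grasp of what either side means.
# The phrases and rulebook below are hypothetical examples.

RULEBOOK = {
    "你好吗": "我很好",    # roughly: "How are you?" -> "I am fine"
    "你是谁": "我是学生",  # roughly: "Who are you?" -> "I am a student"
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rulebook dictates for the input symbols.

    Nothing in this function represents meaning; it is pure table
    lookup, which is Searle's point about symbol manipulation."""
    return RULEBOOK.get(symbols, "不知道")  # default: "don't know"

print(chinese_room("你好吗"))  # -> 我很好
```

The function answers “correctly” whenever the rulebook happens to cover the question, yet at no point does anything in the program understand the symbols it shuffles.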

By the way, Searle is a naturalist – not a theist, not a Christian. Now, let’s hear from a Christian scholar who can make more sense of this for us.

Here’s a related article on “strong AI” by Christian philosopher Jay Richards, who holds a Ph.D. in philosophy from Princeton.

Excerpt:

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.
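Richards’ warning that we speak of computers “learning” only metaphorically can be made concrete. The sketch below, with made-up data and step size, “learns” the rule y = 2x by gradient descent – which is to say, it adjusts one number according to a fixed update formula the programmer wrote in advance:

```python
# What we call a computer "learning" is numeric parameter adjustment
# under a fixed rule. This fits w in y = w * x by gradient descent
# on squared error; the data and learning rate are made up.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

w = 0.0
for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # fixed update rule; no understanding involved

print(round(w, 3))  # -> 2.0
```

The loop converges on w ≈ 2, but every step is arithmetic dictated in advance; “learning” here names a numeric fitting procedure, nothing more.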

When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion, acting on its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not on the radar. You can’t get there from here.

Robot Rubio parrots identical talking point 4 times at ABC News debate

Marco Rubio with his allies: Democrat Chuck Schumer and RINO John McCain

The big exchange of the ABC News debate in New Hampshire last night was Chris Christie taking on Marco Rubio for his habit of using canned 25-second responses like some sort of conservative talking points robot. Basically, Chris Christie pointed out to the audience that Marco Rubio never speaks in specifics, but instead just repeats the same 25-second conservative talking point over and over. And, amazingly, Rubio immediately repeated the same talking point again, and again, and again. Christie kept interrupting to point it out to the audience.

Watch:

Even establishment RINO Hugh Hewitt could not defend Rubio:

Radio talk show host Hugh Hewitt and MSNBC’s Chris Matthews debate about Marco Rubio’s debate performance on Sunday morning’s Meet The Press. Hewitt, a Rubio supporter, says that after his talked-about over-repetition of a line about President Obama’s nefarious intent in last night’s New Hampshire debate, Rubio will be preparing for a “South Carolina brouhaha.”

Matthews challenges Hewitt on Rubio’s performance: “Is there a logic to doing it four times in a row?” Matthews asked. “Why did he do it four times in a row?”

Hewitt admits that what Chris Christie said during the debate is true: Rubio’s “staff had trained him” to say it that way.

FOUR TIMES IN A ROW:

Someone programmed the Rubio bot to speak that line!

Rubio campaigned for the Senate in Florida saying that he was opposed to amnesty. Then, once elected, he led the effort to give 20 million illegal immigrants a path to citizenship – so they could vote for bigger government. When he was running, his staff trained him to repeat anti-amnesty talking points; when he was elected, he led the fight for amnesty.

Here’s the full list of Rubio errors:

By contrast, Cruz fought against amnesty, opposes all bailouts, opposes all subsidies (e.g., ethanol), and got an A- rating for his response to the Supreme Court’s gay marriage decision.

This talking-point parroting has really given me pause about Rubio. I know that when he was running for Senate in Florida, he parroted a lot of talking points against amnesty. Then he co-sponsored the bill to give citizenship and voting rights to 20 million illegal immigrants. It makes me question whether to believe him about anything else, e.g., being pro-life. I know that he is being trained on pro-life rhetoric, but he’s short on pro-life accomplishments. Fool me once, shame on you; fool me twice, shame on me.

Reactions to the Robot Rubio meltdown

I found several lists of “winners and losers” for Saturday’s debate, as well. This one is from the Washington Post, no friend of Ted Cruz:

LOSERS

Marco Rubio: Where to start here? Rubio has been such a strong debater so far — and a steady hand on the campaign trail in general. And then he ran into Christie. The New Jersey governor hit Rubio for never having been a chief executive and for not having much to show for his time in the Senate. He seemed to knock Rubio off his game so much that Rubio wound up repeating a stock answer about President Obama — that Obama knows exactly what he’s doing in driving the country to the left — three times. It was conspicuous and very not-smooth.

They also thought that Ted Cruz won the debate, and that his very unscripted, authentic answer about his half-sister, which I talked about in a previous post, was “memorable”.

I don’t want Rubio as the nominee debating Hillary Clinton. He’s not ready to debate her, but Ted Cruz will wipe the floor with her. He excels at debate – he was a national debate team champion, among other things.
