Kant Stop Husserling | Issue 05

Can Robots Think?

Imagine you get a new job. You sit in a room all day with a book containing a detailed code. Occasionally, a piece of paper is pushed under the door with indecipherable squiggles written on it, and you must find these squiggles in your book of code. The book will then direct you to write a new set of squiggles, which you transcribe and push back under the door.

Unbeknown to you, the squiggles were actually Chinese characters, and you have been taking part in a written conversation with the person on the other side of the door. This is John Searle’s “Chinese room” thought experiment, which argues that programmed robots are incapable of “thinking” in the conventional sense. Although you have been performing intelligent-seeming actions, you have been doing so unthinkingly, the way a computer would.
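The room’s procedure is pure symbol manipulation: match the incoming squiggles against a rule book, copy out the prescribed reply. A minimal sketch in Python makes the point (the phrases and rules here are invented for illustration, not Searle’s):

```python
# A toy "Chinese room": the operator matches incoming squiggles
# against a rule book and copies out the prescribed reply,
# never attaching any meaning to the symbols on either side.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会思考吗？": "当然会。",      # "Can you think?" -> "Of course."
}

def room(squiggles: str) -> str:
    # A pure lookup: the "conversation" requires no understanding at all.
    return RULE_BOOK.get(squiggles, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # prints 我很好，谢谢。
```

To the person outside the door, the replies look like fluent conversation; inside, there is only lookup and transcription.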

The argument, while superficially attractive, doesn’t hold much water. For one thing, the person in the Chinese room is merely the computer’s processor, not the computer as a whole. It might be more plausible to claim that the room as a whole, person and rule book included, is the thinking entity. This is analogous to the claim that human thought can’t be separated from all the inputs that inform it, such as emotions, brain physiology and physical sense-data.

For another, it doesn’t address the possibility of constructing synthetic brains that “learn” in a similar way to humans. This is the vision of films like Blade Runner (incidentally, the second-best film ever made). If we discount the existence of immaterial “souls,” then the mere fact that humans are made from biological material whereas robots are not doesn’t really matter.

The problems with defining the idea of “thinking” inform the influential “Turing test.” In 1950 Alan Turing argued that the idea of “thinking” is so vague that the only meaningful test of a computer’s ability to think is its ability to imitate humans. If we can have a text-based conversation with a computer without being able to tell whether our interlocutor is human (an idea picked up on, again, by Blade Runner), then we have no basis for claiming that that machine can’t “think.”

So how close are we to designing such machines? Not very. The 2012 Loebner Prize, an annual competition based on the Turing test, featured some hilarious entries. “Hi, how are you?” asked a judge. “Please rephrase as a proper question, instead of ‘Jim likes P,’” came the reply. Another machine asked, “Did you hold funerals for your relatives when they died?” A third asked for a hug. “I really like Lady Gaga,” said a fourth. “I think it’s the combination of the sound and the fashion-look that appeals to me. I’m a cat.”
This article first appeared in Issue 5, 2013.
Posted 6:30pm Sunday 24th March 2013 by Erma Dag.