Yesterday I explained the Chinese Room argument, which tries to show that a digital computer could simulate human understanding without “really” understanding anything. Today I’ll show why I find this argument deeply unconvincing.
My counterargument is the same one that thousands of other people have given over the years – so much so that I’d call it the standard reply. It may be standard, but it’s correct, so I see no reason to invent something new. This is the so-called “Systems Reply,” which says that, even if no single part of the room understands Chinese – neither the person, nor the library, nor the notebooks – the overall system does, in fact, understand.
Although Searle never gives a clear definition of “understanding” (as far as I know), I take his meaning to be the subjective experience of consciousness. That is, when I look at something red, it isn’t merely that a particular neuron fires in my brain; I actually, in some strange, inexplicable, unprovable way, see something that I can only describe as red. I have, in other words, a conscious experience. I am sentient.
Am I really claiming that a room with a Chinese-illiterate dude and a big library could, as a whole, be conscious and aware in the same sense that you and I are?
Yes, I am.
Searle is naturally skeptical of this response, as you probably are too. It sounds crazy. It sounds counterintuitive. But it’s only counterintuitive because all the minds we’re familiar with happen to run on biological neurons, so it just seems natural that neurons lead to minds.
By the way, let’s be clear about what’s really happening in this room. This is not, as you might imagine, just a busy guy in a large office with a few hundred books. We’re talking about trillions of books, a building the size of a thousand Astrodomes, a tireless and immortal man who takes millennia to answer an input as simple as “hello.” It is a vast system, as unfathomably complex as the human brain itself. We are, in effect, emulating a brain, but instead of using mechanical hardware, we happen to have chosen paper and pencil. Doesn’t matter. What matters is the system.
What would Searle say to the Systems Reply? In fact, we know exactly what he would say, because he’s said it:
My response to the systems theory is quite simple: Let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way the system could understand because the system is just a part of him.
Yet the Systems Reply still applies. We’ve merely decided to use one brain to emulate another, just as my PC can emulate an NES and play video games. The underlying PC hardware and operating system know nothing at all about how to run Super Mario Bros., but the NES emulator running on the PC handles it just fine. In the same way, the poor sap who memorized the rules may know nothing about Chinese, but the second mind he’s created through his emulation will understand it just fine.
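The layered-emulation point can be sketched in a few lines of Python. This is a toy illustration, not a real conversation model: the rule table below is a hypothetical stand-in for the room’s trillions of books. Note that the lookup loop itself never interprets the symbols it shuffles – whatever competence exists lives in the table-plus-loop as a whole system.

```python
# Toy sketch of the Chinese Room as a symbol-shuffling system.
# The "host" code below knows nothing about Chinese; it blindly
# matches input symbols against a rule table and emits the listed
# output, just as the man in the room follows his instructions.

RULES = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗？": "会。",    # "do you speak Chinese?" -> "yes."
}

def room(symbols: str) -> str:
    """Look up the input symbols and return the prescribed reply.

    No part of this function understands the symbols it handles;
    any apparent understanding belongs to the system as a whole.
    """
    return RULES.get(symbols, "？")  # unknown input -> "?"

print(room("你好"))  # the lookup code never interprets what it prints
```

Swapping this Python interpreter for a man with pencil and paper changes the hardware, not the system – which is exactly the Systems Reply’s point.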
The odd thing about the Chinese Room argument is that it could be applied just as easily to claim that humans don’t understand anything either. After all, the brain is just a bunch of neurons and glial cells and so on, and each of those is just a bunch of molecules that obey the laws of physics in the usual way. How can a molecule be intelligent? It can’t, of course. The idea of a molecule being intelligent is just as absurd as a book being intelligent. The idea of a massive collection of molecules, arranged in highly specific ways (like the human brain), achieving sentience, is no less absurd, but we accept it because experience has shown it must be true. Likewise, then, we must accept that the Chinese Room can be sentient.
Searle’s fundamental problem, as far as I can tell, is that he just can’t accept results that seem too counterintuitive. But the world of AI research is not an intuitive place – and neither are the worlds of math or science, for that matter. Something can be incredibly strange, but still true.
By the way, this is not all merely academic. The answer to this question has powerful ethical implications.
Consider this question: is it wrong to torture a robot?
If we believe that digital robots can have no conscious experience, that they merely simulate human responses, then torturing a robot isn’t actually possible. They may scream or twitch or do whatever else, but it is all, in some sense, “fake.” The robot can’t really experience pain. On the other hand, if we are willing to accept that machines can have subjective experiences fully on a par with a human being’s, then torturing a robot becomes monstrous (as, in fact, I believe it would be).
And trust me – sooner or later, there will be robots. We’d better get this one right.
What do you think?