Yesterday I explained the Chinese Room argument, which tries to show that a digital computer could simulate human understanding without “really” understanding anything. Today I’ll show why I find this argument deeply unconvincing.
My counterargument is the same one that thousands of other people have given over the years – so much so that I’d call it the standard reply. It may be standard, but it’s correct, so I see no reason to invent something new. This is the so-called “Systems Reply,” which says that, even if no single part of the room understands Chinese – neither the person, nor the library, nor the notebooks – the overall system does, in fact, understand.
Although Searle never gives a clear definition of “understanding” (as far as I know), I take his meaning to be the subjective experience of consciousness. That is, when I look at something red, it isn’t merely that a particular neuron fires in my brain; I actually, in some strange, inexplicable, unprovable way, see something that I can only describe as red. I have, in other words, a conscious experience. I am sentient.
Am I really claiming that a room with a Chinese-illiterate dude and a big library could, as a whole, be conscious and aware in the same sense that you and I are?
Yes, I am.
Searle is naturally skeptical of this response, as you probably are too. It sounds crazy. It sounds counterintuitive. But it’s only counterintuitive because all the minds we’re familiar with happen to run on biological neurons, so it just seems natural that neurons lead to minds.
By the way, let’s be clear about what’s really happening in this room. This is not, as you might imagine, just a busy guy in a large office with a few hundred books. We’re talking about trillions of books, a building the size of a thousand Astrodomes, a tireless and immortal man who takes millennia to answer an input as simple as “hello.” It is a vast system, as unfathomably complex as the human brain itself. We are, in effect, emulating a brain, but instead of using mechanical hardware, we happen to have chosen paper and pencil. Doesn’t matter. What matters is the system.
What would Searle say to the Systems Reply? In fact, we know exactly what he would say, because he’s said it:
My response to the systems theory is quite simple: Let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way the system could understand because the system is just a part of him.
Yet the Systems Reply still applies. We’ve merely decided to use one brain to emulate another, just as my PC can emulate an NES and play video games. The underlying PC hardware and operating system know nothing at all about how to run Super Mario Bros., but the NES emulator running on the PC understands it just fine. In the same way, the poor sap who memorized the rules may know nothing about Chinese, but the second mind he’s created through his emulation will understand it just fine.
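In case the emulation point seems hand-wavy, here is a toy sketch (my own illustration, in Python, with made-up opcodes) of how a host can run a guest program without knowing anything about what that program means. The host loop only shuffles symbols according to its rule table; whatever the program is “about” lives entirely in the program itself.

```python
# A minimal host "machine" that blindly executes a guest program.
# The host knows only how to shuffle symbols; it has no idea what
# the program it runs is "about".

def run(program, memory):
    """Step through guest instructions one at a time."""
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":      # SET addr value
            memory[args[0]] = args[1]
        elif op == "ADD":    # ADD dst src
            memory[args[0]] += memory[args[1]]
        elif op == "PRINT":  # PRINT addr
            print(memory[args[0]])
        pc += 1
    return memory

# This particular guest program "knows" how to add two numbers;
# the host loop above knows nothing beyond its rule table.
run([("SET", 0, 2), ("SET", 1, 3), ("ADD", 0, 1), ("PRINT", 0)], {})
```

The division of labor is the same whether the host is a PC running an NES emulator or a man running a rulebook: the layer doing the understanding is the emulated one, not the substrate.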
The odd thing about the Chinese Room argument is that it could be applied just as easily to claim that humans don’t understand anything either. After all, the brain is just a bunch of neurons and glial cells and so on, and each of those is just a bunch of molecules that obey the laws of physics in the usual way. How can a molecule be intelligent? It can’t, of course. The idea of a molecule being intelligent is just as absurd as a book being intelligent. The idea of a massive collection of molecules, arranged in highly specific ways (like the human brain), achieving sentience, is no less absurd, but we accept it because experience has shown it must be true. Likewise, then, we must accept that the Chinese Room can be sentient.
Searle’s fundamental problem, as far as I can tell, is that he just can’t accept results that seem too counterintuitive. But the world of AI research is not an intuitive place – and neither are the worlds of math or science, for that matter. Something can be incredibly strange, but still true.
By the way, this is not all merely academic. The answer to this question has powerful ethical implications.
Consider this question: is it wrong to torture a robot?
If we believe that digital robots can have no conscious experience, that they merely simulate human responses, then torturing a robot isn’t actually possible. They may scream or twitch or do whatever else, but it is all, in some sense, “fake.” The robot can’t really experience pain. On the other hand, if we are willing to accept that machines can have subjective experiences on fully the same level as a human being, then torturing a robot becomes monstrous (as, in fact, I believe it would be).
And trust me – sooner or later, there will be robots. We’d better get this one right.
What do you think?
I think I’ll take a look at your last question: is it wrong to torture a robot? As for the systems argument, I agree with your conclusion for now, but I also know very little about this subject.
Robots could easily be given intelligence once someone (maybe you) figures out intelligence and how to create sentient, sapient machinery. However, robots would not necessarily all be given sentience, so it would depend on the type of robot. Therefore, assuming you had a reason to torture a robot and it wasn’t sentient, there wouldn’t be any ethical issues. However, if it did have sentience, then we would be responsible for treating it much the same as another human being, albeit with metal parts and circuits.
Those are my thoughts, at least.
I agree, it all comes down to whether the robot has sentience, the subjective experience of consciousness. Figuring out which robots have that, though, could turn out to be very difficult – if not impossible. After all, to this day we have no idea what – if anything – insects actually “feel” when they’re hurt.
“Let the individual internalize all of these elements of the system.”
This is kind of what I was getting at yesterday… Since we do not know how ‘understanding’ works, I reject the assumption that this can be done without achieving something that we would call understanding.
Similarly, I reject the assumption that the books cannot have understanding… if we accept the stipulation that the books can be created to function as the experiment requires, then they already defy all manner of ‘common sense.’
On torturing robots…
It’s interesting to consider what torture is in this case… a massive override command that plays contradictory directives against each other.
We will have to be careful not to define it too broadly or we could have our Undo Keys taken away.
Yeah, you’re right. Fundamentally it’s very hard to say what does or does not constitute “understanding.” Our common sense fails us in cases like these.
The robot torture question is one that bothers me, actually. As I’m building my own AI, I’ve incorporated the concepts of positive and negative feedback – that is, punishment and reward. When I click the negative feedback button, does the AI feel pain? Right now, the mind is still very simple, so I’m confident the answer is “no.” But at what point does it become “yes”? If I were hurting it, how would I know?
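To give a rough idea of what those buttons do (this is only a bare-bones sketch of the general reward/punishment idea, with made-up names, not the actual implementation): each press just nudges a number attached to whatever the AI last did.

```python
# Toy reward/punishment update. Clicking a feedback button nudges the
# weight of the agent's most recent action up or down; nothing more.
# (Names like `feedback` and `action_weights` are hypothetical.)
action_weights = {"greet": 0.5, "ignore": 0.5}

def feedback(last_action, reward, learning_rate=0.1):
    """reward > 0 for the positive button, reward < 0 for the negative one."""
    action_weights[last_action] += learning_rate * reward
    # Clamp so a weight can't drift below zero.
    action_weights[last_action] = max(0.0, action_weights[last_action])

feedback("ignore", -1.0)  # one click of the negative-feedback button
print(action_weights)     # {'greet': 0.5, 'ignore': 0.4}
```

Nothing in that update obviously feels anything. The uncomfortable question is whether that stays true once the system underneath it gets complicated enough.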
And what is an okay/necessary amount of pain?
Is there any good pain? You might smack it to keep it from falling off a cliff. Perhaps ‘torture’ also relies on intent.
For an AI, confusion could be a kind of pain, but that seems to be a necessary part of learning and/or operating without complete knowledge.
If there is a continuum from agony to ecstasy, can you calibrate for neutral? Or are you morally obligated to provide ecstasy wherever possible?
Sorry for the late response, but couldn’t we define “understanding” as possessing instructions for interpreting symbols? Those who “understand” Chinese acquired instructions for interpreting Chinese symbols as they developed in their social group. When they “understand” Chinese, they are referencing their stored instructions for interpreting the symbols. The same thing is happening in the Chinese Room, but the person in the room doesn’t have instructions for interpreting Chinese. He has instructions for writing down certain symbols when he sees certain symbols on a sheet of paper. It’s just that the symbols on the paper have other meanings to others who acquired a different set of instructions. Symbols can be interpreted as anything. They require instructions to interpret them a particular way.
But how do we decide whether the Chinese Room system (not just the person, but the whole system) possesses instructions for interpreting symbols? Surely the most logical way is to show it a symbol and request interpretation. “How do you interpret this symbol?” And then the system responds with a convincing interpretation.
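Put in concrete terms (a toy illustration of my own, with invented entries): the man’s rulebook maps symbols to more symbols, while a speaker’s internalized “rulebook” maps symbols to interpretations he can report.

```python
# Two toy "rulebooks" (the entries are invented for illustration).

# The room's rulebook: Chinese symbols in, Chinese symbols out.
room_rules = {"你好": "你好，很高兴认识你"}

# A speaker's internalized "rulebook": symbols mapped to what they mean.
speaker_rules = {"你好": "a greeting, roughly 'hello'"}

def interpret(rules, symbol):
    # Both lookups work the same way; only the contents differ.
    return rules.get(symbol, "no entry for this symbol")

print(interpret(room_rules, "你好"))     # answers with more symbols
print(interpret(speaker_rules, "你好"))  # answers with an interpretation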