A variation of the Chinese Room intuition

One question the philosophy of mind deals with is: how can we know whether a computer thinks, is alive, or has some kind of consciousness?

Nowadays, computers do many things that seem very human-like, yet we generally believe that mere "things" have no consciousness (I don't think my TV is conscious, even though it has many functions). So there is a tension between these two ways of thinking, and the question of whether things actually think or feel becomes more challenging as machines grow increasingly complex. To, hopefully, shed some light on the matter, there is a thought experiment (an intuition) that has helped philosophers discuss the problem: the Chinese Room, originally proposed by John Searle in 1980.

In the following paragraphs I will present my modified version of it; afterwards, I will point out the change I made and the reason for it.


Imagine a closed room, without windows, filled with file cabinets, each drawer labeled with the image of various Chinese characters. Outside the room there are Chinese speakers who can pass a paper through the closed door with a question on it, which must be answered by the entity inside the room. From the answers that come back, the people outside must judge whether whatever is inside the room understands what is being asked. So far, they seem totally convinced that whatever lies inside the room and answers the questions could very well be a person. The outsiders conclude that there is understanding inside the room.

But what is actually inside the room is a very clever dog, trained to recognize the shapes on the paper and bring the contents of the matching drawer to the door. The files contain a reasonable answer to each question, rather like entries in an encyclopedia.

As the example stands, the conclusion is that, although to the outsiders it may look as if someone inside understands the questions and answers them, the dog in fact doesn't understand what it is doing at all. And although the dog might improve its response time by fetching files faster, it can never learn the content of what it is answering.

This thought experiment is designed to show that, although an artificial intelligence may seem conscious to an outside observer, the mechanism inside requires no understanding to perform a seemingly intelligent act. My contribution to the Chinese Room example concerns who is inside: the original had a person who doesn't understand Chinese, following rules for matching the symbols. However, it is a bit unclear at what point that human stops understanding what he is doing. With a dog, it is far more intuitive that it doesn't really understand Chinese, or what it is doing.
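To make this concrete, here is a minimal sketch (in Python; the question/answer pairs are invented for illustration) of the room as a pure lookup mechanism. Like the dog, it only matches shapes and fetches a filed answer, attaching no meaning to either:

```python
# The "file cabinets": canned replies indexed by the exact question text.
# Neither the questions nor the answers mean anything to the program.
FILES = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's very nice."
}

def room(question: str) -> str:
    """Match the shapes on the incoming paper and hand back the filed answer."""
    # Fallback reply if no drawer matches: "Sorry, I don't understand."
    return FILES.get(question, "对不起，我不明白。")

print(room("你好吗？"))  # a fluent-looking reply, produced with zero comprehension
```

However fast or large such a lookup becomes, nothing in it ever comes to understand Chinese; it only maps one string of symbols to another.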

I hope that my version is slightly clearer than the original. I should mention that John Searle has better alternative scenarios, but this one is, hopefully, more intuitive.





     
