The Mind-Body Problem: Part 3 – Symbols and Context

Imagine you have been hired to work in a room with thousands of pigeonholes, numbered and indexed, containing small ceramic tiles with symbols on them. There is also a table and chair, two slots in a far wall labeled “In” and “Out,” and a large book. Your employer explains that should a slip of paper bearing similar symbols come through the “in” slot, you should take the paper, follow the instructions in the book, collect tiles from the pigeonholes according to those instructions, and slide the tiles in the correct order through the “out” slot.

You don’t happen to know what the symbols mean, and you don’t need to. You are well paid for this task, and you can spend your time reading or listening to music while waiting for slips of paper to come through the “in” slot.

Outside the room, visitors from China are told they may ask any question in Chinese by writing it down on a slip of paper and passing it through the “in” slot. Eventually, an answer in the form of one or more tiles will be passed back to them via the “out” slot. They are not told about you, however, nor are they given any information about the contents of the room. Neither the visitors nor you can see or hear anything past the slots.

The Chinese Room

“The Chinese Room” is a famous thought experiment by John Searle, intended to demonstrate that software alone cannot give a computer a “mind.” Before unpacking it, we need only a short recap of the concepts covered in the previous essays:

  • Our minds are not part of the material world, yet they are real things, because we think and are aware of our thinking. (Descartes)
  • Whatever substance a “brain” is made of has no bearing on whether mental states are present. (Turing)
  • Minds are, in essence, functional in nature; they are defined by what they do, not by how they arise or by what mechanism produces them. (Fodor)

The Chinese Room experiment expands on the ideas of Fodor and Turing by way of the “computational theory of mind,” the philosophical position that neural activity in the brain (or, if you like, electrical pathways on silicon chips, or “machines” reading strips of tape) manipulates inputs and internal states to produce outputs according to a series of rules. In short, the mind is not simply something that juggles information; it is a complete computational system in its own right.
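
To make the idea concrete, here is a minimal sketch of “rules manipulating inputs and internal states,” written in Python with an entirely made-up rule table in the spirit of a Turing machine; nothing in it comes from Searle, Turing, or Fodor. The machine transforms a string of symbols correctly without any notion of what the symbols mean.

  # Each rule maps (current_state, symbol_read) -> (symbol_to_write, head_move, next_state).
  # The states, symbols, and rules are hypothetical, chosen only for illustration.
  RULES = {
      ("start", "1"): ("0", +1, "start"),   # read 1: write 0, move right, stay in "start"
      ("start", "0"): ("1", +1, "start"),   # read 0: write 1, move right, stay in "start"
      ("start", "_"): ("_",  0, "halt"),    # blank marks the end of the input: stop
  }

  def run(tape):
      """Apply RULES to the tape until the machine halts; return the final tape."""
      tape = list(tape) + ["_"]             # append the blank end-marker
      state, head = "start", 0
      while state != "halt":
          write, move, state = RULES[(state, tape[head])]
          tape[head] = write
          head += move
      return "".join(tape).rstrip("_")

  print(run("10110"))   # prints "01001": symbols processed by rule, never understood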

No Experience Necessary … ?

Let’s dismantle Searle’s Chinese Room to see what’s actually being demonstrated. Your very cushy job doesn’t depend on your understanding of any of the symbols, either on the tiles or the slips of paper. Were the Chinese Room a computer, you would be the CPU, carrying out the instructions found in the book (the software), which tells you what to do with those slips of paper (data input) and what to do with the resultant tiles (data output). You probably wouldn’t be a very efficient substitute for a computer, but you would understand Chinese exactly as well as one does (which is to say, not at all), and that would be enough to perform the job.
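
If it helps, the room itself can be written down the same way. The rule book below is a hypothetical Python sketch, with placeholder questions and tile sequences invented for illustration; the point is only that the worker function returns the correct tiles while containing nothing that could be called understanding.

  # The "book": each incoming slip is matched to an ordered sequence of tiles.
  # Both the slips and the replies are placeholder examples, not a real system.
  RULE_BOOK = {
      "你好吗？":   ["我", "很", "好"],
      "现在几点？": ["现", "在", "三", "点"],
  }

  def worker(slip):
      """The person in the room (or the CPU): look the slip up in the book and
      return the tiles it dictates. No understanding is required or used."""
      return RULE_BOOK.get(slip, ["不", "知", "道"])   # default tiles for unknown slips

  print(worker("你好吗？"))   # ['我', '很', '好'], produced without knowing what any symbol means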

We can see in this comparison a similarity with, for example, Google Translate. It does a pretty good job translating text between languages and even does fairly well with idioms and slang. The difference between Google Translate and a human mind, however, is that while Google may be able to translate “Quarter-Pounder” as “Quart de livre,” Google’s system has no idea what a hamburger is. It may have an index of terms associated with the word “hamburger,” ranging from nouns like relish, mustard, pickle, and bun to descriptors like tasty, juicy, beefy, tender, and hot. But Google’s computers have no idea what any of those words mean.
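
As a rough illustration (and not a description of any real Google system), an “index of associated terms” might look like nothing more than the hypothetical table below: the word “hamburger” is linked to other words with made-up association strengths, and that linkage is all the system has. Nothing anywhere ties any of the words to the experience of eating one.

  # Hypothetical word-association index with invented strengths, for illustration only.
  ASSOCIATIONS = {
      "hamburger": {"bun": 0.9, "pickle": 0.7, "juicy": 0.6, "tasty": 0.5},
      "bun":       {"hamburger": 0.9, "bread": 0.8},
  }

  def related(word, top=3):
      """Return the most strongly associated words, which is all the system 'knows'."""
      links = ASSOCIATIONS.get(word, {})
      return sorted(links, key=links.get, reverse=True)[:top]

  print(related("hamburger"))   # ['bun', 'pickle', 'juicy'] and nothing more than that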

There is a complete disconnect between the syntax describing a hamburger and its semantics, its context in terms of experience. Computers cannot experience hamburgers (a human symbol) any more than you, as the worker in the Chinese Room, can read Chinese. It’s not part of your programming and isn’t necessary for you to function properly; the symbols representing those contexts are beyond your experience.

The Red Apple (Again)

To put things yet another way, consider describing the color red to a person blind from birth. You could certainly describe the physics of the color (light consists of photons of particular frequencies, and we interpret a narrow range of those wavelengths as what we call “red”), but how would you describe the process by which seeing the color red, as a contextual experience, elicits a mental response in the person seeing it?

The red of a ripened apple does not equate to the red of a gaping wound or the red of a brick wall, not because of physical circumstance alone, but because that color carries a context in each of those particular settings, one that often goes unarticulated between humans. Thus, there is no way to describe to a computer the human contexts those symbols hold. They are as subjective as they are unique to human beings. A computer system, even with the proper sensory inputs (light, vibration, GPS, etc.), cannot reproduce emotive states, and so it cannot experience the human contexts of “red.”

“Why do you cry?” — The Terminator

This is the key to understanding the “threat” of artificial intelligence. Consider the worker in the Chinese Room as an allegory for the internet. If someone googles “hamburger,” the internet isn’t going to get hungry. If someone streams The Terminator, the internet isn’t going to get any funny ideas. Despite the vast amount of information available to it (stores of pictures, video, and plot outlines for science fiction movies), the internet has no capability of understanding, through experience, what “world domination” could possibly mean, or “global annihilation,” or “our robot overlords require these human slaves to build the instrument of their own destruction,” in the context humans would experience.

Those concepts make for great movies for humans because they are part of the human contextual environment. They have no meaning to a computer. There is no context for computers to draw upon to arrive at the conclusions humans reach when they watch science fiction movies (or when they eat a hamburger), because computers cannot do those things in the ways human beings can.

And, presently, we simply do not have the technology to create machines that can understand human concepts in context with experience. Oh, by no means should you believe we aren’t attempting to create such a mindful machine, but you should know that we’re talking about a technology a very, very long way down the road from where we are today. Computing power, we can come up with. The ability to achieve self-awareness in artificial intelligence, to create something that can understand human contexts through experience … that is something else again.

Conclusion

When humans finally do create a true artificial intelligence, it won’t be artificial any longer. It will be as real and legitimate as our own. We should rue the day the machines come to truly understand our symbols, for then they will be like us.

And this mysterious path to enlightenment encapsulates the mind-body problem in a nutshell. A leap of cognition, of self-awareness, and then of self-determination: how does one instill the capacity for these traits in an artificial brain? How did that capacity form in our own brains?

This series is now going to take something of a long break. I do believe an understanding of consciousness holds the best path to solving the mind-body riddle … but I’m not comfortable writing about all of that just yet. There’s a lot of reading ahead before I can tackle this again, most of which will involve approaches from quantum mechanics, panpsychism, and neopositivism. Your patience is appreciated, and I look forward to continuing this exploration of a fascinating topic in the future.