The Mind-Body Problem: Part 2b – Functionalism

We recently paused our mind-body discussion for a brief look at consciousness, and now we’re ready to move on to Jerry Fodor and his account of functionalism. This theory states that whatever material the brain is made of (living cells, circuitry, even mental or spiritual substance) has no bearing on what states the mind (consciousness, awareness, and thought) might be capable of. In other words, mental states are defined not by what sort of brain is at work, but by the roles those states play within whatever system houses them.

Functionalism is a complex theory in the field of cognitive science, so Fodor employed a number of metaphors to help describe how it works. Before we jump into those, though, let’s take a brief look at two other theories that functionalism opposes: identity theory and behaviorism.

If I Only Had a Brain

Behaviorism suggests that minds don’t do much that’s original; there are only behaviors, which are responses to external stimuli. In your extracurricular reading, you may have learned that Descartes thought animals were little more than unthinking machines; behaviorism extends that assessment to humans. In fact, behaviorism does not consider mental states whatsoever; it’s not concerned with the inner workings of the mind at all. Behaviorism is no longer taken seriously as a complete account of the mind, but it’s worth addressing in the context of functionalism.

On the other hand, identity theory suggests that experiences in the mind are, strictly speaking, brain processes and nothing more. Many philosophers hold that thought has a subjective, felt quality that resists physical description (they use the term qualia); identity theory implies that thoughts aren’t just correlated with brain function, they are brain function. In other words, states of the mind are identical to states of the brain. On this view the mind is not some separate by-product of the brain’s physical activity; whether responding to previous stimuli or working out responses on its own, the brain is the thing doing all the work, and “the mind” is simply another name for that work.

A Coke and a Smile

Fodor used two imaginary Coke machines to demonstrate how functionalism differs from behaviorism and identity theory. The first Coke machine is very simple: if you put a dime into the slot (Fodor noted that this is a very old example), you get a Coke from the dispenser. If you put in anything else, say a nickel or a quarter, the machine simply does nothing, unable to process the non-dime input. (Were this a computer instead of a Coke machine, we would call this a “hang.”) This Coke machine is “behaviorist” and has no “mind,” nor does it need one. Its output, a Coke, is dispensed because of the correct input, a dime.

Behaviorism allows for only a single-state existence. There are only behaviors, and every behavior must have a cause ultimately external to the organism in question. If you insert a dime, you get a Coke. If you insert anything else, or nothing, you do not get a Coke, because the machine did not receive the correct stimulus to enter its only available state.
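To make the contrast concrete as we go, here is a minimal sketch in Python (my own illustration, not Fodor’s) of the behaviorist machine: a pure stimulus-response mapping, with nothing remembered between inputs.

```python
def behaviorist_coke_machine(coin: int) -> str | None:
    """Pure stimulus-response: the output depends only on the
    current input. Nothing is remembered between insertions."""
    if coin == 10:   # the correct stimulus: a dime
        return "Coke"
    return None      # any other input: the machine 'hangs'
```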

The second Coke machine demonstrates functionalism. If you put a dime in the slot, you are still dispensed a Coke, but if you put in a quarter, you get a Coke, a dime, and a nickel. If you put in a nickel, the machine doesn’t “do nothing”; it instead “waits” (as a purposeful action, or state) for additional money to be inserted, so that it can dispense a Coke plus whatever reimbursement may be necessary for overpayment. Thus the functionalist Coke machine has three states: dispensing a Coke upon receiving the correct amount of money; waiting for enough money to dispense a Coke once an inadequate amount has been received; and dispensing a Coke along with the correct change if overpaid.

What To Do, What To Do

This is not to imply that the functionalist Coke machine has a “mind,” but unlike the behaviorist Coke machine, its states are not defined by input and output alone but also in terms of each other. We would say that the functionalist Coke machine’s states are interdefined. When a nickel is put into the functionalist machine, it enters a state of waiting for additional money, and what arrives next determines which state follows: the dispense-a-Coke state or the dispense-a-Coke-with-some-change state.
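Continuing the sketch from above (again my own illustration, with coin values in cents), the functionalist machine needs a remembered balance, and what each coin does depends on the state left behind by the coins before it:

```python
class FunctionalistCokeMachine:
    """States are interdefined: what the machine does with a coin
    depends on the state produced by the previous coins."""
    PRICE = 10  # a Coke costs a dime

    def __init__(self) -> None:
        self.balance = 0  # the machine's current state

    def insert(self, coin: int) -> list[str]:
        self.balance += coin
        if self.balance < self.PRICE:
            return []  # the 'waiting' state: hold the money, dispense nothing yet
        out = ["Coke"]
        change = self.balance - self.PRICE
        out += ["dime"] * (change // 10)        # make change, as in Fodor's example
        out += ["nickel"] * (change % 10 // 5)
        self.balance = 0  # return to the initial state
        return out

machine = FunctionalistCokeMachine()
machine.insert(25)  # -> ['Coke', 'dime', 'nickel']
machine.insert(5)   # -> []  (waiting)
machine.insert(5)   # -> ['Coke']
```

Notice that the two insert(5) calls are identical stimuli yet produce different outputs; the difference lives entirely in the machine’s internal state, which is exactly what behaviorism has no room for.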

(As an aside, neither machine is in a “state” before it has received any money; states are causal events, and since nothing has transpired to initiate a state, no state exists.)

What has any of this to do with the mind/body problem? Fodor attempted to show that human brains are not necessary for functionalist behavior, and that puts the kibosh on identity theory as well as behaviorism. Functionalist theory further suggests that the mind/body problem may not be a purely human issue at all; considering other sorts of “brains” may be the key to ultimately understanding how to describe the connection between mental and physical states.

If You Could Read My Mind, Love

Fodor showed that even simple machines can demonstrate the downfalls of behaviorism, but to differentiate functionalism from identity theory more specifically, he focused his attention on the Turing machine.

A Turing machine is a thought experiment: an abstract mathematical model of computation. It manipulates symbols on a strip of tape according to a given table of rules, and in principle it can carry out any algorithm a computer can.
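For readers who like to see the gears turn, here is a minimal sketch of a Turing machine simulator (my own illustration; the rule table shown, which appends a ‘1’ to a block of 1s, is a made-up example):

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Run a rule table mapping (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right). Stops in the state 'halt'."""
    cells = dict(enumerate(tape))        # sparse tape; blank squares read as ' '
    for _ in range(max_steps):           # guard against 'circular' machines
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells)).strip()
        symbol = cells.get(head, " ")
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    raise RuntimeError("gave up: the machine may be circular (hung)")

# A made-up rule table: append one '1' to a block of 1s (unary increment).
rules = {
    ("start", "1"): ("1", +1, "start"),  # scan rightward over the 1s
    ("start", " "): ("1", +1, "halt"),   # first blank: print a 1 and halt
}
print(run_turing_machine(rules, "111"))  # prints '1111'
```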

Turing machines share a couple of traits with minds as we understand them. Alan Turing, the thought experiment’s creator, demonstrated that no Turing machine can determine, in general, whether an arbitrary machine will produce a “circular” result (that is, hang, or fail to continue its computational task); likewise, no Turing machine can determine whether another arbitrary machine will ever print a given symbol.

In other words, there can be no Turing machine that predicts another Turing machine’s “hang” event, or reads the “mind” of another Turing machine. The analogy to human minds should be clear.
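The shape of Turing’s argument can itself be sketched in code. Suppose, hypothetically, that the halts() function below could be written; the contrarian() function would then defeat its own prediction, so no such halts() can exist. (This is the classic diagonal argument in schematic form, not a real oracle.)

```python
def halts(program, data) -> bool:
    """Hypothetical oracle: True iff program(data) eventually stops.
    Turing's result is that no such procedure can exist."""
    raise NotImplementedError("no correct implementation is possible")

def contrarian(program):
    # Do the opposite of whatever the oracle predicts about
    # this program run on itself.
    if halts(program, program):
        while True:        # predicted to halt? then hang forever
            pass
    return "done"          # predicted to hang? then halt immediately

# Feeding contrarian to itself makes halts(contrarian, contrarian)
# wrong either way, so no correct halts() can ever be written.
```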

A State of Mind

Fodor noted that a Turing machine can have only a finite number of states. A Turing machine can scan a single square of its tape, erase the symbol there, print a symbol if none exists, and move the tape, all according to its table of rules. The point of Turing machines, as far as functionalism is concerned, is to demonstrate that:

“Since the definition of a program state never refers to the physical structure of the system running the program, the Turing machine version of functionalism also captures the idea that the character of a mental state is independent of its physical realization. A human being, a roomful of people, a computer, and a disembodied spirit would all be Turing machines if they operated according to a Turing-machine program.”

(Fodor, “The Mind-Body Problem,” Scientific American, 1981).

In other words, identity theory fails to capture not only the interdefinition of mental states but also their independence from any particular physical device producing them, making it a poor attempt to resolve the mind/body problem.


Thus far in this series, we have identified (in very brief terms) what the mind/body problem consists of, considered whether the structure of reality (monist or dualist) determines how that problem is to be solved, and seen how functionalism describes the relationship between brains and mental states better than behaviorism or identity theory does.

What happens when a machine acquires what’s been termed “artificial intelligence” and starts making its own rules for behavior? And why is this not quite as ominous as some science fiction writers would have us believe? The next essay in this continuing series looks at the meaning of symbols and the importance of semantics through another thought experiment, the Chinese Room.