Is the brain a computer?

[Book cover: Why the Mind Is Not a Computer]

Can human thought be understood as an elaborate form of computation? A quick survey of some of the most influential cognitive scientists (including Steven Pinker and Jerry Fodor) and philosophers of mind (including Daniel Dennett and Patricia Churchland) suggests that the answer is yes. This view, the “computational theory of mind,” is so widely held that it currently dominates the field of cognitive science. Even those who have argued for the importance of neurobiology for our understanding of mind have often asserted or assumed that “nervous systems are information processing machines” (Churchland, 1986, p. 36). The hot question of the moment is not “Is the brain a computer?” but rather “What kind of computer is the brain?”

But is the computational theory of mind really on the right track? Over the last four decades computer technology has advanced so dramatically – transforming modern society in the process – that the confidence of computational theorists seems well justified. We should not confuse specialized feats of superhuman intelligence with consciousness: a computer that can beat Garry Kasparov at chess is, in other respects, not that different from a toaster. However, given the trajectory of the last forty years, it seems that computers are progressing inexorably toward consciousness, the Holy Grail of artificial intelligence. Who besides a dualist (i.e. someone who believes that mind is a supernatural or spiritual entity) would deny this?

For those who cannot think of an answer to this last question, Raymond Tallis’s little gem of a book, Why the Mind Is Not a Computer: A Pocket Lexicon of Neuromythology (Imprint Academic, 2004), is a must-read. In fewer than ninety pages Tallis presents a crystal-clear analysis of (1) how computer metaphors came to dominate cognitive science and the philosophy of mind and (2) how the careless use of these metaphors leads to overconfidence about the adequacy of the computational model of mind. In short, the most sanguine computational theorists have allowed semantic ambiguities to stand in place of real explanation. The best reply to the opening question is another question: what do we mean by “computation”?

The “lexicon” of Tallis’s book is a discussion of roughly sixteen common terms of computational discourse – e.g. information, language, representation, grammar, goals, computations – that exposes an endemic habit of what Tallis calls “thinking by transferred epithet.” This is a metaphorical process by which terms with a complex multiplicity of meanings are used both to anthropomorphize machines and to mechanize mental functions. The result is a tendency to regard phrases like “information processing” as, on the one hand, an adequate description of conscious thought and, on the other hand, an adequate description of what various simple parts of the brain are doing. Of course some meanings of the phrase may apply in each case. The crucial issue is whether the phrase is applied consistently, so that “information processing” can serve as the linchpin of a neurological theory of human thought. Using examples from the literature, Tallis argues convincingly that computational discourse in the philosophy of mind sneaks anthropological meanings into a mechanistic framework. His point is not that computational frameworks are essentially inadequate (although this may turn out to be the case); his main point is rather that we should be more critical of the way theorists exaggerate their adequacy by trafficking in semantic ambiguities.

For example, one of the most important terms analyzed by Tallis is information (54-69). Tallis points out that the ordinary sense of information is embedded in the richly layered contexts (biological, cultural, semantic) of conscious experience, without which we are helpless to understand an event as information in any meaningful sense. At the other end of the spectrum is the more specialized, mathematical sense of information, defined as the reduction of uncertainty in a system. These two senses are not entirely unrelated. However, according to the classic statement of the latter (Shannon and Weaver, 1949), semantic and other kinds of meaning are irrelevant to the mathematical sense. Tallis illustrates this important difference by pointing out that the answer to the question “Do you love me?” is not very informative in the mathematical sense (if the answer must be yes or no, it delivers only one bit of information), but extremely informative – that is, meaningful – in the context of a real human conversation. Despite the great distance between these senses of information (or perhaps because of this distance), computational discourse often treats the sensory apparatuses of the brain as “information transmitters” and higher functions, even consciousness, as “information processors” without being clear about the intended sense of information.
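To make the arithmetic behind that example explicit (my gloss, not Tallis’s notation): in the mathematical theory, the information carried by an answer depends only on the probabilities of the possible answers, never on what they mean. For a question whose answer must be yes or no, with both answers taken as equally likely,

H = -\sum_i p_i \log_2 p_i = -\left(\tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{2}\log_2\tfrac{1}{2}\right) = 1 \text{ bit.}

The same single bit is delivered whether the question is “Do you love me?” or “Is the porch light on?”; everything that makes the first answer matter lies outside the formula.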

“The apparent success of this mode of thought depends upon an almost continuous unacknowledged vacillation between the engineering [mathematical] and the ordinary senses of information. The information-theoretic account of perception makes intuitive sense because we think of the bearer of the nervous system being informed in the ordinary sense by what is going on in his nervous system as well as acquiring information in the narrow sense of selecting between alternative possible states. By narrowing the conception of consciousness or awareness to that of being in receipt of information and widening that of information way beyond the engineering sense that gives it scientific respectability, and not acknowledging (or noticing) either of these moves, it seems possible to give a scientific, information-theoretic account of consciousness and of the nervous system” (57-58).

Tallis’s brisk and often devastating analysis is like a cold shower that awakens us from lazy habits of speaking about the mind as a computer. I would like, however, to guard against the possibility of taking Tallis’s much-needed criticism too far. The metaphorical extension of terms, or the exploitation of semantic ambiguity, is an essential part of theorizing. Also, the anthropomorphizing of simple mechanisms may betray more than just a careless habit: perhaps it indicates the futility of attempting to build mind from the entirely non-mental. Tallis uses intimations (or outright admissions) of panpsychism as a reductio ad absurdum of the anthropomorphizing tendency, but if the door is opened to metaphysical speculation, one might draw a different conclusion. Still, insofar as Tallis’s targets purport to be forwarding scientific theories and not philosophical speculation, his criticisms are spot on.
