Theory of Mind

Alvin I. Goldman

To Appear in:

Oxford Handbook of Philosophy and Cognitive Science (2012)

Edited by Eric Margolis, Richard Samuels, and Stephen Stich

  1. Introduction.

‘Theory of Mind’ refers to the cognitive capacity to attribute mental states to self and others. Other names for the same capacity include “commonsense psychology,” “naïve psychology,” “folk psychology,” “mindreading” and “mentalizing.” Mental attributions are commonly made in both verbal and non-verbal forms. Virtually all language communities, it seems, have words or phrases to describe mental states, including perceptions, bodily feelings, emotional states, and propositional attitudes (beliefs, desires, hopes, and intentions). People engaged in social life have many thoughts and beliefs about others’ (and their own) mental states, even when they don’t verbalize them.

In cognitive science the core question in this terrain is: How do people execute this cognitive capacity? How do they, or their cognitive systems, go about the task of forming beliefs or judgments about others' mental states, states that aren't directly observable? Less frequently discussed in psychology is the question of how people self-ascribe mental states. Is the same method used for both first-person and third-person ascription, or are entirely different methods used? Other questions in the terrain include: How is the capacity for ToM acquired? What is the evolutionary story behind this capacity? What cognitive or neurocognitive architecture underpins ToM? Does it rely on the same mechanisms used for thinking about objects in general, or does it employ dedicated, domain-specific mechanisms? How does it relate to other processes of social cognition, such as imitation or empathy?

This chapter provides an overview of ToM research, guided by two classifications. The first classification articulates four competing approaches to (third-person) mentalizing, viz., the theory-theory, the modularity theory, the rationality theory, and the simulation theory. The second classification is the first-person/third-person contrast. The bulk of the discussion is directed at third-person mindreading, but the final section addresses self-attribution. Finally, our discussion provides representative coverage of the principal fields that investigate ToM: philosophy of mind, developmental psychology, and cognitive neuroscience. Each of these fields has its distinctive research style, central preoccupations, and striking discoveries or insights.

  2. The Theory-Theory

Philosophers began work on theory of mind, or folk psychology, well before empirical researchers were seriously involved, and their ideas influenced empirical research. In hindsight one might say that the philosopher Wilfrid Sellars (1956) jump-started the field with his seminal essay, “Empiricism and the Philosophy of Mind”. He speculated that the commonsense concepts and language of mental states, especially the propositional attitudes, are products of a proto-scientific theory invented by a fictional ancestor. This was the forerunner of what was later called the “theory-theory.” This idea has been warmly embraced by many developmental psychologists. However, not everyone agrees with the theory-theory as an account of commonsense psychology, so it is preferable to avoid the biased label ‘theory of mind.’ In much of my discussion,

be) a false belief. What happens between three and four that accounts for this striking difference?

Theory theorists answer by positing a change of theory in the minds of the children. At age three they typically have conceptions of desire and belief that depict these states as simple relations between the cognizer and the external world, relations that do not admit the possibility of error. This simple theory gradually gives way to a more sophisticated one in which beliefs are related to propositional representations that can be true or false of the world. At age three the child does not yet grasp the idea that a belief can be false. In lacking a representational theory of belief, the child has – as compared with adults – a “conceptual deficit” (Perner, 1991). This deficit is what makes the 3-year-old child incapable of passing the false-belief test. Once the child attains a representational theory of belief, roughly at age four, she passes the location-change false-belief test.

A similar discrepancy between 3- and 4-year olds was found in a second type of false-belief task, the deceptive container task. A child is shown a familiar container that usually holds candy and is asked, “What’s in here?” She replies, “candy”. The container is then opened, revealing only a pencil. Shortly thereafter the child is asked what she thought was in the container when she was first asked. Three-year-olds incorrectly answer “a pencil,” whereas 4-year-olds correctly answer “candy.” Why the difference between the two age groups, despite the fact that memory tests indicate that 3-year-olds have no trouble recalling their own psychological states? Theory-theorists again offered the same conceptual-deficit explanation. Since the 3-year-olds’ theory doesn’t leave room for the possibility of false belief, they can’t ascribe to themselves their original (false) belief that the container held candy; so they respond with their current belief, namely, that it held a pencil.

This explanation was extremely popular circa 1990. But several subsequent findings seriously challenge the conceptual-deficit approach. The early challenges were demonstrations that various experimental manipulations enable 3-year-olds to pass the tests. When given a memory aid, for example, they can recall and report their original false prediction (Mitchell and Lacohee, 1991). They can also give the correct false-belief answer when the reality is made less salient, for instance, if they are told where the chocolate is but don't see it for themselves (Zaitchik, 1991). Additional evidence suggests that the 3-year-olds' problem lies in inhibitory control (Carlson and Moses, 2001). Inhibitory control is an executive ability that enables someone to override “prepotent” tendencies, i.e., dominant or habitual tendencies, such as the tendency to reference reality as one knows it to be. A false-belief task requires an attributor to override this natural tendency, which may be hard for 3-year-olds. An extra year during which the executive powers mature may be the crucial difference for 4-year-olds, not a change in their belief concept. A meta-analysis of false-belief task findings led Wellman, Cross, and Watson (2001) to retain the conceptual-deficit story, but this is strongly disputed by Scholl and Leslie (2001).

Even stronger evidence against the traditional theory-theory timeline was uncovered in 2005, in a study of 15-month-old children using a non-verbal false-belief task. Onishi and Baillargeon (2005) employed a new paradigm with reduced task demands to probe the possible appreciation of false belief in 15-month-old children, and found signs of exactly such understanding. This supports a much earlier picture of belief understanding than the child-scientist form of theory-theory ever contemplated.

A final worry about this approach can now be added. A notable feature of professional science is the diversity of theories that are endorsed by different practitioners. Cutting-edge science is rife with disputes over which theory to accept, disputes that often persist for decades. This pattern of controversy contrasts sharply with what is ascribed to young children in the mentalizing domain. They are said to converge on one and the same theory, all within the same narrow time-course. This bears little resemblance to professional science.

Gopnik takes a somewhat different tack in recent research. She puts more flesh on the general approach by embedding it in the Bayes-net formalism. Bayes nets are directed-graph formalisms designed to depict probabilistic causal relationships between variables. Given certain assumptions (the causal Markov and faithfulness assumptions), a system can construct algorithms to arrive at a correct Bayes-net causal structure if it is given enough information about the contingencies or correlations among the target events. Thus, these systems can learn about causal structure from observations and behavioral interventions. Gopnik and colleagues (Gopnik et al., 2004; Schulz and Gopnik, 2004) report experimental results suggesting that 2- to 4-year-old children engage in causal learning in a manner consistent with the Bayes-net formalism. They propose that this is the method used to learn causal relationships among mental variables, including relationships relevant to false-belief tasks (Goodman et al., in press).
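To see how such a formalism might bear on mentalizing, consider a minimal sketch. The variables, graph, and probabilities below are invented for exposition, not drawn from the cited studies; the point is only to show how a three-node causal chain (witnessing a transfer causes a belief, which causes search behavior) supports behavioral prediction by enumeration over a hidden mental variable.

```python
# A toy Bayes net over three mentalistic variables, in the spirit of
# the Gopnik-style proposal. The graph and all numbers are invented
# for illustration, not taken from the cited studies.
#
#   saw_transfer -> believes_new_location -> searches_new_location

P_SAW = 0.5  # prior probability that the agent witnessed the transfer

# P(believes_new_location | saw_transfer)
P_BELIEF = {True: 0.95, False: 0.10}

# P(searches_new_location | believes_new_location)
P_SEARCH = {True: 0.90, False: 0.15}

def joint(saw: bool, believes: bool, searches: bool) -> float:
    """Joint probability factored along the causal graph (the causal
    Markov assumption: each node depends only on its parents)."""
    p_saw = P_SAW if saw else 1 - P_SAW
    p_bel = P_BELIEF[saw] if believes else 1 - P_BELIEF[saw]
    p_srch = P_SEARCH[believes] if searches else 1 - P_SEARCH[believes]
    return p_saw * p_bel * p_srch

def p_search_given_saw(saw: bool) -> float:
    """Predict search behavior by enumerating over the hidden belief."""
    num = sum(joint(saw, b, True) for b in (True, False))
    den = sum(joint(saw, b, s) for b in (True, False) for s in (True, False))
    return num / den

print(p_search_given_saw(True))   # 0.8625: saw the transfer, searches there
print(p_search_given_saw(False))  # 0.225: false-belief case, mostly searches elsewhere
```

On the Gopnik proposal, learning would consist in recovering such a graph and its conditional probability tables from observed contingencies and interventions, which is what the causal Markov and faithfulness assumptions make tractable.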

Here are several worries about this approach. First, can the Bayes-net formalism achieve these results without special tweaking by the theorist, and if not, can other formalisms match these results without similar “special handling”? Second, if the Bayes-net formalism predicts that normal children make all the same types of causal inferences, does this fit the scientific-inference paradigm? We again encounter the problem that scientific inference is characterized by substantial diversity across the community of inquirers, whereas the opposite is found in the acquisition of mentalizing skills.

  3. The Modularity-Nativist Approach to Theory of Mind

In the mid-1980s other investigators found evidence supporting a very different model of ToM acquisition. This is the modularity model, which has two principal components. First, whereas the child-scientist approach claims that mentalizing utilizes domain-general cognitive equipment, the modularity approach posits one or more domain-specific modules, which use proprietary representations and computations for the mental domain. Second, the modularity approach holds that these modules are innate cognitive structures, which mature or come on line at pre-programmed stages and are not acquired through learning (Leslie, 1994; Scholl and Leslie, 1999). This approach

and other communicative gestures), imitation, language and emotional referencing, and looking-time studies.

In one study of gaze following, Johnson, Slaughter and Carey (1998) tested 12-month-old infants on a novel object, a small, beach-ball-sized object with natural-looking fuzzy brown fur. It was possible to control the object's behavior from a hidden vantage point, so that when the baby babbled, the object babbled back. After a period of familiarization, an infant either experienced the object reacting contingently to her own behavior or witnessed merely random beeping or flashing. Infants followed the “gaze” of the object by shifting their own attention in the same direction under three conditions: if the object had a face, or if the object beeped and flashed contingent on the infant's own behavior, or both. These results were interpreted as showing that infants use specific information to decide when an object does or does not have the ability to perceive or attend to its surroundings, which seems to support the operation of a dedicated input system (Johnson, 2005). Woodward (1998) used a looking-time measure to show that even 5-month-olds appear to interpret human hands as goal-directed relative to comparable inanimate objects. They looked longer if the goal-object of the hand changed, but not if the hand's approach path to the goal-object changed. This evidence also suggests an early system dedicated to the detection of goal-directed entities.

All of the above findings post-date Alan Leslie's (1994) postulation of a later-maturing cognitive module: the “theory-of-mind mechanism (ToMM).” Leslie highlighted four features of ToMM: (a) it is domain specific, (b) it employs a proprietary representational system that describes propositional attitudes, (c) it forms the innate basis for our capacity to acquire theory of mind, and (d) it is damaged in autism. ToMM uses specialized representations and computations, and is fast, mandatory, domain specific, and informationally encapsulated, thereby satisfying the principal characteristics of modularity as described by Fodor (1983).

An initial problem with the modularity theory is that ToMM, the most widely discussed module postulated by the theory, doesn't satisfy the principal criteria associated with Fodorian modularity. Consider domain specificity. Fodor says that a cognitive system is domain specific just in case “only a restricted class of stimulations can throw the switch that turns [the system] on” (1983: 49). It is doubtful that any suitable class of stimulations would satisfy this condition for ToMM (Goldman, 2006: 102-104). A fundamental obstacle facing this proposal, moreover, is that Fodor's approach to modularity assumes that modules are either input systems or output systems, whereas mindreading has to be a central system. Next consider informational encapsulation, considered the heart of modularity. A system is informationally encapsulated if it has only limited access to information contained in other mental systems. But when Leslie gets around to illustrating the workings of ToMM, it turns out that information from other central systems is readily accessible to ToMM (Nichols and Stich, 2003: 117-121). Leslie and German (1995) discuss an example of ascribing a pretend state to another person, and clearly indicate that a system ascribing such a pretense uses real-world knowledge, for example, whether a cup containing water would disgorge its contents if it were upturned. This knowledge would have to be obtained from (another) central system. Perhaps such problems can be averted if a non-Fodorian conception of modularity is invoked, as proposed by Carruthers (2006). But the tenability of the proposed alternative conception is open to debate.

  4. The Rationality-Teleology Theory

A somewhat different approach to folk psychology has been championed by another group of philosophers, chief among them Daniel Dennett (1987). Their leading idea is that one mindreads a target by “rationalizing” her, that is, by assigning to her a set of propositional attitudes that make her emerge – as far as possible – as a rational agent and thinker. Dennett writes:

[I]t is the myth of our rational agenthood that structures and organizes our attributions of belief and desire to others and that regulates our own deliberations and investigations…. Folk psychology, then, is idealized in that it produces its predictions and explanations by calculating in a normative system; it predicts what we will believe, desire, and do, by determining what we ought to believe, desire, and do. (1987: 52)

Dennett contends that commonsense psychology is the product of a special stance we take when trying to predict others’ behavior: the intentional stance. To adopt the intentional stance is to make the default assumption that the agent whose behavior is to be predicted is rational, that her desires and beliefs, for example, are ones she rationally ought to have given her environment and her other beliefs or desires.
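In computational caricature, the intentional stance is prediction by normative calculation: fix the beliefs and desires the agent ought to have, then predict the action that maximizes expected desirability under those attributions. The sketch below merely illustrates that schema; Dennett offers no such algorithm, and the action set, probabilities, and utilities are invented for the example.

```python
# A schematic "intentional stance" predictor: given attributed beliefs
# (here, outcome probabilities conditional on action) and attributed
# desires (utilities over outcomes), predict the action a rational
# agent ought to choose. Illustrative only; not Dennett's formalism.

def predict_action(
    actions: list[str],
    outcomes: dict[str, dict[str, float]],  # action -> {outcome: probability}
    desirability: dict[str, float],         # outcome -> attributed utility
) -> str:
    """Return the action that maximizes expected desirability."""
    def expected_utility(action: str) -> float:
        return sum(p * desirability[o] for o, p in outcomes[action].items())
    return max(actions, key=expected_utility)

# Invented example: a thirsty agent who believes the fridge holds water.
actions = ["go_to_fridge", "go_to_desk"]
outcomes = {
    "go_to_fridge": {"gets_water": 0.9, "nothing": 0.1},
    "go_to_desk":   {"gets_water": 0.05, "nothing": 0.95},
}
desirability = {"gets_water": 1.0, "nothing": 0.0}

print(predict_action(actions, outcomes, desirability))  # -> go_to_fridge
```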

Dennett doesn't support his intentional stance theory with empirical findings; he proceeds largely by thought experiment. So let us use the same procedure in evaluating his theory. One widely endorsed normative principle of reasoning is to believe whatever follows logically from other things you believe. But attributors surely do not predict their targets' belief states in accordance with such a strong principle; they don't impute “deductive closure” to them. They allow for the possibility that people forget or ignore many of their prior beliefs and fail to draw all of the logical consequences that might be warranted (Stich, 1981). What about a normative rule of inconsistency avoidance? Do attributors assume that their targets conform to this requirement of rationality? That too seems unlikely. If an author modestly thinks that he must have made some error somewhere in his book packed with factual claims, while still believing each individual claim he wrote, he is caught in an inconsistency (this is the so-called “paradox of the preface”). But wouldn't attributors nonetheless be willing to ascribe belief in all these propositions to this author?
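The inconsistency can be stated precisely. The following formalization is a standard rendering of the preface paradox, supplied here for illustration rather than taken from the chapter:

```latex
% Standard rendering of the preface paradox (illustrative).
% The author believes each individual claim in the book:
\[
  B(p_1),\quad B(p_2),\quad \ldots,\quad B(p_n)
\]
% and also, modestly, believes that at least one claim is false:
\[
  B\bigl(\lnot(p_1 \land p_2 \land \cdots \land p_n)\bigr)
\]
```

No assignment of truth values makes p_1, ..., p_n and the negated conjunction all true together, so the attributed belief set is jointly inconsistent, even though each member belief seems entirely reasonable. An attributor who insisted on consistency could not ascribe this ordinary pattern of beliefs.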

These are examples of implausible consequences of the rationality theory. A different problem is the theory's incompleteness: it covers only the mindreading of propositional attitudes. What about other types of mental states, such as sensations like thirst or pain and emotions like anger or happiness? It is dubious that rationality considerations bear on these kinds of states, yet they are surely among the states that attributors ascribe to others. There must be more to mindreading than imputed rationality.

segment of it. And even this narrow segment might be handled just as well by a rival theory (viz., the simulation theory).

  5. The Simulation Theory

A fourth approach to commonsense psychology is the simulation theory, sometimes called the “empathy theory.” Robert Gordon (1986) was the first to develop this theory in the present era, suggesting that we can predict others’ behavior by answering the question, “What would I do in that person’s situation?” Chess players playing against a human opponent report that they visualize the board from the other side, taking the opposing pieces for their own and vice versa. They pretend that their reasons for action have shifted accordingly. Thus transported in imagination, they make up their mind what to do and project this decision onto the opponent.

The basic idea of the simulation theory resurrects ideas from a number of earlier European writers, especially in the hermeneutic tradition. Dilthey wrote of understanding others through a process of “feeling with” others (mitfuehlen), “reexperiencing” (nacherleben) their mental states, or “putting oneself into” (hineinversetzen) their shoes. Similarly, Schleiermacher linked our ability to understand other minds with our capacity to imaginatively occupy another person’s point of view. In the philosophy of history, the English philosopher R. G. Collingwood (1946) suggested that the inner imitation of thoughts, or what he calls the reenactment of thoughts, is a central epistemic tool for understanding other agents. (For an overview of this tradition, see Stueber, 2006.)

In addition to Gordon, Jane Heal (1986) and Alvin Goldman (1989) endorsed the simulation idea in the 1980s. Their core idea is that mindreaders simulate a target by trying to create similar mental states of their own as proxies or surrogates of those of the target. These initial pretend states are fed into the mindreader’s own cognitive mechanisms to generate additional states, some of which are then imputed to the target. In other words, attributors use their own mind to mimic or “model” the target’s mind and thereby determine what has or will transpire in the target.

An initial worry about the simulation idea is that it might “collapse” into theory theory. As Dennett put the problem:

How can [the idea] work without being a kind of theorizing in the end? For the state I put myself in is not belief but make-believe belief. If I make believe I am a suspension bridge and wonder what I will do when the wind blows, what “comes to me” in my make-believe state depends on how sophisticated my knowledge is of the physics and engineering of suspension bridges. Why should my making believe I have your beliefs be any different? In both cases, knowledge of the imitated object is needed to drive the make-believe “simulation,” and the knowledge must be organized into something rather like a theory. (1987: 100-101)

Goldman (1989) responded that there is a difference between theory-driven simulation, which must be used for systems different from oneself, and process-driven simulation, which can be applied to systems resembling oneself. If the process or mechanism driving the simulation is similar enough to the process or mechanism driving the target, and if the initial states are also sufficiently similar, the simulation might produce a final state isomorphic to that of the target without the help of theorizing.
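The contrast can be put schematically. In the sketch below, which is an illustration with invented names rather than Goldman's own formulation, process-driven simulation consults no body of psychological generalizations: the attributor's own decision mechanism, run offline on pretend inputs, does all the work.

```python
# Process-driven simulation, schematically: feed pretend inputs into
# one's own decision mechanism, run it offline, and impute the output
# to the target. All names and data here are illustrative assumptions.

def my_decision_mechanism(beliefs: dict, desires: dict) -> str:
    """The attributor's OWN choice process, reused offline. A crude
    stand-in: pick the option believed to satisfy the strongest desire."""
    strongest = max(desires, key=desires.get)
    return beliefs.get(f"how_to_get_{strongest}", "do_nothing")

def simulate_target(pretend_beliefs: dict, pretend_desires: dict) -> str:
    """Process-driven simulation: no psychological theory is consulted;
    the attributor's own mechanism operates on pretend initial states."""
    decision = my_decision_mechanism(pretend_beliefs, pretend_desires)
    return decision  # imputed to the target, not acted on by the attributor

# Predicting a chess opponent, crudely: adopt her situation in pretense.
pretend_beliefs = {"how_to_get_material": "capture_knight"}
pretend_desires = {"material": 0.9, "king_safety": 0.6}
print(simulate_target(pretend_beliefs, pretend_desires))  # capture_knight
```

If the attributor's mechanism and pretend inputs sufficiently resemble the target's mechanism and actual states, the output should match the target's decision without any theory being consulted, which is the force of Goldman's reply.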

  6. Mirroring and Simulational Mindreading

The original form of simulation theory (ST) primarily addressed the attribution of propositional attitudes. In recent years, however, ST has focused heavily on simpler mental states, and on processes of attribution rarely dealt with in the early ToM literature. I include here the mindreading of motor plans, sensations, and emotions. This turn in ST dates to a paper by Vittorio Gallese and Alvin Goldman (1998), which posited a link between simulation-style mindreading and activity of mirror neurons (or mirror systems). Investigators in Parma, Italy, led by Giacomo Rizzolatti, first discovered mirror neurons in macaque monkeys, using single-cell recordings (Rizzolatti et al., 1996; Gallese et al., 1996). Neurons in the macaque premotor cortex often code for a particular type of goal-oriented action, for example, grasping, tearing, or manipulating an object. A subclass of premotor neurons was found to fire both when the animal plans to perform an instance of their distinctive type of action and when it observes another animal (or human) perform the same action. These neurons were dubbed “mirror neurons,” because an action plan in the actor's brain is mirrored by a similar action plan in the observer's brain. Evidence for a mirror system in humans was established around the same time (Fadiga et al., 1995). Since the mirror system of an observer tracks the mental state (or brain state) of an agent, the observer executes a mental simulation of the latter. If this simulation also generates a mental-state attribution, this would qualify as simulation-based mindreading. It would be a case in which an attributor uses his own mind to “model” that of the target. Gallese and Goldman speculated that the mirror system might be part of, or a precursor to, a general mindreading system that works on simulationist principles.

Since the mid-1990s the new discoveries of mirror processes and mirror systems have expanded remarkably. Motor mirroring has been established via sound as well as vision (Kohler et al., 2002), and for effectors other than the hand, specifically, the foot and the mouth (Buccino et al., 2001). Meanwhile, mirroring has been discovered for sensations and emotions. Under the category of sensations, there is mirroring for touch and mirroring for pain. Touching a subject's legs activates primary and secondary somatosensory cortex. Keysers et al. (2004) showed subjects movies of other subjects being touched on their legs. Large extents of the observer's somatosensory cortex also responded to the sight of the targets' legs being touched. Several studies established mirroring for pain in the same year (Singer et al., 2004; Jackson et al., 2004; Morrison et al., 2004). In the category of emotions, the clearest case is mirroring for disgust. The anterior insula is well known as the primary brain region associated with disgust. Wicker et al. (2003) undertook an fMRI experiment in which normal subjects were scanned while inhaling odorants through a mask – either foul, pleasant, or neutral – and also while observing video clips of other people's facial expressions while inhaling

Where else might we look for evidence of mirroring-based mindreading? Better specimens of evidence are found in the emotion and sensation domains. For reasons of space, attention is restricted here to emotion. Although Wicker et al. (2003) established a mirror process for disgust, they did not test for disgust attribution. However, by combining their fMRI study of normal subjects with neuropsychological studies of brain-damaged patients, a persuasive case can be made for mirror-caused disgust attribution (in normals). Calder et al. (2000) studied patient NK, who suffered insula and basal ganglia damage. In questionnaire responses NK showed himself to be selectively impaired in experiencing disgust, as contrasted with fear or anger. NK also showed significant and selective impairment in disgust recognition (attribution), in both visual and auditory modalities. Similarly, Adolphs et al. (2003) studied patient B, who suffered extensive damage to the anterior insula and could recognize all of the six basic emotions except disgust when observing dynamic displays of facial expressions. The inability of these two patients to undergo a normal disgust response in their anterior insula apparently prevented them from mindreading disgust in others, although their attribution of other basic emotions was preserved. It is reasonable to conclude that when normal individuals recognize disgust through facial expressions of a target, this is causally mediated by a mirrored experience of disgust (Goldman and Sripada, 2005; Goldman, 2006).

Low-level mindreading, then, can be viewed as an elaboration of a primitive tendency to engage in automatic mental mimicry. Both behavioral and mental mimicry are fundamental dimensions of social cognition. Meltzoff and Moore (1983) found facial mimicry in neonates less than an hour old. Among adults, unconscious mimicry in social situations occurs for facial expressions, hand gestures, body postures, speech patterns, and breathing patterns (Hatfield, Cacioppo, and Rapson, 1994; Bavelas et al., 1986; Dimberg, Thunberg, and Elmehed, 2000; Paccalin and Jeannerod, 2000). Chartrand and Bargh (1999) found that automatic mimicry occurs even between strangers, and that it leads to higher liking and rapport between interacting partners. Mirroring, of course, is mental mimicry usually unaccompanied by behavioral mimicry. The sparseness of behavioral imitation (relative to the amount of mental mimicry) seems to be the product of inhibition. Compulsive behavioral imitation has been found among patients with frontal lesions, who apparently suffer from an impairment of inhibitory control (Lhermitte et al., 1986; de Renzi et al., 1996). Without the usual inhibitory control, mental mimicry would produce an even larger amount of behavioral mimicry. Thus, mental mimicry is a deep-seated property of the social brain, and low-level mindreading builds on its foundation.

  7. Simulation and High-Level Mindreading

The great bulk of mindreading, however, cannot be explained by mirroring. Can it be explained (in whole or part) by another form of simulation? The general idea of mental simulation is the re-experiencing or re-enactment of a mental event or process, or an attempt to re-experience or re-enact a mental event (Goldman, 2006, chap. 2). Where does the traditional version of simulation theory fit into the picture? It mainly fits into the second category, i.e., attempted interpersonal re-enactment. This captures the idea of mental pretense, or what I call “enactment imagination” (E-imagination), which consists of trying to construct in oneself a mental state that isn't generated by the usual means (Goldman, 2006; Currie and Ravenscroft, 2002). Simulating Minds argues that E-imagination is an intensively used cognitive operation, one commonly used in reading others' minds.

Let us first illustrate E-imagination with intrapersonal applications, for example, imagining seeing something or launching a bodily action. The products of such applications constitute, respectively, visual and motor imagery. To visualize something is to (try to) construct a visual image that resembles the visual experience you would undergo if you were actually seeing what is visualized. To visualize the Mona Lisa is to (try to) produce a state that resembles a seeing of the Mona Lisa. Can visualizing really resemble vision? Cognitive science and neuroscience suggest an affirmative answer. Kosslyn (1994) and others have shown that the processes and products of visual perception and visual imagery overlap substantially. An imagined object “overflows” the visual field of imagination at about the same imagined distance from the object as it overflows the real visual field. This was shown both in experiments where subjects actually walked toward rectangles mounted on a wall and in experiments where they merely visualized the rectangles while imagining a similar walk (Kosslyn, 1978). Neuroimaging reveals a notable overlap between parts of the brain active during vision and during imagery. A region of the occipitotemporal cortex known as the fusiform gyrus is activated both when we see faces and when we imagine them (Kanwisher et al., 1997). Lesions of the fusiform face area impair both face recognition and the ability to imagine faces (Damasio et al., 1990).

An equally (if not more) impressive story can be told for motor imagery. Motor imagery occurs when you are asked to imagine (from a motoric perspective) moving your effectors in a specified way, for example, playing a piano chord with your left hand or kicking a soccer ball. It has been shown convincingly that motor imagery corresponds closely, in neurological terms, to what transpires when one actually executes the relevant movements (Jeannerod, 2001).

At least in some modalities, then, E-imagination produces experiences strikingly similar to ones that are usually produced otherwise. Does the same hold for mental events like forming a belief or making a decision? This has not been established, but it is entirely consistent with existing evidence. Moreover, a core brain network has recently been proposed that might underpin high-level simulational mindreading as a special case. Buckner and Carroll (2007) propose a brain system that subserves at least three, and possibly four, forms of what they call “self-projection.” Self-projection is the projection of the current self into one's personal past or one's personal future, and also the projection of oneself into other people's minds or other places (as in navigation). What all these mental activities share is projection of the self into alternative situations, involving a perspective shift from the immediate environment to an imagined environment (the past, the future, other places, other minds). Buckner and Carroll refer to the mental construction of an imagined alternative perspective as a “simulation.”

valuations onto others. This gap proved very difficult to eliminate. To illustrate the case of feelings, Van Boven and Loewenstein (2003) asked subjects to predict the feelings of hikers lost in the woods with neither food nor water. What would bother them more, hunger or thirst? Predictions were elicited either before or after the subjects engaged in vigorous exercise, which would make one thirsty. Subjects who had just exercised were more likely to predict that the hikers would be more bothered by thirst than by hunger, apparently allowing their own thirst to contaminate their predictions.

Additional evidence that effective quarantine is crucial for successful third-person mindreading comes from neuropsychology. Samson et al. (2005) report the case of patient WBA, who suffered a lesion to the right inferior and middle frontal gyri. His brain lesion includes a region previously identified as sustaining the ability to inhibit one’s own perspective. Indeed, WBA had great difficulty precisely in inhibiting his own perspective (his own knowledge, desires, emotions, etc.). In non-verbal false-belief tests, WBA made errors in 11 out of 12 trials where he had to inhibit his own knowledge of reality. Similarly, when asked questions about other people’s emotions and desires, again requiring him to inhibit his own perspective, 15 of 27 responses involved egocentric errors. This again supports the simulationist approach to high-level mindreading. There is, of course, a great deal of other relevant evidence, which requires considerable interpretation and analysis. But ST seems to fare well in light of recent evidence (for contrary assessments, see Saxe, 2005 and Carruthers, 2006).

  8. First-Person Mindreading

Our last topic is self-mentalization. Philosophers have long claimed that a special method – “introspection,” or “inner sense” – is available for detecting one’s own mental states, although this traditional view is the object of skepticism and even scorn among many scientifically-minded philosophers and cognitive scientists. Most theory theorists and rationality theorists would join these groups in rejecting so-called “privileged access” to one’s own current mental states. Theory theorists would say that self-ascription, like other-person ascription, proceeds by theoretical inference (Gopnik, 1993). Dennett holds that the intentional stance is applied even to oneself. But these positions can be challenged with simple thought experiments.

I am now going to predict my bodily action during the next 20 seconds. It will include, first, curling my right index finger, then wrinkling my nose, and finally removing my glasses. There, those predictions are verified! I did all three things. You could not have duplicated these predictions (with respect to my actions). How did I manage it? Well, I let certain intentions form, and then I detected, i.e., introspected, those intentions. The predictions were based on the introspections. No other clues were available to me, in particular, no behavioral or environmental cues. The predictions must have been based, then, on a distinctive form of access I possess vis-a-vis my current states of mind, in this case, states that were primed to cause the actions. I seem to have similar access to my own itches and memories. In an important modification of a well-known paper that challenged the existence or reliability of introspective access (Nisbett and Wilson, 1977), the co-author Wilson subsequently provided a good example and a theoretical correction to the earlier paper:

The fact that people make errors about the causes of their own responses does not mean that their inner worlds are a black box. I can bring to mind a great deal of information that is inaccessible to anyone but me. Unless you can read my mind, there is no way you could know that a specific memory just came to mind, namely an incident in high school in which I dropped my bag lunch out a third-floor window, narrowly missing a gym teacher…. Isn't this a case of my having privileged, ‘introspective access to higher order cognitive processes’? (2002)

Nonetheless, developmentalists have adduced evidence that putatively supports a symmetry or parallelism between self and other. They deny the existence of a special method, or form of access, available only to the first person. Nichols and Stich (2003: 168-192) provide a comprehensive analysis of this literature, with the clear conclusion that the putative parallelism doesn't hold up, and fails precisely in ways that favor introspection or self-monitoring.

If there is such a special method, how exactly might it work? Nichols and Stich present their own model of self-monitoring. To have beliefs about one’s own beliefs, they say, all that is required is that there be a monitoring mechanism that, when activated, takes the representation p in the Belief Box as input and produces the representation I believe that p as output. To produce representations of one’s own beliefs, the mechanism merely has to copy representations from the Belief Box, embed the copies in a representation schema of the form I believe that ___, and then place the new representations back into the Belief Box. The proposed mechanism would work in much the same way to produce representations of one’s own desires, intentions, and imaginings. (2003: 160-161)
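The proposed mechanism is mechanical enough to render directly. The toy implementation below is a reconstruction for illustration, with representations modeled as plain strings and invented class and function names; note the hard-coded attitude verb in the schema, which is exactly where the difficulty raised next arises.

```python
# A toy rendering of the Nichols-Stich monitoring mechanism. Mental
# representations are modeled as plain strings; the class and function
# names are illustrative assumptions, not the authors' own.

class BeliefBox:
    """A store of representations the agent believes."""
    def __init__(self) -> None:
        self.representations: list[str] = []

    def add(self, p: str) -> None:
        self.representations.append(p)

def monitoring_mechanism(box: BeliefBox) -> None:
    """Copy each representation p, embed it in the schema
    'I believe that ___', and place the result back in the Belief Box."""
    for p in list(box.representations):  # copy: avoid growing while iterating
        box.add(f"I believe that {p}")   # note the hard-coded attitude verb

box = BeliefBox()
box.add("the container holds a pencil")
monitoring_mechanism(box)
print(box.representations)
# ['the container holds a pencil',
#  'I believe that the container holds a pencil']
```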

One major lacuna in this account is its silence about an entire class of mental states: bodily feelings. They don't fit the model because, at least on the orthodox approach, sensations lack representational content, which is what the Nichols-Stich account relies upon. Their account is a syntactic theory, which says that the monitoring mechanism operates on the syntax of the mental representations monitored. A more general problem is what is meant by saying that the proposed mechanism would work in “much the same way” for attitude types other than belief. How does the proposed mechanism decide which attitude to ascribe? Which attitude verb should be inserted into the schema I ATTITUDE that ____? Should it be belief, desire, hope, fear, etc.? Each contentful mental state consists, at a minimum, of an attitude type plus a content. The Nichols-Stich theory deals only with contents, not types. In apparent recognition of the problem, Nichols and Stich make a parenthetical suggestion: perhaps a distinct but parallel mechanism exists for each attitude type. But what a profusion of mechanisms this would posit, each mechanism essentially “duplicating” the others! Where is Nature's parsimony that they appeal to elsewhere in their book?

states for analysis. Or it can refer to the process of performing an analysis of the states and outputting some descriptions or classifications. In the first sense, introspection is a form of attention, not something that requires attention in order to do its job. In the latter sense, it’s a process that performs an analysis once attention has picked out the object or objects to be analyzed.

If introspection is a perception-like process, shouldn’t it include a transduction process? If so, this raises two questions: what are the inputs to the transduction process and what are the outputs? Goldman (2006: 246-255) addresses these questions and proposes some answers. There has not yet been time for these proposals to receive critical attention, so it remains to be seen how this new quasi-perceptual account of introspection will be received. In any case, the problem of first-person mentalizing is as difficult and challenging as the problem of third-person mentalizing, though it has thus far received a much smaller dollop of attention, especially among cognitive scientists.

References

Adolphs, R., Tranel, D. and Damasio, A. R. (2003). Dissociable neural systems for recognizing emotions. Brain and Cognition 52: 61-69.

Baron-Cohen, S., Leslie, A. and Frith, U. (1985). Does the autistic child have a ‘theory of mind’? Cognition 21: 37-46.

Baron-Cohen, S., Leslie, A. and Frith, U. (1986). Mechanical, behavioral, and intentional understanding of picture stories in autistic children. British Journal of Developmental Psychology 4: 113-125.

Bavelas, J.B., Black, A., Lemery, C.R., and Mullett, J. (1986). “I show how you feel”: Motor mimicry as a communicative act. Journal of Personality and Social Psychology 50: 322-329.

Birch, S. A. J. and Bloom, P. (2003). Children are cursed: an asymmetric bias in mental-state attribution. Psychological Science 14: 283-286.

Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., Seitz, R. J., Zilles, K., Rizzolatti, G., and Freund, H.-J. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience 13: 400-404.

Buckner, R. L. and Carroll, D. C. (2007). Self-projection and the brain. Trends in Cognitive Sciences 11: 49-57.

Calder, A. J., Keane, J., Manes, F., Antoun, N., and Young, A. W. (2000). Impaired recognition and experience of disgust following brain injury. Nature Neuroscience 3: 1077-1078.

Camerer, C., Loewenstein, G. and Weber, M. (1989). The curse of knowledge in economic settings: an experimental analysis. Journal of Political Economy 97: 1232-1254.

Carlson, S. M. and Moses, L.J. (2001). Individual differences in inhibitory control and children’s theory of mind. Child Development 72: 1032-1053.

Carruthers, P. (2006). The Architecture of the Mind. Oxford: Oxford University Press.

Chartrand, T. L. and Bargh, J. A. (1999). The chameleon effect: The perception-behavior link and social interaction. Journal of Personality and Social Psychology 76: 893-910.

Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. Journal of Philosophy 78: 67-90.