"The Opacity of Mind offers a vigorous defence of the startling view that self-knowledge is based on error-prone inferences from sensory experience rather than direct access to what we are thinking. Drawing heavily on cognitive science, Peter Carruthers makes his radical thesis look eminently reasonable, and he delivers fatal blows to the competition."
Jesse Prinz, Distinguished Professor of Philosophy, City University of New York.
It is widely believed in philosophy that people have privileged and authoritative access to their own thoughts, and many theories have been proposed to explain this supposed fact. The Opacity of Mind challenges the consensus view and subjects the theories in question to critical scrutiny, while showing that they are not protected against the findings of cognitive science by belonging to a separate “explanatory space”. The book argues that our access to our own thoughts is almost always interpretive, grounded in perceptual awareness of our own circumstances and behavior, together with our own sensory imagery (including inner speech). In fact our access to our own thoughts is no different in principle from our access to the thoughts of other people, utilizing the conceptual and inferential resources of the same “mindreading” faculty, and relying on many of the same sources of evidence. Peter Carruthers proposes and defends the Interpretive Sensory-Access (ISA) theory of self-knowledge. This is supported through comprehensive examination of many different types of evidence from across cognitive science, integrating a diverse set of findings into a single well-articulated theory. One outcome is that there are hardly any kinds of conscious thought. Another is that there is no such thing as conscious agency.
Written with Carruthers’ usual clarity and directness, this book will be essential reading for philosophers interested in self-knowledge, consciousness, and related areas of philosophy. It will also be of vital interest to cognitive scientists, since it casts the existing data in a new theoretical light. Moreover, the ISA theory makes many new predictions while also suggesting constraints and controls that should be placed on future experimental investigations of self-knowledge.
"The Opacity of Mind is a terrific book. In a nutshell, the plot is this: Gilbert Ryle meets contemporary cognitive science, and together they produce a novel and exciting theory of self-knowledge." […] "This hardly scratches the surface of Carruthers's rich and thought-provoking book. Many other topics are discussed at length: mental architecture, inner sense theories, third-person mindreading, alleged dissociations between self- and other-knowledge, the evidence for widespread confabulation, and much more. As is usual with Carruthers's work, the book is packed with numerous references to the empirical literature -- a welcome corrective to work on self-knowledge which blithely disregards it. The Opacity of Mind contains much to disagree with, but also much to learn."
Alex Byrne (MIT), Notre Dame Philosophical Reviews
"Those familiar with the scholarship in this area will find here a substantive, well-defended alternative to existing accounts in the study of self-knowledge. Any adequate discussion of this issue will need to take into account Carruthers's view and the argument he marshals in support of it. Moreover, the clarity and thoroughness that Carruthers brings to the subject make the book more than suitable for introducing students to the role of philosophy in the interdisciplinary study of human cognition." CHOICE
"In this terrific book, Peter Carruthers aims to show that current theories of our knowledge of our own mental states don’t sit at all well with our best theories of how the mind works. Carruthers also proposes and defends a radical alternative theory, which he succeeds in lending an impressive degree of support with appeal to both philosophical argumentation and a wealth of considerations drawn from recent work in cognitive science and related areas. In doing so, he offers a model of how an enduring and central philosophical issue can be fruitfully engaged in an empirically-informed manner. Philosophers of mind and epistemologists continue to be fascinated by our knowledge of our own mental lives; such readers will be fascinated by Carruthers’s book, whether or not they agree with its deeply revisionary conclusions."
Aidan McGlynn (Aberdeen), Philosophical Quarterly
"This is undoubtedly a rich and provocative book." […] "In conclusion, therefore, The Opacity of Mind is a challenging and provocative book, informed by an extraordinary knowledge of scientific psychology and cognitive science. Carruthers certainly places a formidable burden on anyone challenging the key ideas of the ISA theory – in particular, on anyone who wants to maintain any sort of transparent access to propositional attitudes."
José Bermúdez (Dean of Liberal Arts, Texas A&M), Mind
1. The Interpretive Sensory-Access (ISA) Theory
2. Predictions of the ISA Theory
3. Transparent-Access Accounts
4. A Guide Through the Volume
2. The Mental Transparency Assumption
2. Transparency Assumptions in Philosophy
3. Are Transparency Assumptions a Human Universal?
4. Explaining our Intuitions of Transparency
5. Leveling the Playing Field
3. The ISA Theory: Foundations and Elaborations
1. A Global Broadcast Architecture
2. Working Memory
3. The Social Intelligence Hypothesis
4. The ISA Model Revisited
5. Sensory Self-Knowledge
4. Transparent Sensory Access to Attitudes?
1. Self-Knowledge by Looking Outward
2. Self-Knowledge by Expression
3. Constitutive Authority and Dual Systems
4. Revisionary Attitudes
5. Transparent Sensory Access to Affect
1. Desire and Emotion
2. Awareness of Affect
3. Awareness of Affective Attitude Strength?
4. Awareness of Affective Attitude Content?
6. Intermediate-Strength Transparent-Access Theories
1. The Tagging Hypothesis
2. Attitudinal Working Memory
3. Awareness of Action
4. The Active Mind
7. Inner Sense Theories
1. Inner Sense and Mindreading: Three Theories
2. Developmental Evidence
3. Emotional Mirroring
4. Unsymbolized Thinking
8. Mindreading in Mind
1. The Theoretical Options
2. Why Mindreading Matters
3. Evidence of Early Mindreading
4. Explaining the Gap
5. Mindreading in Animals
9. Metacognition and Control
1. Inner Sense versus ISA
2. Human Metacognition
3. Human Meta-Reasoning
4. Animal Metacognition
5. Epistemic Emotions in Humans and Animals
10. Dissociation Data
4. Images of the Brain
11. Self-Interpretation and Confabulation
1. The Limits of Introspection
2. When Will the Two Methods Operate?
3. Confabulated Decisions, Intentions, and Judgments
4. Self-Perception Data
5. Dissonance Data
6. Concluding Comments
12. Conclusion and Implications
1. Summary: The Case Against Transparent Access to Attitudes
2. Eliminating Most Kinds of Conscious Attitude
3. Eliminating Conscious Agency
4. Rethinking Responsibility
Index of Names
Index of Subjects
This book is about the nature and sources of self-knowledge. More specifically, it is about the knowledge that we have of our own current mental lives. How do we know of mental events like seeing that something is the case or entertaining a visual image of it, as well as wondering, supposing, judging, believing, or remembering that it is so? How do we know of our own present emotions of fear or anger? How do we have knowledge of what we want, or of what we are currently longing for? And how do we know that we have just decided to do something, or what we intend to do in the future?
More specifically still, for the most part this book will focus on our knowledge of our current thoughts and thought processes (paradigmatic examples of which are judging, actively wanting, and deciding). This means that two broad classes of mental state will only be discussed in a peripheral way. One is the set of sensory or sensory-involving states, which include seeing, hearing, feeling, and so on, as well as imagistic versions of the same types of experience. This is because, like most theories in the field, the model of self-knowledge that I present regards our awareness of these types of state as being relatively unproblematic. The other class of mental states to receive only peripheral discussion consists of our so-called “standing” attitudes, which are stored and remain in existence even when we sleep, such as beliefs, memories, standing desires, and intentions for the future. This is because, despite disagreements about details, almost everyone thinks that knowledge of our own standing attitudes depends upon knowledge of the corresponding (or otherwise suitably related) current mental events. So our primary focus should be on the latter.
Disappointingly for some readers, this book isn’t about the sort of self-knowledge that has traditionally been thought to be a part of wisdom. This includes knowledge of one’s abilities and limitations, one’s enduring personality characteristics, one’s strengths and weaknesses, and the mode of living that will ultimately make one happy. Everyone allows that knowledge of this kind is hard to come by, and that having more of it rather than less of it can make all the difference to the overall success of one’s life. Moreover, it is part of common sense that those close to us may have a better idea of these things than we do ourselves. Instead, this book is about a kind of self-knowledge that nearly everyone thinks is easy to come by, almost to the point of triviality. This is the knowledge that we have of our own current thoughts and thought processes, which are generally believed to be transparently available to us through some sort of introspection.
I shall argue, in contrast, that knowledge of most of our own kinds of thinking is by no means trivially easy. Indeed, it is no different in principle from the knowledge that we have of the mental states of other people, and is acquired by the same mental faculty utilizing many of the same general sources of evidence. For I shall defend what I call the “Interpretive Sensory-Access” (or “ISA”) theory of self-knowledge. This holds that our only mode of access to our own thinking is through the same sensory channels that we use when figuring out the mental states of others. Moreover, knowledge of most kinds of thinking (and hence by extension knowledge of our own standing attitudes) is just as interpretive in character as other-knowledge. Our common-sense conception of the transparency of our own minds is illusory, I shall argue. On the contrary, for the most part our own thoughts and thought processes are (in a sense) opaque to us. For they can only be discerned through an intervening sensory medium whose contents need to be interpreted.
One goal of this book is to integrate and provide a theoretical focus for a great deal of disparate work in cognitive science. While some cognitive scientists have developed theoretical positions somewhat like that defended here, for the most part they have proposed theories that are either too strong or too weak; and none cite and discuss the full range of evidence available. (Moreover, many others continue to be held in thrall to some suitably restricted version of a traditional self-transparency account.) Thus Gopnik (1993) draws on mostly developmental data to argue that we lack introspection for all mental states (including perceptual ones), which is, I shall argue, too strong. And both Wilson (2002) and Wegner (2002) build theories of self-knowledge that emphasize interpretation, while nevertheless allowing that we have introspective access to thoughts of many kinds, which is (I shall argue) too weak. The cognitive scientist whose account comes closest to that defended here is Gazzaniga (1998), but he draws on only a narrow range of evidence deriving from the "split brain" patients with whom he has famously worked. (Some of this evidence will be discussed in Chapter 2.)
Another goal of this book is to challenge philosophical theories of self-knowledge. Philosophers are almost unanimous in thinking that knowledge of our own mental states is somehow special, and radically different from other-knowledge. Descartes (1641) famously believed that we have infallible knowledge of our own thoughts and thought processes. Few today would endorse such a strong claim. But almost all hold that knowledge of our own thoughts is somehow privileged (arrived at in a special way that isn’t available to others) and especially certain and authoritative (incapable of being challenged by others). The ISA theory maintains, in contrast, that self-knowledge of most forms of thought doesn’t differ in kind from knowledge of the thoughts of other people.
Many philosophers believe, however, that findings from cognitive science are irrelevant to their claims. For philosophical and scientific accounts are thought to occupy different “explanatory spaces”, and to belong to different levels of analysis (“personal” and “subpersonal” respectively). I propose to argue that (in the present context at least) these views are mistaken. Chapter 2 will show that philosophical theories in this domain—whether wittingly or not—carry significant commitments about the unconscious processes that underlie self-knowledge. And cognitive science can (and does) show us that those commitments are false.
If the account that I propose can be sustained, then it may have important implications for other areas of philosophy. Some issues in the theory of knowledge will need to be re-examined, for example, since they take introspection of our own mental states for granted. (Thus the so-called “problem of other minds” is generally expressed in the question, “How do I know that other people have mental states like my own?”) And arguably philosophical theories of personal identity, of agency, and of moral responsibility might likewise be deeply affected. Some of these potential implications will be addressed briefly in the concluding chapter.
For the benefit of readers whose background is in psychology (especially social psychology), I should emphasize that my use of the phrase “propositional attitude” is quite different from the one they will be familiar with. In psychology an attitude is, roughly, a disposition to engage in evaluative behavior of some sort. Thus one has an attitude towards a political party, or the morality of abortion, or the permissibility of the death penalty. But one doesn’t (normally) have an attitude towards the date of one’s own or one’s mother’s birth, or to the fact that whales are mammals. In philosophy (and throughout this book), in contrast, an attitude can be any kind of standing thought or form of active thinking that has a conceptual or propositional content. (These contents can often be reported in a sentential that-clause.) Hence knowing, or recalling, that I was born in June are propositional attitudes. Believing, or judging, that whales are mammals are propositional attitudes. And so, too, are wanting, hoping, fearing, supposing, or being angry that the next President will be a Republican.
For the benefit of readers who are philosophers, I need to emphasize that this book doesn’t by any means fit the mold of much contemporary analytic philosophy. It contains very little that is recognizable as conceptual analysis, and hardly any of its claims are intended to be a priori. Indeed, the book can just as well be thought of as an exercise in theoretical psychology. (Compare theoretical physics, which uses other people’s data to develop and test theories.) But this is an activity that Hume and many other philosophers of the past would have recognized as a kind of philosophy, and it is one that many naturalistically-inclined philosophers of the present will recognize as a kind of philosophy. Indeed, in my view it is a mistake to address questions in the philosophy of mind in any other way. It is even more misguided to address them in ignorance of the relevant data in cognitive science, as many philosophers continue to do.
My goal is to fashion an explanatory theory that best accounts for the full range of available evidence. Hence the overall form of argument of the book is an inference to the best explanation, not any kind of deductive or quasi-deductive demonstration. As such it is holistic in character, involving not just an evaluation of how well the competing theories can accommodate the evidence, but also how successfully those accounts comport with surrounding theories in cognitive science. Moreover, like the results of any inference to the best explanation, the conclusions reached in this book are both provisional and hostage to future discoveries. I can live with that.
Finally, by way of initial orientation, let me stress a pair of background assumptions. One is that the mind is real. By this I mean not just that there are truths about mental states. (Almost everyone, with the exception of a few eliminativists about the mental—such as Churchland, 1979—accepts this.) Rather, I mean that the mind has an existence and substantive character that goes well beyond, and is independent of, our best common-sense interpretive practices. Hence knowing the truth about the mind requires a great deal more than informed reflection on those practices. In fact, it requires cognitive science. Philosophy of mind therefore needs to be continuous with the latter.
A second assumption is slightly more technical. (For defense, see Fodor, 1998; Marcus, 2001; Carruthers, 2006a; and Gallistel and King, 2009.) It is that many mental states are realized discretely in the brain and possess causally relevant component structure. Beliefs, for example, are not just clusters of behavioral dispositions. Nor are they realized holistically in distributed networks of a “radical connectionist” sort. Rather, they possess a discrete existence and are structured out of component concepts. Moreover, it is these structures (which may or may not be language-like, I should stress) that causally underlie the relevant dispositions. In short, individual beliefs and desires, too, are real, and each has a substantial nature that goes beyond any mere set of behavioral dispositions. In any case, that is what I shall assume.
Some of the ideas utilized in this book were first developed in journal articles over the last half-dozen years or so. In all cases the material taken from these pieces has been thoroughly re-worked, sometimes involving significant changes of mind. I am grateful to the referees for those journals, who helped me to improve my thoughts (and my writing), and also to numerous colleagues who offered me comments and critical advice on earlier drafts of the papers in question. I am especially grateful to those who wrote commentaries on my target article in Behavioral and Brain Sciences in 2009 (“How we know our own minds”, BBS, 32, 121-182). I learned a great deal from the exchange.
I have also been piloting the main ideas of this book in presentations and talks at a variety of venues over the same six-year period. I am grateful to all those who participated in the ensuing discussions for their criticisms and positive suggestions.
I would like to thank the following friends and colleagues for providing me with valuable feedback on an initial draft of some or all of this book: Ori Friedman, Tim Fuller, Peter Langland-Hassan, Joëlle Proust, Georges Rey, Eric Schwitzgebel, David Williams, and two anonymous readers for Oxford University Press.
I am particularly grateful to Brendan Ritchie, who worked as my research assistant through the period when I was drafting the book. He proved invaluable in many ways, including the provision of detailed and well-informed feedback on a number of initial drafts. He is also responsible for all of the diagrams (with the exception of Figure 8.1). I owe the same debt of gratitude to Logan Fletcher, who worked as my research assistant through the final stages of revision and preparation of the book for press. He helped me to figure out how to respond to criticisms from the various readers and commentators, and provided insightful comments on each subsequent revision. He also worked with me on the proofs, and in putting together the indexes. The feedback I got from these two young philosophers is as good as any I have received from anyone, ever.
Early versions of many of the ideas in this book were presented and discussed in a Graduate Seminar at the University of Maryland in Spring 2008. A first rough draft of the book was then taken as the main reading for a second seminar in Spring 2010. I am grateful to all the graduate and postdoctoral students who attended for giving me the benefit of their criticisms and puzzlement. Both seminars were wonderfully useful to me. (I hope they were stimulating and informative for the students in turn.) The participants were as follows: Jason Christie, Sean Clancy, Mark Engelbert, Mark Engleson, Kent Erickson, Logan Fletcher, Marianna Ganapini, Yu Izumi, Andrew Knoll, David McElhoes, Christine Ng, Vincent Picciuto, J. Brendan Ritchie, Sungwon Woo, Yashar Saghai, Bénédicte Veillet, and Chris Vogel.
I am indebted to the General Research Board of the University of Maryland for an award that provided a semester of leave to begin work on this book. I am even more indebted to the National Science Foundation for a Scholar’s Award (number 0924523) provided by their Science, Technology, and Society program. This gave me a year of research leave, as well as two years of support for a Research Assistant, to enable me to complete the book. I am also grateful to my Department Chair, John Horty, for allowing me to accept both awards.
Finally, thanks to Shaun Nichols and Stephen Stich for permission to reproduce their figure of the mindreading system from their 2003 book, which is reprinted here as Figure 8.1.