
12     The evolution of consciousness

 

            Peter Carruthers

 

 

How might consciousness have evolved? Unfortunately for the prospects of providing a convincing answer to this question, there is no agreed account of what consciousness is. So any attempt at an answer will have to fragment along a number of different lines of enquiry. More fortunately, perhaps, there is general agreement that a number of distinct notions of consciousness need to be distinguished from one another; and there is also broad agreement as to which of these is particularly problematic – namely phenomenal consciousness, or the kind of conscious mental state which it is like something to have, which has a distinctive subjective feel or phenomenology (henceforward referred to as p-consciousness). I shall survey the prospects for an evolutionary explanation of p-consciousness, on a variety of competing accounts of its nature. My goal is to use evolutionary considerations to adjudicate between some of those accounts.

 

1          Drawing distinctions

One of the real advances made in recent years has been in distinguishing different notions of consciousness (see particularly: Rosenthal, 1986; Dretske, 1993; Block, 1995; Lycan, 1996). Not everyone agrees on quite which distinctions need to be drawn; but all are at least agreed that we should distinguish creature consciousness from mental-state consciousness. It is one thing to say of an individual person or organism that it is conscious (either in general or of something in particular); and it is quite another thing to say of one of the mental states of a creature that it is conscious.

            It is also agreed that within creature-consciousness itself we should distinguish between intransitive and transitive variants. To say of an organism that it is conscious simpliciter (intransitive) is to say just that it is awake, as opposed to asleep or comatose. Now while there are probably interesting questions concerning the evolution of the mechanisms which control wakefulness and regulate sleep, these seem to be questions for evolutionary biology alone, not raising any deep philosophical issues. To say of an organism that it is conscious of such-and-such (transitive), on the other hand, is normally to say at least that it is perceiving such-and-such. So we say of the mouse that it is conscious of the cat outside its hole, in explaining why it does not come out; meaning that it perceives the cat’s presence. To provide an evolutionary explanation of transitive creature-consciousness would thus be to attempt an account of the emergence of perception. No doubt there are many problems here, to some of which I shall return later.

            Turning now to the notion of mental-state consciousness, the major distinction is between phenomenal (p-) consciousness, on the one hand – which is a property of states which it is like something to be in, which have a distinctive subjective ‘feel’ – and various functionally-definable notions, such as Block’s (1995) access consciousness, on the other. Most theorists believe that there are mental states – such as occurrent thoughts or judgements – which are conscious (in whatever is the correct functionally-definable sense), but which are not p-conscious. (In my 1996a and 1998b I disagreed, arguing that occurrent propositional thoughts can only be conscious – in the human case at least – by being tokened in imaged natural language sentences, which will then possess phenomenal properties.) But there is considerable dispute as to whether mental states can be p-conscious without also being conscious in the functionally-definable sense; and even more dispute about whether p-consciousness can be explained in functional and/or representational terms.

            It seems plain that there is nothing deeply problematic about functionally-definable notions of mental-state consciousness, from a naturalistic perspective. For mental functions and mental representations are the staple fare of naturalistic accounts of the mind. But this leaves plenty of room for dispute about the form that the correct functional account should take. And there is also plenty of scope for enquiry as to the likely course of the evolution of access-consciousness. (In my 1996a, for example, I speculated that a form of higher-order access to our own thought-processes would have conferred decisive advantages in terms of flexibility and adaptability in thinking and reasoning.)

            But what almost everyone is also agreed on is that it is p-consciousness which is philosophically most problematic. It is by no means easy to understand how the properties distinctive of p-consciousness – phenomenal feel, or what-it-is-likeness – could be realised in the neural processes of the brain; nor is it easy to see how these properties could ever have evolved. Indeed, when people talk about the ‘problem of consciousness’ it is really the problem of p-consciousness which they have in mind. My strategy in this chapter will be to consider a variety of proposals concerning the nature of p-consciousness from an evolutionary standpoint, hoping to obtain some adjudication between them.

 

2          Mysterianism and physicalism

There are those who think that the relationship between p-consciousness and the rest of the natural world is inherently mysterious (Nagel, 1974, 1986; Jackson, 1982, 1986; McGinn, 1991; Chalmers, 1996). Of these, some think that p-conscious states are non-physical in nature (Nagel, Jackson), although perhaps tightly connected with physical states by means of natural laws (Chalmers). Others think that while we have good general reasons for believing that p-conscious states are physical, their physical nature is inherently closed to us (McGinn). In respect of all of these approaches one might think: if p-consciousness is a mystery, then so will its evolution be. And that thought is broadly correct. If there is an evolutionary story to be told, within these frameworks, it will be an account of the evolution of certain physical structures in the brain – structures with which (unknowably to us) p-consciousness is identical (McGinn); or structures which cause p-consciousness as an epiphenomenon (Jackson); or structures which are causally correlated with p-consciousness by basic causal laws (Chalmers). These will not, then, be accounts of the evolution of p-consciousness as such.

            There is no good argument against mysterian approaches to p-consciousness to be found from this direction, however. To insist that p-consciousness must have an evolutionary explanation, and hence that mysterian theories are wrong, would plainly be question-begging, in this context. The real case against mysterianism is two-fold. First, it can be shown that the various arguments which have been presented for the inherent mysteriousness of p-consciousness are bad ones (Lewis, 1990; Loar, 1990; Tye, 1995; Lycan, 1996; Carruthers, 2000). Then second, it can be shown that a successful explanatory account of p-consciousness can be provided (see below, and Carruthers, 2000).

            Since the focus of my interest, in this chapter, is on cases where evolutionary considerations may help to provide an adjudication between alternative explanations of p-consciousness, I propose to leave mysterian approaches to one side. In the same way, and for a similar reason, I leave aside theories which claim to explain p-consciousness by postulating a type-identity between p-conscious states and states of the brain (Crick and Koch, 1990; Hill, 1991). This is because such identities, even if true, are not really explanatory of the puzzling features of p-consciousness. The right place to look for an explanation of p-consciousness, in my view, is in the cognitive domain – the domain of thoughts and representations. Accordingly, it is on such theories that I shall concentrate my attention.

 

3          First-order representational (FOR) theory

A number of recent theorists have attempted to explain p-consciousness in first-order representational (FOR) terms (Kirk, 1994; Dretske, 1995; Tye, 1995). The goal of such accounts is to characterise all of the phenomenal – ‘felt’ – properties of experience in terms of the representational contents of experience. So the difference between an experience of green and an experience of red will be explained as a difference in the properties represented – reflective properties of surfaces, say – in each case. And the difference between a pain and a tickle is similarly explained in representational terms – it is said to reside in the different properties (different kinds of disturbance) represented as located in particular regions of the subject’s own body. In each case, a p-conscious experience is said to be one which is poised to have an impact on the subject’s beliefs and practical-reasoning processes in such a way as to guide behaviour.

            It seems plain that there will be no special problem for such accounts in providing an evolutionary explanation of p-consciousness. I suggest that the task for FOR theory is just that of explaining, in evolutionary terms, how the transitions get made from (a) organisms with a repertoire of behavioural reflexes, triggered by simple features of the environment; to (b) organisms whose innate reflexes are action-schemas guided by incoming quasi-perceptual information; to (c) organisms which can also possess a suite of learned action-schemas, also guided by quasi-perceptual information; to (d) organisms in which perceptual information is made available to simple conceptual thought and reasoning.

            As an example of (a) – an organism relying only on environmental triggers – consider the tick, which drops from its perch when it detects butyric acid vapour (which is released by the glands of all mammals) and then burrows when it detects warmth. These are fixed action-patterns released by certain triggering stimuli, but which do not seem in any sense to be guided by them. As an example of (b) – an organism with a set of innate action-schemas guided by quasi-perceptual information – consider the Sphex wasp, whose behaviour in leaving a paralysed cricket in a burrow with its eggs seems to be a fixed action-pattern, but an action-pattern the details of whose execution depend upon quasi-perceptual sensitivity to environmental contours. (The states in question are only quasi-perceptual because, by hypothesis, the wasp lacks a capacity for conceptual thought; rather, its ‘percepts’ feed directly into behaviour-control, and only into behaviour-control.) For examples of (c) – organisms with learned action-patterns – one can probably turn to fish, reptiles and amphibians, which are capable of learning new ways of behaving, but which may not yet be capable of anything really resembling practical reasoning. Finally, as an example of (d) – an organism with conceptual thought – consider the cat, or the mouse, each of which probably has simple conceptual representations of the environment generated by perception, and is capable of simple forms of reasoning in the light of those representations.
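
            The contrast between these grades can be given a schematic rendering. The sketch below is purely illustrative – the class names and the division of labour between them are my own labels for grades (a)–(d), not a hypothesis about how behavioural control is actually implemented in any organism:

```python
# A toy rendering of grades (a)-(d) of behavioural control.
# All names here are illustrative labels only.

class TriggeredReflex:
    """(a) A fixed action-pattern released, but not guided, by its stimulus."""
    def respond(self, stimulus):
        if stimulus == "butyric acid":
            return "drop from perch"      # fires blindly once triggered
        return None

class GuidedSchema(TriggeredReflex):
    """(b) Adds innate action-schemas shaped by quasi-perceptual input."""
    def respond(self, stimulus, percept):
        if stimulus == "paralysed cricket":
            # the same fixed pattern, but steered by incoming information
            return f"drag cricket to burrow, tracking {percept}"
        return None

class LearningAgent(GuidedSchema):
    """(c) Adds a modifiable repertoire of learned action-schemas."""
    def __init__(self):
        self.learned = {}                 # cue -> acquired schema
    def learn(self, cue, schema):
        self.learned[cue] = schema

class ConceptualAgent(LearningAgent):
    """(d) Percepts feed simple conceptual thought and practical reasoning."""
    def deliberate(self, percepts, goal):
        beliefs = [f"there is a {p}" for p in percepts]   # conceptualised input
        return f"select an action serving '{goal}', given {beliefs}"

cat = ConceptualAgent()
print(cat.deliberate(["mouse near its hole"], "catch the mouse"))
```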

            It should be obvious that the evolutionary gains, at each stage, come from the increasingly flexible behaviours which are permitted. With the transition from triggered reflexes to perceptually-guided ones you get behaviours which can be fine-tuned to the contingent features of the organism’s current environment. And with the transition from a repertoire of perceptually-guided action-patterns to conceptual thought and reasoning, you get the possibility of subordinating some goals to others, and of tracking and recalling the changing features of the objects in the environment in a much more sophisticated way.

            There is no good argument to be found against first-order representationalist (FOR) theories from this quarter. Quite the contrary: that FOR-theory can provide a simple and elegant account of the evolution of p-consciousness is one of its strengths. According to FOR-theory, the evolution of p-consciousness is really just the evolution of perceptual experience. There are powerful objections to FOR-theory from other quarters, however; partly relating to its failure to draw important distinctions; partly arising from its failure really to explain the puzzling features of p-consciousness (Carruthers, 2000). I shall not pursue these here. Instead, I shall focus my discussion on a variety of higher-order representationalist (HOR) accounts of p-consciousness, in connection with which evolutionary considerations really do start to have a significant impact in guiding choice.

 

4          Higher-order representational (HOR) theory

HOR accounts of p-consciousness may be divided into four general types. First, there are ‘inner sense’, or higher-order experience (HOE), theories, according to which p-consciousness emerges when our first-order perceptual states are scanned by a faculty of inner sense to produce HOEs (Armstrong, 1968, 1984; Lycan, 1996). Second, there are higher-order thought (HOT) accounts, according to which p-consciousness arises when a first-order perceptual state is, or can be, targeted by an appropriate HOT. These HOT theories then admit of two further sub-varieties: actualist, where it is the actual presence of a HOT about itself which renders a perceptual state p-conscious (Rosenthal, 1986, 1993; Gennaro, 1996); and dispositionalist, where it is the availability of a perceptual state to HOT which makes it p-conscious (Carruthers, 1996a). Then finally, there are higher-order description (HOD) accounts (Dennett, 1978, 1991), which are like HOT theories, except that linguistically-formulated descriptions of the subject’s mental states take over the role of thoughts.

            Each kind of higher-order representational (HOR) account can make some claim to explaining p-consciousness, without needing to appeal to intrinsic, non-representational, properties of experience (qualia). I have developed this claim in some detail with respect to dispositionalist higher-order thought (HOT) theory in my 1996a (section 7.6), and so do not intend to repeat it here; and I think that it is fairly obvious that this form of explanation generalises (with slight variations) to any kind of HOR account. It is perhaps important, however, to give at least some flavour of the approach, before turning to adjudicate between the four different varieties. So let me just outline why subjects whose experiences are available to HOTs might become worried by inverted and absent qualia thought-experiments (assuming, of course, that they have sufficient conceptual sophistication in other respects – such as a capacity for counter-factual thinking – and have the time and inclination for philosophy).

            Any system instantiating a HOT model of consciousness will have the capacity to distinguish or classify perceptual states according to their contents, not by inference (that is, by self-interpretation) or relational description, but immediately. The system will be capable of recognising the fact that it has an experience as of red, say, in just the same direct, non-inferential, way that it can recognise red. A HOT system will, therefore, have available to it recognitional concepts of experience. In which case, absent and inverted subjective feelings will immediately be a conceptual possibility for someone applying these recognitional concepts. If I instantiate such a system (and I am clever enough), I shall straight away be able to think, ‘This type of experience might have had some quite other cause’, for example. Or I shall be capable of wondering, ‘How do I know that red objects – which seem red to me – don’t seem green to you?’ And so on.

 

5          The evolution of HOEs

How might a faculty of inner sense have evolved? A prior question has to be: would it need to have evolved? Or might inner sense be a ‘spandrel’ (Gould and Lewontin, 1979) – that is, a mere by-product of other features of cognition which were themselves selected for? The answer to this question will turn largely on the issue of directed complexity. To the extent that a faculty of inner sense exhibits complex internal organisation subserving a unitary or systematically organised causal role, to that extent it will be plausible to postulate evolutionary selection.

 

5.1       The complexity of inner sense

HOE theories are ‘inner sense’ models of p-consciousness. They postulate a set of inner scanners, directed at our first-order mental states, which construct analog representations of the occurrence and properties of those states. According to HOE theorists, just as we have systems (the senses) charged with scanning and constructing representations of the world (and of states of our own bodies), so we have systems charged with scanning and constructing representations of some of our own states of mind. And just as our ‘outer’ senses (including pain and touch, which can of course be physically ‘inner’) can construct representations which are unconceptualised and analog, so too does ‘inner sense’ (‘second-order sense’) construct unconceptualised and analog representations of some of our own inner mental states.

            The internal monitors postulated by HOE theories would surely need to have considerable computational complexity, in order to generate the requisite HOEs. In order to perceive an experience, the organism would need to have mechanisms to generate a set of internal representations with a content (albeit non-conceptual) representing the content of that experience, in all its richness and fine-grained detail. For HOE theories, just as much as HOT theories, are in the business of explaining how it is that one aspect of someone’s experiences (e.g. of colour) can be conscious while another aspect (e.g. of movement) can be non-conscious. In each case a HOE would have to be constructed which represents just those aspects, in all of their richness and detail.

            As a way of reinforcing the point, notice that any inner scanner would have to be a physical device (just as the visual system itself is) which depends upon the detection of those physical events in the brain which are the output of the various sensory systems (just as the visual system is a physical device which depends upon detection of physical properties of surfaces via the reflection of light). It is hard to see how any inner scanner could detect the presence of an experience qua experience. Rather, it would have to detect the physical realisations of experiences in the human brain, and construct the requisite representation of the experiences which those physical events realise, on the basis of that physical-information input. This makes it seem inevitable, surely, that the scanning device which supposedly generates higher-order experiences (HOEs) of visual experience would have to be almost as sophisticated and complex as the visual system itself.

            Now one might think that HOE theory’s commitment to this degree of complexity, all of which is devoted to the creation of p-conscious states, is itself a reason to reject it, provided that some other alternative is available. This may well be so – indeed, I would urge that it is. But for present purposes, the point is that mechanisms of inner sense would need to have evolved. The complexity of those mechanisms makes it almost inevitable that the devices in question will have evolved, in stages, under some steady selectional pressure or pressures.

 

5.2       Perceptual integration as the evolutionary function of HOEs

What, then, might have led to the evolution of a faculty for generating HOEs? The answer had better not turn on the role of HOEs in underpinning and providing content for higher-order thoughts (HOTs), on pain of rendering a faculty of inner sense redundant. For as we shall see shortly, HOT theory can provide a perfectly good explanation of p-consciousness, and a perfectly good explanation of its evolution, without needing to introduce HOEs. So even if some or all creatures with inner sense are de facto capable of HOTs, a HOE theorist would be well-advised to find some distinctive role for HOEs which need not presuppose that a capacity for HOTs is already present.

            One suggestion made in the literature is that HOEs might serve to refine first-order perception, in particular helping to bind together and integrate its contents (Lycan, 1996). The claim seems to be that HOEs might be necessary to solve the so-called ‘binding problem’ in a distributed, parallel-process, perceptual system. (The problem is that of explaining how representations of objects and representations of colour, say, get bound together into a representation of an object-possessing-a-colour.) But this suggestion is highly implausible. So far as I am aware, no cognitive scientist working on the binding problem believes that second-order representations play any part in the process. And in any case it is quite mysterious how such second-order processing would be presumed to work.

            Suppose that I am viewing an upright red bar and a horizontal green bar, and that my visual system has constructed, separately, representations of red and of green, and representations of upright and horizontal bars. Then the binding problem is the problem of how to attach the redness to the uprightness and the greenness to the horizontalness, rather than vice versa. How could it possibly help with this problem, to add into the equation a HOE of my experience of red, a HOE of my experience of green, a HOE of my experience of uprightness, and a HOE of my experience of horizontalness? Those HOE states look like they would be just as discrete, and just as much in need of appropriate ‘binding’, as the first-order experiences which are their targets.
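
            The point can be made concrete with a trivial sketch (purely illustrative): duplicating a set of unbound feature-representations at a higher order leaves them exactly as unbound as before.

```python
# Unbound first-order feature-representations, as delivered by a
# distributed, parallel visual system:
first_order = ["red", "green", "upright bar", "horizontal bar"]

# Inner scanning, modelled as generating one HOE per discrete percept:
hoes = [f"experience of {feature}" for feature in first_order]

# Nothing in the higher-order set settles whether red goes with the
# upright bar or with the horizontal one - the binding problem simply
# recurs, untouched, at the second order.
print(hoes)
```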

 

5.3       Mental simulation as the evolutionary function of HOEs

Another suggestion made in the literature is that the evolution of a capacity for ‘inner sense’ and for HOEs might be what made it possible for apes to develop and deploy a capacity for ‘mind-reading’, attributing mental states to one another, and thus enabling them to predict and exploit the behaviour of their conspecifics (Humphrey, 1986). This idea finds its analogue in the developmental account of our mind-reading abilities provided by Goldman (1993) and some other ‘simulationists’. The claim is that we have introspective access to some of our own mental states, which we can then use to generate simulations of the mental activity of other people, hence arriving at potentially useful predictions or explanations of their behaviour.

            I believe that this sort of evolutionary story should be rejected, however, because I think that simulationist accounts of our mind-reading abilities are false (see my 1996b). Rather, ‘theory-theory’ accounts of our abilities are much to be preferred, according to which those abilities are underpinned by an implicit theory of the structure and functioning of the mind (Stich, 1983; Fodor, 1987; Wellman, 1990; Nichols et al., 1996). Then since all theories involve concepts of the domain theorised, it would have to be the case that mind-reading abilities coincide with a capacity for higher-order thoughts (HOTs). However, it is worth setting this objection to one side. For even if we take simulationism seriously, there are overwhelming problems in attempting to use that account to explain the evolution of a faculty of inner sense.

            One difficulty for any such proposal is that it must postulate that a capacity for ‘off-line thinking’ would be present in advance of (or at least together with) the appearance of inner sense. For simulation can only work if the subject has a capacity to take their own reasoning processes ‘off-line’, generating a set of ‘pretend’ inputs to those processes, and then attributing the outputs of the processes to the person whose mental life is being simulated. Yet some people think that the capacity for ‘off-line’ (and particularly imaginative) thinking was probably a very late arrival on the evolutionary stage, only appearing with the emergence of Homo sapiens sapiens (or even later) some 100,000 years ago (Bickerton, 1995; Carruthers, 1998a). And certainly the proposal does not sit well with the suggestion that a capacity for higher-order experiences (HOEs) might be widespread in the animal kingdom – on the contrary, one would expect that only those creatures with a capacity for ‘mind-reading’ and/or a capacity for ‘off-line’ imaginative thinking would have them.

            Another difficulty is to see how the initial development of inner sense, and its use in simulation, could even get going, in the absence of some mental concepts, and so in the absence of a capacity for HOTs. There is a stark contrast here with outer sense, where it is easy to see how simple forms of sensory discrimination could begin to develop in the absence of conceptualisation and thought. An organism with a light-sensitive patch of skin, for example (the very first stages in the evolution of the eye), might become wired up, or might learn, to move towards, or away from, sources of light; and one can imagine circumstances in which this might have conferred some benefit on the organisms in question. But the initial stages in the development of inner sense would, on the present hypothesis, have required a capacity to simulate the mental life of another being. And simulation seems to require at least some degree of conceptualisation of its inputs and outputs.

            Suppose, in the simplest case, that I am to simulate someone else’s experiences as they look at the world from their particular point of view. It is hard to see what could even get me started on such a process, except a desire to know what that person sees. And this of course requires me to possess a concept of seeing. Similarly at the end of a process of simulation, which concludes with a simulated intention to perform some action A. It is hard to see how I could get from here, to the prediction that the person being simulated will do A, unless I can conceptualise my result as an intention to do A, and unless I know that what people intend, they generally do. But then all this presupposes that mental concepts (and so a capacity for HOTs) would have had to be in place before (or at least coincident with) the capacity for inner sense and for mental simulation.

            A related point is that it is difficult to see what pressures might have led to the manifest complexity of a faculty of inner sense, in the absence of quite a sophisticated capacity for conceptualising mental states, and for making inferences concerning their causal relationships with one another and with behaviour; and so without quite a sophisticated capacity for HOTs. We have already stressed above that a faculty of inner sense would have to be causally and computationally complex. In which case one might think that a steady and significant evolutionary pressure would be necessary, over a considerable period of time, in order to build it. But all of the really interesting (that is, fit, or evolutionarily fruitful) things one can do with mental state attributions – like intentional deceit – require mental concepts: in order to deceive someone intentionally, you have to think that you are inducing a false belief in them; which in turn requires that you possess the concept belief.

            I conclude this section, then, by claiming that ‘inner sense’ accounts of p-consciousness are highly implausible, on evolutionary (and other) grounds. The take-home message is: we would never have evolved higher-order experiences (HOEs) unless we already had higher-order thoughts (HOTs); and if we already had HOTs then we did not need HOEs. Upshot: if we are to defend any form of higher-order representation (HOR) theory, then it should be some sort of HOT theory (or perhaps a higher-order description, or ‘HOD’, theory), rather than a HOE theory.

 

6          Evolution and actualist HOT theory

The main objection to actualist forms of HOT theory is at the same time a difficulty for evolutionary explanation. The objection is that an implausibly vast number of HOTs would have to be generated from moment to moment, in order to explain the p-conscious status of our rich and varied conscious experiences. This objection has been developed and defended in some detail in my 1996a (section 6.2), so I shall not pause to recapitulate those points here. I shall for the most part confine myself to exploring the further implications of the objection for the evolution of p-consciousness.

            One possible response to the ‘cognitive overload’ objection should be briefly considered here, however. This is to claim – in the manner of Dennett, 1991 – that the contents of experience are themselves highly fragmentary, only coalescing into a (partially) integrated account in response to quite specific internal probing. This claim and actualist HOT theory would seem to be made for one another (although Rosenthal, for example, does not endorse it; 1986, 1993). It can then be claimed that the p-conscious status of an experiential content is dependent upon the actual presence of a HOT targeted on that very state, while at the same time denying that there need be many HOTs tokened at any one time. And some attempt can also be made at explaining how we come to be under the illusion of a rich and varied sensory consciousness: it is because, wherever we direct our attention – wherever we probe – a p-conscious content with a targeting HOT coalesces in response.

            This sort of account does not really explain the phenomenology of experience, however. For it still faces the objection that the objects of attention can be immensely rich and varied, hence requiring there to be an equally rich and varied repertoire of HOTs tokened at the same time. Think of immersing yourself in the colours and textures of a Van Gogh painting, for example, or the scene as you look out at your garden – it would seem that one can be p-conscious of a highly complex set of properties, which one could not even begin to describe or conceptualise in any detail.

 

6.1       Actual HOTs and mental simulation

Now, what would have been the evolutionary pressure leading us to generate, routinely, a vast array of HOTs concerning the contents of our conscious experiences? Not simulation-based mentalising, surely. In order to attribute experiences to people via simulation of their perspective on the world, or in order to make a prediction concerning their likely actions through simulation of their reasoning processes, there is no reason why my own experiences and thoughts should actually give rise, routinely, to HOTs concerning themselves. It would be sufficient that they should be available to HOT, so that I can entertain thoughts about the relevant aspects of my experiences or thoughts when required. All that is necessary, in fact, is what is postulated by dispositionalist HOT theory, as we shall see shortly.

            I think the point is an obvious one, but let me labour it all the same. Suppose that I am a hunter-gatherer stalking a deer, and that I notice a rival hunter in the distance. I want to work out whether he, too, can see the deer. To this end, I study the lie of the land surrounding him, and try to form an image of what can be seen from my rival’s perspective. At this point I need to have higher-order access to my image and its contents, so that I can exit the simulation and draw inferences concerning what my rival will see. But surely nothing in the process requires that I should already have been entertaining HOTs about my percepts of the deer and of the rival hunter before initiating the process of simulation. So nothing in a simulationist account of mind-reading abilities can explain why p-consciousness should have emerged, if actualist HOT theory is true.

 

6.2       Actual HOTs and the is–seems distinction

Nor would a vast array of actual HOTs concerning one’s current experiences be necessary to underpin the is–seems distinction. This distinction is, no doubt, an evolutionarily useful one – enabling people to think and learn about the reliability of their own experiences, as well as to manipulate the experiences of others, to produce deceit. But again, the most that this would require is that our own experiences should be available to HOTs, not that they should routinely give rise to such thoughts, day-in, day-out, and in full measure.

            Again the point is obvious, but again I labour it. Suppose that I am a desert-dweller who has been misled by mirages in the past. I now see what I take to be an oasis in the distance, but recall that on previous occasions I have travelled towards apparently-perceived oases, only to find that there is nothing there. I am thus prompted to think, ‘Perhaps that is not really an oasis in the distance; perhaps the oasis only seems to be there, but is not’. I can then make some sort of estimate of likelihood, relying on my previous knowledge of the area and of the weather conditions, and act accordingly. Nothing here requires that my initial (in fact delusory) percept should already have been giving rise to HOTs. All that is necessary is that the content ‘oasis’ should prompt me to recall the previous occasions on which I have seemed to see one, but have been proved wrong – and it is only at this stage that HOTs first need to enter the picture.

            I conclude this section, then, with the claim that we have good evolutionary (and other) grounds to reject actualist HOT theory, of the sort defended by Rosenthal. Greatly preferable, as we shall see, is a form of dispositionalist HOT theory.

 

7          Evolution and dispositionalist HOT theory

The account of the evolution of p-consciousness generated by dispositionalist HOT theory proceeds in two main stages. First, there was the evolution of systems which generate integrated first-order sensory representations, available to conceptualised thought and reasoning. The result is the sort of architecture depicted in figure 1, in which perceptual information is presented via a special-purpose short-term memory store (E) to conceptualised belief-forming and practical reasoning systems, as well as via another route (N) to guide a system of phylogenetically more ancient action-schemas. Then second, there was the evolution of a theory-of-mind faculty (ToM), whose concepts could be brought to bear on that very same set of first-order representations (see figure 2, in which ‘E’ for experience is transformed into ‘C’ for conscious). A sensible evolutionary story can be told in respect of each of these developments; and then p-consciousness emerges as a by-product, not directly selected for (which is not to say that it is useless; it may be maintained, in part, as an exaptation – see below).

 

            Figure 1 – First-order perception

 

            The first stage in this account has already been discussed in section 3 above. Here let me just emphasise again how very implausible it is that perceptual contents should only be (partially) integrated in response to probing. For many of the purposes of perception require that perceptual contents should already be integrated. Think, for example, of a basketball player selecting, in a split-second, a team member to receive a pass. The decision may depend upon many facts concerning the precise distribution of team members and opponents on the court, which may in turn involve recognition of the colours of their respective jerseys. It is simply not plausible that all of this information should only coalesce in response to top-down probing of the contents of experience. (‘Am I seeing someone in red to my right? Am I seeing someone in yellow coming up just behind him?’ And so on.) Indeed, in general it seems that the on-line planning of complex actions requires an integrated perceptual field to underpin and give content to the indexical thoughts which such planning involves. (‘If I throw it to him just so then I can move into that gap there to receive the return pass’, and so on.)

            At any rate, this is what I shall assume – I shall assume that it is the task of the various sensory systems to generate an integrated representation of the environment (and of the states of our own bodies), which is then made available to a variety of concept-wielding reasoning, planning and belief-generating systems (some of which may be quasi-modular in structure – see my 1998a, and Mithen, 1996).

 

7.1       The evolution of mind-reading and p-consciousness

Now for the second stage in the evolution of p-consciousness, on a dispositionalist HOT account. There seems little doubt that our mind-reading (or ‘theory of mind’) faculty has evolved, and been selected for. First, there is good reason to think that it is a dissociable module of the mind, with a substantive genetic basis (Baron-Cohen, 1995). Second, precursors of this ability seem detectable in other great apes (Byrne and Whiten, 1988; Byrne, 1996), having a use both in deceiving others and facilitating co-operation with them. And there seems every reason to think that enhanced degrees of this ability would have brought advantages in survival and reproduction. Consistently with this, however, we could claim that what really provided the pressure for development of the highest forms of mind-reading ability, was the need to process and interpret early hominid attempts at speech (Carruthers, 1998a; Gómez, 1998), which would probably have consisted of multiply-ambiguous non-syntactically-structured word-strings (what Bickerton, 1995, calls ‘proto-language’).

 

            Figure 2 – Dispositionalist HOT theory

 

            Now the important point for our purposes is that the mind-reading faculty would have needed to have access to a full range of perceptual representations. It would have needed to have access to auditory input in order to play a role in generating interpretations of heard speech, and it would have needed to have access to visual input in order to represent and interpret people’s movements and gestures, as well as to generate representations of the form, ‘A sees that P’ or ‘A sees that [demonstrated object/event]’. It seems reasonable to suppose, then, that our mind-reading faculty would have been set up as one of the down-stream systems drawing on the integrated first-order perceptual representations, which were already available to first-order concepts and indexical thought (see figure 2).

            Once this had occurred, nothing more needed to happen for people to enjoy p-conscious experiences, on a dispositionalist HOT account. Presumably they would already have had first-order recognitional concepts for a variety of surface-features of the environment – red, green, rough, loud, and so on – and it would then have been but a trivial matter (once armed with mentalistic concepts, and the is–seems distinction) to generate higher-order recognitional concepts in response to the very same perceptual data – seems red, looks green, feels rough, appears loud, and so on. Without the need for any kind of ‘inner scanner’, or the creation of any new causal connections or mechanisms, people would have achieved higher-order awareness of their own experiential states. And then once armed with this new set of recognitional concepts, subjects would have been open to the familiar and worrisome philosophical thought-experiments – ‘How do I know that red seems red to you? Maybe red seems green to you?’ and so on.

            Once people possessed higher-order recognitional concepts, and were capable of thoughts about their own experiences generally, then this would, no doubt, have had further advantages, helping to preserve and sustain the arrangement. Once you can reflect on your perceptual states, for example, you can learn by experience that certain circumstances give rise to perceptions which are illusory, and you can learn to withhold your first-order judgements in such cases. This may well be sufficient to qualify p-consciousness as an exaptation (like the black heron’s wings, which are now used more for shading the water while fishing than for flight; or like the penguin’s wings, which are now adapted for swimming, although they originally evolved for flying). But it is important to be clear that p-consciousness was not originally selected for, on the present account. Rather, it is a by-product of a mind-reading faculty (which was selected for) having access to perceptual representations.

 

7.2       HOT consumers and subjectivity

It might well be wondered how the mere availability to HOTs could confer on our perceptual states the positive properties distinctive of p-consciousness – that is, of states having a subjective dimension, or a distinctive subjective feel. The answer lies in the theory of content. I agree with Millikan (1984) that the representational content of a state depends, in part, upon the powers of the systems which consume that state. There is a powerful criticism here of ‘informational’, or ‘causal co-variance’ accounts of representational content, indeed (Botterill and Carruthers, 1999, ch.7). It is no good a state carrying information about some environmental property, if – so to speak – the systems which have to consume, or make use of, that state do not know that it does so. On the contrary, what a state represents will depend, in part, on the kinds of inferences which the cognitive system is prepared to make in the presence of that state, or on the kinds of behavioural control which it can exert.

            This being so, once first-order perceptual representations are present to a consumer-system which can deploy a theory of mind, and which contains recognitional concepts of experience, then this is sufficient to render those representations, at the same time, higher-order ones. This is what confers on our p-conscious experiences the dimension of subjectivity. Each experience is at the same time (while also representing some state of the world, or of our own bodies) a representation that we are undergoing just such an experience, by virtue of the powers of the mind-reading consumer-system. Each percept of green, for example, is at one and the same time a representation of green and a representation of seems green or experience of green. In fact, the attachment of a mind-reading faculty to our perceptual systems completely transforms the contents of the latter.
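
            The structure of the proposal can be displayed schematically. In the sketch below (an illustration only – the class names and the membership test are mine, and nothing here pretends to be a serious cognitive model), a single perceptual state counts as p-conscious purely in virtue of its availability to a suitably equipped consumer-system, with no higher-order state needing actually to be tokened:

```python
# Illustrative sketch of dispositionalist HOT theory (figure 2).

class Percept:
    def __init__(self, content):
        self.content = content                 # first-order content

class FirstOrderReasoner:
    """Belief-forming and practical-reasoning consumer-system."""
    def consume(self, percept):
        return f"belief: the surface is {percept.content}"

class MindReadingFaculty:
    """Consumer-system deploying higher-order recognitional concepts."""
    def consume(self, percept):
        return f"HOT: I am having an experience as of {percept.content}"

class ShortTermStoreC:
    """Perceptual store whose contents are available to all consumers."""
    def __init__(self, consumers):
        self.consumers = consumers
    def is_p_conscious(self, percept):
        # Mere availability to a HOT-capable consumer suffices; no HOT
        # about the percept has actually been generated.
        return any(isinstance(c, MindReadingFaculty) for c in self.consumers)

store = ShortTermStoreC([FirstOrderReasoner(), MindReadingFaculty()])
green = Percept("green")
print(store.is_p_conscious(green))             # True
for consumer in store.consumers:
    print(consumer.consume(green))             # one state, two contents
```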

            This is a good evolutionary story that dispositionalist HOT theory can tell, it seems to me. It does not require us to postulate anything beyond what most people think must have evolved anyway (integrated first-order perceptions, and a mind-reading faculty with access to those perceptions). Out of this, p-consciousness emerges without the need for any additional computational complexity or selectional pressure. So other things being equal (assuming that it can do all the work needed of a theory of p-consciousness – see my 2000), dispositionalist HOT theory is the theory to believe.

 

8          Evolution and HODs

The only real competitor left in the field, amongst higher-order representation (HOR) theories, is the higher-order descriptivism espoused by Dennett (1978, 1991). (Note that I shall abstract from the major differences between these works – particularly the claim in the latter that facts about consciousness are largely indeterminate – focusing just on the alleged connection with language.) On this account, p-conscious states are those perceptual contents which are available for reporting in speech (or writing, or for representing to oneself in ‘inner speech’). Dennett can (and does, 1991) tell a perfectly good story about the evolution of the required cognitive structures, in a number of stages.

 

8.1       HODs and evolution

First, hominids evolved a wide variety of specialist processing-systems for dealing with particular domains, organised internally along connectionist lines. Thus they may well have evolved specialist theory-of-mind systems; co-operative exchange systems; processors for dealing in naive physics and tool-making; processors for gathering and organising information about the living world; systems for selecting mates and directing sexual strategies; and so on – just as some evolutionary psychologists and archaeologists now suppose (Barkow et al., 1992; Mithen, 1996; Pinker, 1997). These systems would have operated independently of one another; and at this stage most of them would have lacked access to each other’s outputs. Although Dennett himself does not give a time-scale, this first stage could well have coincided with the period of massive brain-growth, lasting two or more million years, between the first appearance of Homo habilis and the evolution of archaic forms of Homo sapiens.

            Second, hominids then evolved a capacity to produce and process natural language; which was used in the first instance exclusively for purposes of inter-personal communication. This stage could well have coincided with the arrival of Homo sapiens sapiens in Southern Africa some 100,000 years ago. The resulting capacity for sophisticated and indefinitely complex communication would have immediately conferred on our species a decisive advantage, enabling more subtle and adaptable forms of co-operation, and more efficient accumulation and transmission of new skills and discoveries. And indeed, just as might be predicted, we do see Homo sapiens sapiens rapidly colonising the globe, displacing competitor hominid species; with Australia being reached for the first time by boat some 60,000 years ago. And the evidence is that our species was more efficient at hunting than its predecessors, and soon began to carve harpoons out of bone, taking up fishing for the first time (Mithen, 1996, pp.178-183).

            Finally, a new and clever trick caught on amongst our ancestors, giving rise to what is distinctive of the conscious human mind. As Dennett (1991) tells it, we began to discover that by asking ourselves questions, we could often elicit information which we did not know we had. Each of the specialist processing systems would have had access to the language faculty, and by generating questions through that faculty and receiving answers from it, these systems would have been able to interact quite freely and access one another’s resources for the first time. The result, thinks Dennett, is the Joycean machine – the constant stream of ‘inner speech’ which occupies so much of our waking lives, and which amounts to a new virtual processor (serial and digital) overlain on the parallel distributed processes of the human brain. This final stage might well have coincided with the explosion of culture around the globe some 40,000 years ago, including the use of beads and necklaces as ornaments; the burying of the dead with ceremonies; the working of bone and antler into complex weapons; and the production of carved statuettes and paintings (Mithen, 1996).
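
            Dennett’s final stage lends itself to a schematic sketch as well. Everything in the following toy – the module names, the question-and-answer protocol – is my own illustration rather than a reconstruction of his model; it displays only the bare idea of parallel specialist systems interacting serially through self-posed questions:

```python
# Toy rendering of the 'Joycean machine': parallel specialist modules
# interacting serially via self-directed questions in inner speech.

class Module:
    def __init__(self, name, knowledge):
        self.name, self.knowledge = name, knowledge
    def answer(self, question):
        return self.knowledge.get(question)    # None outside its domain

class LanguageFaculty:
    """A serial channel overlain on the parallel modules beneath it."""
    def __init__(self, modules):
        self.modules = modules
    def ask(self, question):
        # Broadcasting a self-posed question lets any module reply,
        # eliciting information the questioner 'did not know it had'.
        for module in self.modules:
            reply = module.answer(question)
            if reply is not None:
                return f"{module.name}: {reply}"
        return "no answer"

mind = LanguageFaculty([
    Module("naive physics", {"will the branch hold?": "too thin"}),
    Module("theory of mind", {"does he see the deer?": "probably"}),
])
print(mind.ask("does he see the deer?"))       # modules now interact
```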

 

8.2       HODs versus HOTs

This is a perfectly sensible evolutionary account, which can be made to fit the available archaeological and neuro-psychological data quite nicely. But what reason does it give us for thinking that p-conscious states are those which are available to (higher-order) linguistic description (HOD), rather than to higher-order thought (HOT)? After all, Dennett himself is eulogistic about HOT theories of consciousness, except that he thinks it unnecessary to insert a thought between an experience and our dispositions to describe it linguistically (1991, ch. 10); and he also allows that quite sophisticated mind-reading capacities would probably have been in place prior to the evolution of language, and independently of it in mature humans (personal communication). The vital consideration, I think, is that Dennett denies that there exists any thought realistically construed independently of language; and so, a fortiori, there are no genuine HOTs in the absence of language, either – it is only when those higher-order contents are formulated linguistically that we get discrete, structured, individually-causally-effective states; prior to that stage, it is merely that people can usefully be interpreted as entertaining HOTs, from the standpoint of the ‘Intentional Stance’ (on this, see Dennett, 1987).

            In arguing against Dennett’s HOD theory, then, I need to do two things. First, I need to argue that a mature capacity for HOTs would involve discrete, structured, states, and to argue this independently of any considerations to do with natural language. And second, I need to show that such a capacity is in fact independent of linguistic capacities – in evolution, development, and/or mature human cognition.

 

8.3       The case for structured HOTs

For the first stage of my case I borrow from Horgan and Tienson (1996), who show how the standard arguments for the view that thoughts must be carried by discrete structured states (generally thought to be sentences of an innate and universal symbolic system, or Mentalese) can be considerably strengthened. (The standard arguments are that only the Mentalese hypothesis can explain how thought can be systematic and productive; see Fodor, 1987). Horgan and Tienson ask just why propositional attitudes should be systematic. Is it merely a brute fact about (some) cognisers, that if they are capable of entertaining some thoughts, then they will also be capable of entertaining structurally related thoughts? They argue not, and develop what they call the tracking argument for Mentalese. Any organism which can gather and retain information about, and respond flexibly and intelligently to, a complex and constantly changing environment must, they claim, have representational states with compositional structure.

            Consider early hominids, for example, engaged in hunting and gathering. They would have needed to keep track of the movements and properties of a great many individuals – both human and non-human – updating their representations accordingly. While on a hunt, they would have needed to be alert for signs of prey, recalling previous sightings and patterns of behaviour, and adjusting their search in accordance with the weather and the season, while also keeping tabs on the movements, and special strengths and weaknesses, of their co-hunters. Similarly while gathering, they would have needed to recall the properties of many different types of plants, berries and tubers, searching in different places according to the season, while being alert to the possibility of predation, and tracking the movements of the children and other gatherers around them. Moreover, all such hominids would have needed to track, and continually up-date, the social and mental attributes of the others in their community (see below).

            Humans (and other intelligent creatures) need to collect, retain, up-date, and reason from a vast array of information, both social and non-social. There seems no way of making sense of this capacity except by supposing that it is subserved by a system of compositionally structured representational states. These states must, for example, be formed from distinct elements representing individuals and their properties, so that the latter may be varied and up-dated while staying predicated of one and the same thing.

            This very same tracking-argument applies – indeed, applies par excellence – to our capacity for higher-order thoughts (HOTs), strongly suggesting that our mind-reading faculty is so set up as to represent, process, and generate structured representations of the mental states of ourselves and other people. The central task of the mind-reading faculty is to work out and remember who perceives what, who thinks what, who wants what, who feels what, and how different people are likely to reason and respond in a wide variety of circumstances. And all these representations have to be continually adapted and updated. It is very hard indeed to see how this task could be executed, except by operating with structured representations, elements of which stand for individuals, and elements of which stand for their mental properties; so that the latter can be varied and altered while keeping track of one and the same individual. Then on the assumption that a mind-reading faculty would have been in place prior to the evolution of natural language, and/or that it can remain intact in modern humans in the absence of language, we get the conclusion that HOTs (realistically construed) are independent of language.

            The demand for structured representations to do the work of the mind-reading faculty is even more powerful than the above suggests. For HOTs are characteristically relational (people have thoughts about things; they have desires for things; they have feelings about other people; and so on) and they admit of multiple embeddings. (I may attribute to John the thought that Mary does not like him, say; and this may be crucial in predicting or explaining his behaviour.) In addition, HOTs can be acquired and lost on a one-off basis, not learned gradually following multiple exposures, like a capacity to recognise a new kind of object. (Pattern-recognition is what connectionist networks do best, of course; but they normally still require extensive training regimes. One-off learning is what connectionist networks do worst, if they can do it at all.) When I see John blushing as Mary smiles at him, I may form the belief that he thinks she likes him. But then later when I see her beating him furiously with a stick, I shall think that he has probably changed his mind. How this could be done without a system of structured representations is completely mysterious; and the chance that it might be done by some sort of distributed connectionist network – in which there are no elements separately representing John, Mary and the likes-relation – looks vanishingly small.
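
            The computational demand being made here is easy to display. The following sketch (its data structures are illustrative only) shows the minimal shape of relational, multiply-embedded, one-off-updatable attributions – distinct elements standing for individuals, and distinct elements standing for their mental properties:

```python
# Illustrative sketch of structured higher-order representations.

from dataclasses import dataclass
from typing import Union

@dataclass
class Attitude:
    subject: str                      # element standing for an individual
    relation: str                     # element standing for a mental property
    content: Union["Attitude", str]   # permits multiple embedding

# One-off acquisition: John blushes as Mary smiles at him.
johns_state = Attitude("John", "thinks", Attitude("Mary", "likes", "John"))

# One-off revision: Mary beats John with a stick. The embedded predicate
# is replaced while the elements tracking John and Mary stay fixed -
# just the operation that a holistically distributed encoding, with no
# separate elements for John, Mary and the likes-relation, cannot perform.
johns_state.content = Attitude("Mary", "dislikes", "John")

print(johns_state)
```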

 

8.4       The independence of HOTs from language

How plausible is it that such structured higher-order representations are independent of natural language? Many theories of the evolution of language – especially those falling within a broadly Gricean tradition – presuppose that they are. On these accounts, language began with hominids using arbitrary ‘one-off’ signals to communicate with one another, requiring them to go in for elaborate higher-order reasoning concerning each other’s beliefs and intentions (Origgi and Sperber, this volume). For example, in the course of a hunt I may move my arm in a circular motion so as to get you to move around to the other side of our prey, to drive it towards me. Then on Grice’s (1957, 1969) analysis, I make that movement with the intention that you should come to believe that I want you to move around behind, as a result of you recognising that this is my intention. Plainly such communicative intentions are only possible for beings with a highly developed and sophisticated mind-reading faculty, capable of representing multiple higher-order embeddings.

            A number of later theorists have developed rather less elaborate accounts of communication than Grice. For example, Searle (1983) argues that the basic kind of intention is that I should be recognised as imposing a particular truth-condition on my utterance. And Sperber and Wilson (1986/1995) explain communication in terms of intentions and expectations of relevance. But these accounts still presuppose that communicators are capable of higher-order thought (HOT). In the case of Searle, this is because the concepts of truth and falsity – presupposed as already possessed by the first language-users – would require an understanding of true and false belief (Papineau, this volume). And in the case of Sperber and Wilson, it is because calculations of relevance involve inferences concerning others’ beliefs, goals, and expectations.

            On a contrasting view, it is possible that there was only a fairly limited mind-reading capacity in existence prior to the evolution of language; and that language and a capacity for structured HOTs co-evolved (see Gómez, 1998, for an account of this sort). Even if this were so, however, it would remain an open question whether language would be implicated in the internal operations of the mature mind-reading faculty. Even if they co-evolved, it may well be that structured HOTs are possible for contemporary individuals in the absence of language.

            In so far as there is evidence bearing on this issue, it supports the view that structured HOTs can be entertained independently of natural language. One sort of evidence relates to those deaf people who grow up isolated from deaf communities, and who do not learn any form of syntactically-structured Sign until quite late (Sacks, 1989; Goldin-Meadow and Mylander, 1990; Schaller, 1991). These people nevertheless devise systems of ‘home-sign’ of their own, and often engage in elaborate pantomimes to communicate their meaning. These look like classic cases of Gricean communication; and they suggest that a capacity for sophisticated HOTs is fully intact in the absence of natural language.

            Another sort of evidence relates to the capacities of aphasics, who have lost the ability to produce or comprehend language. Such people are generally quite adept socially, suggesting that their mind-reading abilities remain intact. This has now been confirmed experimentally: Varley (1998) reports conducting a series of mind-reading tests (which examine for grasp of the notions of belief and false belief) with an agrammatic aphasic man. This person has severe difficulties in both producing and comprehending anything resembling a sentence (particularly sentences involving verbs). So it seems very unlikely that he would be capable of entertaining a natural language sentence of the form, ‘A believes that P’. Yet he passed almost all of the tests undertaken (which were outlined to him by a combination of pantomime and single-word explanation).

            It seems, then, that a capacity for HOTs can be retained in the absence of language. But we also have the tracking argument for the conclusion that a capacity for HOTs requires discrete, structured representations. Together these give us the conclusion that higher-order thought, realistically construed, is independent of language, even in the case of human beings. And so there is reason to prefer a dispositionalist HOT theory over Dennett’s dispositionalist HOD theory.

 

9          Conclusion

Evolutionary considerations cannot help us, if our goal is to argue against mysterian views of p-consciousness, or against first-order representationalist (FOR) theories. But they do provide us with good reason to prefer a dispositionalist higher-order thought (HOT) account of p-consciousness, over either actualist HOT theory, on the one hand, or higher-order experience (HOE) theory, on the other; and they also have a role to play in demonstrating the superiority of dispositionalist HOT theory over dispositionalist higher-order description (HOD) theory.

 

 

Acknowledgements

I am grateful to George Botterill, Andrew Chamberlain, Susan Granger and an anonymous referee for Cambridge University Press for comments on earlier drafts of this chapter. This chapter extracts, re-presents, and weaves together material from my 2000, chs. 1, 5, 8 and 10; with thanks to the publishers, Cambridge University Press.

 


 

References

Armstrong, D. 1968. A Materialist Theory of the Mind. London: Routledge.

Armstrong, D. 1984. Consciousness and causality. In D. Armstrong and N. Malcolm, Consciousness and Causality. Oxford: Blackwell.

Barkow, J., Cosmides, L., and Tooby, J. (eds.) 1992. The Adapted Mind. Oxford: Oxford University Press.

Baron-Cohen, S. 1995. Mindblindness. Cambridge, MA: MIT Press.

Bickerton, D. 1995. Language and Human Behaviour. Seattle: University of Washington Press. (London: UCL Press, 1996.)

Block, N. 1995. A confusion about a function of consciousness. Behavioural and Brain Sciences, 18, 227-247.

Botterill, G. and Carruthers, P. 1999. Philosophy of Psychology. Cambridge: Cambridge University Press.

Byrne, R. 1996. The Thinking Ape. Oxford: Oxford University Press.

Byrne, R. and Whiten, A. (eds.) 1988. Machiavellian Intelligence. Oxford: Oxford University Press.

Carruthers, P. 1996a. Language, Thought and Consciousness. Cambridge: Cambridge University Press.

Carruthers, P. 1996b. Simulation and self-knowledge: a defence of theory-theory. In P. Carruthers and P.K. Smith (eds.), Theories of Theories of Mind, 22-38. Cambridge: Cambridge University Press.

Carruthers, P. 1998a. Thinking in language? Evolution and a modularist possibility. In P. Carruthers and J. Boucher (eds.), Language and Thought, 94-119. Cambridge: Cambridge University Press.

Carruthers, P. 1998b. Conscious thinking: language or elimination? Mind and Language, 13, 323-342.

Carruthers, P. 2000. Phenomenal Consciousness: a naturalistic theory. Cambridge: Cambridge University Press.

Chalmers, D. 1996. The Conscious Mind. Oxford: Oxford University Press.

Crick, F. and Koch, C. 1990. Towards a neurobiological theory of consciousness. Seminars in the Neurosciences, 2, 263-275.

Dennett, D. 1978. Towards a cognitive theory of consciousness. In his Brainstorms, 149-173. Hassocks, Sussex: Harvester Press.

Dennett, D. 1987. The Intentional Stance. Cambridge, MA: MIT Press.

Dennett, D. 1991. Consciousness Explained. London: Penguin Press.

Dretske, F. 1993. Conscious experience. Mind, 102, 263-283.

Dretske, F. 1995. Naturalizing the Mind. Cambridge, MA: MIT Press.

Fodor, J. 1987. Psychosemantics. Cambridge, MA: MIT Press.

Gennaro, R. 1996. Consciousness and Self-Consciousness. Amsterdam: John Benjamins Publishing.

Goldin-Meadow, S. and Mylander, C. 1990. Beyond the input given. Language, 66, 323-355.

Goldman, A. 1993. The psychology of folk psychology. Behavioural and Brain Sciences, 16, 15-28.

Grice, H.P. 1957. Meaning. Philosophical Review, 66, 377-388.

Grice, H.P. 1969. Utterer’s meaning and intentions. Philosophical Review, 78, 147-177.

Gómez, J-C. 1998. Some thoughts about the evolution of LADS: with special reference to TOM and SAM. In P. Carruthers and J. Boucher (eds.), Language and Thought, 94-119. Cambridge: Cambridge University Press.

Gould, S. and Lewontin, R. 1979. The spandrels of San Marco and the Panglossian paradigm. Proceedings of the Royal Society, B205, 581-598.

Hill, C. 1991. Sensations: a Defence of Type Materialism. Cambridge: Cambridge University Press.

Horgan, T. and Tienson, J. 1996. Connectionism and Philosophy of Psychology. Cambridge, MA: MIT Press.

Humphrey, N. 1986. The Inner Eye. London: Faber and Faber.

Jackson, F. 1982. Epiphenomenal qualia. Philosophical Quarterly, 32, 127-136.

Jackson, F. 1986. What Mary didn’t know. Journal of Philosophy, 83, 291-295.

Kirk, R. 1994. Raw Feeling. Oxford: Oxford University Press.

Lewis, D. 1990. What experience teaches. In W. Lycan (ed.), Mind and Cognition, 499-519. Oxford: Blackwell.

Loar, B. 1990. Phenomenal states. Philosophical Perspectives, 4, 81-108.

Lycan, W. 1996. Consciousness and Experience. Cambridge, MA: MIT Press.

McGinn, C. 1991. The Problem of Consciousness. Oxford: Blackwell.

Millikan, R. 1984. Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press.

Mithen, S. 1996. The Prehistory of the Mind. London: Thames and Hudson.

Nagel, T. 1974. What is it like to be a bat? Philosophical Review, 83, 435-450.

Nagel, T. 1986. The View from Nowhere. Oxford: Oxford University Press.

Nichols, S., Stich, S., Leslie, A., and Klein, D. 1996. Varieties of off-line simulation. In P. Carruthers and P.K. Smith (eds.), Theories of Theories of Mind, 39-74. Cambridge: Cambridge University Press.

Pinker, S. 1997. How the Mind Works. London: Penguin.

Rosenthal, D. 1986. Two concepts of consciousness. Philosophical Studies, 49, 329-359.

Rosenthal, D. 1993. Thinking that one thinks. In M. Davies and G. Humphreys (eds.), Consciousness, 197-223. Oxford: Blackwell.

Sacks, O. 1989. Seeing Voices. London: Picador.

Schaller, S. 1991. A Man Without Words. New York: Summit Books.

Searle, J. 1983. Intentionality. Cambridge: Cambridge University Press.

Sperber, D. and Wilson, D. 1986. Relevance: Communication and Cognition. Oxford: Blackwell. (Second edition, 1995.)

Stich, S. 1983. From Folk Psychology to Cognitive Science. Cambridge, MA: MIT Press.

Tye, M. 1995. Ten Problems of Consciousness. Cambridge, MA: MIT Press.

Varley, R. 1998. Aphasic language, aphasic thought. In P. Carruthers and J. Boucher (eds.), Language and Thought, 128-145. Cambridge: Cambridge University Press.

Wellman, H. 1990. The Child’s Theory of Mind. Cambridge, MA: MIT Press.


Figure 1

Figure 2