iCog Blog

Trusting the Uncanny Valley: Exploring the Relationship Between AI, Mental State Ascriptions, and Trust.

[Image: a humanoid android pictured alongside its human creator]

Henry Powell – PhD Candidate in Philosophy at the University of Warwick

Interactive artificial agents such as social and palliative robots have become increasingly prevalent in the educational and medical fields (Coradeschi et al. 2006). Different kinds of robots, however, seem to engender different kinds of interactive experiences from their users. Social robots, for example, tend to afford positive interactions that look analogous to the ones we might have with one another. Industrial robots, on the other hand, rarely, if ever, are treated in the same way. Some very lifelike humanoid robots seem to fall outside both of these spheres, inspiring feelings of discomfort or disgust in people who come into contact with them. One way of understanding why this phenomenon obtains is via a conjecture developed by the Japanese roboticist Masahiro Mori in 1970 (Mori, 1970, pp. 33-35). This so-called “uncanny valley” conjecture has a number of potentially interesting theoretical ramifications. Most importantly, it may help us to understand a set of conditions under which humans could ascribe mental states to beings without minds – in this case, the suggestion is that trusting an artificial agent can lead one to do just that. With this in mind, the aims of this post are two-fold. Firstly, I wish to provide an introduction to the uncanny valley conjecture; secondly, I want to raise doubts concerning its ability to shed light on the conditions under which mental state ascriptions occur, specifically in experimental paradigms that take subjects to be trusting their AI co-actors.

Mori’s uncanny valley conjecture proposes that as robots increase in their likeness to human beings, their familiarity likewise increases. This trend continues up to a point at which their lifelike qualities are such that we become uncomfortable interacting with them. At around 75% human likeness, robots are seen as uncannily like human beings and are viewed with discomfort, or, in more extreme cases, disgust, significantly hindering their potential to galvanise positive social interactions.

[Figure: Mori's uncanny valley graph, plotting familiarity against human likeness]
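For readers who prefer a quantitative picture, the short sketch below plots a purely illustrative likeness-familiarity curve with a dip around 75% human likeness. The functional form is an arbitrary toy choice of mine, not anything proposed by Mori; it is only meant to reproduce the qualitative shape of the conjecture.

```python
# Toy rendering of the qualitative shape of Mori's conjecture. The function
# below is invented purely for illustration; Mori's original figure was a
# hand-drawn sketch, not a fitted curve.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)   # 0 = plainly mechanical, 1 = healthy human

# Affinity rises with likeness but is interrupted by a sharp "valley" near 75%.
affinity = likeness - 1.6 * np.exp(-((likeness - 0.75) ** 2) / 0.003)

plt.plot(likeness * 100, affinity)
plt.axvline(75, linestyle="--", linewidth=0.8)
plt.xlabel("Human likeness (%)")
plt.ylabel("Familiarity (arbitrary units)")
plt.title("Illustrative uncanny valley curve (toy function)")
plt.show()
```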

This effect has been explained in a number of ways. For instance, Saygin et al. (2011, 2012) have suggested that the uncanny valley effect is produced when there is a perceived incongruence between an artificial agent’s form and its motion. If an agent is seen to be clearly robotic but moves in a very human-like way, or vice versa, there is an incompatibility effect in the predictive, action-simulating cognitive mechanisms that seek to pick out and forecast the actions of humanlike and non-humanlike objects. This predictive coding mechanism is supplied with contradictory information by the visual system ([human agent] with [nonhuman movement]) that prevents it from carrying out predictive operations with its normal degree of accuracy (Urgen & Miller, 2015). I take it that the output of this cognitive system is presented in our experience as being uncertain, and that this uncertainty accounts for the feelings of unease that we experience when interacting with these uncanny artificial agents.

Of particular philosophical interest in this regard is a strand of research suggesting that, in given situations, humans can be seen to ascribe mental states to artificial agents that fall outside the uncanny valley. This idea was put forward in studies by Kurt Gray & Daniel Wegner (2012) and by Maya Mathur & David Reichling (2009). As I believe the latter contains the most interesting evidential basis for thinking along these lines, I will limit my discussion here to that experiment.

Mathur & Reichling’s study had subjects take part in an “investment game” (Berg et al. 1995) – a widely accepted experimental standard for measuring trust – with a number of artificial agents whose facial features varied in their human likeness. This was to test whether subjects were willing to trust different kinds of artificial agents depending on where they fell on the uncanny valley scale. What they found was that subjects played the game in a way that indicated they trusted robots with certain kinds of facial features to act so as to reach an outcome that was mutually beneficial to both players, rather than one that favoured one player over the other. The authors surmised that, because the subjects seemed to trust these artificial agents in a way that suggested they had thought about what the artificial agents’ intentions might be, the subjects had ascribed mental states to their robotic partners in these cases.
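For context, here is a minimal sketch of one round of the Berg et al. (1995) investment game. The ten-unit endowment and the tripling of the transfer are the standard features of that protocol; the trustee's return policy in the example is a hypothetical stand-in of my own, not anything taken from Mathur and Reichling's robot stimuli.

```python
# Minimal sketch of one round of the Berg et al. (1995) investment game.
# The endowment and the tripling of the transfer follow the standard protocol;
# the trustee's return policy used in the example below is a hypothetical stand-in.

ENDOWMENT = 10    # units given to the investor at the start of a round
MULTIPLIER = 3    # every unit sent to the trustee is tripled in transit


def play_round(amount_sent, return_fraction):
    """Settle one round given the investor's transfer and the trustee's policy."""
    assert 0 <= amount_sent <= ENDOWMENT
    pot = amount_sent * MULTIPLIER              # what the trustee receives
    returned = round(pot * return_fraction)     # what the trustee chooses to send back
    return {
        "investor_payoff": ENDOWMENT - amount_sent + returned,
        "trustee_payoff": pot - returned,
        "amount_sent": amount_sent,             # often read as a behavioural index of trust
    }


# Example: a fairly trusting investor paired with a trustee that splits the pot evenly.
print(play_round(amount_sent=8, return_fraction=0.5))
# -> {'investor_payoff': 14, 'trustee_payoff': 12, 'amount_sent': 8}
```

Mutually beneficial play of the kind Mathur and Reichling describe corresponds, roughly, to the investor sending a large amount and the trustee returning enough that both end up better off than if nothing had been sent.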

It was proposed that the subjects believed the artificial agents to have mental states encompassing intentional propositional attitudes (beliefs, desires, intentions, etc.). This was because subjects seemed to assess the artificial agents’ decision-making processes in terms of what the robots’ “interests” in the various outcomes might be. This result is potentially very exciting, but I think it jumps to conclusions rather too quickly. I’d now like to briefly give my reasons for thinking so.

Mathur and Reichling seem to be making two claims in the discussion of their study’s results.

  (i) That subjects trusted the artificial agents.
  (ii) That this trust implies the ascription of mental states.

My objections are the following. I think that (i) is more complicated than the authors make it out to be, and that (ii) is just not at all obvious and does not follow from (i) when (i) is analysed in the proper way. Let us address (i) first, as it leads into the problem with (ii).

When elaborated, (i) amounts to the claim that the subjects believed that the artificial agents would act in a certain way and that this action would be satisfactorily reliable. I think that this is plausible, but I also think that the form of trust involved is not the one intended by Mathur and Reichling, and is thus uninteresting in relation to (ii). There are, as far as I can tell, at least two ways in which we can trust things. The first, and perhaps most interesting, form of trust is the one expressible in sentences like “I trust my brother to return the money that I lent him”. This implies that I think of my brother as the sort of person who would not, given the opportunity and upon rational reflection, do something contrary to what he had told me he would do. The second form of trust is the one we might have towards a ladder or something similar. We might say of such objects, “I trust that if I walk up this ladder it will not collapse, because I know that it is sturdy”. The difference here should be obvious. I trust the ladder because I can infer from its physical state that it will perform its designated function: it has no loose fixtures, rotting parts or anything else that might make it collapse when I walk up it. To trust the ladder in this way, the ladder does not have to make any commitment to the action expected of it on the basis of a given set of ethical standards. In the case of trusting my brother, by contrast, my trust in him is expressible as the belief that, given the opportunity to choose not to do what I have asked of him, he will choose in favour of what I have asked. The trust that I have in my brother requires that I believe that he has mental states that inform and help him to choose to act in favour of my request. One form of trust implies the existence of mental states; the other does not. As regards (ii), then, trust only implies mental states if it is of the form that I would ascribe to my brother in the example just given, and not if it is of the sort that we would normally extend to reliably functioning objects like ladders. So (ii) only follows from (i) if the former kind of trust is evinced, and not otherwise.

This analysis suggests that if we are to believe that the subjects in this experiment ascribed mental states to the artificial agents (or indeed that the subjects in any other experiment reaching the same conclusions did so), then we need sufficient reasons for thinking that the subjects were treating the artificial agents as I would treat my brother, and not as I would treat the ladder, with respect to ascriptions of trust. Mathur and Reichling are silent on this point, and so we have no good reason for thinking that mental state ascriptions were taking place in the minds of their subjects. While I do not think it is impossible that such a thing might obtain in some circumstances, it is just not clear from this experiment that it obtains in this instance.

What I have hopefully shown in this post is that it is important to proceed with caution when making claims about our willingness to ascribe minds to certain kinds of objects and agents (artificial or otherwise). Specifically, it is important to do so in relation to our ability to hold such things in seemingly special kinds of relations with ourselves, trust being an important example.

 

References:

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, Reciprocity, and Social History. Games and Economic Behavior, 10, 122-142.

Coradeschi, S., Ishiguro, H., Asada, M., Shapiro, S. C., Thielscher, M., Breazeal, C., … Ishida, H. (2006). Human-inspired robots. IEEE Intelligent Systems, 21(4), 74–85.

Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130.

MacDorman, K. F. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it. In CogSci-2005 workshop: toward social mechanisms of android science (pp. 106–118).

Mathur, M. B., & Reichling, D. B. (2009). An uncanny game of trust: Social trustworthiness of robots inferred from subtle anthropomorphic facial cues. In Human-Robot Interaction (HRI), 2009 4th ACM/IEEE International Conference, La Jolla, CA (pp. 313-314).

Saygin, A. P. (2012). What can the Brain Tell us about Interactions with Artificial Agents and Vice Versa? In Workshop on Teleoperated Androids, 34th Annual Conference of the Cognitive Science Society.

Saygin, A. P., & Stadler, W. (2012). The role of appearance and motion in action prediction. Psychological Research, 76(4), 388–394. http://doi.org/10.1007/s00426-012-0426-z

Urgen, B. A., & Miller, L. E. (2015). Towards an Empirically Grounded Predictive Coding Account of Action Understanding. Journal of Neuroscience, 35(12), 4789–4791.


Split Brains and the Compositional Metaphysics of Consciousness


Luke Roelofs – Postdoctoral Fellow in Philosophy at the Australian National University

The mammalian brain has an odd sort of redundancy: it has two hemispheres, each capable of supporting more-or-less normal human consciousness without the other. We know this because destroying, incapacitating, or removing one hemisphere leaves a patient who, despite some difficulties with particular activities, is clearly lucid and conscious. The puzzling implications of this redundancy are best brought out by considering the unusual phenomenon called the ‘split-brain’.

The hemispheres are connected by a bundle of nerve fibres called the corpus callosum, as well as both being linked to the non-hemispheric parts of the brain (the ‘brainstem’). To control the spread of epileptic seizures, some patients had their corpus callosum severed while leaving both hemispheres, and the brainstem, intact (Gazzaniga et al. 1962, Sperry 1964). These patients appear normal most of the time, with no abnormalities in thought or action, but when experimenters manage to present stimuli to sensory channels which will take them exclusively to one hemisphere or the other, strange dissociations appear. For example, when we show the word ‘key’ to the right hemisphere (such as by flashing it in the left half of the patient’s visual field), it cannot be verbally reported (because the left hemisphere controls language), but if we ask the patient to pick up the object they saw the word for, they will readily pick out a key – but only if they can use their left hand (controlled by the right hemisphere). Moreover, for example, if the patient is shown the word ‘keyring’, with ‘key’ going to the right hemisphere and ‘ring’ going to the left, they will pick out a key (with their left hand) and a ring (with their right hand), but not a keyring. They will even report having seen only the word ‘ring’, and deny having seen either ‘key’ or ‘keyring’.
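To make the lateralization explicit, here is a deliberately crude toy model of the 'keyring' dissociation just described. It merely hard-codes the textbook wiring assumed in the passage (each visual half-field projects to the opposite hemisphere, speech is controlled by the left hemisphere, and each hand by the opposite hemisphere); it is not a simulation of any real cognitive architecture.

```python
# Toy model of the split-brain "keyring" dissociation. It hard-codes the
# standard textbook wiring: the left visual field projects to the right
# hemisphere and vice versa; only the left hemisphere speaks; each hemisphere
# controls the opposite hand. Nothing here models real neural processing.

def present(left_visual_field, right_visual_field, callosum_intact):
    right_hemisphere = {left_visual_field}
    left_hemisphere = {right_visual_field}
    if callosum_intact:
        # With the corpus callosum intact, the hemispheres share their information.
        shared = right_hemisphere | left_hemisphere
        right_hemisphere = left_hemisphere = shared
    return {
        "verbal_report": sorted(left_hemisphere),     # speech: left hemisphere only
        "left_hand_picks": sorted(right_hemisphere),  # left hand: right hemisphere
        "right_hand_picks": sorted(left_hemisphere),  # right hand: left hemisphere
    }


print(present("key", "ring", callosum_intact=True))
# -> every response channel has access to both words
print(present("key", "ring", callosum_intact=False))
# -> the patient reports only 'ring'; the left hand picks the key, the right hand the ring
```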

Philosophical discussion of the split-brain phenomenon takes two forms: arguing in support of a particular account of what is going on (e.g. Marks 1980, Hurley 1998, Tye 2003, pp.111-129, Bayne & Chalmers 2003, pp.111-112, Bayne 2008, 2010, pp.197-220), or exploring how the case challenges the very way that we frame such accounts. A seminal example of the latter form is Nagel (1971) which reviews several ways to make sense of the split-brain patient – as one person, as two people, as one person who occasionally splits into two people, etc. – and rejects them all for different reasons, concluding that we have found a case where our ordinary concept of ‘a person’ breaks down and cannot be coherently applied. My work develops an idea in the vicinity of Nagel’s: that our ordinary concept of ‘a person’ can handle the split-brain phenomenon if we transform it to allow for composite subjectivity – something which we have independent arguments for.

Start with what Nagel says about one of the proposed interpretations of the split-brain patient: as two people inhabiting one body. Pointing out that when not in experimental situations, the patient shows fully integrated behaviour, he asks whether we can really refuse to ascribe all their behaviour to a single person, “just because of some peculiarities about how the integration is achieved” (Nagel 1971, p.406). Of course sometimes two people do seem to work ‘as one’, as in “pairs of individuals engaged in a performance requiring exact behavioral coordination, like using a two-handed saw, or playing a duet.” Perhaps the two hemispheres are like this? But Nagel worries that this position is unstable:

“If we decided that they definitely had two minds, then [why not] conclude on anatomical grounds that everyone has two minds, but that we don’t notice it except in these odd cases because most pairs of minds in a single body run in perfect parallel?” (Nagel 1971, p.409)

Nagel’s worry here is cogent: if we accept that there can be two distinct subjects despite it appearing for all the world as though there was only one, we seem to lose any basis for confidence that the same thing is not happening in other cases. He continues:

“In case anyone is inclined to embrace the conclusion that we all have two minds, let me suggest that the trouble will not end there. For the mental operations of a single hemisphere, such as vision, hearing, speech, writing, verbal comprehension, etc. can to a great extent be separated from one another by suitable cortical deconnections; why then should we not regard each hemisphere as inhabited by several cooperating minds with specialized capacities? Where is one to stop?” (Nagel 1971, Fn11)

Where indeed? If one apparently unified mind could be really a collection of interacting minds, why not think that all apparently unified minds are really such collections? What evidence could decide one way or the other? Taking this line seems to leave us with empirically undecidable questions about every mind we encounter.

What is striking is that this way of thinking isn’t problematic for anything other than minds – indeed it is platitudinous. Most things can be equally well understood as one or as many, because we are happy to regard them simultaneously as a collection of parts and as a single whole. What makes the split-brain phenomenon so perplexing is our difficulty in extending this attitude to minds.

Consider, for instance, the physical brain. Do we have one brain, or do we have several billion neurones, or even 8-or-so lobes? The answer of course is ‘all of the above’: the brain is nothing separate from the billions of neurones, in the right relationships, and neither are the 8 lobes anything separate from the brain (which they compose) or the neurones (which compose them). And as a result of the ease with which we shift between one-whole and many-parts modes of description, we can be sanguine about the question ‘how many brains does the split-brain patient have?’ There is some basis for saying ‘one’, and some basis for saying ‘two’, but it’s fine if we can’t settle on a single answer, because the question is ultimately a verbal one. There are all the normal parts of a brain, standing in some but not all of their normal relations, and so not fitting the criteria for being ‘a brain’ as well as they normally would. And there are two overlapping subsystems within the one whole, which individually fit the criteria for being ‘a brain’ moderately well. But there is no further fact about which form of description – calling the whole a brain or calling the two subsystems each a brain – is ultimately correct.

The challenge is to take the same relaxed attitude to the question ‘how many people?’ Here is what I would like to say: the two hemispheres are conscious, and the one brain that they compose is conscious in virtue of their consciousness and the relations between them. Under normal circumstances their interactions ensure that the composite consciousness of the whole brain is well-unified: in the split-brain experiments, their interactions are different and establish a lesser degree of unity. And each hemisphere is itself a composite of smaller conscious parts. This amounts to embracing what Nagel views as a reductio.

There is something very difficult to think through about the composite consciousness view. It seems as though if each hemisphere is someone, that’s one thing, and if the whole brain is someone, that’s another: they cannot be just two equivalent ways of describing the same state of affairs. And this intuitive resistance to seeing conscious minds as composed of others (call it the ‘Anti-Combination intuition’) goes well beyond the split-brain phenomenon. It has a long history in the form of the ‘simplicity argument’, which anti-materialist philosophers from Plotinus (1956, pp. 255-258, 342-356) to Descartes (1985, Volume 2, p.59) to Brentano (1987, pp. 290-301) have used to show the immateriality of the soul. In a nutshell, this argument says that since minds cannot be thought of as composite, they must be indivisible, and since all material things are divisible, the mind cannot be material (for further analysis see Mijuskovic 1984, Schachter 2002, Lennon & Stainton 2008). Nor is the significance of this difficulty just historical: many recent materialist theories either stipulate that no conscious being can be part of another (Putnam 1967, p.163, Tononi 2012, pp. 59-68), or else advance arguments based on the intuitive absurdity of consciousness in a being composed of other conscious beings (Block 1978, cf. Barnett 2008, Schwitzgebel 2015).

All of the just-cited authors take the Anti-Combination intuition as a datum, and draw conclusions from it about the nature of consciousness – conclusions up to and including substance dualism. I prefer the opposite approach: to see the Anti-Combination intuition as a fact about humans which impedes our understanding of how consciousness fits into the natural world, and thus as something which philosophers should seek to analyse, understand, and ultimately move beyond. As it happens, there is a group of contemporary philosophers engaged in just this task: constitutive panpsychists. Panpsychists think that the best explanation for human consciousness is that consciousness is a general feature of matter, and constitutive panpsychists see human consciousness as constituted out of simpler consciousnesses just as the human brain is constituted out of simpler physical structures. The most pressing objection to this view, which has received extensive recent discussion, is the ‘combination problem’: can multiple simple consciousnesses really compose a single complex consciousness (Seager 1995, p.280; Goff 2009; Coleman 2014; Mørch 2014; Roelofs 2014, Forthcoming-a, Forthcoming-b; Chalmers Forthcoming)? And this is at bottom the same issue as we have been grappling with concerning the split-brain phenomenon. In my research, I try to explore the Anti-Combination intuition, its basis, and how to move past it, with an eye both to the general metaphysical questions raised by constitutive panpsychism, and to particular neuroscientific phenomena like the split-brain.

 

References:

Barnett, David. 2008. ‘The Simplicity Intuition and Its Hidden Influence on Philosophy of Mind.’ Noûs 42(2): 308-335

Bayne, Timothy. 2008. ‘The Unity of Consciousness and the Split-Brain Syndrome.’ The Journal of Philosophy 105(6): 277-300.

Bayne, Timothy. 2010. The Unity of Consciousness. Oxford: Oxford University Press

Bayne, Timothy, & Chalmers, David. 2003. ‘What is the Unity of Consciousness?’ In Cleeremans, A. (ed.), The Unity of Consciousness: Binding, Integration, Dissociation, Oxford: Oxford University Press: 23-58

Block, Ned. 1978. ‘Troubles with Functionalism.’ In Savage, C. W. (ed.), Perception and Cognition: Issues in the Foundations of Psychology, University of Minnesota Press: 261-325

Brentano, Franz. 1987. The Existence of God: Lectures given at the Universities of Würzburg and Vienna, 1868-1891. Ed. and trans. Krantz, S., Nijhoff International Philosophy Series

Chalmers, David. Forthcoming­. ‘The Combination Problem for Panpsychism.’ In Bruntrup, G. and Jaskolla, L. (eds.), Panpsychism, Oxford: Oxford University Press

Coleman, Sam. 2014. ‘The Real Combination Problem: Panpsychism, Micro-Subjects, and Emergence.’ Erkenntnis 79: 19-44

Descartes, René. 1985. ‘Meditations on First Philosophy.’ Originally published 1641. In Cottingham, John, Stoothoff, Robert, and Murdoch, Dugald, (trans and eds.) The Philosophical Writings of Descartes, 2 vols., Cambridge: Cambridge University Press

Gazzaniga, Michael, Bogen, Joseph, and Sperry, Roger. 1962. ‘Some Functional Effects of Sectioning the Cerebral Commissures in Man.’ Proceedings of the National Academy of Sciences 48(10): 1765-1769

Goff, Philip. 2009. ‘Why Panpsychism doesn’t Help us Explain Consciousness.’ Dialectica 63(3): 289-311

Hurley, Susan. 1998. Consciousness in Action. Harvard University Press.

Lennon, Thomas, and Stainton, Robert. (eds.) 2008. The Achilles of Rationalist Psychology. Studies In The History Of Philosophy Of Mind, V7, Springer

Marks, Charles. 1980. Commissurotomy, Consciousness, and Unity of Mind. MIT Press

Mijuskovic, Benjamin. 1984. The Achilles of Rationalist Arguments: The Simplicity, Unity, and Identity of Thought and Soul From the Cambridge Platonists to Kant: A Study in the History of an Argument. Martinus Nijhoff.

Mørch, Hedda Hassel. 2014. Panpsychism and Causation: A New Argument and a Solution to the Combination Problem. Doctoral Dissertation, University of Oslo

Nagel, Thomas. 1971. ‘Brain Bisection and the Unity of Consciousness.’ Synthese 22:396-413.

Plotinus. 1956. Enneads. Trans. and eds. Mackenna, Stephen, and Page, B. S. London: Faber and Faber Ltd.

Putnam, Hilary. 1967. ‘Psychological Predicates.’ In Capitan, William, and Merrill, Daniel (eds.), Art, Mind, and Religion. Pittsburgh: University of Pittsburgh Press

Roelofs, Luke. 2014. ‘Phenomenal Blending and the Palette Problem.’ Thought 3:59–70.

Roelofs, Luke. Forthcoming-a. ‘The Unity of Consciousness, Within and Between Subjects.’ Philosophical Studies.

Roelofs, Luke. Forthcoming-b. ‘Can We Sum subjects? Evaluating Panpsychism’s Hard Problem.’ In Seager, William (ed.), The Routledge Handbook of Panpsychism, Routledge.

Schachter, Jean-Pierre. 2002. ‘Pierre Bayle, Matter, and the Unity of Consciousness.’ Canadian Journal of Philosophy 32(2): 241-265

Seager, William. 1995. ‘Consciousness, Information and Panpsychism.’ Journal of Consciousness Studies 2(3): 272-288

Sperry, Roger. 1964. ‘Brain Bisection and Mechanisms of Consciousness.’ In Eccles, John (ed.), Brain and Conscious Experience. Springer-Verlag

Tye, Michael. 2003. Consciousness and Persons: Unity and Identity. MIT Press

Tononi, Giulio. 2012. ‘Integrated information theory of consciousness: an updated account.’ Archives Italiennes de Biologie 150(2-3): 56-90

Investigating the Stream of Consciousness

Oliver Rashbrook-Cooper – British Academy Postdoctoral Fellow in Philosophy at the University of Oxford

There are a number of different ways in which we can fruitfully study our streams of consciousness. We might try to provide a detailed characterisation of how conscious experience seems ‘from the inside’, and closely scrutinize the phenomenology. We might try to uncover the structure of consciousness by focussing upon our temporal acuity, and examining when and how we are subject to temporal illusions. Or we might focus upon investigating the neural mechanisms upon which conscious experience depends.

Sometimes, these different approaches appear to yield contradictory results. In particular, the deliverances of introspection sometimes appear to be at odds with what is revealed both by certain temporal illusions and by research into neural mechanisms. When this occurs, what should we do? We can begin by considering two features of how consciousness phenomenologically seems.

It is natural to think of experience as unfolding in step with its objects. Over a ten-second interval, for instance, I might watch someone sprint 100 metres. If I watch this event, my experience will unfold over a ten-second interval. First I will hear the pistol fire, see the race begin, and so on, until I see the leader cross the finish line. My experience of the race has two features. Firstly, it seems to unfold in step with the race itself; secondly, it seems to unfold smoothly – it seems as if I am continuously aware of the race, rather than my awareness of it being fragmented into discrete episodes.

Can this characterisation of how things seem be reconciled with what we learn from other ways of investigating the stream of consciousness? To answer this question we can consider two different cases: the case of the colour phi phenomenon, and the case of discrete neural processing.

The colour phi phenomenon is a case in which the presentation of two static stimuli gives rise to an illusory experience of motion. When two coloured dots that are sufficiently close to one another are illuminated successively within a sufficiently brief window of time, one is left with the impression that there is a single dot moving from one location to the other.
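For concreteness, here is the structure of such a display in schematic form. The particular durations and locations are placeholder values of my own; the exact parameters vary from study to study.

```python
# Schematic colour phi trial. The timings and positions below are illustrative
# placeholders, not values taken from any particular experiment.

phi_trial = [
    {"onset_ms": 0,   "duration_ms": 150, "stimulus": "red dot at left location"},
    {"onset_ms": 200, "duration_ms": 150, "stimulus": "green dot at right location"},
]

for flash in phi_trial:
    print(flash)

# Despite the display containing only two stationary flashes, observers
# typically report a single dot moving from left to right.
```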

This phenomenon generates a puzzle about whether experience really unfolds in step with its objects. In order for us to experience apparent motion between the two locations, we need to register the occurrence of the second dot. This makes it seem as if the experience of motion can only occur after the second dot has flashed, for without registering the second dot, we wouldn’t experience motion at all. So it seems that, in this case, the experience of motion doesn’t unfold in step with its apparent object at all. If this is right, then we have reason to doubt that experience normally unfolds in step with its objects, for if we can be wrong about this in the colour phi case, perhaps we are wrong about it in all cases.

The second kind of case is the case of discrete neural processing. There is reason to think that the neural mechanisms underpinning conscious perception are discrete (see, for example, VanRullen and Koch, 2003). This looks to be in tension with the second feature we noted earlier – that our awareness of things appears to be continuous. As in the case of colour phi, it might be tempting to think that this tells us that our impression of how things seem ‘from the inside’ is mistaken.

However, when we consider how things really strike us phenomenologically, it becomes clear that there is an alternative way to reconcile these apparently contradictory results. We can begin by noting that when we introspect, it isn’t possible for us to focus our attention upon conscious experience without focussing upon a temporally extended portion of experience – there is always a minimal interval upon which we are able to focus.

The claims that experience seems to unfold in step with its objects and seems continuous apply to these temporally extended portions of experience that we are able to focus upon when we introspect. If this is right, then we have a different way of thinking about the colour phi case. On this approach, over an interval, we have an experience of apparent motion that unfolds over the time it takes the two dots to flash. The phenomenology is, however, neutral about what occurs over the sub-intervals of this experience.

The claim that this experience unfolds over an extended interval of time isn’t inconsistent with what goes on in the colour phi case. The apparent inconsistency only arises if we think that the claim that experience seems to unfold in step with its object applies to all of the sub-intervals of this experience, no matter how short (for development and discussion of this point, see Hoerl (2013), Phillips (2014), and Rashbrook (2013a)).

Likewise, in the case of discrete neural processing, in order for the case to generate a clash with how experience appears ‘from the inside’, our characterisation of how consciousness seems must apply not only to some temporally extended portions of consciousness, but to all of them, no matter how brief. Again, we might question whether this is really how things seem.

While experience doesn’t seem to be fragmented into discrete episodes, this certainly doesn’t mean that it seems to fill every interval for which we are conscious, no matter how brief (for discussion, see Rashbrook, 2013b). As in the case of the colour phi, perhaps our characterisation of how things seem applies only to temporally extended portions of experience – so the deliverances of introspection are simply neutral about whether conscious experience fills every instant of the interval it occupies.

There is more than one way, then, to reconcile the psychological and the phenomenological strategies of enquiring about conscious experience. Rather than taking non-phenomenological investigation to reveal the phenomenology to be misleading, perhaps we should take it as an invitation to think more carefully about how things seem ‘from the inside’.

 

References:

Hoerl, Christoph. 2013. ‘A Succession of Feelings, in and of Itself, is Not a Feeling of Succession’. Mind 122:373-417.

Phillips, Ian. 2014. The Temporal Structure of Experience. In Subjective Time: The Philosophy, Psychology, and Neuroscience of Temporality, ed. Dan Lloyd and Valtteri Arstila, 139-159. MIT Press.

Rashbrook, Oliver. 2013a. An Appearance of Succession Requires a Succession of Appearances. Philosophy and Phenomenological Research 87:584-610.

Rashbrook, Oliver. 2013b. The continuity of consciousness. European Journal of Philosophy 21:611-640.

VanRullen, Rufin, and Koch, Christof. 2003. Is perception discrete or continuous? Trends in Cognitive Sciences 7:207-13.

Infant Number Knowledge: Analogue Magnitude Reconsidered

Alexander Green, MPhil Candidate, Department of Philosophy, University of Warwick

Following Stanislas Dehaene’s The Number Sense (1997) there has been a surge in interest in number knowledge, especially the development of number knowledge in infants. This research has broadly focused on answering the following questions: What numerical abilities do infants possess, and how do these work? How are they different from the numerical abilities of adults, and how is the gap bridged in cognitive development?

The aim of this post is to provide a general introduction to infant number knowledge by focusing on the first two of these questions. There is much evidence indicating that there are two distinct systems by which infants are able to track and represent numerosity – parallel individuation and analogue magnitude. I will begin by briefly explaining what these numerical capacities are. I will then focus my discussion on the analogue magnitude system, and raise some doubts about the way in which this system is commonly understood to work.

Firstly, consider parallel individuation. This system allows infants to differentiate between sets of different quantities by tracking multiple individual objects at the same time (see Feigenson & Carey 2003; Feigenson et al. 2002; Hyde 2011). For example, if an infant were presented with three objects, parallel individuation would allow the tracking of the individual objects ({object 1, object 2, object 3}) rather than allowing the tracking of total set-size ({three objects}). There are two further points of interest about parallel individuation. Firstly, parallel individuation only represents numerosity indirectly, because it tracks individuals rather than total set-size. Secondly, it is limited to sets of fewer than four individuals.
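As a rough illustration of the contrast being drawn here, the sketch below models an object-file style tracker that represents individuals rather than a set size and fails once more than three items are present. The capacity limit of three is the figure implied by the passage; everything else about the implementation is an illustrative invention of mine.

```python
# Toy illustration of parallel individuation: each individual gets its own
# "object file", there is no explicit representation of total set size, and
# tracking fails beyond a capacity limit of about three items.

CAPACITY = 3

def parallel_individuation(objects):
    """Return one object file per individual, or None if capacity is exceeded."""
    if len(objects) > CAPACITY:
        return None                    # the system simply fails to keep track
    return [{"object_file": i, "item": obj} for i, obj in enumerate(objects, start=1)]

print(parallel_individuation(["ball", "duck", "cup"]))
# -> three separate object files, but nothing that explicitly says "three"
print(parallel_individuation(["ball", "duck", "cup", "block"]))
# -> None: more than three individuals cannot be tracked in parallel
```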

Secondly, consider analogue magnitude. This system allows infants to discriminate between set sizes provided that the ratio between them is sufficiently large (see Xu & Spelke 2000; Feigenson et al. 2004; Xu et al. 2005). More specifically, analogue magnitude allows infants to differentiate between sets provided that the ratio is at least 2:1. Interestingly, the precise cardinal value of the sets seems to be irrelevant as long as the ratio remains constant (i.e. it applies equally to a case of two versus four as to one of twenty versus forty). Thus the limitations of the analogue magnitude system are determined by ratio, in contrast to the parallel individuation system, whose limitations are determined by specific set-size.
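By way of contrast with the previous sketch, a toy version of the analogue magnitude limit might look like this: discrimination depends only on the ratio between two set sizes, not on their absolute values. The 2:1 threshold is the figure given in the passage; the implementation itself is mine.

```python
# Toy illustration of the ratio limit on analogue magnitude discrimination:
# two set sizes can be told apart only if the larger:smaller ratio is big
# enough, regardless of the absolute values involved.

def discriminable(n1, n2, ratio_threshold=2.0):
    """True if the larger:smaller ratio reaches the threshold (2:1 for young infants)."""
    return max(n1, n2) / min(n1, n2) >= ratio_threshold

print(discriminable(2, 4))      # True  (ratio 2:1)
print(discriminable(20, 40))    # True  (same ratio, much larger sets)
print(discriminable(8, 12))     # False (ratio 1.5:1 falls below the threshold)
```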

So how does analogue magnitude work? I will argue that the most recent answer to this question is incorrect. This is because contemporary authors rightly reject the original characterisation of analogue magnitude (the accumulator model), yet fail to reject its implications.

The accumulator model of analogue magnitude is introduced by Dehaene, by way of an analogy with Robinson Crusoe (1997, p.28). Suppose that Crusoe must count coconuts. To do this he might dig a hole next to a river, and dig a trench which links the river to this hole. He also creates a dam, such that he can control when the river flows into the hole. For every coconut Crusoe counts, he diverts some given amount of water into the hole. However as Crusoe diverts more water into the hole, it becomes more difficult to differentiate between consecutive numbers of coconuts (i.e. the difference between one and two diversions of water is easier to see than between twenty and twenty-one).

Dehaene supposes that analogue magnitude representations are given by a similar iconic format, i.e. by representing a physical magnitude proportional to the number of individuals in the set. Consider the following example: one object is represented by ‘_’, two objects are represented by ‘__’, three are represented by ‘___’, and so on. Under this model, analogue magnitude is understood to represent the approximate cardinal value of a set by the use of an iterative counting method (Dehaene 1997, p.29). This partly reflects the empirical data: subjects are able to represent differences in set size (with longer lines indicating larger sets), and the importance of ratio for differentiation is accounted for (because it is more difficult to differentiate between sets which differ by smaller ratios).
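A schematic implementation of this accumulator idea is sketched below. Each counted item adds one noisy increment to a running total, so the representation is built up iteratively (larger sets take more steps) and its imprecision grows with set size, which is why only sufficiently large ratios remain discriminable. The noise level is an arbitrary illustrative parameter, not an empirical estimate.

```python
# Schematic accumulator (Crusoe-style) model of analogue magnitude: each item
# adds one noisy increment of "water" to the hole, so the representation is
# formed iteratively and its variability grows with set size. The noise value
# is purely illustrative.
import random

def accumulate(set_size, noise=0.2):
    level = 0.0
    for _ in range(set_size):      # one pass per item: larger sets take longer to encode
        level += random.gauss(1.0, noise)
    return level                   # the resulting magnitude is roughly proportional to set size

random.seed(0)
for n in (3, 6, 20, 21):
    print(n, round(accumulate(n), 2))
# Across repetitions, well-separated ratios (3 vs 6) rarely overlap, whereas
# nearby large values (20 vs 21) overlap heavily and so cannot be discriminated.
```

Note that, on this model, forming the representation takes one step per item, a feature that matters for the empirical objections discussed next.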

More recently this accumulator model of analogue magnitude has come to be rejected, however. This model entails that each object in a set must be individually represented in turn (the first object produces the representation ‘_’, the second produces the representation ‘__’, etc). This suggests that it would take longer for a larger number to be represented than a smaller one (as the quantity of objects to be individually represented differs). However there are empirical reasons to reject this.

For example, there is evidence suggesting that the speed of forming analogue magnitude representations doesn’t vary between different set sizes (Wood & Spelke 2005). Additionally, infants are still able to discriminate between different set sizes in cases where they are unable to attend to the individual objects of a set in sequence (Intriligator & Cavanagh 2001). These findings suggest that it is incorrect to claim that analogue magnitude representations are formed by responding to individual objects in turn.

Despite these observations, many authors continue to endorse the implications of this accumulator model even though there isn’t empirical evidence to support them. The implications that I am referring to are that analogue magnitude represents approximate cardinal value, and that it does so by the aforementioned iconic format. For example, consider Carey’s discussions of analogue magnitude (2001, 2009). Carey takes analogue magnitude to enable infants to ‘represent the approximate cardinal value of sets’ (2009, p.127). As a result, the above iconic format (in which infants represent a physical magnitude proportional to the number of relevant objects) is still advocated (Carey 2001, p.38). This characterisation of analogue magnitude is typical of many authors (e.g. Feigenson et al. 2004; Slaughter et al. 2006; Feigenson et al. 2002; Lipton & Spelke 2003; Condry & Spelke 2008).

Given the rejection of the accumulator method, this characterisation seems difficult to justify. Analogue magnitude allows infants to differentiate between two quantities, but there seems to be no reason why this would require anything over and above the representation of ordinal value (i.e. ‘greater than’ and ‘less than’). Consequently, the claim that analogue magnitude represents approximate cardinal value seems to be both unjustified and unnecessary. Given this, there also seems to be no justification for the Crusoe-style iconic format, since it contributes nothing beyond allowing analogue magnitude to represent approximate cardinal value, which, as we have seen, is empirically undermined.

In this post I have discussed the abilities of parallel individuation and analogue magnitude, in answer to the question: what numerical abilities do infants possess, and how do these work? Parallel individuation allows infants to differentiate between small quantities of objects (fewer than four), and analogue magnitude allows differentiation between quantities if the ratio is sufficiently large. I have also advanced a negative argument against the dominant understanding of analogue magnitude. Many authors have rejected the iterative accumulator model without rejecting its implications (analogue magnitude as representing approximate cardinal value, and its doing so by iconic format). This suggests that the literature requires a new understanding of how the analogue magnitude system works.

 

References:

Carey, S. 2001. ‘Cognitive Foundations of Arithmetic: Evolution and Ontogenesis’. Mind & Language. 16(1): 37-55.

Carey, S. 2009. The Origin of Concepts. New York: OUP.

Condry, K., & Spelke, E. 2008. ‘The Development of Language and Abstract Concepts: The Case of Natural Number.’ Journal of Experimental Psychology: General. 137(1): 22-38.

Dehaene, S. 1997. The Number Sense: How the Mind Creates Mathematics. Oxford: OUP.

Feigenson, L., Carey, S., & Hauser, M. 2002. ‘The Representations Underlying Infants’ Choice of More: Object Files versus Analog Magnitudes’. Psychological Science. 13(2): 150-156.

Feigenson, L., & Carey, S. 2003. ‘Tracking Individuals via Object-Files: Evidence from Infants’ Manual Search’. Developmental Science. 6(5): 568-584.

Feigenson, L., Dehaene, S., & Spelke, E. 2004. ‘Core Systems of Number’. Trends in Cognitive Sciences. 8(7): 307-314.

Hyde, D. 2011. ‘Two Systems of Non-Symbolic Numerical Cognition’. Frontiers in Human Neuroscience. 5: 150.

Intriligator, J., & Cavanagh, P. 2001. ‘The Spatial Resolution of Visual Attention’. Cognitive Psychology. 43: 171-216.

Lipton, J., & Spelke, E. 2003. ‘Origins of Number Sense: Large-Number Discrimination in Human Infants’. Psychological Science. 14(5): 396-401.

Slaughter, V., Kamppi, D., & Paynter, J. 2006. ‘Toddler Subtraction with Large Sets: Further Evidence for an Analog-Magnitude Representation of Number’. Developmental Science. 9(1): 33-39.

Wagner, J., & Johnson, S. 2011. ‘An Association between Understanding Cardinality and Analog Magnitude Representations in Preschoolers’. Cognition. 119(1): 10-22.

Wood, J., & Spelke, E. 2005. ‘Chronometric Studies of Numerical Cognition in Five-Month-Old Infants’. Cognition. 97(1): 23-29.

Xu, F., & Spelke, E. 2000. ‘Large Number Discrimination in 6-Month-Old Infants’. Cognition. 74(1): B1-B11.

Xu, F., Spelke, E., & Goddard, S. 2005. ‘Number Sense in Human Infants’. Developmental Science. 8(1): 88-101.

The Mental Causation Question and Emergence

Dr. Umut Baysan – University Teacher in Philosophy at the University of Glasgow

How can the mind causally influence a world that is, ultimately, made up of physical stuff? This is one way of asking the mental causation question, where mental causation is the type of causation in which either the cause or the effect is a mental event or property. The question can also be put this way: How can mental events or properties (such as beliefs, desires, sensations, and so on) cause other events? Discussion of the mental causation question dates back at least to Princess Elizabeth of Bohemia’s challenge to Descartes, who took the mind to be a non-physical substance. Elizabeth’s question to Descartes was how one can make sense of the idea that the mind could move the body, or the body could influence the mind, if they are two distinct substances as such.

We take mental causation to be real. The reality of mental causation is so central to our philosophical thinking that the view that there is no such thing as mental causation, namely epiphenomenalism, has a crucial dialectical role in philosophical argumentation in the metaphysics of mind. As with Elizabeth’s criticism of Descartes, views in the metaphysics of mind are sometimes evaluated on this basis. In terms of their roles in philosophical argumentation, I find epiphenomenalism and radical scepticism to be very similar. In epistemology, radical scepticism is the view that there is no such thing as knowledge of the external world. Although pretty much everyone takes radical scepticism to be false, some epistemologists still devote time to showing why this is the case, as a view’s implication of radical scepticism is taken to be reason enough to dispense with it. Likewise in the metaphysics of mind, nearly everyone thinks that epiphenomenalism is false, but there is a very sizable literature trying to show how this is so. For this reason, we often find charges of epiphenomenalism in reductio arguments.

Although there may have been ways of tackling Princess Elizabeth’s challenge to Descartes, the difficulty of doing so moved many contemporary philosophers towards an ontologically physicalist view according to which, at least in the actual world, there are only physical substances. Now, once we get rid of all non-physical substances from our ontology (substance physicalism) and yet still hold on to the existence of minds (realism about the mind), the next set of questions is: What should we do with the properties of such minds? What are mental properties? Can mental properties be reduced to physical properties?

For the sake of brevity, I shall not recite the reasons why such a reduction cannot be maintained, so let’s just assume that mental properties are not physical properties. (For seminal work on this point, see Putnam 1967.) In a world with purely physical substances, some of which have irreducibly mental properties, it might look as if the mental causation question can be answered easily. Mental events can cause physical events (or vice versa); such a causal relation doesn’t require the interaction of physical and non-physical substances, so the problem of causal interaction evaporates.

Emergentism is a view, or rather a group of views, according to which substance physicalism is true and mental properties are irreducibly mental. There are (at least) two varieties of emergentism. The weak variety, which sometimes goes by the name “non-reductive physicalism”, takes mental properties to be realized by physical properties. (For my work on what it is for a property to be realized by another property, see Baysan 2015). The strong variety, which goes by the name (surprise surprise!) “strong emergentism”, holds that (at least some) mental properties are as fundamental as physical properties to the extent that they need not be realised by physical properties. (See Barnes 2012 for an account of strong emergence along these lines. For joint discussions of weak and strong emergence, see Chalmers 2006 and Wilson 2015.)

Some contemporary metaphysicians of mind, most notably Jaegwon Kim (2005), think that epiphenomenalism is still a threat to emergentism. It is thought to be a problem for the weak, non-reductive physicalist variety because of the following line of thought. The physical world is supposed to be causally closed in the sense that if a physical event has a cause at any time, then at that time it has a sufficient physical cause. Thus, if a physical event is caused by a mental event (or property), it must be fully caused by a physical event (or property) too. If all this is true, then every physical event that has a mental cause must be causally overdetermined. (Here, the idea is that causation implies determination, and having more than one fully sufficient cause implies overdetermination.) The acceptance of such systematic causal overdetermination is taken to be absurd; the world can’t have that much redundant causation. Therefore, the combination of non-reductive physicalism and the reality of mental causation is not tenable. That is the charge, anyway.

Now, what about strong emergentism? In a nutshell, defenders of this view can reject the idea that the physical domain is causally closed in the way that non-reductive physicalists typically assume. Given its anti-physicalist assumption that some properties other than the physical ones can be fundamental too, rejecting the causal closure principle is definitely a live option for strong emergentism. However, according to some, that is precisely the problem with this view. From a scientific or naturalistic point of view, how can we defend such a view if its best way of accommodating mental causation is through rejecting the causal closure of the physical domain?

The picture that I have portrayed thus far seems to suggest that unless we go all the way and reduce mental properties to physical properties, there isn’t any room for mental causation. This is what Kim and others have been trying to persuade us of over the years. But is the reasoning that has led us here really solid? Should all of the argumentative steps briefly sketched above be accepted? I have some doubts.

First, there is an emerging (pun intended) consensus that the causal argument against non-reductive physicalism sketched above has some flaws. Some philosophers aren’t convinced that non-reductive physicalism, as Kim portrays it, really implies causal overdetermination (see Yablo 1992 for a seminal account). Very roughly, the idea is that the appearance of causal overdetermination arises only if we count a whole event and its parts as two separate causes of the same effect. But taking a whole and its parts separately is surely “double counting”. Also, in presentations of the causal argument against non-reductive physicalism, we often come across the idea that if an event has two distinct sufficient causes, it must be genuinely causally overdetermined – this is known as “the exclusion principle”. But a principle with such a crucial dialectical role needs some backing up, and some authors have noted that there doesn’t seem to be any positive argument for the truth of the exclusion principle. (For a criticism along these lines, see Árnadóttir and Crane 2013.)

Second, the reason to resist strong emergentism sketched above can be questioned too. Do we really have good reasons to think that the physical domain is causally closed? I don’t think that we can play the causal closure card unless we carefully study the reasons that are given in favour of it. Considering its importance in argumentation in the metaphysics of mind, it would be fair to say that there hasn’t been enough attention given to the positive reasons for holding it. I am aware of three arguments for the causal closure principle: (1) Lycan’s (1987) argument that it is absurd to think that laws of conservation hold everywhere in the universe with the exception of the human skull; (2) McLaughlin’s (1992) suggestion that the failure of the causal closure principle was a scientific hypothesis in chemistry which was eventually falsified (in chemistry!); and (3) Papineau’s (2002) argument that the principle is inductively verified by the practice of 20th century physiologists. This is not the place to examine these three arguments in detail, but I think it is fair to say that they don’t even attempt to be conclusive. The closure principle may turn out to be true, but whether that is the case will be an empirical matter of fact, and until we somehow establish it empirically, we need to devise more solid philosophical arguments for it.

I hope this short discussion has persuaded you that whichever view in the metaphysics of mind turns out to be true, the mental causation question will play some role in determining its plausibility.

References:

Árnadóttir, S. and Crane, T. (2013). ‘There is no Exclusion Problem’, in Mental Causation and Ontology, eds. S. C. Gibb, E. J. Lowe, and R. D. Ingthorsson (Oxford: Oxford University Press).

Barnes, E. (2012). ‘Emergence and Fundamentality’. Mind, 121, pp. 873–901.

Baysan, U. (2015) ‘Realization Relations in Metaphysics’, Minds and Machines 25, pp. 247–60.

Chalmers, D. (2006). ‘Strong and Weak Emergence’, in The Re-Emergence of Emergence , eds. P. Clayton & P. Davies (Oxford: Oxford University Press).

Kim, J. (2005) Physicalism or Something Near Enough. Princeton, NJ: Princeton University Press.

Lycan, W. (1987). Consciousness. Cambridge, MA: MIT Press.

McLaughlin, B. (1992). ‘The Rise and Fall of British Emergentism’, in Emergence or Reduction?: Prospects for Nonreductive Physicalism, eds. A. Beckermann, H. Flohr & J. Kim (De Gruyter).

Papineau, D. (2002). Thinking about Consciousness. Oxford: Oxford University Press.

Putnam, H. (1967). ‘Psychological Predicates’, in Art, Mind, and Religion, eds. W.H. Capitan & D.D. Merrill (Pittsburgh: University of Pittsburgh Press).

Wilson, J. (2015). ‘Metaphysical Emergence: Weak and Strong’, in Metaphysics in Contemporary Physics: Poznan Studies in the Philosophy of the Sciences and the Humanities, eds. T. Bigaj and C. Wuthrich (Leiden: Brill).

Yablo, S. (1992) ‘Mental Causation’. Philosophical Review, 101, pp. 245–280.

The Cognitive Impenetrability of Recalcitrant Emotions

Dr. Raamy Majeed – Postdoctoral Research Fellow on the John Templeton Foundation project ‘New Directions in the Study of Mind’ in the Faculty of Philosophy, University of Cambridge, and By-Fellow, Churchill College, University of Cambridge

Consider the following emotional episodes. You fear Fido, your neighbour’s dog you judge to be harmless. You are angry with your colleague, even though you know his remark wasn’t really offensive. You are jealous of your partner’s friend, despite believing that she doesn’t fancy him. D’Arms and Jacobson (2003) call these recalcitrant emotions: emotions that exist “despite the agent’s making a judgment that is in tension with it” (pg. 129). The phenomenon of emotional recalcitrance is said to raise a challenge for theories of emotions. Drawing on the work of Greenspan (1981) and Helm (2001), Brady argues that this challenge is “to explain the sense in which recalcitrant emotions involve rational conflict or tension” (2009: 413).

Whether we require rational conflict to account for emotional recalcitrance is debatable. Indeed, much of the present controversy involves spelling out the precise nature of this conflict. But conflict, rational or otherwise, isn’t the only feature that is pertinent to the phenomenon. What tends to get neglected is precisely what gives these emotions their name, viz. their recalcitrance; their persistent nature. To elaborate, emotional episodes, by their very nature, are episodic, and we shouldn’t expect recalcitrant emotions to last any longer than non-recalcitrant ones. Nevertheless, it is in the very nature of recalcitrant emotions that they are mulish, that they don’t succumb to our judgements – i.e. to the extent that these emotional episodes last.

Here is an example. Suppose I judge that flying is safe, but feel instantly afraid as soon as my plane starts to take off. But suppose, also, that once I realize that my fear is irrational, or at least, that it is in tension with my judgement, my fear dissipates. This, arguably, won’t count as an instance of emotional recalcitrance. By contrast, say I remain fearful despite my judgement. I keep thinking to myself, ‘I know this is safe’, and yet I continue to feel afraid. This, I venture, better captures what we mean by emotional recalcitrance. Mutatis mutandis for being afraid of Fido, being jealous of your partner’s friend etc. All familiar cases of emotional recalcitrance seem to share this persistent feature. The question is, what accounts for it?

My hypothesis is this: emotions are recalcitrant to the extent that they are cognitively impenetrable. According to Goldie, “someone’s emotion or emotional experience is cognitively penetrable only if it can be affected by his relevant beliefs” (2000: 76). So far as I can tell, the first to discuss the cognitive (im)penetrability of emotions is Griffiths (1990, 1997), who takes one of the advantages of his theory to be precisely that it accounts for recalcitrant emotions, or what he calls ‘irrational emotions’.

Griffiths’s explanation of emotional recalcitrance is neglected by much of the current literature on the phenomenon. This is warranted in one respect. Griffiths doesn’t account for the sense in which recalcitrant emotions involve rational conflict, which, as mentioned earlier, is one of the central controversies. But there is a way in which the neglect is unwarranted. This has to do with the charge that his account makes emotions too piecemeal.

To elaborate, one of the most controversial features of Griffiths’s account of emotions more generally is that it divvies up emotions into three broad types, only one of which forms a natural kind. These are the set of evolved adaptive ‘affect-program’ responses, which are, more or less, cognitively impenetrable. They are surprise, fear, anger, disgust, sadness and joy. The rest are ‘higher cognitive emotions’, which are cognitively penetrable, like jealousy, shame etc., or social constructions that are ‘essentially pretences’, e.g. romantic love.

This account, arguably, does make emotions too piecemeal, but to reject the hypothesis that recalcitrant emotions are cognitively impenetrable for this reason is to throw the baby out with the bathwater. Let us be neutral as to what emotions actually are, as well as to the kinds of emotions that can be cognitively impenetrable. I think we can remain thus neutral, and still borrow some of Griffiths’s insights concerning the cognitive impenetrability of recalcitrant emotions to explain their recalcitrance.

Leaving aside the Ekman-esque notion that there are a set of basic emotions from which all other emotions arise, we can follow Griffiths in supposing that emotions, indeed the very same kind of emotions, can be brought about in distinct ways. Take, for instance, the affect-program responses. The processes that typically give rise to them, as well as these responses themselves, are what Griffiths claims is cognitively impenetrable. But he notes that they can also be triggered by processes that are cognitively penetrable. In fact, he is clear that the former doesn’t rule out the latter: “[t]he existence of a relatively unintelligent, dedicated mechanism does not imply that higher-level cognitive processes cannot initiate the same events” (1990: 187).

Griffiths exploits this account to explain emotional recalcitrance. In brief, the phenomenon occurs when an affect-program response is triggered without the cognitive process of belief-fixation that gives rise to judgement. For example, “[if] only the affect-program system classes the stimulus as a danger, the subject will exhibit the symptoms of fear, but will deny making the judgements which folk theory supposes to be implicit in the emotion” (1990: 191).

This explanation isn’t supposed to provide us with an account of what recalcitrant emotions are; what picks them out as a type. Rather, for Griffiths, it gives us a ‘theory’ of them; we have an explanation for their occurrence. Regardless of whether this theory is adequate, it is my view that such an explanation can be put to further work: explaining the recalcitrant nature of recalcitrant emotions. While the affect-program responses don’t always run in tandem with the cognitive processes involved in belief-fixation, what explains the persistent nature of these responses is that they, as well as the processes that give rise to them, are cognitively impenetrable. Moreover, cognitive penetrability admits of degrees. Thus, the extent to which such responses are recalcitrant will depend on the extent to which they, as well as the processes that give rise to them, are cognitively impenetrable.

One of the advantages of his theory, according to Griffiths, is that “[t]he occurrence of emotions in the absence of suitable beliefs is converted from a philosophers’ paradox into a practical subject for psychological investigation” (1990: 192). The present explanation is similarly advantageous in that it provides an explanation of emotional recalcitrance that is empirically verifiable. But by the same token, the explanation is only of interest to the extent that it is empirically plausible. The evidence is far from conclusive, but there is good reason to think we are on the right track.

McRae et al. (2012) sought to test “whether the way an emotion is generated influences the impact of subsequent emotion regulatory efforts” (pg. 253). Emotions can be triggered ‘bottom up’, i.e. in response to perceptible properties of a stimulus, or ‘top down’, i.e. in response to cognitive appraisals of an event. They took their findings to “suggest that top-down generated emotions are more successfully down-regulated by reappraisal than bottom-up emotions” (pg. 259). Emotions generated bottom-up, then, appear to behave as if they are cognitively impenetrable; or at least, as if they are less penetrable than ones generated top-down. Insofar as any of the emotions thus generated conflict (in the relevant sense) with an evaluative judgement, we have an instance of emotional recalcitrance. Run these thoughts together, and they imply that recalcitrant emotions are recalcitrant to the extent that they are cognitively impenetrable.
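To make the shape of this hypothesis vivid, here is a minimal toy sketch in Python. It is not a model drawn from Griffiths or McRae et al.; the dampening rule and every number in it are stipulated purely for illustration. The sketch simply lets the effect of repeated reappraisal on an emotion scale with a stipulated degree of cognitive penetrability.

```python
# Toy illustration (not an empirical model): recalcitrance as a function of
# the degree to which an emotional response is cognitively penetrable.
# All names and numbers are stipulated purely for illustration.

def reappraise(intensity: float, penetrability: float, rounds: int = 5) -> float:
    """Each round of reappraisal ('I know this is safe') dampens the emotion
    only in proportion to how cognitively penetrable it is
    (0 = fully impenetrable, 1 = fully penetrable)."""
    for _ in range(rounds):
        intensity *= (1.0 - 0.5 * penetrability)
    return intensity

# Top-down generated fear (appraisal-driven): highly penetrable, dissipates.
print(round(reappraise(intensity=0.8, penetrability=0.9), 3))

# Bottom-up generated fear (stimulus-driven, e.g. at take-off): barely penetrable,
# persists despite the judgement that flying is safe, i.e. recalcitrance.
print(round(reappraise(intensity=0.8, penetrability=0.1), 3))
```

On this toy picture, the bottom-up ‘take-off’ fear barely budges under repeated reappraisal, which is just the persistence that marks the recalcitrance cases discussed above.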

 

References:

Brady, M. S. (2009). ‘The Irrationality of Recalcitrant Emotions’. Philosophical Studies 145: 413–30.

D’Arms, J., & Jacobson, D. (2003). ‘The Significance of Recalcitrant Emotion’. In A. Hatzimoysis (Ed.), Philosophy and the Emotions. Cambridge: Cambridge University Press.

Goldie, P. (2000). The Emotions: A Philosophical Exploration. Oxford University Press.

Greenspan, P. S. (1981). ‘Emotions as Evaluations’. Pacific Philosophical Quarterly 62: 158–69.

Griffiths, P. E. (1990). ‘Modularity, and the Psychoevolutionary Theory of Emotion’. Biology and Philosophy 5: 175-96.

Griffiths, P. E. (1997). What Emotions Really Are. University of Chicago Press.

Helm, B. (2001). Emotional Reason. Cambridge University Press.

McRae, K., Misra, S., Prasad, A. K., Pereira, S. C., Gross, J. J. (2012). ‘Bottom-up and Top-down Emotion Generation: Implications for Emotion Regulation’. Social Cognitive and Affective Neuroscience 7: 253-62.

 

Resisting nativism about mindreading

Marco Fenici–Independent researcher

My flatmate, Sam, returns home from campus and tells me he is thirsty. We always have beer in the fridge, and I know he likes it, but I have already drunk the last one. What will Sam do? I predict that he will go to the kitchen looking for beer. At least, this is what I should predict if I take into account his reasonable (but incorrect) belief that there is beer in the fridge.

As philosophers often put it, such situations rely on mindreading—our capacity to attribute mental states such as beliefs, desires, and intentions to others. Indeed, this capacity is often deemed vital for the prediction and explanation of others’ behaviour in a wide variety of situations (Dennett, 1987; Fodor, 1987), a view that has influenced much empirical research. Extended investigation of children’s capacity to predict others’ actions using elicited-response false belief tasks (Baron-Cohen, Leslie, & Frith, 1985; Wimmer & Perner, 1983), which apparently require children to perform inferential reasoning of the above kind, was, until recently, widely taken to show that it is not until age four or later that children correctly understand that others can have false beliefs (Wellman, Cross, & Watson, 2001).
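The inferential structure of the opening example can be caricatured in a few lines of code. The sketch below is purely illustrative (the scenario, predicates and function names are assumptions invented for this post), but it makes plain why a successful prediction has to run off the belief attributed to Sam rather than off the actual state of the world.

```python
# Illustrative caricature only: predicting Sam's action from an attributed
# (possibly false) belief rather than from the actual state of the world.
# The scenario and all names are assumptions made up for the example.

def predict_from_belief(desire: str, belief: dict) -> str:
    """Mindreading-style prediction: use the agent's belief about where the item is."""
    return belief.get(desire) or "search elsewhere"

def predict_from_world(desire: str, world: dict) -> str:
    """Belief-blind prediction: use the actual state of the world."""
    return world.get(desire) or "search elsewhere"

world = {"beer": None}                # I drank the last one: there is no beer
sams_belief = {"beer": "fridge"}      # Sam reasonably, but falsely, believes otherwise

print(predict_from_belief("beer", sams_belief))   # -> 'fridge' (what Sam will in fact do)
print(predict_from_world("beer", world))          # -> 'search elsewhere' (misses the prediction)
```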

These findings led to a large debate between so-called simulation theorists and theory theorists, but this debate has proven largely orthogonal to the concerns of psychologists (see Apperly, 2008, 2009 for discussion), so I will not discuss it further here. Instead, I will focus on a further controversy raised by the above findings: namely, the question of how infants and children acquire these socio-cognitive abilities. According to the child-as-scientist view (Bartsch & Wellman, 1995; Carey & Spelke, 1996; Gopnik & Meltzoff, 1996), children acquire a Theory of Mind (ToM) by forming, testing and revising hypotheses about the relations between mental states and observed behaviour. In contrast, proponents of modularism about mindreading (Baron-Cohen, 1995) contend that children have an innately endowed ToM provided by a domain-specific cognitive module, which developed as our species evolved (Cosmides & Tooby, 1992; Humphrey, 1976; Krebs & Dawkins, 1984).

In recent years, the nativist view has gained increasing consensus following the finding that infants look longer—indicating their surprise—when they see an actor acting against a (false) belief that it would be rational to attribute to her (see Baillargeon, Scott, & He, 2010 for a review). These results are taken to indicate that infants can attribute true and false beliefs to other agents, and expect them to act coherently with these attributed mental states. Because of the very young age of the infants assessed, it has been claimed that they must possess, from birth, a predisposition to identify others’ mental states, thereby implying nativism about mindreading.

I have always been concerned about this conclusion, which seems to me a capitulation to an inference-to-the-best-explanation argument. Indeed, infants’ selective response in a spontaneous-response task does not yet specify which properties of the agent infants are sensitive to. It is not at all clear that infants are responding to mental properties of the agents they observe rather than to other observed features of the actor’s behaviour or of the scene (Fenici & Zawidzki, in press; Hutto, Herschbach, & Southgate, 2011; Rakoczy, 2012). Furthermore, embracing nativism about mindreading excludes the possibility that infants may learn to attribute mental states in their earliest years of life (see Mazzone, 2015).

Moreover, the nativist interpretation of infants’ looking behaviour in spontaneous-response false belief tasks manifests an “adultocentric” bias. Indeed, what seems to us a full-fledged ability to interpret others’ actions by attributing mental states may have an independent explanation when manifested in the looking behaviour of younger infants. But, as it so happens, there are various reasons to doubt that infants’ social cognitive capacities manifested in spontaneous-response false belief tasks are developmentally continuous with later belief attribution capacities such as those apparently manifested by four-year-olds when succeeding in elicited-response false belief tasks (see Fenici, 2013, sec. 4 for full discussion).

First, three-year-olds are sensitive to false beliefs in spontaneous- but not in elicited-response false belief tasks (Clements & Perner, 1994; Garnham & Ruffman, 2001) in contrast to autistic subjects, who succeed in elicited (Happé, 1995) but not spontaneous-response false belief tasks (Senju, 2012; Senju et al., 2010). These opposed patterns suggest that the two capacities can be decoupled.

Furthermore, the activation of the ToM module is supposed to be automatic. Looking at the empirical evidence, however, adults’ ability for perspective-taking is automatic (Surtees, Butterfill, & Apperly, 2011), while their capacity to consider others’ beliefs is not (Apperly, Riggs, Simpson, Chiavarino, & Samson, 2006; Back & Apperly, 2010; but see Cohen & German, 2009 for discussion).

Finally, if infants’ ToM mechanism were mostly responsible for their later success in elicited-response false belief tasks, one would expect alleged mindreading abilities in infancy to be a strong predictor of four-year-olds’ belief attribution capacities. However, longitudinal studies have found only isolated and task-specific predictive correlations between infants’ performance in a variety of spontaneous-response false belief tasks at 15–18 months and the same children’s success in elicited-response false belief tasks at age four (Thoermer, Sodian, Vuori, Perst, & Kristen, 2012).

These considerations make it important to explore alternative, non-nativist explanations of the same data. In Fenici (2014), I undertook this challenge and argued that infants can progressively refine their capacity to form expectations about the future course of an observed action without attributing mental states to the actor.

In detail, extended investigation has by now demonstrated that, from 5–6 months on, infants can track the (motor) goals of others’ actions, such as grasping (Woodward, 1998, 2003). By one year, this capacity is quite sophisticated (Biro, Verschoor, & Coenen, 2011; Sommerville & Woodward, 2005; Woodward & Sommerville, 2000). These studies demonstrate that infants associate cognitive agents with the outcomes of their actions, and rely on these associations to form expectations about the agents’ future behaviour. Although this is normally taken to be equivalent to the idea that infants attribute goals, these capacities may instead depend on neural processes of covert (motor) imitation (Iacoboni, 2003; Wilson & Knoblich, 2005; Wolpert, Doya, & Kawato, 2003), which become progressively attuned to more abstract features of the observed action through associative learning (Cooper, Cook, Dickinson, & Heyes, 2013; Ruffman, Taumoepeau, & Perkins, 2012).

Computing the statistical regularities in observed patterns of action may lead infants to form expectations not only about others’ motor behaviour but also about their gaze. Indeed, infants find it more difficult to track target-directed gaze than target-directed motor behaviour, because the former, unlike the latter, involves no physical contact between the actor and the target. They can nevertheless begin forming associations between actors and the targets of their gaze by noticing that cognitive agents regularly act upon the objects they gaze at. This hypothesis is consistent with empirical data showing that the ability to follow others’ gaze improves significantly around the ninth month (Johnson, Ok, & Luo, 2007; Luo, 2010; Senju, Csibra, & Johnson, 2008), and that this capacity may depend merely on infants’ ability to detect contingent patterns of interaction with the gazing agent (Deligianni, Senju, Gergely, & Csibra, 2011).

The analysis above may also account for infants’ attested sensitivity to goal-directed behaviour and gazing. Significantly, it may also explain the cognitive capacities manifested in spontaneous-response false belief tasks. In fact, several studies found that, around 12–14 months, infants do not associate an agent with a possible target of action when a barrier is preventing her from seeing the target (Butler, Caron, & Brooks, 2000; Caron, Kiel, Dayton, & Butler, 2002; Sodian, Thoermer, & Metz, 2007). Statistical learning may well account for this novel capacity just as it apparently explains 9-month-olds’ acquired sensitivity to gaze direction from their previous sensitivity to target-directed behaviour.

Indeed, once they have learnt to associate actors with the targets of their gaze, infants can start noticing that agents do not behave in the same way in the presence and in the absence of barriers in their line of gaze. Significantly, this sensitivity to the modulating role that barriers play in others’ future gazing and acting falls into place right before infants start manifesting sensitivity to false beliefs in spontaneous-response false belief tasks. This may well be because developing this sensitivity is the last developmental step infants need to take in order to manifest looking behaviour that is selective to others’ false beliefs in spontaneous-response false belief tasks.
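The associative story sketched in the last few paragraphs can be made concrete with a toy learner. The following sketch is only an illustration of the general idea, not a model proposed in the works cited: the update rule, the learning rate and the ‘barrier’ flag are illustrative assumptions. Agent–target associations are strengthened by observation, blocked by barriers, and then read off as expectations.

```python
# Toy associative learner (illustrative only; not a published model): an "infant"
# strengthens an association between an agent and a target whenever the agent is
# seen gazing at or acting on that target, but not when a barrier blocks the
# line of gaze. Expectations are then read off the learned associations.
# The update rule and all parameters are assumptions made up for the example.
from collections import defaultdict

associations = defaultdict(float)   # (agent, target) -> association strength
LEARNING_RATE = 0.3

def observe(agent: str, target: str, barrier_between: bool = False) -> None:
    """Strengthen the agent-target association unless a barrier intervenes."""
    if not barrier_between:
        key = (agent, target)
        associations[key] += LEARNING_RATE * (1.0 - associations[key])

def expected_target(agent: str, candidates: list) -> str:
    """Expect the agent to act on whichever candidate it is most associated with."""
    return max(candidates, key=lambda t: associations[(agent, t)])

# The actor repeatedly reaches for (and looks at) the ball in full view...
for _ in range(5):
    observe("actor", "ball")
# ...then orients towards the box, but a screen blocks her line of gaze.
observe("actor", "box", barrier_between=True)

print(expected_target("actor", ["ball", "box"]))   # -> 'ball'
```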

In conclusion, despite the wide consensus that nativism about mindreading enjoys among philosophers and developmental psychologists, the evidence actually speaks against continuity in the development of social cognition from infancy to early childhood. The capacities manifested in spontaneous-response false belief tasks therefore seem not to be the forerunners of our mature capacity to attribute mental states, and could have evolved in other ways (Fenici, in press, subm., 2012; Fenici & Carpendale, in prep.). Future research should explore the possibility that infants’ alleged mindreading capacities actually indicate a more basic tendency to form and update expectations about others’ future actions, a capacity which progressively develops over time to reflect a growing appreciation of which objects others can and cannot gaze at (Fenici, 2014; Ruffman, 2014).

References

Apperly, I. A. (2008). Beyond Simulation-theory and Theory-theory: why social cognitive neuroscience should use its own concepts to study “theory of mind.” Cognition, 107(1), 266–283. http://doi.org/10.1016/j.cognition.2007.07.019

Apperly, I. A. (2009). Alternative routes to perspective-taking: Imagination and rule-use may be better than simulation and theorising. British Journal of Developmental Psychology, 27(3), 545–553. http://doi.org/10.1348/026151008X400841

Apperly, I. A., Riggs, K. J., Simpson, A., Chiavarino, C., & Samson, D. (2006). Is belief reasoning automatic? Psychological Science, 17(10), 841–844. http://doi.org/10.1111/j.1467-9280.2006.01791.x

Back, E., & Apperly, I. A. (2010). Two sources of evidence on the non-automaticity of true and false belief ascription. Cognition, 115(1), 54–70.

Baillargeon, R., Scott, R. M., & He, Z. (2010). False-belief understanding in infants. Trends in Cognitive Sciences, 14(3), 110–118.

Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: The MIT Press.

Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a “Theory of Mind”? Cognition, 21(1), 37–46.

Bartsch, K., & Wellman, H. M. (1995). Children Talk About the Mind. New York: Oxford University Press.

Biro, S., Verschoor, S., & Coenen, L. (2011). Evidence for a unitary goal concept in 12-month-old infants. Developmental Science, 14(6), 1255–1260.

Butler, S. C., Caron, A. J., & Brooks, R. (2000). Infant understanding of the referential nature of looking. Journal of Cognition and Development, 1(4), 359–377.

Carey, S., & Spelke, E. S. (1996). Science and core knowledge. Philosophy of Science, 63(4), 515–533.

Caron, A. J., Kiel, E. J., Dayton, M., & Butler, S. C. (2002). Comprehension of the referential intent of looking and pointing between 12 and 15 months. Journal of Cognition and Development, 3(4), 445–464. http://doi.org/10.1080/15248372.2002.9669677

Clements, W. A., & Perner, J. (1994). Implicit understanding of belief. Cognitive Development, 9(4), 377–395.

Cohen, A. S., & German, T. C. (2009). Encoding of others’ beliefs without overt instruction. Cognition, 111(3), 356–363. http://doi.org/10.1016/j.cognition.2009.03.004

Cooper, R. P., Cook, R., Dickinson, A., & Heyes, C. M. (2013). Associative (not Hebbian) learning and the mirror neuron system. Neuroscience Letters, 540, 28–36. http://doi.org/10.1016/j.neulet.2012.10.002

Cosmides, L., & Tooby, J. (1992). Cognitive adaptations for social exchange. In J. Barkow, L. Cosmides, & J. Tooby, The Adapted Mind: Evolutionary Psychology and the Generation of Culture (pp. 163–228). Oxford: Oxford University Press.

Deligianni, F., Senju, A., Gergely, G., & Csibra, G. (2011). Automated gaze-contingent objects elicit orientation following in 8-month-old infants. Developmental Psychology, 47(6), 1499–1503.

Dennett, D. C. (1987). The Intentional Stance. Cambridge, MA: The MIT Press.

Fenici, M. (subm.). How children approach the false belief test: Social development, pragmatics, and the assembly of Theory of Mind. Cognition.

Fenici, M. (in press). What is the role of experience in children’s success in the false belief test: maturation, facilitation, attunement, or induction? Mind & Language.

Fenici, M. (2012). Embodied social cognition and embedded theory of mind. Biolinguistics, 6(3-4), 276–307.

Fenici, M. (2013). Social cognitive abilities in infancy: is mindreading the best explanation? Philosophical Psychology. http://doi.org/10.1080/09515089.2013.865096

Fenici, M. (2014). A simple explanation of apparent early mindreading: infants’ sensitivity to goals and gaze direction. Phenomenology and the Cognitive Sciences, 14, 1–19. http://doi.org/10.1007/s11097-014-9345-3

Fenici, M., & Carpendale, J. I. M. (in prep.). Solving the false belief test puzzle: A constructivist approach to the development of social understanding.

Fenici, M., & Zawidzki, T. W. (in press). Do infant interpreters attribute enduring mental states or track relational properties of transient bouts of behavior? Studia Philosophica Estonica, 9(2).

Fodor, J. A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: The MIT Press.

Garnham, W. A., & Ruffman, T. (2001). Doesn’t see, doesn’t know: is anticipatory looking really related to understanding or belief? Developmental Science, 4(1), 94–100.

Gopnik, A., & Meltzoff, A. N. (1996). Words, Thoughts, and Theories. Cambridge, Mass: The MIT Press.

Happé, F. G. E. (1995). The role of age and verbal ability in the theory of mind task performance of subjects with autism. Child Development, 66(3), 843–855.

Humphrey, N. K. (1976). The social function of intellect. In P. P. G. Bateson & R. A. Hinde (Eds.), Growing Points in Ethology (pp. 303–317). Cambridge: Cambridge University Press.

Hutto, D. D., Herschbach, M., & Southgate, V. (2011). Social cognition: mindreading and alternatives. Review of Philosophy and Psychology, 2(3), 375–395. http://doi.org/10.1007/s13164-011-0073-0

Iacoboni, M. (2003). Understanding intentions through imitation. In S. H. Johnson-Frey (Ed.), Taking Action: Cognitive Neuroscience Perspectives on Intentional Acts (pp. 107–138). Cambridge, MA: The MIT Press.

Johnson, S. C., Ok, S., & Luo, Y. (2007). The attribution of attention: 9-month-olds’ interpretation of gaze as goal-directed action. Developmental Science, 10(5), 530–537. http://doi.org/10.1111/j.1467-7687.2007.00606.x

Krebs, J. R., & Dawkins, R. (1984). Animal signals: mind-reading and manipulation. Behavioural Ecology: An Evolutionary Approach, 2, 380–402.

Luo, Y. (2010). Do 8-month-old infants consider situational constraints when interpreting others’ gaze as goal‐directed action? Infancy, 15(4), 392–419.

Mazzone, M. (2015). Being nativist about mind reading: More demanding than you might think. In Proceedings of the EuroAsianPacific Joint Conference on Cognitive Science (EAPCogSci 2015) (Vol. 1419, pp. 288–293).

Rakoczy, H. (2012). Do infants have a theory of mind? British Journal of Developmental Psychology, 30(1), 59–74. http://doi.org/10.1111/j.2044-835X.2011.02061.x

Ruffman, T. (2014). To belief or not belief: Children’s theory of mind. Developmental Review, 34(3), 265–293. http://doi.org/10.1016/j.dr.2014.04.001

Ruffman, T., Taumoepeau, M., & Perkins, C. (2012). Statistical learning as a basis for social understanding in children. British Journal of Developmental Psychology, 30(1), 87–104.

Senju, A. (2012). Spontaneous theory of mind and its absence in autism spectrum disorders. The Neuroscientist: A Review Journal Bringing Neurobiology, Neurology and Psychiatry, 18(2), 108–113. http://doi.org/10.1177/1073858410397208

Senju, A., Csibra, G., & Johnson, M. H. (2008). Understanding the referential nature of looking: Infants’ preference for object-directed gaze. Cognition, 108(2), 303–319. http://doi.org/10.1016/j.cognition.2008.02.009

Senju, A., Southgate, V., Miura, Y., Matsui, T., Hasegawa, T., Tojo, Y., … Csibra, G. (2010). Absence of spontaneous action anticipation by false belief attribution in children with autism spectrum disorder. Development and Psychopathology, 22(02), 353–360. http://doi.org/10.1017/S0954579410000106

Sodian, B., Thoermer, C., & Metz, U. (2007). Now I see it but you don’t: 14-month-olds can represent another person’s visual perspective. Developmental Science, 10(2), 199–204.

Sommerville, J. A., & Woodward, A. L. (2005). Pulling out the intentional structure of action: the relation between action processing and action production in infancy. Cognition, 95(1), 1–30.

Surtees, A. D. R., Butterfill, S. A., & Apperly, I. A. (2011). Direct and indirect measures of level‐2 perspective‐taking in children and adults. British Journal of Developmental Psychology, 30, 75–86.

Thoermer, C., Sodian, B., Vuori, M., Perst, H., & Kristen, S. (2012). Continuity from an implicit to an explicit understanding of false belief from infancy to preschool age. British Journal of Developmental Psychology, 30(1), 172–187. http://doi.org/10.1111/j.2044-835X.2011.02067.x

Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory-of-mind development: the truth about false belief. Child Development, 72(3), 655–684.

Wilson, M., & Knoblich, G. (2005). The case for motor involvement in perceiving conspecifics. Psychological Bulletin, 131(3), 460–473.

Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition, 13(1), 103–128.

Wolpert, D. M., Doya, K., & Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philosophical Transactions of the Royal Society B: Biological Sciences, 358(1431), 593–602. http://doi.org/10.1098/rstb.2002.1238

Woodward, A. L., & Sommerville, J. A. (2000). Twelve-month-old infants interpret action in context. Psychological Science, 11(1), 73–77.

 

The Experience of Trying

Josh Shepherd- Junior Research Fellow in Philosophy, Jesus College; Postdoctoral Research Fellow, Oxford Centre for Neuroethics; James Martin Fellow, Oxford Martin School

What kinds of conscious experiences accompany (and perhaps assist) the exercise of control over bodily and mental action?

For answers on this and related questions, one might turn to the rapidly growing literature on the so-called ‘sense of agency.’ The sense of agency is supposed to be something experiential and related to action, but I think it is fair to say that there is little unity in the ways scientists deploy the term. Andreas Kalckert (2014) writes that the sense of agency is ‘the experience of being able to voluntarily control limb movement.’ Hauser et al. (2011) write that the sense of agency is ‘the experience of controlling one’s own actions and their consequences.’ Damen et al. (2015) write that ‘The sense of agency refers to the ability to recognize oneself as the controller of one’s own actions and to distinguish these from actions caused or controlled by other sources.’ Chambon et al. (2013) write that the ‘sense of agency’ refers to ‘the feeling of controlling an external event through one’s own action.’ David et al. (2008) write that ‘The sense of agency is a central aspect of human self-consciousness and refers to the experience of oneself as the agent of one’s own actions.’ These glosses variously emphasize voluntary control of limb movement, controlling actions and consequences, controlling consequences through action, abilities of recognition and discrimination, and an experience of oneself as agent. While these glosses might share a neighborhood, they differ in details that are, arguably, quite important if one wants to understand the kinds of experience at issue in bodily (and mental) action control.

In my own work, then, I have eschewed use of the term sense of agency, preferring instead to start with a more detailed account of the phenomenology. Consider, for example, what I have called the experience of trying.

To get a grip on this kind of experience, consider lifting a heavy weight with one’s arm. Doing so, one will often experience tension in the elbow, strain or effort in the muscles, heaviness or pull on the wrist, and so on. In addition, there is an aspect of this experience that is not to be identified with any of these haptic elements, or with any conjunction of them. When lifting the heavy weight, one has an experience of trying to do so. Put generally, then, we might say that the experience of trying is an experience as of directing activity towards the satisfaction of an intention (this is not to say that possessing a concept of intention or of an intention’s satisfaction is necessary for the capacity to have such experiences). In the example at hand, it is a phenomenal character as of directing the movements of the arm.

With this much, many appear to agree. David Hume speaks of the ‘internal impression’ of ‘knowingly giving rise to’ some motion of the body or perception of the mind. His language suggests that he regards the ‘giving rise to’ as fundamentally directive.

It may be said, that we are every moment conscious of internal power; while we feel, that, by the simple command of our will, we can move the organs of our body, or direct the faculties of our mind. An act of volition produces motion in our limbs, or raises a new idea in our imagination. This influence of the will we know by consciousness. (2000, 52)

 Further evidence for this point is that Hume thought of this experience-type as whatever is shared in both successful and failed actions:

A man, suddenly struck with palsy in the leg or arm, or who had newly lost those members, frequently endeavours, at first to move them, and employ them in their usual offices. Here he is as much conscious of power to command such limbs, as a man in perfect health is conscious of power to actuate any member which remains in its natural state and condition. (53)

More recently Carl Ginet has asserted a very similar view.

It could seem to me that I voluntarily exert a force forward with my arm without at the same time its seeming to me that I feel the exertion happening: the arm feels kinesthetically anesthetized. (Sometimes, after an injection of anesthetic at the dentist’s office, my tongue seems to me thus kinesthetically dead as I voluntarily exercise it: I then have an illusion that my will fails to engage my tongue.) (1990, 28)

Are these philosophers right? In a recent paper (Shepherd 2015) I argue for a position that seems (to me) to indicate the answer is yes. This is the view:

Constitutive view. The neural activity that realizes an experience of trying is just a part of the neural activity that directs real-time action control.

 My argument – very briefly – is this. There is no good empirical reason to deny this view. And there is some empirical reason to adopt it. In what follows I’ll offer a shortened version of the first part of this argument.

Why do I say there is no good empirical reason to deny the view? The best empirical reason would stem from application of a certain kind of theory of the ‘sense of agency’ to experiences of trying. This theory seeks to establish that some version of a comparator model of the sense of agency is correct. According to the comparator model:

an intention produces overt action by interacting with a tangled series of modeling mechanisms that take the intention’s relatively abstract specification of a goal-state and transform it into various fine-grained, functionally specific commands and predictions. An inverse model (or ‘controller’) takes the goal state as input and outputs a motor command designed to drive the agent towards the goal-state. A forward model receives a copy of the motor command as input and outputs a prediction concerning its likely sensory consequences. Throughout action production, the inverse model receives updates from various comparator mechanisms. On standard expositions of the model (e.g., Synofzik et al. 2008), three types of comparator mechanism are posited. One compares the goal-state with feedback from the environment, and informs the inverse model of any errors; a second compares the goal-state with the forward model’s predictions, and informs the inverse model of any errors; a third compares the forward model’s prediction with feedback from the environment, and informs the forward model (so as to develop a more accurate forward model). (Shepherd 2015, 5)

 On a comparator account of agentive experience, when predicted and desired (or, at slower time scales, predicted and actual) states match, the given comparator ‘codes’ the activity as self-generated. This code is then sent to a system hypothesized to use it in generating the sense of agency. Proponents of the comparator account recognize that this is not a complete explanation of agentive experience, but they maintain that this matching process “lies at the heart of the phenomenon” (Bayne 2011, 357).
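Before asking whether this account extends to experiences of trying, it may help to see the moving parts laid out schematically. The toy sketch below is only an illustration of the architecture just described, not a published implementation: the functions, numbers and tolerance threshold are assumptions invented for the example. An inverse model issues a command, a forward model predicts its sensory consequences, and a comparator codes the activity as self-generated when prediction and feedback match.

```python
# Toy sketch of the comparator architecture described above (illustrative only;
# not a published implementation). Goal states, motor commands and sensory
# feedback are reduced to single numbers, and all values are made up.

def inverse_model(goal_state: float, current_state: float) -> float:
    """The 'controller': output a motor command that drives the agent toward the goal."""
    return goal_state - current_state

def forward_model(motor_command: float, current_state: float) -> float:
    """Predict the sensory consequences of (an efference copy of) the command."""
    return current_state + motor_command

def comparator(predicted: float, actual: float, tolerance: float = 0.1) -> bool:
    """Code the activity as self-generated when prediction and feedback match."""
    return abs(predicted - actual) <= tolerance

current, goal = 0.0, 1.0
command = inverse_model(goal, current)
predicted = forward_model(command, current)

print(comparator(predicted, actual=1.0))    # True: movement goes as planned
print(comparator(predicted, actual=-0.4))   # False: limb drifts the other way
                                            # (cf. the paralysis studies below)
```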

Notice that according to the comparator account, the neural activities that realize agentive experience are not directly involved with action generation and control. If a comparator model can account for the experience of trying, then the constitutive view is likely false. Of course, this account was not designed to explain experiences of trying, but rather the sense of agency. Can a comparator account be extended to experiences of trying?

I argue from self-paralysis studies that the answer is no. In these studies, experimenters paralyzed themselves with neuromuscular blocks that left them conscious, and then attempted to perform various actions. Regarding the resultant experiences, here is what Simon Gandevia and colleagues report.

All reported strong sensations of effort accompanying attempted movement of the limb, as if trying to move an object of immense weight. Subjective difficulty in sustaining a steady level of effort for more than a few seconds was experienced, partly because there was no visual or auditory feedback that the effort was appropriate, and because all subjects experienced unexpected illusions of movement. As examples, attempted flexion of the fingers produced a feeling of slight but distinct extension which subsided in spite of continued effort, and attempted dorsiflexion of the ankle led to the sensation of slow plantar flexion. Further increases in effort repeatedly caused the same illusory movements. (Gandevia et al. 1993, 97)

As I note in (Shepherd 2015):

[P]articipants had experiences of trying to move a finger or ankle in a certain direction. And participants had experiences of the relevant finger or ankle moving in the other direction. This indicates that the experience of trying is both causally linked with and distinct from the experience of the body moving. (7)

This also looks like confirmation of the claims made by Hume and Ginet, and indication that a comparator model does not work for experiences of trying. Nothing like a matching process appears to underlie these experiences.

This leaves open a number of interesting questions. Do we have positive empirical reason to adopt the constitutive view? How do experiences of trying relate to other agentive experiences – experiences of action, perceptual experiences in action, experiences of control or of error in action, and so on? I deal with some of these questions in my (2015). I deal with others in work in progress. Dealing with all of them is more than enough work for much more than one person.

 

References:

Bayne, T. (2011). The sense of agency. In F. Macpherson (ed.), The Senses. Oxford: Oxford University Press: 355–374.

Chambon, V., Wenke, D., Fleming, S. M., Prinz, W., & Haggard, P. (2013). An online neural substrate for sense of agency. Cerebral Cortex, 23(5), 1031-1037.

Damen, T. G., Müller, B. C., van Baaren, R. B., & Dijksterhuis, A. (2015). Re-Examining the Agentic Shift: The Sense of Agency Influences the Effectiveness of (Self) Persuasion. PloS one, 10(6), e0128635.

Ginet, C. 1990. On Action. Cambridge University Press.

Hauser, M., Moore, J. W., de Millas, W., Gallinat, J., Heinz, A., Haggard, P., & Voss, M. (2011). Sense of agency is altered in patients with a putative psychotic prodrome. Schizophrenia research, 126(1), 20-27.

Hume, D. (2000). An Enquiry Concerning Human Understanding: A Critical Edition, (ed.) T. L. Beauchamp. Oxford University Press.

Kalckert, A. (2014). Moving a rubber hand: the sense of ownership and agency in bodily self-recognition.

Shepherd, J. (2015). Conscious action/Zombie action. Noûs.

Synofzik, M., Vosgerau, G. and Newen, A. (2008). Beyond the comparator model: A multifactorial two-step account of agency. Consciousness and Cognition 17(1): 219–239.

An interesting time for the study of moral judgment and cognition

Veljko Dubljevic- Banting Postdoctoral Fellow in the Neuroethics Research Unit at the Institut de recherches cliniques de Montréal and the Department of Neurology and Neurosurgery at McGill University – Co-Editor of the Springer Book Series “Advances in Neuroethics”

What is moral? Is it always good to save lives? Is killing always wrong? Is being caring always a virtue? Are there various factors that collectively affect moral judgements? Are these factors self-standing or do they interact?

Our moral judgements and moral intuitions suggest answers to some of these questions. This is so both for experts, such as moral philosophers and psychologists, who study morality in their different ways, and for laypersons. The study of morality among moral philosophers has long been marked by disagreement between utilitarians, deontologists and virtue theorists on normative issues (such as whether we should give priority to consequences, duties or virtues in moral judgment), as well as between cognitivists and non-cognitivists, and realists and anti-realists (to name just a few opposing views), on meta-ethical issues.

Moral psychology—the empirical and scientific study of human morality—has, by contrast, long shown considerable convergence in its approach to moral judgment. Despite some variation in the details, it is striking that Kohlberg’s (1968) developmental model has simply been adopted, even where it is criticised (see e.g., Gilligan 1982). According to the developmental model moral judgment is simply the application of moral reasoning – deliberate, effortful use of moral knowledge (a system 2 process, in today’s parlance). This is not to disregard the variety of viewpoints in moral philosophy – moral psychology has taken these to reflect distinct stages in the development of a ‘mature’ morality.

This all changed with a paradigm shift in moral psychology towards a more diverse ‘intuitive paradigm’, according to which moral judgment is most often automatic and effortless (a system 1 process). Studies revealing automatism in everyday behaviour (Bargh and Chartrand 1999), cognitive illusions, and subliminal influences such as ‘priming’ (Tulving and Schacter 1990), ‘framing’ (Tversky and Kahneman 1981), and ‘anchoring’ effects (Ariely 2008) provide ample empirical evidence that moral cognition, decision-making and judgment are often the product of associative, holistic, automatic and quick processes which are cognitively undemanding (see Haidt 2001). This, along with the ‘moral dumbfounding’ effect – the fact that most people make quick moral judgments and are hard pressed to offer a reasoned explanation for them – led to a shift away from the developmental model, which struggled to accommodate these findings.

As a result, moral psychologists now agree that moral judgment is not driven solely by system 2 reasoning. However, they disagree on almost everything else. A range of competing theories and models offer explanations of how moral judgment takes place. Some claim that moral judgments are nothing more than basic emotional responses, perhaps followed by rationalizations (Haidt 2001); others claim that there are competing emotional and rational processes that pull moral judgment in one or the other direction (Greene 2008); and still others think that moral judgment is intuitive, but not necessarily emotional (see, e.g., Mikhail 2007, Gigerenzer 2010, Dubljevic & Racine 2014).

Here, I will summarize some relevant information and conclude by considering which models are still viable and which are not, based on currently available evidence.

Let’s start with the basic emotivist model. As mentioned earlier, it was espoused by Jonathan Haidt (2001) in pioneering work that offered a constructive synthesis of social and cognitive psychological work on automaticity, intuition and emotion, and it has also been championed by influential moral philosophers, such as Walter Sinnott-Armstrong et al. (2010). However, it has been called into question by work that successfully dissociated emotion from moral judgment. For example, consider the ‘torture case’ study (Batson et al 2009, Batson 2011). In this study, American respondents were asked to rate the moral wrongness of specific cases of torture, as well as their own emotional arousal. The experimental group was presented with a vignette in which an American soldier is tortured by militants, while a control group read a text in which a Sri Lankan soldier is tortured by Tamil rebels. Even though there was no significant difference in the intensity of moral judgment, the respondents were ‘riled up’ emotionally only when a member of their in-group was being tortured. This does not call moral emotions per se into question, but it neatly undermines a crude ‘moral judgment is just emotion’ model.

Now, let’s take a look at the ‘dual-process’ model of moral judgment. Pioneering research in the neuroscience of ethics (e.g. Greene et al. 2001) classified dilemmas into so-called impersonal dilemmas, such as the original trolley dilemma (e.g. whether to throw a switch that saves five people but kills one), and personal dilemmas, such as the footbridge dilemma (e.g. whether to push one man to his death in order to save five people). Proponents of the view take their data to show that the patterns of responses in trolley dilemmas favour a “utilitarian” view of morality based on abstract thinking and calculation, while responses in the footbridge dilemma suggest that emotional reactions drive answers. The purported upshot is that rational processes (driving utilitarian calculation) and emotional processes (driving aversion to personally causing injury) compete for dominance.

Even though some initial studies seemed to corroborate this hypothesis, it remains controversial, with certain empirical findings appearing to be at odds with the dual-process approach. In particular, if utilitarian, outcome-based judgment is caused by abstract thinking (system 2), whereas non-consequentialist, intent- or duty-based judgment is intuitive (system 1) and thus irrational, how come children aged 4 to 10 focus more on outcome than on intent (see Cushman 2013)? Given that abstract thought develops only after age 12, ‘fully rational’ utilitarian judgments should not be observable in children. And yet they are not only observed, but seem to dominate immature and dysfunctional moral cognition.

It is then safe to say that recent research has called the dual-process model into question. Recent studies have linked favouring the “utilitarian” option to anti-social personality traits, such as Machiavellianism (Bartels & Pizarro, 2011) and psychopathy (Koenigs 2012), as well as to temporary conditions (increased anger, decreased responsibility, induced lower levels of serotonin; Crockett & Rini 2015) and permanent conditions, such as vmPFC damage (Koenigs 2007) and fronto-temporal dementia (Mendez 2009), that presumably do not facilitate “rational” decision making. Perhaps the most damning piece of evidence is a recent study (Duke & Begue 2015) establishing a correlation between study participants’ blood alcohol concentrations and utilitarian preferences. All in all, the empirical evidence seems to suggest that impaired social cognition, rather than intact deliberative reasoning, better predicts utilitarian responses in the trolley dilemma, which in turn suggests that the dual-process model is on thin ice.

So which model is true? The data seem to suggest that an intuitionist model of moral judgment is most likely; however, there are at least three competitors: the moral foundations theory (Haidt & Graham 2007), universal moral grammar (Mikhail 2007, 2011) and the ADC approach (Dubljevic & Racine 2014).

For reasons of space I will not go into the specifics of all the models beyond mentioning them and their feasibility, and since I am an interested party in this debate, I will briefly canvass the ADC approach.

The Agent-Deed-Consequence framework offers an insight into the types of simple and fast intuitive processes involved in moral appraisals. Namely, the heuristic principle of attribute substitution – quickly and efficiently substituting a complex and intractable problem with more accessible information – is applied to specific information relevant for moral appraisal. I argued (along with my co-author, Eric Racine) that there are three kinds of moral intuitions stemming from three kinds of heuristic processes that simultaneously modulate moral judgments. We posited that they also form the basis of three distinct kinds of moral theory by substituting the global attribute of moral praiseworthiness/blameworthiness with the simpler attributes of virtue/vice of the agent or character (as in virtue theory), right/wrong deed or action (as in deontology) and good/bad consequences or outcomes (as in consequentialism).

The Agent-Deed-Consequence framework provides a vocabulary to start breaking down moral judgment into cognitive components, which could increase explanatory and predictive power of future work on moral judgment in general and moral heuristics in particular. Furthermore, this research clarifies a wide set of findings from empirical and theoretical moral psychology (e.g., “intuitiveness” and “counter-intuitiveness” of certain judgments, moral “dumbfoundedness”, “ethical blind spots” of traditional moral principles, etc.). The framework offers a description of how moral judgment takes place (three aspects are computed at the same time), but also offers normative guidance on dissociating and clarifying relevant normative components.

Perhaps an example might be helpful to put things into perspective. Consider this (real life) case:

In October 2002, policemen in Frankfurt, Germany were faced with a chilling dilemma. They had in custody the man who they suspected had kidnapped a banker’s 11-year-old son and asked for ransom. Although the man was arrested while trying to take the ransom money, he maintained his innocence and denied having any knowledge of the whereabouts of the child. In the meantime, time was running out – if the kidnapper was in custody, who will feed and hydrate the child? The police officer in charge finally decided to use coercion to make the suspect talk. He had threatened to inflict serious pain upon the suspected kidnapper if he did not reveal where he had hidden the child. The threat worked – however, the child was already dead. (Dubljevic & Racine 2014, p. 12)

The ADC approach allows us to analyze the normative cues of the case. Here it is safe to assume that the evaluation of the agent is positive (as a virtuous person), evaluation of the deed or action is negative (torture is wrong), whereas the consequences are unclear ([A+] [D-] [C?] = [MJ?]).

Modulating any of the elements of the case can result in a different intuitive judgment, and the public controversy in Germany following this case created two camps: one stressing the uncertainty of guilt and a precedent of committing torture in police work, and the other stressing the potential to save a child by any means necessary. If the case is changed so that the consequence component is clearly bad (e.g., suspect is innocent AND the child died), the intuitive responses would be specific, precise and negative ([A+] [D-] [C-] = [MJ-]). And vice-versa, if we modulate the case so that the consequences are clearly good (e.g., the suspect is guilty AND a life has been saved), the intuitive responses would be specific, precise and clearly positive ([A+] [D-] [C+] = [MJ+]).
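To make the bracketed notation concrete, here is a toy rendering of the three-component computation. The numeric coding and the simple aggregation rule are illustrative assumptions introduced for this example, not the framework’s published algorithm; the point is only to show how modulating one component can flip or unsettle the overall verdict.

```python
# Toy rendering of the bracketed ADC notation (illustrative only): each component
# is coded +1 (positive), -1 (negative) or None (unclear), and an overall verdict
# is read off a simple aggregate. The coding scheme and aggregation rule are
# assumptions made up for this example, not the framework's official algorithm.

def adc_judgment(agent, deed, consequence):
    components = [agent, deed, consequence]
    known = sum(c for c in components if c is not None)
    if None in components and known == 0:
        return "?"                       # mixed, incomplete profile: verdict unsettled
    return "+" if known > 0 else ("-" if known < 0 else "?")

# Frankfurt kidnapping case: virtuous agent, wrong deed, unclear consequences.
print(adc_judgment(+1, -1, None))   # -> '?'   [A+][D-][C?] = [MJ?]
# Suspect innocent AND the child died: clearly bad consequences.
print(adc_judgment(+1, -1, -1))     # -> '-'   [A+][D-][C-] = [MJ-]
# Suspect guilty AND a life saved: clearly good consequences.
print(adc_judgment(+1, -1, +1))     # -> '+'   [A+][D-][C+] = [MJ+]
```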

This is just one example of the frugality of the ADC framework. However, it would be premature to conclude that this model is obviously true or better than the remaining competitors, the moral foundations theory and universal moral grammar. Ultimately, it is most likely that evidence will force all models to accommodate new data and insights, but one thing is clear: this is an interesting time for the study of moral judgment and cognition.

 

References:

Ariely, D. 2008. Predictably irrational: The hidden forces that shape our decisions. New York, NY: Harper.

Bargh, J. A., and T. L. Chartrand. 1999. The unbearable automaticity of being. American Psychologist 54: 462–479.

Bartels, D.M. & Pizarro, D. (2011): The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas, Cognition 121: 154-161.

Batson, C.D. (2011): What’s wrong with morality?, Emotion Review 3 (3): 230-236.

Batson, C.D., Chao, M.C. & Givens, J.M. (2009): Pursuing moral outrage: Anger at torture, Journal of Experimental Social Psychology, 45: 155-160.

Crockett, M.J., Clark, L., Hauser, M.D. & Robbins, T.W. (2010): Serotonin selectively influences moral judgment and behavior through effects on harm aversion, PNAS, 107 (40): 17433-38.

Crockett, M.J. & Rini, R.A. (2015): Neuromodulators and the instability of moral cognition, in Decety, J. & Wheatley, T. (Eds.): The Moral Brain: A Multidisciplinary Perspective, Cambridge, MA: MIT Press, pp. 221-235.

Dubljević, V. & Racine, E. (2014): The ADC of Moral Judgment: Opening the Black Box of Moral Intuitions with Heuristics about Agents, Deeds and Consequences, AJOB–Neuroscience, 5 (4): 3-20.

Duke, A.A. & Begue, L. (2015): The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas, Cognition 134: 121-127

Gigerenzer, G. (2010): Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2 (3): 528-554.

Greene, J. D. (2008): The secret joke of Kant’s soul, in Sinnott-Armstrong, W. (Ed.): Moral psychology Vol. 3, The neuroscience of morality, Cambridge, MA: MIT Press; 35-79.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. and Cohen, J. D. (2001): An fMRI investigation of emotional engagement in moral judgment, Science 293: 2105 – 2108.

Haidt, J. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108 (4): 814–834.

Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research 20(1): 98–116.

Hauser, M., Young, L. & Cushman, F. (2008): Reviving Rawls’s Linguistic Analogy, in Walter Sinnott-Armstrong (Ed.): Moral Psychology Vol. 2, Cambridge, MA: MIT Press, pp. 107-144.

Knoch, D., Pascual-Leone, A., Meyer, K., Treyer, V. and Fehr, E. (2006): Diminishing reciprocal fairness by disrupting the right prefrontal cortex, Science 314: 829-832.

Knoch, D., Nitsche, M.A., Fischbacher, U., Eisenegger, C., Pascual-Leone, A. and Fehr, E. (2008): Studying the neurobiology of social interaction with transcranial direct current stimulation—The example of punishing unfairness, Cerebral Cortex, 18: 1987-1990.

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M. & Damasio, A. (2007): Damage to the prefrontal cortex increases utilitarian moral judgements, Nature, 446: 908-911.

Koenigs, M., Kruepke, M., Zeier, J. & Newman, J.P. (2012): Utilitarian moral judgment in psychopathy, Social Cognitive and Affective Neuroscience, 7(6): 708-714.

Kohlberg, L. (1968): The child as a moral philosopher, Psychology Today, 2: 25-30.

Mendez, M. F. 2009. The neurobiology of moral behavior: Review and neuropsychiatric implications. CNS Spectrums 14(11): 608–620.

Mikhail, J. 2007. Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences 11(4): 143–152.

Mikhail, J. (2011): Elements of moral cognition, New York: Cambridge University Press.

Persson, I. & Savulescu, J. (2012): Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.

Sinnott‐Armstrong, W., Young, L. & Cushman, F. (2010): Moral Intuitions, in John M. Doris (Ed.): The Moral Psychology Handbook, Oxford: Oxford University Press, DOI: 10.1093/acprof:oso/9780199582143.003.0008

Terbeck, S., Kahane, G., McTavish, S., Savulescu, J., Levy, N., Hewstone, M. & Cowen, P.J. (2013): Beta adrenergic blockade reduces utilitarian judgment, Biological Psychology 92: 323-328.

Tversky, A., and D. Kahneman. 1981. The framing of decisions and the psychology of choice. Science 211(4481): 453–458.

Tulving, E., and D. L. Schacter. 1990. Priming and human memory systems. Science 247(4940): 301–306.

Young, L; Camprodon J.A; Hauser, M; Pascual-Leone, A. and Saxe, R. (2010): Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgements, PNAS, 107: 6753– 6758.

In the mind’s eye: cognitive science and the riddle of cave art

Eveline Seghers- PhD student in the Department of Art, Music and Theatre Studies, Ghent University

In his 2002 book The Mind in the Cave, the archaeologist David Lewis-Williams remarked that ‘art’ is a concept that everyone assumes they grasp, “until asked to define” it (2002: 41). This inevitably clouds our insights into its origins. While most researchers tend to agree that parietal and portable imagery from the Upper Palaeolithic constitute the earliest known art in human history, there is very little convergence on its function, if any, for our ancestors. One fairly popular view consists in describing cave paintings and small objects as instances of religious practices and beliefs, although opinions still differ as to which links there may have been between Prehistoric art and religion. While Lewis-Williams (2002) thinks cave paintings are the outcomes of shamanistic hallucinations, the explanation of ‘hunting magic’ also remains widely cited (for a discussion, see Bahn and Vertut, 1997). More secular interpretations have been proposed by authors such as the archaeologist John Halverson, who endorsed an art for art’s sake explanation, and the archaeologist R. Dale Guthrie, who suggested that the cave paintings may have been made by teens with graffiti-like intent, rather than by skilled artists with symbolic purposes, as is often assumed (Guthrie, 2005; Halverson, 1987). As such, figurative art, impressive as it appears to us, may not be a symbolic breakthrough after all (Currie 2011).

Few authors have investigated the origins of figurative art from the obvious, yet surprisingly underexplored, perspective of the human mind and cognitive science. Although developmental stages in the mind are sometimes inferred from figurative and abstract cave paintings and portable art – the proposed link between figurative imagery and symbolic cognition, for example – cognitive, and by extension neuroscientific, frameworks are rarely systematically applied to interpret the record at our disposal. A notable exception to this is Steven Mithen’s cognitive fluidity approach, which provides a hypothesis concerning both the evolution of cognition and the emergence of complex human culture (1996). The general reluctance to approach cave art from a cognitive perspective may be partly due to the fact that brains do not fossilize, which means we can merely study the remaining fossil crania. This poses a number of methodological challenges to researchers, who are left wondering about the mental lives and behaviour of our ancestors without the elaborate toolkit of present-day cognitive psychologists, and in the obvious absence of study subjects whose behaviour we are trying to assess.

Despite these challenges, many researchers have come up with creative approaches to address our lack of actual ancestral brains and behaviour to examine. By making endocasts – modelled reconstructions of Prehistoric brains based on skulls available in the palaeoanthropological record – it becomes possible to estimate the size and surface structure of the brain. Another method involves making comparative analyses of overall brain volume and the volume of particular areas in extant primate species, to then make inferences about the brain structures of our ancestors. Admittedly, such methods leave a great deal to be estimated when it comes to the actual internal organization of ancestral brains, as, for example, precise volumes associated with particular neural functions cannot be derived from endocasts or comparisons of extant primate brains. To remedy this, a new method was recently developed that involves a comparative analysis of the visual system – both eye and orbit size and the visual cortex – of Neanderthals and Anatomically Modern Humans, enabling inferences about how the brains of these two species may have been internally organized and how they may have differed, in turn sparking new insights into matters such as their socio-cognitive abilities and their behavioural repertoires (Pearce, Stringer and Dunbar, 2013).

But what does cognitive science in itself contribute to our understanding of Prehistoric art? Assuming that the goal of investigating the latter through the lens of the former is not too ambitious for the aforementioned methodological reasons, how can we apply research from present-day cognitive science to questions concerning the emergence and function of culture? Are we necessarily confined to archaeological and anthropological methods such as those mentioned above in order to then formulate hypotheses about the cognitive machinery present in our ancestors’ brains, or can we perhaps also apply insights from cognitive science more directly? Many researchers will agree that we can. The field of cognitive archaeology, notably developed and endorsed by authors such as Colin Renfrew, Steven Mithen, Thomas Wynn and Frederick Coolidge, undertakes investigations of the archaeological record with the help of the conceptual and methodological toolbox of cognitive science, investigating subjects as far apart as the evolution of consciousness and its role in artefact production, linguistic evolution in relation to tool manufacturing, mental modularity and the convergence of cognitive domains in the realm of art, the capacity of working memory in the creation of visual representations, and the cognitive nature of innovation and its reflection in the emergence of artefacts (e.g. Coolidge and Wynn 2009; de Beaune, Coolidge and Wynn, 2009; Mithen, 1996; Renfrew and Zubrow, 1994). In addition, neuroscience has proven to be a loyal companion to the field, resulting in new lines of research that can be referred to, collectively, as ‘neuroarchaeology’ (e.g. Malafouris 2013).

In a talk given at the inaugural iCog conference, we investigated an interesting case study at the intersection of Prehistoric archaeology and cognitive science. In 1998, the psychologist Nicholas Humphrey suggested that we may have been wrong to see figurative cave art as evidence of the breakthrough of fully modern cognition; something that others have described as the ultimate exponent and first unequivocal evidence of our ability to think symbolically. Perhaps, he argued, cave art is rather “the swan song of the old” (1998: 165), reflecting stages of cognitive evolution that precede the attainment of levels of cognitive and behavioural modernity that rival our present minds. Tying research on language evolution, theory of mind, autism, and the evolution of social cognition together, Humphrey attempted to pave the way for a new view on cave art. Methodologically, he proposed that we might understand the developmental trajectories of the human mind at the time of the Upper Palaeolithic transition by studying present-day individuals with autism spectrum disorders. This suggestion has elicited much controversy: drawing a parallel between present-day autistic children and human ancestors who may have been in earlier developmental phases of cognitive evolution was seen as ethically problematic, and as unjustified given the overall speculative nature of Humphrey’s hypothesis. As a consequence, his ideas did not receive much support among researchers of Prehistoric cave art. In follow-up research, we therefore reassessed Humphrey’s original hypothesis by taking a fresh perspective that combines a cognitive anthropological framework, focussing on the emergence of metarepresentational ability, with sound empirical evidence produced by cognitive psychological studies on the relationship between visual imagery and theory of mind (e.g. Charman and Baron-Cohen, 1992, 1995; Leslie, 1987; Sperber, 1994). This primary analysis can also be anchored in other fields of research. By gathering recent findings on, for example, the evolution of spoken language and patterns of migratory movement by our ancestors across the globe – elements which turn out to be highly relevant when discussing the evolution of human social cognition – it becomes possible to establish a renewed and empirically founded cognitive view on cave art, which may provide a starting point for other cognitively based analyses of this subject.

Overall, exciting times lie ahead for archaeologists researching the emergence and nature of cave art. Will cognitive science provide us with the ultimate key to understanding art’s origins? Probably not, as a complex evolutionary occurrence such as the emergence of art can only be understood better by combining insights from a wide variety of relevant scientific disciplines. But as cognitive scientists and humanities scholars interested in this approach continue to join forces to shed light on the nature of Prehistoric art, our knowledge can only increase, sparking new questions and hypotheses, the boundaries of which are currently not even in view.

 

References

Bahn, P.G. and Vertut, J. (1997) Journey Through the Ice Age, Berkeley: University of California Press.

Charman, T. and Baron-Cohen, S. (1992) ‘Understanding drawings and beliefs: a further test of the metarepresentation theory of autism: a research note’, Journal of Child Psychology and Psychiatry, vol. 33, pp. 1105-1112.

Charman, T. and Baron-Cohen, S. (1995) ‘Understanding photos, models, and beliefs: a test of the modularity thesis of theory of mind’, Cognitive Development, vol. 10, pp. 287-298.

Coolidge, F.L. and Wynn, T. (2009) The Rise of Homo Sapiens. The Evolution of Modern Thinking, Malden: Wiley-Blackwell.

Currie, G. (2011) ‘The master of the Masek Beds: handaxes, art, and the minds of early humans’, in Schellekens, E. and Goldie, P. (eds.) The Aesthetic Mind. Philosophy and Psychology, Oxford: Oxford University Press.

De Beaune, S.A., Coolidge, F.L. and Wynn, T. (eds.) (2009) Cognitive Archaeology and Human Evolution, Cambridge: Cambridge University Press.

Guthrie, R.D. (2005) The Nature of Paleolithic Art, Chicago: University of Chicago Press.

Halverson, J. (1987) ‘Art for art’s sake in the Paleolithic’, Current Anthropology, vol. 28, no. 1, February, pp. 63-71.

Humphrey, N. (1998) ‘Cave art, autism, and the evolution of the human mind’, Cambridge Archaeological Journal, vol. 8, no. 2, pp. 165-191.

Leslie, A.M. (1987) ‘Pretense and representation: the origins of “theory of mind”’, Psychological Review, vol. 94, no. 4, pp. 412-426.

Malafouris, L. (2013) How Things Shape the Mind. A Theory of Material Engagement, Cambridge: MIT Press.

Mithen, S. (1996) The Prehistory of the Mind. A Search for the Origins of Art, Religion, and Science, London: Thames and Hudson.

Lewis-Williams, D. (2002) The Mind in the Cave. Consciousness and the Origins of Art, London: Thames and Hudson.

Pearce, E., Stringer, C. and Dunbar, R.I.M. (2013) ‘New insights into differences in brain organization between Neanderthals and anatomically modern humans’, Proceedings of the Royal Society B, vol. 280, 20130168. http://dx.doi.org/10.1098/rspb.2013.0168

Renfrew, A.C. and Zubrow, E.B.W. (eds.) (1994) The Ancient Mind: Elements of Cognitive Archaeology, Cambridge: Cambridge University Press.

Seghers, E., & Blancke, S. (2013). Metarepresentational ability and the emergence of figurative cave art. Paper presentation at the iCog Inaugural Conference: Interdisciplinarity in Cognitive Science, University of Sheffield, 30 November – 1 December 2013.

Sperber, D. (1994) ‘The modularity of thought and the epidemiology of representations’, in Hirschfeld, L.A. and Gelman, S.A. (eds.) Mapping the Mind. Domain Specificity in Cognition and Culture, Cambridge: Cambridge University Press.