How stereotypes shape our perceptions of other minds

by Evan Westra — Ph.D. Candidate, University of Maryland

[Image: the Ambiguous Pictures Task]

McGlothlin & Killen (2006) showed groups of (predominantly white) American elementary school children from ages 6 to 10 a series of vignettes depicting children in ambiguous situations. For instance, one picture (above) showed two children by a swing set, with one on the ground frowning, and one behind the swing with a neutral expression. Two things might be going on in this picture: i) the child on the ground may have fallen off by accident (neutral scenario), or ii) the child on the ground may have been intentionally pushed by the one standing behind the swing (harmful scenario). Crucially, McGlothlin and Killen varied the race of the children depicted in the image, such that some children saw a white child standing behind the swing (left), and some saw a black child (right). Children were asked to explain what had just happened in the scenario, to predict what would happen next, and to evaluate the action that had just happened. Overwhelmingly, children were more likely to give the harmful scenario interpretation — that the child behind the swing intentionally pushed the other child — when the child behind the swing was black than when she was white. The race of the child depicted, it seems, influenced whether or not participants made an inference to harmful intentions.

This is yet another depressing example of how racial bias can warp our perceptions of others. But this study (and others like it: Sagar & Schofield 1990; Burnham & Harris 1992; Condry et al. 1985) also hints at a relationship between two forms of social cognition that are not often studied together: mindreading and stereotyping. The stereotyping component is clear enough. The mindreading component comes from the fact that race didn’t just affect kids’ attitudes towards the target — it affected what they thought was going on in the target’s mind. Although these two ways of thinking about other people — mindreading and stereotyping — both seem to play an important role in how we navigate the social world, curiously little attention has been paid to understanding the way they relate to one another. In this post, I want to explore this relationship. I’ll first briefly explain what I mean by “mindreading” and “stereotyping.” Next, I’ll discuss one existing proposal about the relationship between mindreading and stereotyping, and raise some problems for it. Then I will lay out the beginnings of a different way of cashing out this relationship.

*          *          *

First, let’s get clear on what I mean by “mindreading” and “stereotyping.”


In order to achieve our goals in highly social environments, we need to be able to accurately predict what other people will do, and how they will react to us. To do this, our brains generate complex models of other people’s beliefs, desires, and intentions, which we use to predict and interpret their behavior. This capacity to represent other minds is known variously as theory of mind, mindreading, mentalizing, and folk psychology. In human beings, this ability begins to emerge very early in development. As adults, we use it constantly, in a fast, flexible, and unconscious fashion. We use it in many important social activities, including communication, social coordination, and moral judgment.


Stereotypes are ways of storing generic information about social groups (including races, genders, sexual orientations, age-groups, nationalities, professions, political affiliations, physical or mental abilities, and so on) (Amodio 2014). A particularly important aspect of stereotypes is that they often contain information about stable personality traits. Unfortunately, it is all too easy for us to think of stereotypes about how certain social groups are lazy, or greedy, or aggressive, or submissive, and so on. According to Susan Fiske and colleagues’ Stereotype Content Model (SCM), there is an underlying pattern to the way we attribute personality traits to groups (Cuddy et al. 2009; Cuddy et al. 2007; Fiske et al. 2002; Fiske 2015). Personality trait attribution, on this view, varies along two primary dimensions: warmth and competence. The warmth dimension includes traits like (dis-)honesty, (un-)trustworthiness, and (un-)friendliness. These are traits that tell you whether or not someone is liable to help you or harm you. The competence dimension contains traits like (un-)intelligence, skillfulness, persistence, laziness, clumsiness, etc. These traits tell you how effective someone is at achieving their goals.

Together, these two dimensions combine to yield four distinct clusters of traits, each of which picks out a different kind of stereotype:

[Image: the Stereotype Content Model]
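The two-dimensional structure can be made concrete as a simple two-by-two lookup. This is only an illustrative sketch: the cluster labels follow the terminology used by Fiske and colleagues, but the code itself and the function name are my own, not part of the SCM papers.

```python
# A toy two-by-two lookup for the SCM's four trait clusters.
# Cluster labels follow Fiske and colleagues' terminology; the
# dictionary structure and function are illustrative only.

SCM_CLUSTERS = {
    # (warmth, competence) -> stereotype cluster
    ("high", "high"): "admiration",
    ("high", "low"): "paternalistic prejudice (pity)",
    ("low", "high"): "envious prejudice",
    ("low", "low"): "contemptuous prejudice",
}

def classify(warmth: str, competence: str) -> str:
    """Return the SCM cluster for a warmth/competence pair."""
    return SCM_CLUSTERS[(warmth, competence)]

print(classify("low", "high"))  # -> envious prejudice
```

The point of the lookup is just that knowing where a group sits on the two dimensions is enough to recover the kind of stereotype applied to it.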

*          *          *

So what do stereotyping and mindreading have to do with one another? There are some obvious differences, of course: stereotypes are mainly about groups, while mindreading is mainly about individuals. But intuitively, it seems like knowing about somebody’s social group membership could tell you a lot about what they think: if I tell you that I am a liberal, for instance, that should tell you a lot about my beliefs, values, and social preferences — valuable information, when it comes to predicting and interpreting my behavior.

Some philosophers and psychologists, such as Kristin Andrews, Anika Fiebich and Mark Coltheart, have suggested that stereotypes and mindreading may actually be alternative strategies for predicting and interpreting behavior (Andrews 2012; Fiebich & Coltheart 2015). That is, it may be that sometimes we use stereotypes instead of mindreading to figure out what a person is going to do. According to one such proposal (Fiebich & Coltheart 2015), stereotypes allow us to predict behavior because they encode associations between social categories, situations, and behaviors. Thus, one might form a three-way association between the social category police, the situation donut shops, and the behavior eating donuts, which would lead one to predict that, when one sees a police officer in a donut shop, he or she will likely be eating a donut. A more complex version of this associationist approach would be to associate social groups with particular trait labels (as per the SCM), and thus consist in four-way associations between social categories, trait labels, situations, and behaviors (Fiebich & Coltheart 2015; Andrews 2012). Thus, one might associate the trait of generosity with leaving large tips in restaurants, and associate the social category of uncles with generosity, and thereby come to expect uncles to leave large tips in restaurants. One might then explain this behavior by referring to the uncle’s generosity. The key thing to notice about these accounts is that their predictions do not rely at all upon mental-state attributions. This is by design: these proposals are meant to show that we often don’t need mindreading to predict or interpret behavior.
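The flavor of the four-way association can be sketched in a few lines of code. Everything here (the table entries, the function name) is an invented illustration of the kind of lookup Fiebich and Coltheart describe, not their own formalism; notice that no beliefs or desires appear anywhere in it.

```python
# A minimal sketch of the four-way associationist predictor:
# social category -> trait label, then (trait, situation) -> behavior.
# All entries are illustrative; no mental states are attributed.

trait_of = {"uncle": "generous"}

behavior_given = {
    ("generous", "restaurant"): "leaves a large tip",
}

def predict(category: str, situation: str) -> str:
    """Predict behavior purely by chained association."""
    trait = trait_of[category]
    return behavior_given[(trait, situation)]

print(predict("uncle", "restaurant"))  # -> leaves a large tip
```

On this picture, "explaining" the tip just means citing the trait recovered in the first lookup, not any attributed desire.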

One problem for this sort of view comes from its invocation of “situations.” What information, one might wonder, is contained within the scope of a particular “situation”? Surely, a situation does not include everything about the state of the world at a given moment. Situations are probably meant to pick out local states of affairs. But not all the facts about a local state of affairs will be relevant to behavior prediction. The presence of mice in the kitchen of a restaurant, for instance, will not affect your predictions about the size of your uncle’s tip. It might, however, affect your predictions about the behavior of the health inspector, should one suddenly arrive. Which local facts are predictively useful will ultimately depend upon their relevance to the agent whose behavior we are predicting. But whether or not a fact is relevant to an agent will depend upon that agent’s beliefs about the local state of affairs, as well as her goals and desires. If this is how representations of predictively useful situations are computed, then the purportedly non-mentalistic proposal given above really includes a tacit appeal to mindreading. If this is not how situations are computed, then we are owed an explanation of how the non-mentalistic behavior-predictor arrives at predictively useful representations of situations that do not depend upon considerations of relevance.

*          *          *

Instead of treating mindreading and stereotypes as separate forms of behavior-prediction and interpretation, we might instead explore the ways in which stereotypes might inform mindreading. The key to this approach, I suggest, lies in the fact that stereotypes encode information about personality traits. In many ways, personality traits are like mental states: they are unobservable mental properties of individuals, and they are causally related to behavior. But they also differ in one key respect: their temporal stability. Beliefs and desires are inherently unstable: a belief that P can be changed by the observation of not-P; a desire for Q can be extinguished by the attainment of Q. Personality traits, in contrast, cannot be extinguished or abandoned based on everyday events. Rather, they tend to last throughout a person’s lifetime, and manifest themselves in many different ways across many different situations. A unique feature of personality traits, in other words, is that they are highly stable mental entities (Doris 2002). So when stereotypes ascribe traits to groups, they are ascribing a property that one could reasonably expect to remain consistent across many different situations.

The temporal properties of mental states are extremely relevant for mindreading, especially in models that employ Bayesian Predictive Coding (Kilner & Frith 2007; Koster-Hale & Saxe 2013; Hohwy & Palmer 2014; Hohwy 2013; Clark 2015). To see why, let’s start with an example:

Suppose that we believe that Laura is thirsty, and have attributed to her the goal of getting a drink (G). As goals go, this one is relatively short-term (unlike, say, the goal of getting a PhD). Knowing this, we predict that in order to achieve (G), Laura must form a number of even shorter-term sub-goals: (G1) get the juice from the fridge, and (G2) pour herself a glass of juice. But each of these requires the formation of still shorter-term sub-sub-goals: (G1a) walk over to the kitchen, (G1b) open the fridge door, (G1c) remove the juice container, (G2a) remove a cup from the cupboard, (G2b) pour juice into the cup. Predicting Laura’s behavior in this context thus begins with the ascription of a longer-duration mental state (G), followed by the ascription of successively shorter-term mental-state attributions (G1, G2, G1a, G1b, G1c, G2a, G2b).

As mindreaders, we can use attributions of more abstract, temporally extended mental states to make top-down inferences about more transient mental states. At each level in this action-prediction hierarchy, we use higher-level goal-attributions to constrain the space of possible sub-goals that the agent might form. We then use our prior experience to select the most likely sub-goal from the hypothesis space, and the process repeats itself. Ultimately, this yields fairly fine-grained expectations about motor-intentions, which manifest themselves as mirror-neuron activity (Kilner & Frith 2007; Csibra 2008). Action-prediction thus plays out as a descent from more stable mental-state attributions to more transient ones, which ultimately bottom out in highly concrete expectations about behavior.
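One way to picture this descent is as a repeated "pick the most likely sub-goal" step down a hierarchy. The following toy sketch only illustrates the structure just described; the goal names and probabilities are invented, and real predictive-coding models are of course far more sophisticated.

```python
# A toy action-prediction hierarchy. Each attributed goal constrains
# the hypothesis space of sub-goals one level down; prior experience
# (here, hand-picked probabilities) selects the most likely option.

sub_goals = {
    "get a drink": {"get juice from fridge": 0.7, "get water from tap": 0.3},
    "get juice from fridge": {"walk to kitchen": 0.9, "ask someone else": 0.1},
    "walk to kitchen": {},  # bottoms out in concrete motor expectations
}

def descend(goal: str) -> list:
    """Follow the most likely sub-goal at each level until we bottom out."""
    path = [goal]
    while sub_goals.get(goal):
        goal = max(sub_goals[goal], key=sub_goals[goal].get)
        path.append(goal)
    return path

print(descend("get a drink"))
# -> ['get a drink', 'get juice from fridge', 'walk to kitchen']
```

The higher-level attribution does its work by shrinking the candidate set at each step: only sub-goals compatible with the current goal are ever scored.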

Personality traits, which are distinguished by their high degree of temporal stability, fit naturally into the upper levels of this action-prediction hierarchy. Warmth traits, for instance, can tell us about the general preferences of an agent: a generous person probably has a general preference for helping others, while a greedy person probably has a general desire to enrich herself. These broad preference-attributions can in turn inform more immediate goal-attributions, which can then be used to predict behavior.

This role for representations of personality traits in mental-state inference fits well with what we know about how we reason about traits more generally. For instance, we often make extremely rapid judgments about the warmth and competence traits of individuals based on fairly superficial evidence, such as facial features (Todorov et al. 2008); we also tend to over-attribute the causes of behavior to personality traits, rather than situational factors — a phenomenon commonly known as the “fundamental attribution error” or the “correspondence bias” (Gawronski 2004; Ross 1977; Gilbert et al. 1995). Prioritizing personality traits makes a lot of sense if they form the inferential basis for more complex forms of behavior prediction. It also makes sense that this aspect of mindreading would need to rely on fast, rough-and-ready heuristics, since personality trait information would need to be inferred very quickly in order to be useful in action-planning.

From a computational perspective, then, using personality traits to make inferences about behavior makes a lot of sense, and might make mindreading more efficient. But in exchange for this efficiency, we make a very disturbing trade. Stereotypes, which can be activated rapidly based on easily available perceptual cues, provide the mindreading system with a rapid means of storing trait information (Mason et al. 2006; Macrae et al. 1994). With this speed comes one of the most morally pernicious forms of human social cognition, one that helps to perpetuate discrimination and social inequality.

*          *          *

The picture I’ve painted in this post is, admittedly, rather pessimistic. But just because the roots of discrimination are cognitively deep, we should not conclude that it is inevitable. More recent work from McGlothlin and Killen (2010) should give us some hope: while children from racially homogeneous schools (who had little direct contact with members of other races) tended to show signs of biased intention-attribution, children from racially heterogeneous schools (who had regular contact with members of other races) did not display such signs of bias. Evidently, intergroup contact is effective in curbing the development of stereotypes — and, by extension, biased mindreading.



Amodio, D.M., 2014. The neuroscience of prejudice and stereotyping. Nature Reviews Neuroscience, 15(10), pp.670–682.

Andrews, K., 2012. Do apes read minds?: Toward a new folk psychology, Cambridge, MA: MIT Press.

Burnham, D.K. & Harris, M.B., 1992. Effects of Real Gender and Labeled Gender on Adults’ Perceptions of Infants. Journal of Genetic Psychology, 15(2), pp.165–183.

Clark, A., 2015. Surfing uncertainty: Prediction, action, and the embodied mind, Oxford: Oxford University Press.

Condry, J.C. et al., 1985. Sex and Aggression: The Influence of Gender Label on the Perception of Aggression in Children. Child Development, 56(1), pp.225–233.

Csibra, G., 2008. Action mirroring and action understanding: an alternative account. In P. Haggard, Y. Rossetti, & M. Kawato, eds. Sensorimotor Foundations of Higher Cognition. Attention and Performance XXII. Oxford: Oxford University Press, pp. 435–459.

Cuddy, A.J.C. et al., 2009. Stereotype content model across cultures: Towards universal similarities and some differences. British Journal of Social Psychology, 48(1), pp.1–33.

Cuddy, A.J.C., Fiske, S.T. & Glick, P., 2007. The BIAS map: behaviors from intergroup affect and stereotypes. Journal of Personality and Social Psychology, 92(4), pp.631–48.

Doris, J.M., 2002. Lack of character: Personality and moral behavior, Cambridge, UK: Cambridge University Press.

Fiebich, A. & Coltheart, M., 2015. Various Ways to Understand Other Minds: Towards a Pluralistic Approach to the Explanation of Social Understanding. Mind and Language, 30(3), pp.235–258.

Fiske, S.T., 2015. Intergroup biases: A focus on stereotype content. Current Opinion in Behavioral Sciences, 3(April), pp.45–50.

Fiske, S.T., Cuddy, A.J.C. & Glick, P., 2002. A Model of (Often Mixed) Stereotype Content: Competence and Warmth Respectively Follow From Perceived Status and Competition. Journal of Personality and Social Psychology, 82(6), pp.878–902.

Gawronski, B., 2004. Theory-based bias correction in dispositional inference: The fundamental attribution error is dead, long live the correspondence bias. European Review of Social Psychology, 15(1), pp.183–217.

Gilbert, D.T. et al., 1995. The Correspondence Bias. Psychological Bulletin, 117(1), pp.21–38.

Hohwy, J., 2013. The predictive mind, Oxford: Oxford University Press.

Hohwy, J. & Palmer, C., 2014. Social Cognition as Causal Inference: Implications for Common Knowledge and Autism. In M. Gallotti & J. Michael, eds. Perspectives on Social Ontology and Social Cognition. Dordrecht: Springer Netherlands, pp. 167–189.

Kilner, J.M. & Frith, C.D., 2007. Predictive coding: an account of the mirror neuron system. Cognitive Processing, 8(3), pp.159–166.

Koster-Hale, J. & Saxe, R., 2013. Theory of Mind: A Neural Prediction Problem. Neuron, 79(5), pp.836–848.

Macrae, C.N., Stangor, C. & Milne, A.B., 1994. Activating Social Stereotypes: A Functional Analysis. Journal of Experimental Social Psychology, 30(4), pp.370–389.

Mason, M.F., Cloutier, J. & Macrae, C.N., 2006. On construing others: Category and stereotype activation from facial cues. Social Cognition, 24(5), p.540.

McGlothlin, H. & Killen, M., 2010. How social experience is related to children’s intergroup attitudes. European Journal of Social Psychology, 40(4), pp.625–634.

McGlothlin, H. & Killen, M., 2006. Intergroup Attitudes of European American Children Attending Ethnically Homogeneous Schools. Child Development, 77(5), pp.1375–1386.

Ross, L., 1977. The Intuitive Psychologist And His Shortcomings: Distortions in the Attribution Process. Advances in Experimental Social Psychology, 10, pp.173–220.

Sagar, H.A. & Schofield, J.W., 1990. Racial and behavioral cues in Black and White children’s perception of ambiguously aggressive acts. Journal of Personality and Social Psychology, 39(October), pp.590–598.

Todorov, A. et al., 2008. Understanding evaluation of faces on social dimensions. Trends in Cognitive Sciences, 12(12), pp.455–460.


Thanks to Melanie Killen and Joan Tycko for permission to use images of experimental stimuli from McGlothlin & Killen (2006, 2010).


Delusions as Explanations


by Matthew Parrott — Lecturer in the Department of Philosophy at King’s College London

One idea that has been extremely influential within cognitive neuropsychology and neuropsychiatry is that delusions arise as intelligible responses to highly irregular experiences. For example, we might think that the reason a subject adopts the belief that a house has inserted a thought into her head is that she has in fact had an extremely bizarre experience representing a house pushing a thought into her head (the case comes from Saks 2007; see Sollberger 2014 for an account of thought insertion along these lines). If this were to happen, then delusions would arise for reasons that are familiar from cases of ordinary belief. A delusional subject would simply be endorsing or taking on board the content of her experience.

However, the notion that a delusion is an understandable response to an irregular experience need not be construed along the lines of a subject accepting the content of her experience. Over a number of years, Brendan Maher advocated an influential alternative proposal, according to which an individual adopts a delusional belief because it serves as an explanation of her ‘strange’ or ‘significant’ experience (see Maher 1974, 1988, 1999). Crucially, for Maher, the content of the subject’s experience is not identical to the content of her delusional belief. Rather, the latter is determined in part by contextual factors, such as cultural background or what Maher calls ‘general explanatory systems’ (cf. 1974). Maher’s approach is often referred to as the ‘explanationist’ approach to understanding delusions (Bayne and Pacherie 2004).

Explanationist accounts have been especially popular with respect to the Capgras delusion that one’s friend or relative is really an imposter (e.g., Stone and Young 1997) and delusions of alien control (e.g., Blakemore et al. 2002). Yet, despite its prevalence, the explanationist approach has been called into question by a number of philosophers on the grounds that delusions are quite obviously very bad explanations.

For instance, Davies and col­leagues argue:

‘The suggestion that delusions arise from the normal construction and adoption of an explanation for unusual features of experience faces the problem that delusional patients construct explanations that are not plausible and adopt them even when better explanations are available. This is a striking departure from the more normal thinking of non-delusional subjects who have similar experiences.’ (Davies et al. 2001, pg. 147; but see also Bayne and Pacherie 2004, Campbell 2001, Pacherie et al. 2006)

Indeed, since delusions strike most of us as highly implausible, it is hard to see how they could explain any experience, no matter how unusual. So if we want to understand delusional cognition along Maher’s lines, we will need to clarify the cognitive transition from anomalous experience to delusional belief in a way that illuminates how it could be a genuinely explanatory transition.

In what follows, I would like to distinguish three distinct ways in which a delusional belief might be thought to be explanatorily inadequate, each of which I think poses a distinct challenge for the explanationist approach.

The first concerns the phenomenal character of a delusional subject’s anomalous experience. Maher claims that the strange experiences we find in cases of delusion ‘demand’ explanations. But why is that? If the experiences that give rise to delusions do not themselves represent highly unusual states of affairs (as Maher seems to think), what is it about them that calls for or ‘demands’ an explanation? And does the particular phenomenal character of a ‘strange’ experience ‘demand’ a specific form of explanation, or are all ‘strange’ experiences relatively equal when it comes to their demands? The challenge for the explanationist is to clarify the phenomenal character of a delusional subject’s anomalous experience in such a manner that makes clear how it could be the explanandum of a delusion. Let’s call this the Phenomenal Challenge.

I actually think some very influential neuropsychological accounts have difficulty with the Phenomenal Challenge. To briefly take one example, Ellis and Young (1990) proposed that the Capgras delusion arises because of a lack of responsiveness to familiar faces in the autonomic nervous system. In non-delusional subjects, an experience of a familiar face is associated with an affective response in the autonomic nervous system, but Capgras subjects fail to have this response. Ellis and Young’s theory predicted that there would be no significant difference in the skin conductance responses of Capgras subjects when they are shown familiar versus unfamiliar faces, which has subsequently been confirmed by a number of studies. Thus it seems there is good evidence that a typical Capgras subject’s autonomic nervous system is not sensitive to familiar faces.

This seems promising, but I don’t think it answers the Phenomenal Challenge, because it doesn’t tell us anything about what a Capgras subject’s experience of a face is like. As John Campbell notes, ‘the mere lack of affect does not itself constitute the perception’s having a particular content.’ (2001, pg. 96) Moreover, individuals are not normally conscious of their autonomic nervous system (see Coltheart 2005). So it isn’t clear how diminished sensitivity within that system constitutes an experience that ‘demands’ an explanation involving imposters. To really understand why an anomalous experience of a familiar face calls for a delusional explanation, we need to get a better sense of what that experience is like.

A second worry raised in the previous passage is that delusional subjects adopt delusional explanations ‘even when better explanations are available’. Why does this happen? Why does a delusional subject select an inferior hypothesis from the set of those available to her? Let’s call this the Abductive Challenge.

To illustrate, let’s stick with Capgras. The explanationist proposal is that a subject adopts the belief that her friend has been replaced by an imposter in order to explain some odd experience. But even if we suppose the imposter hypothesis is empirically adequate, it is highly unlikely to be the best explanation available. As Davies and Egan remark, ‘one might ask whether there is an alternative to the imposter hypothesis that provides a better explanation of the patient’s anomalous experience. There is, of course, an obvious candidate for such a proposition.’ (2013, pg. 719) In fact, there seem to be a number of better available hypotheses; for example, that one has suffered a brain injury, or any hypothesis that appealed to more familiar changes affecting facial appearance, such as hair-style or illness.

Put simply, the Abductive Challenge is that even if we assume the cognitive transition from unusual experience to delusion involves something like abductive reasoning or inference to the best explanation, delusional subjects select poor explanations instead of better available alternatives. The explanationist needs to tell us why this happens (for some attempts see Coltheart et al. 2010, Davies and Egan 2013, McKay 2012, Parrott and Koralus 2015).

The final challenge for explanationism is, in my view, the most problematic. In the above passage, Davies and colleagues remark that delusions are extremely implausible. Along these lines, we might naturally wonder why a subject would even consider one to be a candidate explanation of her unusual experience. Why would she not instead immediately rule out a delusional hypothesis on the grounds that it is far too implausible to be given serious consideration? This concern is echoed by Fine and colleagues:

‘They explain the anomalous thought in a way that is so far-fetched as to strain the notion of explanation. The explanations produced by patients with delusions to account for their anomalous thoughts are not just incorrect; they are nonstarters. Appealing to the notion of explanation, therefore, does not clarify how the delusional belief comes about in the first place because the explanations of the delusional patients are nothing like explanations as we understand them.’ (Fine et al. 2005, pg. 160)

The task of explaining some target phenomenon demands cognitive resources, and the idea that delusions are explanatory ‘nonstarters’ suggests that they would normally be immediately rejected. When engaged in an explanatory task, we know that a person considers only a restricted set of hypotheses, and it seems quite natural to exclude ones that are inconsistent with one’s background knowledge. Since delusions seem to be in conflict with our background knowledge, this is perhaps why we find it difficult to understand how someone could think a delusion is even potentially explanatory (for further discussion, see Parrott 2016).

So why do subjects consider delusional explanations as candidate hypotheses? This is the final challenge for the explanationist. Let’s call it the Implausibility Challenge. Notice that whereas the Abductive Challenge asks why a subject eventually adopts one hypothesis instead of another from among a fixed set of available alternatives, the Implausibility Challenge is more general. It asks where these hypotheses, the ones subject to further investigation, come from in the first place.

Can these three challenges be overcome? I am optimistic, and have tried to address them for the case of thought insertion (see Parrott forthcoming). However, I also think much more work needs to be done.

First, as I mentioned above, it is not clear that we have a good understanding of what it is like for an individual to have the sorts of experiences we think are implicated in many cases of delusion. Without such understanding, I think it is hard to see why some experiences make demands on a person’s cognitive explanatory resources. I also suspect that understanding what various anomalous experiences are like might shed more light on why delusional individuals tend to adopt very similar explanations.

Second, I think that addressing the Implausibility Challenge requires us to obtain a far better understanding of how hypotheses are generated than we currently have. In both delusional and non-delusional cognition, an explanatory task presents a computational problem. Which candidate hypotheses should be selected for further empirical testing? Although I have suggested that epistemically impossible hypotheses are normally ruled out, that doesn’t tell us how candidates are ruled in. Plausibly, there is some selection function(s) that chooses candidate explanations of a target phenomenon, but, as Thomas and colleagues note, we have very little sense of how this might work:

‘Hypothesis generation is a fundamental component of human judgment. However, despite hypothesis generation’s importance in understanding judgment, little empirical and even less theoretical work has been devoted to understanding the processes underlying hypothesis generation.’ (Thomas et al. 2008, pg. 174)

The Implausibility Challenge strikes me as especially puzzling because I think we can easily see that certain strategies for hypothesis generation would be bad. For instance, it wouldn’t generally be good to consider hypotheses only if they have a prior probability above a certain threshold, because a hypothesis with a low prior probability might best explain a new piece of evidence.
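A toy Bayesian calculation makes the point vivid. Suppose three candidate hypotheses, where H3 has a prior low enough that any plausibility threshold would exclude it, yet the new evidence is far more likely under H3 than under its rivals. All the numbers here are invented purely for illustration.

```python
# Why filtering candidate hypotheses by prior probability alone can
# misfire: a low-prior hypothesis can still have the highest posterior.

priors = {"H1": 0.90, "H2": 0.09, "H3": 0.01}        # H3 falls below any threshold
likelihood = {"H1": 0.001, "H2": 0.01, "H3": 0.99}   # P(evidence | H)

# Bayes' rule: posterior proportional to prior * likelihood
unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: unnormalized[h] / total for h in priors}

best = max(posteriors, key=posteriors.get)
print(best, round(posteriors[best], 3))  # -> H3 0.846
```

Despite its tiny prior, H3 ends up with by far the highest posterior, so a hypothesis generator that screened candidates by prior alone would never even have considered the best explanation.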

Delusional cognition raises quite a few deep and interesting questions, many of which bear on how we think about belief formation and reasoning. And I have only scratched the surface when it comes to the kinds of puzzles that arise when we start thinking about the origins of delusion. But I hope that distinguishing these explanatory challenges will help us in thinking about the questions which need to be pursued if we are to assess the plausibility of the explanationist strategy.



Bayne, T. and E. Pacherie. 2004. “Bottom-up or Top-down?: Campbell’s Rationalist Account of Monothematic Delusions.” Philosophy, Psychiatry, and Psychology, 11: 1–11.

Blakemore, S., D. Wolpert, and C. Frith. 2002. “Abnormalities in the Awareness of Action.” Trends in Cognitive Science, 6: 237–242.

Campbell, J. 2001. “Rationality, Meaning and the Analysis of Delusion.” Philosophy, Psychiatry and Psychology, 8: 89–100.

Coltheart, M., P. Menzies, and J. Sutton. 2010. “Abductive Inference and Delusional Belief.” Cognitive Neuropsychiatry, 15: 261–87.

Coltheart, M. 2005. “Conscious Experience and Delusional Belief.” Philosophy, Psychiatry and Psychology, 12: 153–57.

Davies, M., M. Coltheart, R. Langdon, and N. Breen. 2001. “Monothematic Delusions: Towards a Two-Factor Account.” Philosophy, Psychiatry and Psychology, 8: 133–158.

Davies, M. and Egan, A. 2013. “Delusion: Cognitive Approaches, Bayesian Inference and Compartmentalization.” In K.W.M. Fulford, M. Davies, R.G.T. Gipps, G. Graham, J. Sadler, G. Stanghellini and T. Thornton (eds.), The Oxford Handbook of Philosophy of Psychiatry. Oxford: Oxford University Press.

Ellis, H. and A. Young. 1990. “Accounting for Delusional Misidentifications.” British Journal of Psychiatry, 157: 239–48.

Fine, C., J. Craigie, and I. Gold. 2005. “The Explanation Approach to Delusion.” Philosophy, Psychiatry, and Psychology, 12 (2): 159–163.

Maher, B. 1974. “Delusional Thinking and Perceptual Disorder.” Journal of Individual Psychology, 30: 98–113.

Maher, B. 1988. “Anomalous Experience and Delusional Thinking: The Logic of Explanations.” In T. Oltmanns and B. Maher (eds.), Delusional Beliefs, Chichester: John Wiley and Sons, pp. 15–33.

Maher, B. 1999. “Anomalous Experience in Everyday Life: Its Significance for Psychopathology.” The Monist, 82: 547–570.

McKay, R. 2012. “Delusional Inference.” Mind and Language, 27: 330–55.

Pacherie, E., M. Green, and T. Bayne. 2006. “Phenomenology and Delusions: Who Put the ‘Alien’ in Alien Control?” Consciousness and Cognition, 15: 566–577.

Parrott, M. 2016. “Bayesian Models, Delusional Beliefs, and Epistemic Possibilities.” The British Journal for the Philosophy of Science, 67: 271–296.

Parrott, M. forth­com­ing. “Subjective Misidentification and Thought Insertion.” Mind and Language.

Parrott, M. and P. Koralus. 2015. “The Erotetic Theory of Delusional Thinking.” Cognitive Neuropsychiatry, 20 (5): 398–415.

Saks, E. 2007. The Center Cannot Hold. New York: Hyperion.

Sollberger, M. 2014. “Making Sense of an Endorsement Model of Thought Insertion.” Mind and Language, 29: 590–612.

Stone, T. and A. Young. 1997. “Delusions and Brain Injury: the Philosophy and Psychology of Belief.” Mind and Language, 12: 327–364.

Thomas, R., M. Dougherty, A. Sprenger, and J. Harbison. 2008. “Diagnostic Hypothesis Generation and Human Judgment.” Psychological Review, 115(1): 155–185.

How much of an animal are you?

Baby chimpanzee

by Léa Salje – Lecturer in Philosophy of Mind and Language at the University of Leeds

I’m an anim­al, and so are you. We might be rather spe­cial anim­als, but we are anim­als all the same: bio­lo­gic­al organ­isms oper­at­ing in a par­tic­u­lar eco­lo­gic­al niche. For most of us, this is some­thing we’ve known for a long time, prob­ably since primary school. It’s per­haps sur­pris­ing, then, how little it seems to per­meate our every­day think­ing about ourselves, for many of us at least. I’m hardly minded to earn­estly con­tem­plate the fact of my anim­al­ity in my deal­ings with myself as I go about my daily busi­ness of coffee-ordering and Facebook-posting.

There’s also a question about how deeply the fact of our animality genuinely penetrates the conception of ourselves that guides our philosophy of mind, even among those of us happy to accept it on its surface. This was the question at the heart of the Persons as Animals project – an AHRC-funded project at the University of Leeds, led by Helen Steward, that I’ve been working on for the last year – which aims to explore the ways in which certain areas in philosophy of mind might be illuminated by a perspective that forefronts the fact that we are animals. A couple of things (at least) follow from taking such a perspective seriously. The first is that if we are animals, we are thereby not Cartesian egos, or brains, or systems of information, or functional systems, or bundles of mental states. We are entire embodied wholes, such that an understanding of ourselves requires a much more holistic perspective than that which is often taken in philosophy of mind. And second, if we are animals then our powers and capacities must be related in an evolutionary way to those of other creatures. This means that a decent understanding of those powers and capacities – even relatively hifalutin powers like language and the capacity to make choices – should benefit from a perspective that takes account of what is known of animal perception, cognition and agency.

Clearly, mere know­ledge of the bio­lo­gic­al fact of our anim­al­ity is not enough to mobil­ise these sorts of changes. One of the cent­ral planks of the pro­ject was that we need new and bet­ter ways to artic­u­late our place in the anim­al king­dom if we are to make philo­soph­ic­al pro­gress in these areas. And before we can do that, we need to under­stand what sorts of obstacles might have so far pre­ven­ted such an anim­al­ist­ic self-conception from really tak­ing hold.

To this end, the Persons as Animals pro­ject came togeth­er earli­er this year with con­ser­va­tion social sci­ent­ist Andy Moss from the edu­ca­tion depart­ment at Chester Zoo to run a series of semi-structured focus groups, designed to explore how we think of ourselves and our rela­tion to the anim­al world. What sorts of things get in the way of anim­al­ist­ic think­ing about ourselves? How might it be encour­aged? We ran 12 groups in all, 6 made up of zoo vis­it­ors, and anoth­er 6 of stu­dents from Leeds University.

What we found was a strik­ing absence of any uni­vocal nar­rat­ive about our sense of our own anim­al­ity. Instead, we found a deeply frac­tured and uneasy pic­ture: we do see ourselves as anim­als, and we don’t. And many of us struggle to recon­cile these two viewpoints.

Interestingly, this sense of unease came out in dif­fer­ent ways for dif­fer­ent par­ti­cipants. Some began with a firm sense of their own anim­al­ity, often accom­pan­ied by expres­sions of indig­na­tion at the very sug­ges­tion that we might think oth­er­wise. (Of course we’re anim­als; how dare we count ourselves as spe­cial?) The dis­cus­sion of these par­ti­cipants ten­ded to high­light the intel­li­gent beha­viours of oth­er anim­als, and to down­play our own beha­viours and capa­cit­ies as largely instinct-driven under a flimsy ven­eer of civility.

This is, of course, to fore­front the fact of our anim­al­ity in a way. But by so mag­ni­fy­ing our con­tinu­ity with the rest of the anim­al world, these par­ti­cipants seemed to face a spe­cial chal­lenge: they seemed to struggle to absorb into that anim­al­ist­ic self-image our ali­en­a­tion from and – even more troub­lingly – dom­in­a­tion over the nat­ur­al world around us. How can we recon­cile this self-conceived status as one spe­cies of anim­al among oth­ers on the one hand, with the eye-watering extent of our dam­aging impos­i­tions on the world around us on the oth­er? It’s one thing to think of ourselves as a spe­cial cat­egory of being, per­haps one that has the right (or even the duty) to organ­ise things for the whole of the nat­ur­al world. But that option is ruled out by a robust insist­ence on our lack of spe­cial­ness, of con­tinu­ity with oth­er anim­als. The only option remain­ing, how­ever, seems to be infin­itely more dis­turb­ing – that we are mere anim­als who have simply spir­alled out of con­trol. In the end, we often found these par­ti­cipants adopt­ing the rather ingeni­ous solu­tion of mov­ing from first per­son­al locu­tions to speak­ing in gen­er­al­isa­tions when dis­cuss­ing power asym­met­ries with the rest of the nat­ur­al world; ‘I don’t think we’re spe­cial, but the prob­lem is that people do’.

Others, by con­trast, began from a heightened sense of fun­da­ment­al dis­tinct­ness from oth­er anim­als. Even if we’re anim­als (sotto voce), we’re obvi­ously spe­cial. No danger among these groups of fail­ing to cel­eb­rate the spe­cial com­plex­ity of human beings. But these par­ti­cipants faced anoth­er chal­lenge, of recon­cil­ing this self-conception as fun­da­ment­ally dif­fer­ent from oth­er anim­als with know­ledge of the bio­lo­gic­al fact of our animality.

Typically par­ti­cipants express­ing this sort of view repor­ted that their know­ledge of their anim­al­ity is highly muted or recess­ive as they go about their daily lives. Indeed, some repor­ted not only that it nor­mally faded into the back­ground, but more strongly that it took con­sid­er­able cog­nit­ive effort to bring it to mind and make it fit with how they really see them­selves. In one par­tic­u­larly mem­or­able artic­u­la­tion of this feel­ing, one par­ti­cipant recalled find­ing out that she was an anim­al, and think­ing of it ‘as more of a clas­si­fic­a­tion like fit­ting everything into bubbles, like when I real­ised the sun was a star. It has all the same prop­er­ties as the oth­er stars and that’s weird to you because you regard them very dif­fer­ently in your every­day life.’ Our anim­al­ity, the idea seems to be, is a mat­ter of indis­put­able sci­entif­ic fact which is nev­er­the­less some­how com­pletely at odds with our every­day con­cep­tu­al­isa­tions and categorisations.

Through discussion, these groups too found creative ways of dissolving the tension. An extreme minority reaction was to give up on the claim that we are animals as simply ‘not ringing true’. Another strategy, observed in an extended discussion by a group of physics students, was to redraw the conceptual boundaries of what it is to be an animal. If we abandon the idea that animals must be biological organisms, then we create more space to comfortably hold together both the fact that we are animals and the conviction that we are importantly different from other members of the animal kingdom. To say that we are animals, after all, might now be to position ourselves just as closely to computers as to caterpillars. A third sort of resolution was to associate animality with a very basic form of existence; one that we have, by now, transcended. We might once have been animals, the idea goes, but we’ve now moved beyond it. With this response, participants were able to bracket out uncomfortable facts about our animal natures as part of our evolutionary history, rather than treating them as calling for incorporation into our live self-conceptions. For the most part, however, all of these responses were given with observable unease and frank statements of felt difficulty in incorporating the fact of our animality into their everyday self-conceptions.

Among yet other participants there emerged a quite different viewpoint, this time one that seemed much better able to accommodate our claims both to animality and to distinctness. For this group, the traits, behaviours and capacities that might at first glance seem to separate us from the rest of the animal kingdom are really just the results of evolutionary processes, like any other. Cinemas, religion, prog rock, iPads, sarcasm, nuclear weapons, cryptic crosswords and Shoreditch apartments don’t cut us off from the natural world; they are part of it. We are, on this view, placed unflinchingly alongside other animals in the natural world, but not at the cost of a denial or deprecation of human complexity.

One of the cent­ral aims of the Persons as Animals pro­ject was to bet­ter under­stand our rela­tion­ship to our own anim­al­ity, so that we might in turn bet­ter under­stand how to instill more deep-rooted ways of think­ing of ourselves as anim­als into our philo­sophy of mind. Our res­ults seem to sug­gest that for many of us the answer is that the rela­tion­ship is a pro­foundly awk­ward one; we seem to be far from find­ing a stable rest­ing place for our sense of pos­i­tion in the anim­al world. This find­ing ought to put us on our guard in our philo­soph­ic­al prac­tices. We are not insu­lated, as philo­soph­ers, from the uneasy and con­flic­ted anim­al­ist­ic self-conceptions that seem­ingly under­lie our every­day think­ing about ourselves.

Is implicit cognition bad cognition?


by Sophie Stammers – incoming postdoctoral fellow on project PERFECT

A significant body of research in cognitive science holds that human cognition comprises two kinds of processes: explicit and implicit. According to this research, explicit processes operate slowly, requiring attentional guidance, whilst implicit processes operate quickly, automatically and without attentional guidance (Kahneman, 2012; Gawronski and Bodenhausen, 2014). A prominent example of implicit cognition that has seen much recent discussion in philosophy is that of implicit social bias, where associations between (often) stigmatized social groups and (often) negative traits manifest in behaviour, resulting in discrimination (see Brownstein and Saul, 2016a; 2016b). This is the case even though the individual in question isn’t directing their behaviour to be discriminatory with the use of attentional guidance, and is apparently unaware that they’re exhibiting any kind of disfavouring treatment at the time (although see Holroyd 2015 for the suggestion that individuals may be able to observe bias in their behaviour).

Examples of implicit social bias manifesting in behaviour include exhibiting greater signs of social unease, less smiling and more speech errors when conversing with a black experimenter compared to when the experimenter is white (McConnell and Leibold, 2001); less eye contact and increased blinking in conversations with a black experimenter versus their white counterpart (Dovidio et al., 1997); and reduced willingness for skin contact with a black experimenter versus a white one (Wilson et al., 2000). Implicit social biases also arise in more deliberative scenarios: Swedish recruiters who harbor implicit racial associations are less likely to interview applicants perceived to be Muslim, as compared to applicants with a Swedish name (Rooth, 2007), and doctors who harbor implicit racial associations are less likely to offer treatment to black patients with the clinical presentation of heart disease than to white patients with the same clinical presentation of the disease (Green et al., 2007). These studies suggest that participants’ discriminatory behaviour need not correlate with the beliefs and values that they profess to have when questioned.

Both the mechanisms of implicit bias, and implicit processes more generally, are often characterised in the language of the sub-optimal. Variously, they deliver “a more inflexible form of thinking” than explicit cognition (Pérez, 2016: 28), they are “arational” compared to the rational processes that govern belief updating (Gendler, 2008a: 641; 2008b: 557), and their content is “disunified” with our set of explicit attitudes (Levy, 2014: 101–103). As such, one might be tempted to think of implicit cognition as regularly, or even necessarily, bad cognition. A strong interpretation of that value-laden assessment might mean that the processes in question deliver objectively bad outputs, however these are to be defined, but we could also mean something a bit weaker, such as that the outputs are not aligned with the agent’s goals, or similar. It’s easy to see why one might apply this value-laden assessment to the mechanisms which result in implicitly biased behaviour: individuals simply have no reason to discriminate against already marginalized people in the ways outlined above, and yet they do anyway – that seems like a good candidate for bad cognition. That implicitly biased behaviours are the product of what appears to be a suboptimal processing system might motivate the argument that we’re not the agents of our implicitly biased behaviors, as well as arguments that might follow from this, such as that it is not appropriate to hold people morally responsible for their implicit biases (Levy, 2014).

But I think it would be wrong to con­clude that impli­cit cog­ni­tion neces­sar­ily deliv­ers sub­op­tim­al out­puts, and that impli­cit bias is an example of bad cog­ni­tion simply for the reas­on that it is impli­cit. Moreover, as I’ll argue below, main­tain­ing the former claim may well do a dis­ser­vice to the pro­ject of redu­cing impli­cit social biases.

Whilst explicit processes may be ‘better’ at some cognitive tasks, research suggests that implicit processes can actually deliver a more favourable performance than explicit processes in a variety of domains. For instance, non-attentional, automatic processes govern the fast motor reactions employed by skilled athletes (Kibele, 2006). Trying to bring these processes under attentional control can actually disrupt sporting performance: Flegal and Anderson (2008) show that directing attention to their action performance significantly impairs the ability of high-skill golfers on a putting task, whilst high-skill footballers perform less proficiently when directing attention to their execution of dribbling (Beilock et al., 2002). Engaging attentional processes when learning new motor skills can also disrupt performance (McKay et al., 2015).

Meanwhile, functional MRI studies suggest that improvisation implicates non-attentional processes. One study shows that when professional jazz pianists improvise, they do so in the absence of central processes implicated in attentional guidance (Limb and Braun, 2008). Another study demonstrates that trained musicians inhibit networks associated with attentional processing during improvisation (Berkowitz and Ansari, 2010).

Further, delib­er­ately dis­en­ga­ging atten­tion­al resources can facil­it­ate cre­ativ­ity, a pro­cess known as ‘incub­a­tion’. Subjects who return to work on a cre­at­ive task after a peri­od dir­ect­ing atten­tion­al resources to some­thing unre­lated to the task at hand often deliv­er enhanced out­puts com­pared with those who con­tinu­ally engage their atten­tion­al resources (Dodds et al., 2003). It has been pro­posed that task-relevant impli­cit pro­cesses remain act­ive dur­ing the incub­a­tion peri­od and con­trib­ute to enhanced cre­at­ive out­put (Ritter and Dijksterhuis, 2014).

So it would be wrong to suggest that implicit processes necessarily, or even typically, deliver sub-optimal outputs compared with their explicit cousins. And, pertinent to our discussion of implicit social bias, implicit processes themselves can actually be recruited to inhibit the manifestation of bias. Research demonstrates that participants with genuine long-term egalitarian commitments (Moskowitz et al. 1999), as well as those in whom egalitarian commitments are activated during an experimental task (Moskowitz and Li, 2011), actually manifest less implicit bias than those without such commitments. Crucially, the processes which bring implicit responses in line with an agent’s long-term commitments are not driven by attentional guidance, instead operating automatically to prevent the facilitation of stereotypic categories in the presence of the relevant social concepts (Moskowitz et al. 1999: 168). The suggestion here is that developing genuine commitments to egalitarian values and treatment can actually recalibrate implicit processes to deliver value-consistent behavior (see Holroyd and Kelly, 2016), without needing to effortfully override implicit responses each time one encounters social concepts that might otherwise trigger biased reactions. It would seem that the profile of implicit processes as inflexible, arational and disunified with explicit values and commitments is ill-fitted to account for this example.

So, in a num­ber of cases it seems that impli­cit pro­cesses can serve our goals and val­ues. If this is right, then we should per­haps be more will­ing to loc­ate ourselves as agents not just in the beha­vi­or that arises from our expli­cit pro­cesses, but in that which arises from our impli­cit ones as well.

I think this has an import­ant implic­a­tion for prac­tices related to impli­cit bias train­ing. We should be wary of the rhet­or­ic that dis­tances us as agents from our impli­cit pro­cesses: for instance, char­ac­ter­iz­ing impli­cit bias as “racism without racists”1 might be com­fort­ing for those of us with impli­cit racial biases, but dis­own­ing the impli­cit pro­cesses that lead to racial dis­crim­in­a­tion, while not dis­own­ing those that lead to skilled music­al impro­visa­tion or cre­ativ­ity as above, seems some­what incon­sist­ent. I won­der wheth­er great­er will­ing­ness to accept one’s impli­cit pro­cesses as aspects of one’s agency (not neces­sar­ily as cent­ral, defin­ing aspects of one’s agency — but some­where in there non­ethe­less) might help to motiv­ate the pro­ject of realign­ing one’s impli­citly biased responses?



  1. In U.S. Department of Justice. 2016. “Implicit Bias.” Community Oriented Policing Services report, page 1. Accessed 27/07/16, URL:



Berkowitz, A. L. and D. Ansari. 2010. “Expertise-Related Deactivation of the Right Temporoparietal Junction dur­ing Musical Improvisation.” NeuroImage 49 (1): 712–19.

Brownstein, M and J. Saul. 2016a. Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, New York: Oxford University Press.

Brownstein, M and J. Saul. 2016b. Implicit Bias and Philosophy, Volume 2: Moral Responsibility, Structural Injustice, and Ethics, New York: Oxford University Press.

Dodds R. D., T. B. Ward and S. M. Smith. 2003. “Incubation in prob­lem solv­ing and cre­ativ­ity.” in The Creativity Research Handbook, edited by Runco M. A., Cresskill, NJ: Hampton Press.

Dovidio, J. F., K. Kawakami, C. Johnson, B. Johnson and A. Howard. 1997. “On the Nature of Prejudice: Automatic and Controlled Processes.” Journal of Experimental Social Psychology 33 (5): 510–40.

Gawronski, B. and G. V. Bodenhausen. 2014. “Implicit and Explicit Evaluation: A Brief Review of the Associative-Propositional Evaluation Model: APE Model.” Social and Personality Psychology Compass 8 (8): 448–62.

Gendler, T. S. 2008a. “Alief and Belief.” The Journal of Philosophy 105 (10): 634–63.

———. 2008b. “Alief in Action (and Reaction).” Mind & Language 23 (5): 552– 85.

Green, A. R., D. R. Carney, D. J. Pallin, L. H. Ngo, K. L. Raymond, L. I. Iezzoni and M. R. Banaji. 2007. “Implicit Bias among Physicians and Its Prediction of Thrombolysis Decisions for Black and White Patients.” Journal of General Internal Medicine 22 (9): 1231–38.

Holroyd, J. 2015. “Implicit Bias, Awareness and Imperfect Cognitions.” Consciousness and Cognition 33 (May): 511–23.

Holroyd, J. and D. Kelly. 2016. “Implicit Bias, Character, and Control.” In From Personality to Virtue, edited by A. Masala and J. Webber, Oxford: Oxford University Press.

Kahneman, D. 2012. Thinking, Fast and Slow, London: Penguin Books.

Kibele, A. 2006. “Non-Consciously Controlled Decision Making for Fast Motor Reactions in sports—A Priming Approach for Motor Responses to Non-Consciously Perceived Movement Features.” Psychology of Sport and Exercise 7 (6): 591–610.

Levy, N. 2014. Consciousness and Moral Responsibility, Oxford; New York: Oxford University Press.

Limb, C. J. and A. R. Braun. 2008. “Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation.” Edited by E. Greene. PLoS ONE 3 (2): e1679.

McConnell, A. R. and J. M. Leibold. 2001. “Relations among the Implicit Association Test, Discriminatory Behavior, and Explicit Measures of Racial Attitudes.” Journal of Experimental Social Psychology 37 (5): 435–42.

McKay, B., G. Wulf, R. Lewthwaite and A. Nordin. 2015. “The Self: Your Own Worst Enemy? A Test of the Self-Invoking Trigger Hypothesis.” The Quarterly Journal of Experimental Psychology 68 (9): 1910–19.

Moskowitz, G. B., P. M. Gollwitzer, W. Wasel and B. Schaal. 1999. “Preconscious Control of Stereotype Activation Through Chronic Egalitarian Goals.” Journal of Personality and Social Psychology 77 (1): 167–184

Moskowitz, G. B., and P. Li. 2011. “Egalitarian Goals Trigger Stereotype Inhibition: A Proactive Form of Stereotype Control.” Journal of Experimental Social Psychology 47 (1): 103–16.

Pérez, E. O. 2016. Unspoken Politics: Implicit Attitudes and Political Thinking, New York, NY: Cambridge University Press.

Ritter, S. M. and A. Dijksterhuis. 2014. “Creativity–the Unconscious Foundations of the Incubation Period.” Frontiers in Human Neuroscience 8: 22–31.

Rooth, D‑O. 2007. “Implicit Discrimination in Hiring: Real World Evidence.” (IZA Discussion Paper No. 2764). Bonn, Germany: Forschungsinstitut Zur Zukunft Der Arbeit (Institute for the Study of Labor).

Wilson, T. D., S. Lindsey and T. Y. Schooler. 2000. “A Model of Dual Attitudes.” Psychological Review 107 (1): 101–26.



Trusting the Uncanny Valley: Exploring the Relationship Between AI, Mental State Ascriptions, and Trust.


by Henry Powell – PhD Candidate in Philosophy at the University of Warwick

Interactive artificial agents such as social and palliative robots have become increasingly prevalent in the educational and medical fields (Coradeschi et al. 2006). Different kinds of robots, however, seem to engender different kinds of interactive experiences from their users. Social robots, for example, tend to afford positive interactions that look analogous to the ones we might have with one another. Industrial robots, on the other hand, rarely, if ever, are treated in the same way. Some very lifelike humanoid robots seem to fit somewhere outside of these two spheres, inspiring feelings of discomfort or disgust in people who come into contact with them. One way of understanding why this phenomenon obtains is via a conjecture developed by the Japanese roboticist Masahiro Mori in 1970 (Mori, 1970, pp. 33–35). This so-called “uncanny valley” conjecture has a number of potentially interesting theoretical ramifications. Most importantly, it may help us to understand a set of conditions under which humans could potentially ascribe mental states to beings without minds – in this case, the suggestion that trusting an artificial agent can lead one to do just that. With this in mind, the aims of this post are two-fold. Firstly, I wish to provide an introduction to the uncanny valley conjecture, and secondly, I want to raise doubts concerning its ability to shed light on the conditions under which mental state ascriptions occur – specifically, in experimental paradigms that see subjects as trusting their AI coactors.

Mori’s uncanny val­ley con­jec­ture pro­poses that as robots increase in their like­ness to human beings, their famili­ar­ity like­wise increases. This trend con­tin­ues up to a point at which their life­like qual­it­ies are such that we become uncom­fort­able inter­act­ing with them. At around 75% human like­ness, robots are seen as uncan­nily like human beings and are viewed with dis­com­fort, or, in more extreme cases, dis­gust, sig­ni­fic­antly hinder­ing their poten­tial to gal­van­ise pos­it­ive social interactions.
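The shape of the conjecture can be sketched as a simple curve (a toy model of my own; Mori drew his curve qualitatively rather than from data): affinity rises with human likeness, plunges near the 75% mark, and recovers for fully human-like agents.

```python
# Toy sketch of Mori's uncanny valley conjecture: affinity rises with
# human likeness, dips sharply near ~75% likeness, then recovers.
# The functional form and constants are illustrative, not Mori's data.
import math

def affinity(likeness: float) -> float:
    """Affinity as a rising trend minus a Gaussian dip centred at 0.75."""
    trend = likeness                                      # steady rise
    valley = 1.2 * math.exp(-((likeness - 0.75) ** 2) / 0.005)
    return trend - valley

# Sampling across the likeness scale shows the characteristic dip.
samples = [round(affinity(x), 2) for x in (0.2, 0.5, 0.75, 0.9, 1.0)]
print(samples)  # minimum falls at 75% likeness
```

On this toy picture, an industrial robot (low likeness) and a healthy human (full likeness) both score well, while the near-human android falls into the trough.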


This effect has been explained in a number of ways. For instance, Saygin et al. (2011, 2012) have suggested that the uncanny valley effect is produced when there is a perceived incongruence between an artificial agent’s form and its motion. If an agent is seen to be clearly robotic but moves in a very human-like way, or vice-versa, there is an incompatibility effect in the predictive, action-simulating cognitive mechanisms that seek to pick out and forecast the actions of humanlike and non-humanlike objects. This predictive coding mechanism is supplied with contradictory information by the visual system ([human agent] with [nonhuman movement]), which prevents it from carrying out predictive operations to its normal degree of accuracy (Urgen & Miller, 2015). I take it that the output of this cognitive system is presented in our experience as uncertain, and that this uncertainty accounts for the feelings of unease we experience when interacting with these uncanny artificial agents.

Of particular philosophical interest in this regard is a strand of research suggesting that humans can be seen to make mental state ascriptions to artificial agents that fall outside the uncanny valley in given situations. This story was posited in two studies: one by Kurt Gray and Daniel Wegner (2012), and one by Maya Mathur and David Reichling. As I believe the latter contains the most interesting evidential basis for thinking along these lines, I will limit my discussion here to that experiment.

Mathur and Reichling’s study saw subjects partake in an “investment game” (Berg et al. 1995) – a generally accepted experimental standard for measuring trust – with a number of artificial agents whose facial features varied in their human likeness. This was to test whether subjects were willing to trust different kinds of artificial agents depending on where they fell on the uncanny valley scale. What they found was that subjects played the game in a way that indicated they trusted robots with certain kinds of facial features to act so as to reach an outcome that was mutually beneficial to both parties, rather than favouring one or the other. The authors surmised that because the subjects seemed to trust these artificial agents, in a way that suggested they had thought about what the artificial agents’ intentions might be, the subjects had ascribed mental states to their robotic partners in these cases.
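For readers unfamiliar with the design, the payoff structure of the investment game can be sketched as follows (a minimal sketch of the Berg et al. setup; the endowment, the standard tripling multiplier, and the parameter names here are my own illustrative choices):

```python
# Minimal sketch of the Berg et al. (1995) investment game used to
# measure trust. Endowment and parameter names are illustrative.

def investment_game(endowment: float, sent: float, returned_fraction: float):
    """One round: the investor sends some of their endowment; it is
    tripled in transit; the trustee returns a fraction of the tripled sum."""
    assert 0 <= sent <= endowment
    assert 0 <= returned_fraction <= 1
    tripled = 3 * sent
    returned = returned_fraction * tripled
    investor_payoff = endowment - sent + returned
    trustee_payoff = tripled - returned
    return investor_payoff, trustee_payoff

# Sending more signals trust; a mutually beneficial outcome requires the
# trustee to return enough that both sides beat the no-trust baseline.
print(investment_game(10, 0, 0))     # no trust: (10, 0)
print(investment_game(10, 10, 0.5))  # full trust, even split: (15.0, 15.0)
```

The amount sent thus serves as a behavioural index of trust: an investor who expects nothing back keeps the endowment, while one who anticipates cooperative behaviour risks it for a mutual gain.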

It was proposed that subjects had believed that the artificial agents had mental states encompassing intentional propositional attitudes (beliefs, desires, intentions etc.). This was because subjects seemed to assess the artificial agents’ decision-making processes in terms of what the robots’ “interests” in the various outcomes might be. This result is potentially very exciting, but I think it jumps to conclusions rather too quickly. I’d now like to briefly give reasons for thinking so.

Mathur and Reichling seem to be mak­ing two claims in the dis­cus­sion of their study’s results.

  i) That subjects trusted the artificial agents.
  ii) That this trust implies the ascription of mental states.

My objec­tions here are the fol­low­ing. I think that i) is more com­plic­ated than the authors make it out to be and that ii) is just not at all obvi­ous and does not fol­low from i) when i) is ana­lysed in the prop­er way. Let us address i) first as it leads into the prob­lem with ii).

When elaborated, I think that i) is making a claim that the subjects believed that the artificial agents would act in a certain way and that this action would be satisfactorily reliable. I think that this is plausible, but I also think that the form of trust here is not that intended by Mathur and Reichling, and is thus uninteresting in relation to ii). There are, as far as I can tell, at least two ways in which we can trust things. The first and perhaps most interesting form of trust is the one expressible in sentences like “I trust my brother to return the money that I lent him”. This implies that I think of my brother as the sort of person who would not, given the opportunity and upon rational reflection, do something contrary to what he had told me he would do. The second form of trust is that which we might have towards a ladder or something similar. We might say of such objects that “I trust that if I walk up this ladder it will not collapse, because I know that it is sturdy”. The difference here should be obvious. I trust the ladder because I can infer from its physical state that it will perform its designated function. It has no loose fixtures, rotting parts or anything else that might make it collapse when I walk up it. To trust the ladder in this way I do not think that it has to make commitments to the action expected of it based on a given set of ethical standards. In the case of trusting my brother, my trust in him is expressible as a belief that, given the opportunity to choose not to do what I have asked of him, he will choose in favour of that which I have asked. The trust that I have in my brother requires that I believe that he has mental states that inform and help him to choose to act in favour of my asking him to do something. One form of trust implies the existence of mental states; the other does not.
As regards ii), then: as has just been argued, trust implies mental states only if it is of the form that I would ascribe to my brother in the example just given, and not if it is of the sort that we would normally ascribe to reliably functional objects like ladders. So ii) follows from i) only if the former kind of trust is evinced, and not otherwise.

This analysis suggests that if we are to believe that the subjects in this experiment ascribed mental states to the artificial agents (or indeed that subjects in any other experiment reaching the same conclusions did so), then we need sufficient reasons for thinking that the subjects were treating the artificial agents as I would treat my brother, and not as I would treat the ladder, with respect to ascriptions of trust. Mathur and Reichling are silent on this point, and so we have no good reason for thinking that mental state ascriptions were taking place in the minds of the subjects in their experiment. While I do not think it is entirely impossible that such ascriptions might occur in some circumstances, it is just not clear from this experiment that they occurred in this instance.

What I have hopefully shown in this post is that it is important to proceed with caution when making claims about our willingness to ascribe minds to certain kinds of objects and agents (artificial or otherwise). Specifically, it is important to do so in relation to our ability to hold such things in seemingly special kinds of relations with ourselves, trust being an important example.



Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, Reciprocity, and Social History. Games and Economic Behavior, 10, 122–142.

Coradeschi, S., Ishiguro, H., Asada, M., Shapiro, S. C., Thielscher, M., Breazeal, C., … Ishida, H. (2006). Human-inspired robots. IEEE Intelligent Systems, 21(4), 74–85.

Gray, K., & Wegner, D. M. (2012). Feeling robots and human zom­bies: Mind per­cep­tion and the uncanny val­ley. Cognition, 125(1), 125–130.

MacDorman, K. F. (2005). Androids as an exper­i­ment­al appar­at­us: Why is there an uncanny val­ley and can we exploit it. In CogSci-2005 work­shop: toward social mech­an­isms of android sci­ence (pp. 106–118).

Mathur, M. B., & Reichling, D. B. (2009). An uncanny game of trust: Social trustworthiness of robots inferred from subtle anthropomorphic facial cues. In Human-Robot Interaction (HRI), 2009 4th ACM/IEEE International Conference on, La Jolla, CA, pp. 313–314.

Saygin, A. P. (2012). What can the Brain Tell us about Interactions with Artificial Agents and Vice Versa? In Workshop on Teleoperated Androids, 34th Annual Conference of the Cognitive Science Society.

Saygin, A. P., & Stadler, W. (2012). The role of appearance and motion in action prediction. Psychological Research, 76(4), 388–394.

Urgen, B. A., & Miller, L. E. (2015). Towards an Empirically Grounded Predictive Coding Account of Action Understanding. Journal of Neuroscience, 35(12), 4789–4791.




Split Brains and the Compositional Metaphysics of Consciousness


Luke Roelofs, Postdoctoral Fellow in Philosophy at the Australian National University

The mam­mali­an brain has an odd sort of redund­ancy: it has two hemi­spheres, each cap­able of sup­port­ing more-or-less nor­mal human con­scious­ness without the oth­er. We know this because des­troy­ing, inca­pa­cit­at­ing, or remov­ing one hemi­sphere leaves a patient who, des­pite some dif­fi­culties with par­tic­u­lar activ­it­ies, is clearly lucid and con­scious. The puzz­ling implic­a­tions of this redund­ancy are best brought out by con­sid­er­ing the unusu­al phe­nomen­on called the ‘split-brain’.

The hemi­spheres are con­nec­ted by a bundle of nerve fibres called the cor­pus cal­losum, as well as both being linked to the non-hemispheric parts of the brain (the ‘brain­stem’). To con­trol the spread of epi­leptic seizures, some patients had their cor­pus cal­losum severed while leav­ing both hemi­spheres, and the brain­stem, intact (Gazzaniga et al. 1962, Sperry 1964). These patients appear nor­mal most of the time, with no abnor­mal­it­ies in thought or action, but when exper­i­menters man­age to present stim­uli to sens­ory chan­nels which will take them exclus­ively to one hemi­sphere or the oth­er, strange dis­so­ci­ations appear. For example, when we show the word ‘key’ to the right hemi­sphere (such as by flash­ing it in the left half of the patient’s visu­al field), it can­not be verbally repor­ted (because the left hemi­sphere con­trols lan­guage), but if we ask the patient to pick up the object they saw the word for, they will read­ily pick out a key — but only if they can use their left hand (con­trolled by the right hemi­sphere). Moreover, for example, if the patient is shown the word ‘keyring’, with ‘key’ going to the right hemi­sphere and ‘ring’ going to the left, they will pick out a key (with their left hand) and a ring (with their right hand), but not a keyring. They will even report hav­ing seen only the word ‘ring’, and deny hav­ing seen either ‘key’ or ‘keyring’.

Philosophical discussion of the split-brain phenomenon takes two forms: arguing in support of a particular account of what is going on (e.g. Marks 1980, Hurley 1998, Tye 2003, pp.111–129, Bayne & Chalmers 2003, pp.111–112, Bayne 2008, 2010, pp.197–220), or exploring how the case challenges the very way that we frame such accounts. A seminal example of the latter form is Nagel (1971), which reviews several ways to make sense of the split-brain patient (as one person, as two people, as one person who occasionally splits into two people, etc.) and rejects them all for different reasons, concluding that we have found a case where our ordinary concept of 'a person' breaks down and cannot be coherently applied. My work develops an idea in the vicinity of Nagel's: that our ordinary concept of 'a person' can handle the split-brain phenomenon if we transform it to allow for composite subjectivity, something which we have independent arguments for.

Start with what Nagel says about one of the proposed interpretations of the split-brain patient: as two people inhabiting one body. Pointing out that, outside experimental situations, the patient shows fully integrated behaviour, he asks whether we can really refuse to ascribe all their behaviour to a single person, "just because of some peculiarities about how the integration is achieved" (Nagel 1971, p.406). Of course sometimes two people do seem to work 'as one', as in "pairs of individuals engaged in a performance requiring exact behavioral coordination, like using a two-handed saw, or playing a duet." Perhaps the two hemispheres are like this? But Nagel worries that this position is unstable:

“If we decided that they def­in­itely had two minds, then [why not] con­clude on ana­tom­ic­al grounds that every­one has two minds, but that we don’t notice it except in these odd cases because most pairs of minds in a single body run in per­fect par­al­lel?” (Nagel 1971, p.409)

Nagel’s worry here is cogent: if we accept that there can be two dis­tinct sub­jects des­pite it appear­ing for all the world as though there was only one, we seem to lose any basis for con­fid­ence that the same thing is not hap­pen­ing in oth­er cases. He continues:

“In case any­one is inclined to embrace the con­clu­sion that we all have two minds, let me sug­gest that the trouble will not end there. For the men­tal oper­a­tions of a single hemi­sphere, such as vis­ion, hear­ing, speech, writ­ing, verbal com­pre­hen­sion, etc. can to a great extent be sep­ar­ated from one anoth­er by suit­able cor­tic­al decon­nec­tions; why then should we not regard each hemi­sphere as inhab­ited by sev­er­al cooper­at­ing minds with spe­cial­ized capa­cit­ies? Where is one to stop?” (Nagel 1971, Fn11)

Where indeed? If one appar­ently uni­fied mind could be really a col­lec­tion of inter­act­ing minds, why not think that all appar­ently uni­fied minds are really such col­lec­tions? What evid­ence could decide one way or the oth­er? Taking this line seems to leave us with empir­ic­ally unde­cid­able ques­tions about every mind we encounter.

What is strik­ing is that this way of think­ing isn’t prob­lem­at­ic for any­thing oth­er than minds — indeed it is plat­it­ud­in­ous. Most things can be equally well under­stood as one or as many, because we are happy to regard them sim­ul­tan­eously as a col­lec­tion of parts and as a single whole. What makes the split-brain phe­nomen­on so per­plex­ing is our dif­fi­culty in extend­ing this atti­tude to minds.

Consider, for instance, the phys­ic­al brain. Do we have one brain, or do we have sev­er­al bil­lion neur­ones, or even 8‑or-so lobes? The answer of course is ‘all of the above’: the brain is noth­ing sep­ar­ate from the bil­lions of neur­ones, in the right rela­tion­ships, and neither are the 8 lobes any­thing sep­ar­ate from the brain (which they com­pose) or the neur­ones (which com­pose them). And as a res­ult of the ease with which we shift between one-whole and many-parts modes of descrip­tion, we can be san­guine about the ques­tion ‘how many brains does the split-brain patient have?’ There is some basis for say­ing ‘one’, and some basis for say­ing ‘two’, but it’s fine if we can’t settle on a single answer, because the ques­tion is ulti­mately a verbal one. There are all the nor­mal parts of a brain, stand­ing in some but not all of their nor­mal rela­tions, and so not fit­ting the cri­ter­ia for being ‘a brain’ as well as they nor­mally would. And there are two over­lap­ping sub­sys­tems with­in the one whole, which indi­vidu­ally fit the cri­ter­ia for being ‘a brain’ mod­er­ately well. But there is no fur­ther fact about which form of descrip­tion — call­ing the whole a brain or call­ing the two sub­sys­tems each a brain — is ulti­mately correct.

The chal­lenge is to take the same relaxed atti­tude to the ques­tion ‘how many people?’ Here is what I would like to say: the two hemi­spheres are con­scious, and the one brain that they com­pose is con­scious in vir­tue of their con­scious­ness and the rela­tions between them. Under nor­mal cir­cum­stances their inter­ac­tions ensure that the com­pos­ite con­scious­ness of the whole brain is well-unified: in the split-brain exper­i­ments, their inter­ac­tions are dif­fer­ent and estab­lish a less­er degree of unity. And each hemi­sphere is itself a com­pos­ite of smal­ler con­scious parts. This amounts to embra­cing what Nagel views as a reductio.

There is something very difficult to think through about the composite consciousness view. It seems as though if each hemisphere is someone, that's one thing, and if the whole brain is someone, that's another: they cannot be just two equivalent ways of describing the same state of affairs. And this intuitive resistance to seeing conscious minds as composed of others (call it the 'Anti-Combination intuition') goes well beyond the split-brain phenomenon. It has a long history in the form of the 'simplicity argument', which anti-materialist philosophers from Plotinus (1956, pp.255–258, 342–356) to Descartes (1985, Volume 2, p.59) to Brentano (1987, pp.290–301) have used to show the immateriality of the soul. In a nutshell, this argument says that since minds cannot be thought of as composite, they must be indivisible, and since all material things are divisible, the mind cannot be material (for further analysis see Mijuskovic 1984, Schachter 2002, Lennon & Stainton 2008). Nor is the significance of this difficulty just historical: many recent materialist theories either stipulate that no conscious being can be part of another (Putnam 1965, p.163, Tononi 2012, pp.59–68), or else advance arguments based on the intuitive absurdity of consciousness in a being composed of other conscious beings (Block 1978, cf. Barnett 2008, Schwitzgebel 2015).

All of the just-cited authors take the Anti-Combination intuition as a datum, and draw conclusions from it about the nature of consciousness, up to and including substance dualism. I prefer the opposite approach: to see the Anti-Combination intuition as a fact about humans which impedes our understanding of how consciousness fits into the natural world, and thus as something which philosophers should seek to analyse, understand, and ultimately move beyond. As it happens, there is a group of contemporary philosophers engaged in just this task: constitutive panpsychists. Panpsychists think that the best explanation for human consciousness is that consciousness is a general feature of matter, and constitutive panpsychists see human consciousness as constituted out of simpler consciousnesses just as the human brain is constituted out of simpler physical structures. The most pressing objection to this view, which has received extensive recent discussion, is the 'combination problem': can multiple simple consciousnesses really compose a single complex consciousness (Seager 1995, p.280; Goff 2009; Coleman 2014; Mørch 2014; Roelofs 2014, Forthcoming-a, Forthcoming-b; Chalmers Forthcoming)? And this is at bottom the same issue as we have been grappling with concerning the split-brain phenomenon. In my research, I try to explore the Anti-Combination intuition, its basis, and how to move past it, with an eye both to the general metaphysical questions raised by constitutive panpsychism, and to particular neuroscientific phenomena like the split-brain.



Barnett, David. 2008. ‘The Simplicity Intuition and Its Hidden Influence on Philosophy of Mind.’ Nous 42(2): 308-335

Bayne, Timothy. 2008. ‘The Unity of Consciousness and the Split-Brain Syndrome.’ The Journal of Philosophy 105(6): 277–300.

Bayne, Timothy. 2010. The Unity of Consciousness. Oxford: Oxford University Press

Bayne, Timothy, & Chalmers, David. 2003. ‘What is the Unity of Consciousness?’ In Cleeremans, A. (ed.), The Unity of Consciousness: Binding, Integration, Dissociation, Oxford: Oxford University Press: 23–58

Block, Ned. 1978. ‘Troubles with Functionalism.’ In Savage, C. W. (ed.), Perception and Cognition: Issues in the Foundations of Psychology¸ University of Minneapolis Press: 261–325

Brentano, Franz. 1987. The Existence of God: Lectures given at the Universities of Würzburg and Vienna, 1868–1891. Ed. and trans. Krantz, S., Nijhoff International Philosophy Series

Chalmers, David. Forthcoming­. ‘The Combination Problem for Panpsychism.’ In Bruntrup, G. and Jaskolla, L. (eds.), Panpsychism, Oxford: Oxford University Press

Coleman, Sam. 2014. ‘The Real Combination Problem: Panpsychism, Micro-­Subjects, and Emergence.’ Erkenntnis 79:19–44

Descartes, René. 1985. ‘Meditations on First Philosophy.’ Originally pub­lished 1641. In Cottingham, John, Stoothoff, Robert, and Murdoch, Dugald, (trans and eds.) The Philosophical Writings of Descartes, 2 vols., Cambridge: Cambridge University Press

Gazzaniga, Michael, Bogen, Joseph, and Sperry, Roger. 1962. 'Some Functional Effects of Sectioning the Cerebral Commissures in Man.' Proceedings of the National Academy of Sciences 48(10): 1765–1769

Goff, Philip. 2009. ‘Why Panpsychism doesn’t Help us Explain Consciousness.’ Dialectica 63(3):289-­311

Hurley, Susan. 1998. Consciousness in Action. Harvard University Press.

Lennon, Thomas, and Stainton, Robert. (eds.) 2008. The Achilles of Rationalist Psychology. Studies In The History Of Philosophy Of Mind, V7, Springer

Marks, Charles. 1980. Commissurotomy, Consciousness, and Unity of Mind. MIT Press

Mijuskovic, Benjamin. 1984. The Achilles of Rationalist Arguments: The Simplicity, Unity, and Identity of Thought and Soul From the Cambridge Platonists to Kant: A Study in the History of an Argument. Martinus Nijhoff.

Mørch, Hedda Hassel. 2014. Panpsychism and Causation: A New Argument and a Solution to the Combination Problem. Doctoral Dissertation, University of Oslo

Nagel, Thomas. 1971. ‘Brain Bisection and the Unity of Consciousness.’ Synthese 22:396–413.

Plotinus. 1956. Enneads. Trans. and eds. Mackenna, Stephen, and Page, B. S. London: Faber and Faber Ltd.

Putnam, Hilary. 1965. ‘Psychological pre­dic­ates’. In Capitan, William, and Merrill, Daniel. (eds.), Art, mind, and reli­gion. Liverpool: University of Pittsburgh Press

Roelofs, Luke. 2014. ‘Phenomenal Blending and the Palette Problem.’ Thought 3:59–70.

Roelofs, Luke. Forthcoming‑a. ‘The Unity of Consciousness, Within and Between Subjects.’ Philosophical Studies.

Roelofs, Luke. Forthcoming‑b. ‘Can We Sum sub­jects? Evaluating Panpsychism’s Hard Problem.’ In Seager, William (ed.), The Routledge Handbook of Panpsychism, Routledge.

Schachter, Jean-Pierre. 2002. ‘Pierre Bayle, Matter, and the Unity of Consciousness.’ Canadian Journal of Philosophy 32(2): 241-265

Seager, William. 1995. 'Consciousness, Information and Panpsychism.' Journal of Consciousness Studies 2(3): 272–288

Sperry, Roger. 1964. ‘Brain Bisection and Mechanisms of Consciousness.’ In Eccles, John (ed.), Brain and Conscious Experience. Springer-Verlag

Tye, Michael. 2003. Consciousness and Persons: Unity and Identity. MIT Press

Tononi, Giulio. 2012. 'Integrated information theory of consciousness: an updated account.' Archives Italiennes de Biologie 150(2-3): 56–90

Investigating the Stream of Consciousness

Oliver Rashbrook-Cooper, British Academy Postdoctoral Fellow in Philosophy at the University of Oxford

There are a num­ber of dif­fer­ent ways in which we can fruit­fully study our streams of con­scious­ness. We might try to provide a detailed char­ac­ter­isa­tion of how con­scious exper­i­ence seems ‘from the inside’, and closely scru­tin­ize the phe­nomen­o­logy. We might try to uncov­er the struc­ture of con­scious­ness by focus­sing upon our tem­por­al acu­ity, and examin­ing when and how we are sub­ject to tem­por­al illu­sions. Or we might focus upon invest­ig­at­ing the neur­al mech­an­isms upon which con­scious exper­i­ence depends.

Sometimes, these dif­fer­ent approaches appear to yield con­tra­dict­ory res­ults. In par­tic­u­lar, the deliv­er­ances of intro­spec­tion some­times appear to be at odds with what is revealed both by cer­tain tem­por­al illu­sions and by research into neur­al mech­an­isms. When this occurs, what should we do? We can begin by con­sid­er­ing two fea­tures of how con­scious­ness phe­nomen­o­lo­gic­ally seems.

It is natural to think of experience as unfolding in step with its objects. Over a ten-second interval, for instance, I might watch someone sprint 100 metres. If I watch this event, my experience will unfold over a ten-second interval. First I will hear the pistol fire, see the race begin, and so on, until I see the leader cross the finish line. My experience of the race has two features. Firstly, it seems to unfold in step with the race itself; secondly, it seems to unfold smoothly: it seems as if I am continuously aware of the race, rather than my awareness of it being fragmented into discrete episodes.

Can this char­ac­ter­isa­tion of how things seem be recon­ciled with what we learn from oth­er ways of invest­ig­at­ing the stream of con­scious­ness? To answer this ques­tion we can con­sider two dif­fer­ent cases: the case of the col­our phi phe­nomen­on, and the case of dis­crete neur­al processing.

The colour phi phenomenon is a case in which the presentation of two static stimuli gives rise to an illusory experience of motion. When two coloured dots that are sufficiently close to one another are illuminated successively within a sufficiently brief window of time, one is left with the impression that there is a single dot moving from one location to the other.

This phe­nomen­on gen­er­ates a puzzle about wheth­er exper­i­ence really unfolds in step with its objects. In order for us to exper­i­ence appar­ent motion between the two loc­a­tions, we need to register the occur­rence of the second dot. This makes it seem as if the exper­i­ence of motion can only occur after the second dot has flashed, for without regis­ter­ing the second dot, we wouldn’t exper­i­ence motion at all. So it seems that, in this case, the exper­i­ence of motion doesn’t unfold in step with its appar­ent object at all. If this is right, then we have reas­on to doubt that exper­i­ence nor­mally unfolds in step with its objects, for if we can be wrong about this in the col­our phi case, per­haps we are wrong about it in all cases.

The second kind of case is the case of dis­crete neur­al pro­cessing. There is reas­on to think that the neur­al mech­an­isms under­pin­ning con­scious per­cep­tion are dis­crete (see, for example, VanRullen and Koch, 2003). This looks to be in ten­sion with the second fea­ture we noted earli­er – that our aware­ness of things appears to be con­tinu­ous. As in the case of col­our phi, it might be tempt­ing to think that this tells us that our impres­sion of how things seem ‘from the inside’ is mistaken.

However, when we con­sider how things really strike us phe­nomen­o­lo­gic­ally, it becomes clear that there is an altern­at­ive way to recon­cile these appar­ently con­tra­dict­ory res­ults. We can begin by not­ing that when we intro­spect, it isn’t pos­sible for us to focus our atten­tion upon con­scious exper­i­ence without focus­sing upon a tem­por­ally exten­ded por­tion of exper­i­ence – there is always a min­im­al inter­val upon which we are able to focus.

The claims that exper­i­ence seems to unfold in step with its objects and seems con­tinu­ous apply to these tem­por­ally exten­ded por­tions of exper­i­ence that we are able to focus upon when we intro­spect. If this is right, then we have a dif­fer­ent way of think­ing about the col­our phi case. On this approach, over an inter­val, we have an exper­i­ence of appar­ent motion that unfolds over the time it takes the two dots to flash. The phe­nomen­o­logy is, how­ever, neut­ral about what occurs over the sub-intervals of this experience.

The claim that this exper­i­ence unfolds over an exten­ded inter­val of time isn’t incon­sist­ent with what goes on in the col­our phi case. The appar­ent incon­sist­ency only arises if we think that the claim that exper­i­ence seems to unfold in step with its object applies to all of the sub-intervals of this exper­i­ence, no mat­ter how short (for devel­op­ment and dis­cus­sion of this point, see Hoerl (2013), Phillips (2014), and Rashbrook (2013a)).

Likewise, in the case of discrete neural processing, in order for the case to generate a clash with how experience appears 'from the inside', our characterisation of how consciousness seems must apply not only to some temporally extended portions of consciousness, but to all of them, no matter how brief. Again, we might question whether this is really how things seem.

While exper­i­ence doesn’t seem to be frag­men­ted into dis­crete epis­odes, this cer­tainly doesn’t mean that it seems to fill every inter­val for which we are con­scious, no mat­ter how brief (for dis­cus­sion, see Rashbrook, 2013b). As in the case of the col­our phi, per­haps our char­ac­ter­isa­tion of how things seem applies only to tem­por­ally exten­ded por­tions of exper­i­ence – so the deliv­er­ances of intro­spec­tion are simply neut­ral about wheth­er con­scious exper­i­ence fills every instant of the inter­val it occupies.

There is more than one way, then, to recon­cile the psy­cho­lo­gic­al and the phe­nomen­o­lo­gic­al strategies of enquir­ing about con­scious exper­i­ence. Rather than tak­ing non-phenomenological invest­ig­a­tion to reveal the phe­nomen­o­logy to be mis­lead­ing, per­haps we should take it as an invit­a­tion to think more care­fully about how things seem ‘from the inside’.



Hoerl, Christoph. 2013. ‘A Succession of Feelings, in and of Itself, is Not a Feeling of Succession’. Mind 122:373–417.

Phillips, Ian. 2014. The Temporal Structure of Experience. In Subjective Time: The Philosophy, Psychology, and Neuroscience of Temporality, ed. Dan Lloyd and Valtteri Arstila, 139–159. MIT.

Rashbrook, Oliver. 2013a. An Appearance of Succession Requires a Succession of Appearances. Philosophy and Phenomenological Research 87:584–610.

Rashbrook, Oliver. 2013b. The con­tinu­ity of con­scious­ness. European Journal of Philosophy 21:611–640.

VanRullen, Rufin. and Koch, Christoph. 2003. Is per­cep­tion dis­crete or con­tinu­ous? Trends in Cognitive Sciences 7:207–13.

Infant Number Knowledge: Analogue Magnitude Reconsidered

Alexander Green, MPhil Candidate, Department of Philosophy, University of Warwick

Following Stanislas Dehaene’s The Number Sense (1997) there has been a surge in interest in num­ber know­ledge, espe­cially the devel­op­ment of num­ber know­ledge in infants. This research has broadly focused on answer­ing the fol­low­ing ques­tions: What numer­ic­al abil­it­ies do infants pos­sess, and how do these work? How are they dif­fer­ent from the numer­ic­al abil­it­ies of adults, and how is the gap bridged in cog­nit­ive development?

The aim of this post is to provide a gen­er­al intro­duc­tion to infant num­ber know­ledge by focus­ing on the first two of these ques­tions. There is much evid­ence indic­at­ing that there are two dis­tinct sys­tems by which infants are able to track and rep­res­ent numer­os­ity — par­al­lel indi­vidu­ation and ana­logue mag­nitude. I will begin by briefly explain­ing what these numer­ic­al capa­cit­ies are. I will then focus my dis­cus­sion on the ana­logue mag­nitude sys­tem, and raise some doubts about the way in which this sys­tem is com­monly under­stood to work.

Firstly, consider parallel individuation. This system allows infants to differentiate between sets of different quantities by tracking multiple individual objects at the same time (see Feigenson & Carey 2003; Feigenson et al 2002; Hyde 2011). For example, if an infant were presented with three objects, parallel individuation would allow the tracking of the individual objects ({object 1, object 2, object 3}) rather than allowing the tracking of total set-size ({three objects}). There are two further points of interest about parallel individuation. Firstly, it only represents numerosity indirectly, because it tracks individuals rather than total set-size. Secondly, it is limited to sets of fewer than four individuals.
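A toy way to picture these two points (my own illustrative sketch, not a model from the cited literature) is to represent the system as keeping one 'file' per individual, with a hard capacity limit of three: numerosity is carried only indirectly by the number of open files, and nothing anywhere represents a total set-size.

```python
# Illustrative sketch only: object files track individuals, not a
# cardinal value, and the system fails beyond three individuals.
PARALLEL_INDIVIDUATION_LIMIT = 3

def object_files(objects):
    """Return one 'file' per individual object, or None if the set
    exceeds the capacity limit. Note that no total set-size is
    represented anywhere: only the individuals themselves."""
    if len(objects) > PARALLEL_INDIVIDUATION_LIMIT:
        return None  # the system simply fails for four or more items
    return {f"object {i}": obj for i, obj in enumerate(objects, start=1)}

print(object_files(["ball", "cup", "doll"]))  # three separate files
print(object_files(["a", "b", "c", "d"]))     # None: over capacity
```

The point of the sketch is just that an observer comparing two such representations can detect a difference in quantity without any cardinal value ever being computed.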

Secondly, consider analogue magnitude. This system allows infants to discriminate between set sizes provided that the ratio between them is sufficiently large (see Xu & Spelke 2000; Feigenson et al 2004; Xu et al 2005). More specifically, analogue magnitude allows infants to differentiate between different sets provided that the ratio is at least 2:1. Interestingly, the precise cardinal value of the sets seems to be irrelevant as long as the ratio remains constant (i.e. it applies equally to a case of two-to-four as to twenty-to-forty). Thus the limitations of the analogue magnitude system are determined by ratio, in contrast to the parallel individuation system, whose limitations are determined by specific set-size.
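One way to see how ratio-limited discrimination could arise is a toy simulation (again my own illustrative sketch, not a model from the literature discussed here): represent each set by a noisy magnitude whose noise grows in proportion to the set's size ('scalar variability'), and count how often the larger set's representation actually comes out larger. Discrimination then depends only on the ratio, not on the absolute sizes:

```python
import random

def noisy_magnitude(n, weber=0.2):
    """A noisy analogue representation of n. The noise is proportional
    to n (scalar variability), so discriminability depends on ratio."""
    return random.gauss(n, weber * n)

def discrimination_rate(small, large, trials=10_000):
    """Proportion of trials on which the representation of the larger
    set comes out larger than that of the smaller set."""
    wins = sum(noisy_magnitude(large) > noisy_magnitude(small)
               for _ in range(trials))
    return wins / trials

# A 2:1 ratio is easy, and equally easy at any absolute size:
print(discrimination_rate(2, 4), discrimination_rate(20, 40))
# A 4:3 ratio is noticeably harder, again regardless of absolute size:
print(discrimination_rate(3, 4), discrimination_rate(30, 40))
```

Note that nothing in this sketch settles whether the magnitudes encode cardinal or merely ordinal information, which is exactly the issue taken up below.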

So how does ana­logue mag­nitude work? I will argue that the most recent answer to this ques­tion is incor­rect. This is because con­tem­por­ary authors rightly reject the ori­gin­al char­ac­ter­isa­tion of ana­logue mag­nitude (the accu­mu­lat­or mod­el), yet fail to reject its implications.

The accu­mu­lat­or mod­el of ana­logue mag­nitude is intro­duced by Dehaene, by way of an ana­logy with Robinson Crusoe (1997, p.28). Suppose that Crusoe must count coconuts. To do this he might dig a hole next to a river, and dig a trench which links the river to this hole. He also cre­ates a dam, such that he can con­trol when the river flows into the hole. For every coconut Crusoe counts, he diverts some giv­en amount of water into the hole. However as Crusoe diverts more water into the hole, it becomes more dif­fi­cult to dif­fer­en­ti­ate between con­sec­ut­ive num­bers of coconuts (i.e. the dif­fer­ence between one and two diver­sions of water is easi­er to see than between twenty and twenty-one).

Dehaene sup­poses that ana­logue mag­nitude rep­res­ent­a­tions are giv­en by a sim­il­ar icon­ic format, i.e. by rep­res­ent­ing a phys­ic­al mag­nitude pro­por­tion­al to the num­ber of indi­vidu­als in the set. Consider the fol­low­ing example: one object is rep­res­en­ted by ‘_’, two objects are rep­res­en­ted by ‘__’, three are rep­res­en­ted by ‘___’, and so on. Under this mod­el, ana­logue mag­nitude is under­stood to rep­res­ent the approx­im­ate car­din­al value of a set by the use of an iter­at­ive count­ing meth­od (Dehaene 1997, p.29). This partly reflects the empir­ic­al data: sub­jects are able to rep­res­ent dif­fer­ences in set size (with longer lines indic­at­ing lar­ger sets), and the import­ance of ratio for dif­fer­en­ti­ation is accoun­ted for (because it is more dif­fi­cult to dif­fer­en­ti­ate between sets which dif­fer by smal­ler ratios).
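The accumulator model can be put in procedural terms (an illustrative sketch of the idea as described above, not code from any of the cited work): each counted item diverts one slightly variable 'cup of water' into the hole, so the final level is roughly proportional to set size, but the representation is built serially, one item at a time.

```python
import random

def accumulate(set_size, unit=1.0, noise=0.15):
    """Crusoe's accumulator: one diversion of water per counted item.
    Returns the final water level and the number of serial steps."""
    level = 0.0
    steps = 0
    for _ in range(set_size):
        level += random.gauss(unit, noise)  # each 'cup' varies a little
        steps += 1                          # the model works item by item
    return level, steps

level, steps = accumulate(30)
# The final level is roughly proportional to the count, but the model
# takes ten times as many serial steps for 30 items as for 3: this is
# the timing prediction that the evidence discussed next tells against.
```

Writing the model out this way makes its commitment explicit: representation time must grow with set size, which is precisely what the empirical findings below call into question.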

More recently this accu­mu­lat­or mod­el of ana­logue mag­nitude has come to be rejec­ted, how­ever. This mod­el entails that each object in a set must be indi­vidu­ally rep­res­en­ted in turn (the first object pro­duces the rep­res­ent­a­tion ‘_’, the second pro­duces the rep­res­ent­a­tion ‘__’, etc). This sug­gests that it would take longer for a lar­ger num­ber to be rep­res­en­ted than a smal­ler one (as the quant­ity of objects to be indi­vidu­ally rep­res­en­ted dif­fers). However there are empir­ic­al reas­ons to reject this.

For example, there is evidence suggesting that the speed of forming analogue magnitude representations doesn't vary between different set sizes (Wood & Spelke 2005). Additionally, infants are still able to discriminate between different set sizes in cases where they are unable to attend to the individual objects of a set in sequence (Intriligator & Cavanagh 2001). These findings suggest that it is incorrect to claim that analogue magnitude representations are formed by responding to individual objects in turn.

Despite these observations, many authors continue to advocate the implications of this accumulator model even though there isn't empirical evidence to support them. The implications I am referring to are that analogue magnitude represents approximate cardinal value, and that it does so by the aforementioned iconic format. For example, consider Carey's discussions of analogue magnitude (2001, 2009). Carey takes analogue magnitude to enable infants to 'represent the approximate cardinal value of sets' (2009, p.127). As a result, the above iconic format (in which infants represent a physical magnitude proportional to the number of relevant objects) is still advocated (Carey 2001, p.38). This characterisation of analogue magnitude is typical of many authors (e.g. Feigenson et al 2004; Slaughter et al 2006; Feigenson et al 2002; Lipton & Spelke 2003; Condry & Spelke 2008).

Given the rejection of the accumulator method, this characterisation seems difficult to justify. Analogue magnitude allows infants to differentiate between two sets by quantity, but there seems no reason why this would require anything over and above the representation of ordinal value (i.e. 'greater than' and 'less than'). Consequently, the claim that analogue magnitude represents approximate cardinal value seems both unjustified and unnecessary. Given this, there also seems to be no justification for the Crusoe-analogy iconic format, because it contributes nothing other than allowing analogue magnitude to represent approximate cardinal value, which, as we have seen, is empirically undermined.

In this post I have dis­cussed the abil­it­ies of par­al­lel indi­vidu­ation and ana­logue mag­nitude, in answer to the ques­tion: what numer­ic­al abil­it­ies do infants pos­sess, and how do these work? Parallel indi­vidu­ation allows infants to dif­fer­en­ti­ate between small quant­it­ies of objects (few­er than four), and ana­logue mag­nitude allows dif­fer­en­ti­ation between quant­it­ies if the ratio is suf­fi­ciently large. I have also advanced a neg­at­ive argu­ment against the dom­in­ant under­stand­ing of ana­logue mag­nitude. Many authors have rejec­ted the iter­at­ive accu­mu­lat­or mod­el without reject­ing its implic­a­tions (ana­logue mag­nitude as rep­res­ent­ing approx­im­ate car­din­al value, and its doing so by icon­ic format). This sug­gests that the lit­er­at­ure requires a new under­stand­ing of how the ana­logue mag­nitude sys­tem works.
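The ratio signature just described can be sketched in a toy model. This is only an illustration, not an account from the literature: the Weber fraction of 0.5 is an assumed value, chosen to match the reported finding that six-month-olds discriminate 8 from 16 but not 8 from 12 (Xu & Spelke 2000), and the helper function `discriminable` is hypothetical.

```python
# A minimal sketch of ratio-dependent discrimination in the analogue
# magnitude system, modelled on Weber's law. The Weber fraction (0.5)
# is an illustrative assumption, not a figure argued for in this post.

def discriminable(n1: int, n2: int, weber_fraction: float = 0.5) -> bool:
    """Return True if two set sizes differ by more than the Weber fraction.

    Success depends only on the ratio between the two quantities, not on
    their absolute sizes -- the signature of analogue magnitude.
    """
    small, large = sorted((n1, n2))
    return (large - small) / small > weber_fraction

# The ratio, not the absolute size, is what matters:
print(discriminable(8, 16))   # 1:2 ratio -> True
print(discriminable(8, 12))   # 2:3 ratio -> False
print(discriminable(16, 32))  # same 1:2 ratio at larger sizes -> True
```

Note that nothing in this sketch requires representing the cardinal value of either set; comparing the two magnitudes delivers only an ordinal verdict, which is the point made above.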



Carey, S. 2001. ‘Cognitive Foundations of Arithmetic: Evolution and Ontogenesis’. Mind & Language. 16(1): 37–55.

Carey, S. 2009. The Origin of Concepts. New York: OUP.

Condry, K., & Spelke, E. 2008. ‘The Development of Language and Abstract Concepts: The Case of Natural Number.’ Journal of Experimental Psychology: General. 137(1): 22–38.

Dehaene, S. 1997. The Number Sense: How the Mind Creates Mathematics. Oxford: OUP.

Feigenson, L., Carey, S., & Hauser, M. 2002. ‘The Representations Underlying Infants’ Choice of More: Object Files versus Analog Magnitudes’. Psychological Science. 13(2): 150–156.

Feigenson, L., & Carey, S. 2003. ‘Tracking Individuals via Object-Files: Evidence from Infants’ Manual Search’. Developmental Science. 6(5): 568–584.

Feigenson, L., Dehaene, S., & Spelke, E. 2004. ‘Core Systems of Number’. Trends in Cognitive Sciences. 8(7): 307–314.

Hyde, D. 2011. ‘Two Systems of Non-Symbolic Numerical Cognition’. Frontiers in Human Neuroscience. 5: 150.

Intriligator, J., & Cavanagh, P. 2001. ‘The Spatial Resolution of Visual Attention’. Cognitive Psychology. 43: 171–216.

Lipton, J., & Spelke, E. 2003. ‘Origins of Number Sense: Large-Number Discrimination in Human Infants’. Psychological Science. 14(5): 396–401.

Slaughter, V., Kamppi, D., & Paynter, J. 2006. ‘Toddler Subtraction with Large Sets: Further Evidence for an Analog-Magnitude Representation of Number’. Developmental Science. 9(1): 33–39.

Wagner, J., & Johnson, S. 2011. ‘An Association between Understanding Cardinality and Analog Magnitude Representations in Preschoolers’. Cognition. 119(1): 10–22.

Wood, J., & Spelke, E. 2005. ‘Chronometric Studies of Numerical Cognition in Five-Month-Old Infants’. Cognition. 97(1): 23–29.

Xu, F., & Spelke, E. 2000. ‘Large Number Discrimination in 6‑Month-Old Infants’. Cognition. 74(1): B1-B11.

Xu, F., Spelke, E., & Goddard, S. 2005. ‘Number Sense in Human Infants’. Developmental Science. 8(1): 88–101.

The Mental Causation Question and Emergence

Dr. Umut Baysan – University Teacher in Philosophy at the University of Glasgow

How can the mind causally influence a world that is, ultimately, made up of physical stuff? This is one way of asking the mental causation question, where mental causation is the type of causation in which either the cause or the effect is a mental event or property. The question can also be put this way: How can mental events or properties (such as beliefs, desires, sensations, and so on) cause other events? Discussion of the mental causation question dates back at least to Princess Elizabeth of Bohemia’s challenge to Descartes, who took the mind to be a non-physical substance. Elizabeth’s question to Descartes was how one can make sense of the idea that the mind could move the body, or the body could influence the mind, if they are two distinct substances as such.

We take men­tal caus­a­tion to be real. The real­ity of men­tal caus­a­tion is so cent­ral to our philo­soph­ic­al think­ing that the view that there is no such thing as men­tal caus­a­tion, namely epi­phen­om­en­al­ism, has a cru­cial dia­lect­ic­al role in philo­soph­ic­al argu­ment­a­tion in meta­phys­ics of mind. As with Elizabeth’s cri­ti­cism of Descartes, some­times views in the meta­phys­ics of mind are eval­u­ated on this basis. In terms of their roles in philo­soph­ic­al argu­ment­a­tion, I find epi­phen­om­en­al­ism and rad­ic­al scep­ti­cism to be very sim­il­ar. In epi­stem­o­logy, rad­ic­al scep­ti­cism is the view that there is no such thing as know­ledge of the extern­al world. Although pretty much every­one takes rad­ic­al scep­ti­cism to be false, some epi­stem­o­lo­gists still devote time to show­ing why this is the case, as a view’s implic­a­tion of rad­ic­al scep­ti­cism is taken to be reas­on enough to dis­pense with it. Likewise in meta­phys­ics of mind, nearly every­one thinks that epi­phen­om­en­al­ism is false, but there is a very siz­able lit­er­at­ure try­ing to show how this is so. For this reas­on, we often find charges of epi­phen­om­en­al­ism in reduc­tio argu­ments.

Although there may have been ways of tack­ling Princess Elizabeth’s chal­lenge to Descartes, the dif­fi­culty of doing so moved many con­tem­por­ary philo­soph­ers towards an onto­lo­gic­ally phys­ic­al­ist view accord­ing to which, at least in the actu­al world, there are only phys­ic­al sub­stances. Now, once we get rid of all non-physical sub­stances from our onto­logy (sub­stance phys­ic­al­ism) and yet still hold on to the exist­ence of minds (real­ism about the mind), the next set of ques­tions is: What should we do with the prop­er­ties of such minds? What are men­tal prop­er­ties? Can men­tal prop­er­ties be reduced to phys­ic­al properties?

For the sake of brev­ity, I shall not recite the reas­ons why such a reduc­tion can­not be main­tained, so let’s just assume that men­tal prop­er­ties are not phys­ic­al prop­er­ties. (For sem­in­al work on this point, see Putnam 1967.) In a world with purely phys­ic­al sub­stances, some of which have irre­du­cibly men­tal prop­er­ties, it might look as if the men­tal caus­a­tion ques­tion can be answered eas­ily. Mental events can cause phys­ic­al events (or vice versa); such a caus­al rela­tion doesn’t require the inter­ac­tion of phys­ic­al and non-physical sub­stances, so the prob­lem of caus­al inter­ac­tion evaporates.

Emergentism is a view, or rather a group of views, accord­ing to which sub­stance phys­ic­al­ism is true and men­tal prop­er­ties are irre­du­cibly men­tal. There are (at least) two vari­et­ies of emer­gen­t­ism. The weak vari­ety, which some­times goes by the name “non-reductive phys­ic­al­ism”, takes men­tal prop­er­ties to be real­ized by phys­ic­al prop­er­ties. (For my work on what it is for a prop­erty to be real­ized by anoth­er prop­erty, see Baysan 2015). The strong vari­ety, which goes by the name (sur­prise sur­prise!) “strong emer­gen­t­ism”, holds that (at least some) men­tal prop­er­ties are as fun­da­ment­al as phys­ic­al prop­er­ties to the extent that they need not be real­ised by phys­ic­al prop­er­ties. (See Barnes 2012 for an account of strong emer­gence along these lines. For joint dis­cus­sions of weak and strong emer­gence, see Chalmers 2006 and Wilson 2015.)

Some contemporary metaphysicians of mind, most notably Jaegwon Kim (2005), think that epiphenomenalism is still a threat to emergentism. It is thought to be a problem for the weak, non-reductive physicalist, variety because of the following line of thought. The physical world is supposed to be causally closed in the sense that if a physical event has a cause at any time, then at that time, it has a sufficient physical cause. Thus, if a physical event is caused by a mental event (or a property), it must be fully caused by a physical event (or a property) too. If all this is true, then every physical event that has a mental cause must be causally overdetermined. (Here, the idea is that causation implies determination, and having more than one fully sufficient cause implies overdetermination.) The acceptance of such systematic causal overdetermination is taken to be absurd; the world can’t have that much redundant causation. Therefore, the combination of non-reductive physicalism and the reality of mental causation is not tenable. That is the charge anyway.

Now, what about strong emer­gen­t­ism? In a nut­shell, defend­ers of this view can reject the idea that the phys­ic­al domain is caus­ally closed in the way that non-reductive phys­ic­al­ists typ­ic­ally assume. Given its anti-physicalist assump­tion that some prop­er­ties oth­er than the phys­ic­al ones can be fun­da­ment­al too, reject­ing the caus­al clos­ure prin­ciple is def­in­itely a live option for strong emer­gen­t­ism. However, accord­ing to some, that is pre­cisely the prob­lem with this view. From a sci­entif­ic or nat­ur­al­ist­ic point of view, how can we defend such a view if its best way of accom­mod­at­ing men­tal caus­a­tion is through reject­ing the caus­al clos­ure of the phys­ic­al domain?

The picture that I have portrayed thus far seems to suggest that unless we go all the way and reduce mental properties to physical properties, there isn’t any room for mental causation. This is what Kim and others have been trying to persuade us of over the years. But is the reasoning that has led us here really solid? Should all of the argumentative steps briefly sketched above be accepted? I have some doubts.

First, there is an emerging (pun intended) consensus that the causal argument against non-reductive physicalism sketched above has some flaws. Some philosophers aren’t convinced that non-reductive physicalism, as Kim portrays it, really implies causal overdetermination (see Yablo 1992 for a seminal account). Very roughly, the idea is that the alleged overdetermination obtains only if we count a whole event and its parts as separate causes of the same effect; but taking a whole and its parts separately is surely “double counting”. Also, in presentations of the causal argument against non-reductive physicalism, we often come across the idea that if an event has two distinct sufficient causes, it must be genuinely causally overdetermined; this is known as “the exclusion principle”. But a principle with such a crucial dialectical role needs some backing up, and some authors have noted that there doesn’t seem to be any positive argument for the truth of the exclusion principle. (For a criticism along these lines, see Árnadóttir and Crane 2013.)

Second, the reason to resist strong emergentism that is sketched above can be questioned too. Do we really have good reasons to think that the physical domain is causally closed? I don’t think that we can play the causal closure card unless we carefully study the reasons that are given in favour of it. Considering its importance in argumentation in the metaphysics of mind, it would be fair to say that there hasn’t been enough attention given to the positive reasons for holding it. I am aware of three arguments for the causal closure principle: (1) Lycan’s (1987) argument that it is absurd to think that laws of conservation hold everywhere in the universe with the exception of the human skull; (2) McLaughlin’s (1992) suggestion that the failure of the causal closure principle was a scientific hypothesis in chemistry which was eventually falsified (in chemistry!); and (3) Papineau’s (2002) argument that the principle is inductively verified by the practice of 20th century physiologists. This is not the place to examine these three arguments in detail, but I think it is fair to say that they don’t even attempt to be conclusive. The closure principle may turn out to be true, but whether that is the case will be an empirical matter of fact, and until we have somehow established it empirically, we need to devise more solid philosophical arguments for it.

I hope this short dis­cus­sion has per­suaded you that whichever view in the meta­phys­ics of mind turns out to be true, the men­tal caus­a­tion ques­tion will play some role in determ­in­ing its plausibility.


Árnadóttir, S. and Crane, T. (2013). ‘There is no Exclusion Problem’, in Mental Causation and Ontology, eds. S. C. Gibb, E. J. Lowe, and R. D. Ingthorsson (Oxford: Oxford University Press).

Barnes, E. (2012). ‘Emergence and Fundamentality’. Mind, 121, pp. 873–901.

Baysan, U. (2015) ‘Realization Relations in Metaphysics’, Minds and Machines 25, pp. 247–60.

Chalmers, D. (2006). ‘Strong and Weak Emergence’, in The Re-Emergence of Emergence , eds. P. Clayton & P. Davies (Oxford: Oxford University Press).

Kim, J. (2005) Physicalism or Something Near Enough. Princeton, NJ: Princeton University Press.

Lycan, W. (1987). Consciousness. Cambridge, MA: MIT Press.

McLaughlin, B. (1992). ‘The Rise and Fall of British Emergentism’, in Emergence or Reduction?: Prospects for Nonreductive Physicalism, eds. A. Beckermann, H. Flohr, & J. Kim (Berlin: De Gruyter).

Papineau, D. (2002). Thinking about Consciousness. Oxford: Oxford University Press.

Putnam, H. (1967). ‘Psychological Predicates’, in Art, Mind, and Religion, eds. W.H. Capitan & D.D. Merrill (Pittsburgh: University of Pittsburgh Press).

Wilson, J. (2015). ‘Metaphysical Emergence: Weak and Strong’, in Metaphysics in Contemporary Physics: Poznan Studies in the Philosophy of the Sciences and the Humanities, eds. T. Bigaj and C. Wuthrich (Leiden: Brill).

Yablo, S. (1992) ‘Mental Causation’. Philosophical Review, 101, pp. 245–280.

The Cognitive Impenetrability of Recalcitrant Emotions

Dr. Raamy Majeed – Postdoctoral Research Fellow on the John Templeton Foundation project, ‘New Directions in the Study of Mind’, in the Faculty of Philosophy, University of Cambridge, and By-Fellow, Churchill College, University of Cambridge

Consider the fol­low­ing emo­tion­al epis­odes. You fear Fido, your neighbour’s dog you judge to be harm­less. You are angry with your col­league, even though you know his remark wasn’t really offens­ive. You are jeal­ous of your partner’s friend, des­pite believ­ing that she does­n’t fancy him. D’Arms and Jacobson (2003) call these recal­cit­rant emo­tions: emo­tions that exist “des­pite the agent’s mak­ing a judg­ment that is in ten­sion with it” (pg. 129). The phe­nomen­on of emo­tion­al recal­cit­rance is said to raise a chal­lenge for the­or­ies of emo­tions. Drawing on the work of Greenspan (1981) and Helm (2001), Brady argues that this chal­lenge is “to explain the sense in which recal­cit­rant emo­tions involve ration­al con­flict or ten­sion” (2009: 413).

Whether we require ration­al con­flict to account for emo­tion­al recal­cit­rance is debat­able. Indeed, much of the present con­tro­versy involves spelling out the pre­cise nature of this con­flict. But con­flict, ration­al or oth­er­wise, isn’t the only fea­ture that is per­tin­ent to the phe­nomen­on. What tends to get neg­lected is pre­cisely what gives these emo­tions their name, viz. their recal­cit­rance; their per­sist­ent nature. To elab­or­ate, emo­tion­al epis­odes, by their very nature, are epis­od­ic, and we shouldn’t expect recal­cit­rant emo­tions to last any longer than non-recalcitrant ones. Nevertheless, it is in the very nature of recal­cit­rant emo­tions that they are mul­ish, that they don’t suc­cumb to our judge­ments – i.e. to the extent that these emo­tion­al epis­odes last.

Here is an example. Suppose I judge that fly­ing is safe, but feel instantly afraid as soon as my plane starts to take off. But sup­pose, also, that once I real­ize that my fear is irra­tion­al, or at least, that it is in ten­sion with my judge­ment, my fear dis­sip­ates. This, argu­ably, won’t count as an instance of emo­tion­al recal­cit­rance. By con­trast, say I remain fear­ful des­pite my judge­ment. I keep think­ing to myself, ‘I know this is safe’, and yet I con­tin­ue to feel afraid. This, I ven­ture, bet­ter cap­tures what we mean by emo­tion­al recal­cit­rance. Mutatis mutandis for being afraid of Fido, being jeal­ous of your partner’s friend etc. All famil­i­ar cases of emo­tion­al recal­cit­rance seem to share this per­sist­ent fea­ture. The ques­tion is, what accounts for it?

My hypo­thes­is is this: emo­tions are recal­cit­rant to the extent that they are cog­nit­ively impen­et­rable. According to Goldie, “someone’s emo­tion or emo­tion­al exper­i­ence is cog­nit­ively pen­et­rable only if it can be affected by his rel­ev­ant beliefs” (2000: 76). So far as I can tell, the first to dis­cuss the cog­nit­ive (im)penetrability of emo­tions is Griffiths (1990, 1997), who takes one of the advant­ages of his the­ory to be pre­cisely that it accounts for recal­cit­rant emo­tions, or what he calls ‘irra­tion­al emotions’.

Griffiths’s explan­a­tion of emo­tion­al recal­cit­rance is neg­lected by much of the cur­rent lit­er­at­ure on the phe­nomen­on. This is war­ran­ted in one respect. Griffiths doesn’t account for the sense in which recal­cit­rant emo­tions involve ration­al con­flict, which, as men­tioned earli­er, is one of the cent­ral con­tro­ver­sies. But there is a way in which the neg­lect is unwar­ran­ted. This has to do with the charge that his account makes emo­tions too piecemeal.

To elab­or­ate, one of the most con­tro­ver­sial fea­tures of Griffiths’s account of emo­tions more gen­er­ally is that it div­vies up emo­tions into three broad types, only one of which forms a nat­ur­al kind. These are the set of evolved adapt­ive ‘affect-program’ responses, which are, more or less, cog­nit­ively impen­et­rable. They are sur­prise, fear, anger, dis­gust, sad­ness and joy. The rest are ‘high­er cog­nit­ive emo­tions’, which are cog­nit­ively pen­et­rable, like jeal­ousy, shame etc., or social con­struc­tions that are ‘essen­tially pre­tences’, e.g. romantic love.

This account, argu­ably, does make emo­tions too piece­meal, but to reject the hypo­thes­is that recal­cit­rant emo­tions are cog­nit­ively impen­et­rable for this reas­on is to throw the baby out with the bathwa­ter. Let us be neut­ral as to what emo­tions actu­ally are, as well as to the kinds of emo­tions that can be cog­nit­ively impen­et­rable. I think we can remain thus neut­ral, and still bor­row some of Griffiths’s insights con­cern­ing the cog­nit­ive impen­et­rab­il­ity of recal­cit­rant emo­tions to explain their recalcitrance.

Leaving aside the Ekman-esque notion that there is a set of basic emotions from which all other emotions arise, we can follow Griffiths in supposing that emotions, indeed the very same kinds of emotion, can be brought about in distinct ways. Take, for instance, the affect-program responses. The processes that typically give rise to them, as well as these responses themselves, are what Griffiths claims are cognitively impenetrable. But he notes that they can also be triggered by processes that are cognitively penetrable. In fact, he is clear that the former doesn’t rule out the latter: “[t]he existence of a relatively unintelligent, dedicated mechanism does not imply that higher-level cognitive processes cannot initiate the same events” (1990: 187).

Griffiths exploits this account to explain emo­tion­al recal­cit­rance. In brief, the phe­nomen­on occurs when an affect-program response is triggered without the cog­nit­ive pro­cess of belief-fixation that gives rise to judge­ment. For example, “[if] only the affect-program sys­tem classes the stim­u­lus as a danger, the sub­ject will exhib­it the symp­toms of fear, but will deny mak­ing the judge­ments which folk the­ory sup­poses to be impli­cit in the emo­tion” (1990: 191).

This explanation isn’t supposed to provide us with an account of what recalcitrant emotions are; of what picks them out as a type. Rather, for Griffiths, it gives us a ‘theory’ of them; we have an explanation for their occurrence. Regardless of whether this theory is adequate, it is my view that such an explanation can be put to further work: explaining the recalcitrant nature of recalcitrant emotions. While the affect-program responses don’t always run in tandem with the cognitive processes involved in belief-fixation, what explains the persistent nature of these responses is that they, as well as the processes that give rise to them, are cognitively impenetrable. Moreover, cognitive penetrability admits of degrees. Thus, the extent to which such responses are recalcitrant will depend on the extent to which they, as well as the processes that give rise to them, are cognitively impenetrable.

One of the advant­ages of his the­ory, accord­ing to Griffiths, is that “[t]he occur­rence of emo­tions in the absence of suit­able beliefs is con­ver­ted from a philo­soph­ers’ para­dox into a prac­tic­al sub­ject for psy­cho­lo­gic­al invest­ig­a­tion” (1990: 192). The present explan­a­tion is sim­il­arly advant­age­ous in that it provides an explan­a­tion of emo­tion­al recal­cit­rance that is empir­ic­ally veri­fi­able. But by the same token, the explan­a­tion is only of interest to the extent that it is empir­ic­ally plaus­ible. The evid­ence is far from con­clus­ive, but there is good reas­on to think we are on the right track.

McRae et al. (2012) sought to test “wheth­er the way an emo­tion is gen­er­ated influ­ences the impact of sub­sequent emo­tion reg­u­lat­ory efforts” (pg. 253). Emotions can be triggered ‘bot­tom up’, i.e. in response to per­cept­ible prop­er­ties of a stim­u­lus, or ‘top down’, i.e. in response to cog­nit­ive apprais­als of an event. They took their find­ings to “sug­gest that top-down gen­er­ated emo­tions are more suc­cess­fully down-regulated by reapprais­al than bottom-up emo­tions” (pg. 259). Emotions gen­er­ated bottom-up, then, appear to behave as if they are cog­nit­ively impen­et­rable; or at least, as if they are less pen­et­rable than ones gen­er­ated top-down. Insofar as any of the emo­tions thus gen­er­ated con­flict (in the rel­ev­ant sense) with an eval­u­at­ive judge­ment, we have an instance of emo­tion­al recal­cit­rance. Run these thoughts togeth­er, and they imply that recal­cit­rant emo­tions are recal­cit­rant to the extent that they are cog­nit­ively impenetrable.



Brady, M. S. (2009). ‘The Irrationality of Recalcitrant Emotions’. Philosophical Studies 145: 413–30.

D’Arms, J., & Jacobson, D. (2003). ‘The Significance of Recalcitrant Emotion’. In A. Hatzimoysis (Ed.), Philosophy and the Emotions. Cambridge: Cambridge University Press.

Goldie, P. (2000). The Emotions: A Philosophical Exploration. Oxford University Press.

Greenspan, P. S. (1981). ‘Emotions as Evaluations’. Pacific Philosophical Quarterly 62: 158–69.

Griffiths, P. E. (1990). ‘Modularity, and the Psychoevolutionary Theory of Emotion’. Biology and Philosophy 5: 175–96.

Griffiths, P. E. (1997). What Emotions Really Are. Chicago: University of Chicago Press.

Helm, B. (2001). Emotional Reason. Cambridge University Press.

McRae, K., Misra, S., Prasad, A. K., Pereira, S. C., Gross, J. J. (2012). ‘Bottom-up and Top-down Emotion Generation: Implications for Emotion Regulation’. Social Cognitive and Affective Neuroscience 7: 253–62.