iCog Blog

How much of an animal are you?

[Image: baby chimpanzee]

by Léa Salje, Lecturer in Philosophy of Mind and Language at the University of Leeds

I’m an animal, and so are you. We might be rather special animals, but we are animals all the same: biological organisms operating in a particular ecological niche. For most of us, this is something we’ve known for a long time, probably since primary school. It’s perhaps surprising, then, how little it seems to permeate our everyday thinking about ourselves, for many of us at least. I’m hardly minded to earnestly contemplate the fact of my animality in my dealings with myself as I go about my daily business of coffee-ordering and Facebook-posting.

There’s also a question about how deeply the fact of our animality genuinely penetrates the conception of ourselves that guides our philosophy of mind, even among those of us happy to accept it on its surface. This was the question at the heart of the Persons as Animals project, an AHRC-funded project at the University of Leeds, led by Helen Steward, that I’ve been working on for the last year. The project aims to explore the ways in which certain areas in philosophy of mind might be illuminated by a perspective that forefronts the fact that we are animals. A couple of things (at least) follow from taking such a perspective seriously. The first is that if we are animals, we are thereby not Cartesian egos, or brains, or systems of information, or functional systems, or bundles of mental states. We are entire embodied wholes, such that an understanding of ourselves requires a much more holistic perspective than that which is often taken in philosophy of mind. And second, if we are animals then our powers and capacities must be related in an evolutionary way to those of other creatures. This means that a decent understanding of those powers and capacities – even relatively highfalutin powers like language and the capacity to make choices – should benefit from a perspective that takes account of what is known of animal perception, cognition and agency.

Clearly, mere knowledge of the biological fact of our animality is not enough to mobilise these sorts of changes. One of the central planks of the project was that we need new and better ways to articulate our place in the animal kingdom if we are to make philosophical progress in these areas. And before we can do that, we need to understand what sorts of obstacles might have so far prevented such an animalistic self-conception from really taking hold.

To this end, the Persons as Animals project came together earlier this year with conservation social scientist Andy Moss from the education department at Chester Zoo to run a series of semi-structured focus groups, designed to explore how we think of ourselves and our relation to the animal world. What sorts of things get in the way of animalistic thinking about ourselves? How might it be encouraged? We ran 12 groups in all: 6 made up of zoo visitors, and another 6 of students from Leeds University.

What we found was a striking absence of any univocal narrative about our sense of our own animality. Instead, we found a deeply fractured and uneasy picture: we do see ourselves as animals, and we don’t. And many of us struggle to reconcile these two viewpoints.

Interestingly, this sense of unease came out in different ways for different participants. Some began with a firm sense of their own animality, often accompanied by expressions of indignation at the very suggestion that we might think otherwise. (Of course we’re animals; how dare we count ourselves as special?) The discussion of these participants tended to highlight the intelligent behaviours of other animals, and to downplay our own behaviours and capacities as largely instinct-driven under a flimsy veneer of civility.

This is, of course, to forefront the fact of our animality in a way. But by so magnifying our continuity with the rest of the animal world, these participants seemed to face a special challenge: they seemed to struggle to absorb into that animalistic self-image our alienation from and – even more troublingly – domination over the natural world around us. How can we reconcile this self-conceived status as one species of animal among others on the one hand, with the eye-watering extent of our damaging impositions on the world around us on the other? It’s one thing to think of ourselves as a special category of being, perhaps one that has the right (or even the duty) to organise things for the whole of the natural world. But that option is ruled out by a robust insistence on our lack of specialness, on our continuity with other animals. The only option remaining, however, seems to be infinitely more disturbing – that we are mere animals who have simply spiralled out of control. In the end, we often found these participants adopting the rather ingenious solution of moving from first-personal locutions to generalisations when discussing power asymmetries with the rest of the natural world: ‘I don’t think we’re special, but the problem is that people do’.

Others, by contrast, began from a heightened sense of fundamental distinctness from other animals. Even if we’re animals (sotto voce), we’re obviously special. No danger among these groups of failing to celebrate the special complexity of human beings. But these participants faced another challenge: that of reconciling this self-conception as fundamentally different from other animals with knowledge of the biological fact of our animality.

Typically, participants expressing this sort of view reported that their knowledge of their animality is highly muted or recessive as they go about their daily lives. Indeed, some reported not only that it normally faded into the background, but more strongly that it took considerable cognitive effort to bring it to mind and make it fit with how they really see themselves. In one particularly memorable articulation of this feeling, one participant recalled finding out that she was an animal, and thinking of it ‘as more of a classification like fitting everything into bubbles, like when I realised the sun was a star. It has all the same properties as the other stars and that’s weird to you because you regard them very differently in your everyday life.’ Our animality, the idea seems to be, is a matter of indisputable scientific fact which is nevertheless somehow completely at odds with our everyday conceptualisations and categorisations.

Through discussion, these groups too found creative ways of dissolving the tension. An extreme minority reaction was to give up on the claim that we are animals as simply ‘not ringing true’. Another strategy, observed in an extended discussion by a group of physics students, was to redraw the conceptual boundaries of what it is to be an animal. If we abandon the idea that animals must be biological organisms, then we create more space to comfortably hold together both the fact that we are animals and the conviction that we are importantly different from other members of the animal kingdom. To say that we are animals, after all, might now be to position ourselves just as closely to computers as to caterpillars. A third sort of resolution was to associate animality with a very basic form of existence; one that we have, by now, transcended. We might once have been animals, the idea is, but we’ve now moved beyond it. With this response, participants were able to bracket out uncomfortable facts about our animal natures as part of our evolutionary history, rather than treating them as calling for incorporation into our live self-conceptions. For the most part, however, all of these responses were given with observable unease and frank statements of felt difficulty in incorporating the fact of our animality into their everyday self-conceptions.

Among yet other participants there emerged a quite different viewpoint, this time one that seemed much better able to accommodate our claims both to animality and to distinctness. For this group, the traits, behaviours and capacities that might at first glance seem to separate us from the rest of the animal kingdom are really just the results of evolutionary processes, like any other. Cinemas, religion, prog rock, iPads, sarcasm, nuclear weapons, cryptic crosswords and Shoreditch apartments don’t cut us off from the natural world; they are part of it. We are, on this view, placed unflinchingly alongside other animals in the natural world, but not at the cost of a denial or deprecation of human complexity.

One of the central aims of the Persons as Animals project was to better understand our relationship to our own animality, so that we might in turn better understand how to instil more deep-rooted ways of thinking of ourselves as animals into our philosophy of mind. Our results seem to suggest that for many of us the relationship is a profoundly awkward one; we seem to be far from finding a stable resting place for our sense of position in the animal world. This finding ought to put us on our guard in our philosophical practices. We are not insulated, as philosophers, from the uneasy and conflicted animalistic self-conceptions that seemingly underlie our everyday thinking about ourselves.

Is implicit cognition bad cognition?


by Sophie Stammers, incoming postdoctoral fellow on project PERFECT

A significant body of research in cognitive science holds that human cognition comprises two kinds of processes: explicit and implicit. According to this research, explicit processes operate slowly, requiring attentional guidance, whilst implicit processes operate quickly, automatically and without attentional guidance (Kahneman, 2012; Gawronski and Bodenhausen, 2014). A prominent example of implicit cognition that has seen much recent discussion in philosophy is implicit social bias, where associations between (often) stigmatized social groups and (often) negative traits manifest in behaviour, resulting in discrimination (see Brownstein and Saul, 2016a; 2016b). This is the case even though the individual in question isn’t directing their behaviour to be discriminatory with the use of attentional guidance, and is apparently unaware that they’re exhibiting any kind of disfavouring treatment at the time (although see Holroyd 2015 for the suggestion that individuals may be able to observe bias in their behaviour).

Examples of implicit social bias manifesting in behaviour include exhibiting greater signs of social unease, less smiling and more speech errors when conversing with a black experimenter compared to when the experimenter is white (McConnell and Leibold, 2001); less eye contact and increased blinking in conversations with a black experimenter versus their white counterpart (Dovidio et al., 1997); and reduced willingness for skin contact with a black experimenter versus a white one (Wilson et al., 2000). Implicit social biases also arise in more deliberative scenarios: Swedish recruiters who harbor implicit racial associations are less likely to interview applicants perceived to be Muslim, as compared to applicants with a Swedish name (Rooth, 2007), and doctors who harbor implicit racial associations are less likely to offer treatment to black patients with the clinical presentation of heart disease than to white patients with the same clinical presentation of the disease (Green et al., 2007). In these studies, participants’ discriminatory behaviour was not correlated with the beliefs and values that they professed to have when questioned.

Both the mechanisms of implicit bias, and implicit processes more generally, are often characterised in the language of the sub-optimal. Variously, they deliver “a more inflexible form of thinking” than explicit cognition (Pérez, 2016: 28), they are “arational” compared to the rational processes that govern belief updating (Gendler, 2008a: 641; 2008b: 557), and their content is “disunified” with our set of explicit attitudes (Levy, 2014: 101–103). As such, one might be tempted to think of implicit cognition as regularly, or even necessarily, bad cognition. A strong interpretation of that value-laden assessment might mean that the processes in question deliver objectively bad outputs, however these are to be defined, but we could also mean something a bit weaker, such as that outputs are not aligned with the agent’s goals, or similar. It’s easy to see why one might apply this value-laden assessment to the mechanisms which result in implicitly biased behaviour: individuals simply have no reason to discriminate against already marginalized people in the ways outlined above, and yet they do anyway – that seems like a good candidate for bad cognition. That implicitly biased behaviours are the product of what appears to be a suboptimal processing system might motivate the argument that we’re not the agents of our implicitly biased behaviors, as well as arguments that might follow from this, such as that it is not appropriate to hold people morally responsible for their implicit biases (Levy, 2014).

But I think it would be wrong to conclude that implicit cognition necessarily delivers suboptimal outputs, and that implicit bias is an example of bad cognition simply for the reason that it is implicit. Moreover, as I’ll argue below, maintaining the former claim may well do a disservice to the project of reducing implicit social biases.

Whilst explicit processes may be ‘better’ at some cognitive tasks, research suggests that implicit processes can actually deliver a more favourable performance than explicit processes in a variety of domains. For instance, non-attentional, automatic processes govern the fast motor reactions employed by skilled athletes (Kibele, 2006). Trying to bring these processes under attentional control can actually disrupt sporting performance: Flegal and Anderson (2008) show that directing attention to their action performance significantly impairs the ability of high-skill golfers on a putting task, whilst high-skill footballers perform less proficiently when directing attention to their execution of dribbling (Beilock et al., 2002). Engaging attentional processes when learning new motor skills can also disrupt performance (McKay et al., 2015).

Meanwhile, functional MRI studies suggest that improvisation implicates non-attentional processes. One study shows that when professional jazz pianists improvise, they do so in the absence of central processes implicated in attentional guidance (Limb and Braun, 2008). Another study demonstrates that trained musicians inhibit networks associated with attentional processing during improvisation (Berkowitz and Ansari, 2010).

Further, deliberately disengaging attentional resources can facilitate creativity, through a process known as ‘incubation’. Subjects who return to work on a creative task after a period of directing attentional resources to something unrelated to the task at hand often deliver enhanced outputs compared with those who continually engage their attentional resources (Dodds et al., 2003). It has been proposed that task-relevant implicit processes remain active during the incubation period and contribute to enhanced creative output (Ritter and Dijksterhuis, 2014).

So it would be wrong to suggest that implicit processes necessarily, or even typically, deliver sub-optimal outputs compared with their explicit cousins. And, pertinent to our discussion of implicit social bias, implicit processes themselves can actually be recruited to inhibit the manifestation of bias. Research demonstrates that participants with genuine long-term egalitarian commitments (Moskowitz et al. 1999), as well as those in whom egalitarian commitments are activated during an experimental task (Moskowitz and Li, 2011), actually manifest less implicit bias than those without such commitments. Crucially, the processes which bring implicit responses in line with an agent’s long-term commitments are not driven by attentional guidance, instead operating automatically to prevent the facilitation of stereotypic categories in the presence of the relevant social concepts (Moskowitz et al. 1999: 168). The suggestion here is that developing genuine commitments to egalitarian values and treatment can actually recalibrate implicit processes to deliver value-consistent behavior (see Holroyd and Kelly, 2016), without needing to effortfully override implicit responses each time one encounters social concepts that might otherwise trigger biased reactions. It would seem that the profile of implicit processes as inflexible, arational and disunified with explicit values and commitments is ill-fitted to account for this example.

So, in a number of cases it seems that implicit processes can serve our goals and values. If this is right, then we should perhaps be more willing to locate ourselves as agents not just in the behavior that arises from our explicit processes, but in that which arises from our implicit ones as well.

I think this has an important implication for practices related to implicit bias training. We should be wary of rhetoric that distances us as agents from our implicit processes: for instance, characterizing implicit bias as “racism without racists”1 might be comforting for those of us with implicit racial biases, but disowning the implicit processes that lead to racial discrimination, while not disowning those that lead to skilled musical improvisation or creativity as above, seems somewhat inconsistent. I wonder whether greater willingness to accept one’s implicit processes as aspects of one’s agency (not necessarily as central, defining aspects of one’s agency, but somewhere in there nonetheless) might help to motivate the project of realigning one’s implicitly biased responses.

 

Footnotes:

  1. In U.S. Department of Justice. 2016. “Implicit Bias.” Community Oriented Policing Services report, page 1. Accessed 27/07/16, URL: https://uploads.trustandjustice.org/misc/ImplicitBiasBrief.pdf

 

References:

Berkowitz, A. L. and D. Ansari. 2010. “Expertise-Related Deactivation of the Right Temporoparietal Junction during Musical Improvisation.” NeuroImage 49 (1): 712–19.

Brownstein, M and J. Saul. 2016a. Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, New York: Oxford University Press.

Brownstein, M and J. Saul. 2016b. Implicit Bias and Philosophy, Volume 2: Moral Responsibility, Structural Injustice, and Ethics, New York: Oxford University Press.

Dodds, R. D., T. B. Ward and S. M. Smith. 2003. “Incubation in Problem Solving and Creativity.” In The Creativity Research Handbook, edited by M. A. Runco. Cresskill, NJ: Hampton Press.

Dovidio, J. F., K. Kawakami, C. Johnson, B. Johnson and A. Howard. 1997. “On the Nature of Prejudice: Automatic and Controlled Processes.” Journal of Experimental Social Psychology 33 (5): 510–40.

Gawronski, B. and G. V. Bodenhausen. 2014. “Implicit and Explicit Evaluation: A Brief Review of the Associative-Propositional Evaluation Model: APE Model.” Social and Personality Psychology Compass 8 (8): 448–62.

Gendler, T. S. 2008a. “Alief and Belief.” The Journal of Philosophy 105 (10): 634–63.

———. 2008b. “Alief in Action (and Reaction).” Mind & Language 23 (5): 552– 85.

Green, A. R., D. R. Carney, D. J. Pallin, L. H. Ngo, K. L. Raymond, L. I. Iezzoni and M. R. Banaji. 2007. “Implicit Bias among Physicians and Its Prediction of Thrombolysis Decisions for Black and White Patients.” Journal of General Internal Medicine 22 (9): 1231–38.

Holroyd, J. 2015. “Implicit Bias, Awareness and Imperfect Cognitions.” Consciousness and Cognition 33 (May): 511–23.

Holroyd, J. and D. Kelly. 2016. “Implicit Bias, Character, and Control.” In From Personality to Virtue, edited by A. Masala and J. Webber, Oxford: Oxford University Press.

Kahneman, D. 2012. Thinking, Fast and Slow, London: Penguin Books.

Kibele, A. 2006. “Non-Consciously Controlled Decision Making for Fast Motor Reactions in sports—A Priming Approach for Motor Responses to Non-Consciously Perceived Movement Features.” Psychology of Sport and Exercise 7 (6): 591–610.

Levy, N. 2014. Consciousness and Moral Responsibility, Oxford; New York: Oxford University Press.

Limb, C. J. and A. R. Braun. 2008. “Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation.” Edited by E. Greene. PLoS ONE 3 (2): e1679.

McConnell, A. R. and J. M. Leibold. 2001. “Relations among the Implicit Association Test, Discriminatory Behavior, and Explicit Measures of Racial Attitudes.” Journal of Experimental Social Psychology 37 (5): 435–42.

McKay, B., G. Wulf, R. Lewthwaite and A. Nordin. 2015. “The Self: Your Own Worst Enemy? A Test of the Self-Invoking Trigger Hypothesis.” The Quarterly Journal of Experimental Psychology 68 (9): 1910–19.

Moskowitz, G. B., P. M. Gollwitzer, W. Wasel and B. Schaal. 1999. “Preconscious Control of Stereotype Activation Through Chronic Egalitarian Goals.” Journal of Personality and Social Psychology 77 (1): 167–184

Moskowitz, G. B., and P. Li. 2011. “Egalitarian Goals Trigger Stereotype Inhibition: A Proactive Form of Stereotype Control.” Journal of Experimental Social Psychology 47 (1): 103–16.

Pérez, E. O. 2016. Unspoken Politics: Implicit Attitudes and Political Thinking, New York, NY: Cambridge University Press.

Ritter, S. M. and A. Dijksterhuis. 2014. “Creativity–the Unconscious Foundations of the Incubation Period.” Frontiers in Human Neuroscience 8: 22–31.

Rooth, D-O. 2007. “Implicit Discrimination in Hiring: Real World Evidence.” (IZA Discussion Paper No. 2764). Bonn, Germany: Forschungsinstitut zur Zukunft der Arbeit (Institute for the Study of Labor).

Wilson, T. D., S. Lindsey and T. Y. Schooler. 2000. “A Model of Dual Attitudes.” Psychological Review 107 (1): 101–26.


Trusting the Uncanny Valley: Exploring the Relationship Between AI, Mental State Ascriptions, and Trust.

[Image: a humanoid android with its creator]

Henry Powell, PhD Candidate in Philosophy at the University of Warwick

Interactive artificial agents such as social and palliative robots have become increasingly prevalent in the educational and medical fields (Coradeschi et al. 2006). Different kinds of robots, however, seem to engender different kinds of interactive experiences in their users. Social robots, for example, tend to afford positive interactions that look analogous to the ones we might have with one another. Industrial robots, on the other hand, rarely, if ever, are treated in the same way. Some very lifelike humanoid robots seem to fit somewhere outside of these two spheres, inspiring feelings of discomfort or disgust in people who come into contact with them. One way of understanding why this phenomenon obtains is via a conjecture developed by the Japanese roboticist Masahiro Mori in 1970 (Mori, 1970, pp. 33–35). This so-called “uncanny valley” conjecture has a number of potentially interesting theoretical ramifications. Most importantly, it may help us to understand a set of conditions under which humans could potentially ascribe mental states to beings without minds – in this case, that trusting an artificial agent can lead one to do just that. With this in mind, the aims of this post are two-fold. Firstly, I wish to provide an introduction to the uncanny valley conjecture; secondly, I want to raise doubts concerning its ability to shed light on the conditions under which mental state ascriptions occur, specifically in experimental paradigms that see subjects as trusting their AI coactors.

Mori’s uncanny valley conjecture proposes that as robots increase in their likeness to human beings, their familiarity likewise increases. This trend continues up to a point at which their lifelike qualities are such that we become uncomfortable interacting with them. At around 75% human likeness, robots are seen as uncannily like human beings and are viewed with discomfort, or, in more extreme cases, disgust, significantly hindering their potential to galvanise positive social interactions.

[Figure: Mori’s uncanny valley curve, plotting familiarity against human likeness]
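
Mori drew this curve qualitatively; he gave no equation for it. For readers who like a concrete handle on its shape, here is a minimal, purely illustrative Python sketch of such a curve – the functional form and every parameter are my own assumptions, not Mori’s:

```python
import numpy as np
import matplotlib.pyplot as plt

def familiarity(likeness, valley_centre=0.75, valley_width=0.07, valley_depth=1.5):
    """Toy uncanny-valley curve: familiarity rises with human likeness,
    then dips sharply near 75% likeness. Illustrative assumptions only."""
    baseline = likeness ** 2  # familiarity grows with likeness
    dip = valley_depth * np.exp(-((likeness - valley_centre) ** 2)
                                / (2 * valley_width ** 2))
    return baseline - dip

likeness = np.linspace(0, 1, 200)
plt.plot(likeness, familiarity(likeness))
plt.xlabel("human likeness")
plt.ylabel("familiarity (arbitrary units)")
plt.title("A toy uncanny valley curve")
plt.show()
```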

This effect has been explained in a number of ways. For instance, Saygin et al. (2011, 2012) have suggested that the uncanny valley effect is produced when there is a perceived incongruence between an artificial agent’s form and its motion. If an agent is seen to be clearly robotic but to move in a very human-like way, or vice versa, there is an incompatibility effect in the predictive, action-simulating cognitive mechanisms that seek to pick out and forecast the actions of humanlike and non-humanlike objects. This predictive coding mechanism is supplied with contradictory information by the visual system ([human agent] with [nonhuman movement]), which prevents it from carrying out predictive operations to its normal degree of accuracy (Urgen & Miller, 2015). I take it that the output of this cognitive system is presented in our experience as being uncertain, and that this uncertainty accounts for the feelings of unease that we experience when interacting with these uncanny artificial agents.

Of particular philosophical interest in this regard is a strand of research suggesting that humans can be seen to make mental state ascriptions to artificial agents that fall outside the uncanny valley in given situations. This story was posited in two studies, by Kurt Gray & Daniel Wegner and by Maya Mathur & David Reichling respectively. As I believe that it contains the most interesting evidential basis for thinking along these lines, I will limit my discussion here to the latter experiment.

Mathur & Reichling’s study saw subjects partake in an “investment game” (Berg et al. 1995) – a generally accepted experimental standard for measuring trust – with a number of artificial agents whose facial features varied in their human likeness. This was to test whether subjects were willing to trust different kinds of artificial agents depending on where they fell on the uncanny valley scale. What they found was that subjects played the game in a way that indicated that they trusted robots with certain kinds of facial features to act so as to reach an outcome that was mutually beneficial to both parties, rather than favouring one or the other. The authors surmised that because the subjects seemed to trust these artificial agents, in a way that suggested that they had thought about what the artificial agents’ intentions might be, the subjects had ascribed mental states to their robotic partners in these cases.
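
The structure of the investment game itself is worth having on the table: an investor can send some portion of an endowment to a trustee; the experimenter multiplies the transfer (tripling it, in Berg et al.’s design); and the trustee decides how much to send back, with the amount sent serving as the behavioural index of trust. The following minimal sketch illustrates one round – the endowment and the two decision parameters are illustrative assumptions, not the values used with Mathur & Reichling’s robot partners:

```python
def investment_round(endowment, invest_fraction, return_fraction, multiplier=3):
    """One round of the Berg et al. (1995) investment game.

    invest_fraction: share of the endowment the investor sends (their 'trust').
    return_fraction: share of the multiplied transfer the trustee sends back.
    """
    sent = endowment * invest_fraction     # investor's trusting move
    received = sent * multiplier           # experimenter multiplies the transfer
    returned = received * return_fraction  # trustee's reciprocating move
    investor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# Example: investor sends half of a 10-unit endowment; trustee returns a third.
print(investment_round(endowment=10, invest_fraction=0.5, return_fraction=1/3))
# -> (10.0, 10.0): the kind of mutually beneficial outcome subjects
#    appeared to aim at with robot partners they trusted.
```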

It was proposed that subjects had believed that the artificial agents had mental states encompassing intentional propositional attitudes (beliefs, desires, intentions etc.). This was because subjects seemed to assess the artificial agents’ decision-making processes in terms of what the robots’ “interests” in the various outcomes might be. This result is potentially very exciting, but I think that it jumps to conclusions rather too quickly. I’d now like to briefly give reasons for thinking this.

Mathur and Reichling seem to be making two claims in the discussion of their study’s results:

  i) That subjects trusted the artificial agents.
  ii) That this trust implies the ascription of mental states.

My objections here are the following. I think that i) is more complicated than the authors make it out to be, and that ii) is just not at all obvious and does not follow from i) when i) is analysed in the proper way. Let us address i) first, as it leads into the problem with ii).

When elaborated, I think that i) makes the claim that the subjects believed that the artificial agents would act in a certain way and that this action would be satisfactorily reliable. I think that this is plausible, but I also think that the form of trust here is not that which is intended by Mathur and Reichling, and is thus uninteresting in relation to ii). There are, as far as I can tell, at least two ways in which we can trust things. The first and perhaps most interesting form of trust is that expressible in sentences like “I trust my brother to return the money that I lent him”. This implies that I think of my brother as the sort of person who would not, given the opportunity and upon rational reflection, do something contrary to what he had told me he would do. The second form of trust is that which we might have towards a ladder or something similar. We might say of such objects that “I trust that if I walk up this ladder it will not collapse, because I know that it is sturdy”. The difference here should be obvious. I trust the ladder because I can infer from its physical state that it will perform its designated function. It has no loose fixtures, rotting parts or anything else that might make it collapse when I walk up it. To trust the ladder in this way I do not think that it has to make commitments to the action expected of it based on a given set of ethical standards. In the case of trusting my brother, my trust in him is expressible as a belief that, given the opportunity to choose not to do what I have asked of him, he will choose in favour of that which I have asked. The trust that I have in my brother requires that I believe that he has mental states that inform and help him to choose to act in favour of my asking him to do something. One form of trust implies the existence of mental states; the other does not. In regards to ii), then, as has just been argued, trust only implies mental states if it is of the form that I would ascribe to my brother in the example just given, but not if it is of the sort that we would normally ascribe to reliably functional objects like ladders. So ii) only follows from i) if the former kind of trust is evinced, and not otherwise.

This analysis suggests that if we are to believe that the subjects in this experiment ascribed mental states to the artificial agents (or indeed subjects in any other experiment that reaches the same conclusions), then we need sufficient reasons for thinking that the subjects were treating the artificial agents as I would treat my brother, and not as I would treat the ladder, in respect of ascriptions of trust. Mathur and Reichling are silent on this point, and thus we have no good reason for thinking that mental state ascriptions were taking place in the minds of the subjects in their experiment. While I do not think it entirely impossible that such a thing might obtain in some circumstances, it is just not clear from this experiment that it obtains in this instance.

What I have hopefully shown in this post is that it is important to proceed with caution when making claims about our willingness to ascribe other minds to certain kinds of objects and agents (whether artificial or otherwise). Specifically, it is important to do so in relation to our ability to hold such things in seemingly special kinds of relations with ourselves, trust being an important example of this.

 

References:

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, Reciprocity, and Social History. Games and Economic Behavior, 10, 122–142.

Coradeschi, S., Ishiguro, H., Asada, M., Shapiro, S. C., Thielscher, M., Breazeal, C., … Ishida, H. (2006). Human-inspired robots. IEEE Intelligent Systems, 21(4), 74–85.

Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130.

MacDorman, K. F. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? In CogSci-2005 workshop: Toward social mechanisms of android science (pp. 106–118).

Mathur, M. B., & Reichling, D. B. (2009). An uncanny game of trust: Social trustworthiness of robots inferred from subtle anthropomorphic facial cues. Human-Robot Interaction (HRI), 2009 4th ACM/IEEE International Conference on, La Jolla, CA, pp. 313–314.

Saygin, A. P. (2012). What can the Brain Tell us about Interactions with Artificial Agents and Vice Versa? In Workshop on Teleoperated Androids, 34th Annual Conference of the Cognitive Science Society.

Saygin, A. P., & Stadler, W. (2012). The role of appearance and motion in action prediction. Psychological Research, 76(4), 388–394. http://doi.org/10.1007/s00426-012-0426-z

Urgen, B. A., & Miller, L. E. (2015). Towards an Empirically Grounded Predictive Coding Account of Action Understanding. Journal of Neuroscience, 35(12), 4789–4791.


Split Brains and the Compositional Metaphysics of Consciousness


Luke Roelofs, Postdoctoral Fellow in Philosophy at the Australian National University

The mammalian brain has an odd sort of redundancy: it has two hemispheres, each capable of supporting more-or-less normal human consciousness without the other. We know this because destroying, incapacitating, or removing one hemisphere leaves a patient who, despite some difficulties with particular activities, is clearly lucid and conscious. The puzzling implications of this redundancy are best brought out by considering the unusual phenomenon called the ‘split-brain’.

The hemispheres are connected by a bundle of nerve fibres called the corpus callosum, as well as both being linked to the non-hemispheric parts of the brain (the ‘brainstem’). To control the spread of epileptic seizures, some patients had their corpus callosum severed while leaving both hemispheres, and the brainstem, intact (Gazzaniga et al. 1962, Sperry 1964). These patients appear normal most of the time, with no abnormalities in thought or action, but when experimenters manage to present stimuli to sensory channels which will take them exclusively to one hemisphere or the other, strange dissociations appear. For example, when we show the word ‘key’ to the right hemisphere (such as by flashing it in the left half of the patient’s visual field), it cannot be verbally reported (because the left hemisphere controls language), but if we ask the patient to pick up the object they saw the word for, they will readily pick out a key – but only if they can use their left hand (controlled by the right hemisphere). Moreover, if the patient is shown the word ‘keyring’, with ‘key’ going to the right hemisphere and ‘ring’ going to the left, they will pick out a key (with their left hand) and a ring (with their right hand), but not a keyring. They will even report having seen only the word ‘ring’, and deny having seen either ‘key’ or ‘keyring’.

Philosophical discussion of the split-brain phenomenon takes two forms: arguing in support of a particular account of what is going on (e.g. Marks 1980, Hurley 1998, Tye 2003, pp.111–129, Bayne & Chalmers 2003, pp.111–112, Bayne 2008, 2010, pp.197–220), or exploring how the case challenges the very way that we frame such accounts. A seminal example of the latter form is Nagel (1971), which reviews several ways to make sense of the split-brain patient – as one person, as two people, as one person who occasionally splits into two people, etc. – and rejects them all for different reasons, concluding that we have found a case where our ordinary concept of ‘a person’ breaks down and cannot be coherently applied. My work develops an idea in the vicinity of Nagel’s: that our ordinary concept of ‘a person’ can handle the split-brain phenomenon if we transform it to allow for composite subjectivity – something which we have independent arguments for.

Start with what Nagel says about one of the proposed interpretations of the split-brain patient: as two people inhabiting one body. Pointing out that, when not in experimental situations, the patient shows fully integrated behaviour, he asks whether we can really refuse to ascribe all their behaviour to a single person, “just because of some peculiarities about how the integration is achieved” (Nagel 1971, p.406). Of course sometimes two people do seem to work ‘as one’, as in “pairs of individuals engaged in a performance requiring exact behavioral coordination, like using a two-handed saw, or playing a duet.” Perhaps the two hemispheres are like this? But Nagel worries that this position is unstable:

“If we decided that they definitely had two minds, then [why not] conclude on anatomical grounds that everyone has two minds, but that we don’t notice it except in these odd cases because most pairs of minds in a single body run in perfect parallel?” (Nagel 1971, p.409)

Nagel’s worry here is cogent: if we accept that there can be two distinct subjects despite it appearing for all the world as though there was only one, we seem to lose any basis for confidence that the same thing is not happening in other cases. He continues:

“In case anyone is inclined to embrace the conclusion that we all have two minds, let me suggest that the trouble will not end there. For the mental operations of a single hemisphere, such as vision, hearing, speech, writing, verbal comprehension, etc. can to a great extent be separated from one another by suitable cortical deconnections; why then should we not regard each hemisphere as inhabited by several cooperating minds with specialized capacities? Where is one to stop?” (Nagel 1971, Fn11)

Where indeed? If one apparently unified mind could really be a collection of interacting minds, why not think that all apparently unified minds are really such collections? What evidence could decide one way or the other? Taking this line seems to leave us with empirically undecidable questions about every mind we encounter.

What is striking is that this way of thinking isn’t problematic for anything other than minds – indeed it is platitudinous. Most things can be equally well understood as one or as many, because we are happy to regard them simultaneously as a collection of parts and as a single whole. What makes the split-brain phenomenon so perplexing is our difficulty in extending this attitude to minds.

Consider, for instance, the physical brain. Do we have one brain, or do we have several billion neurones, or even eight-or-so lobes? The answer of course is ‘all of the above’: the brain is nothing separate from the billions of neurones, in the right relationships, and neither are the eight lobes anything separate from the brain (which they compose) or the neurones (which compose them). And as a result of the ease with which we shift between one-whole and many-parts modes of description, we can be sanguine about the question ‘how many brains does the split-brain patient have?’ There is some basis for saying ‘one’, and some basis for saying ‘two’, but it’s fine if we can’t settle on a single answer, because the question is ultimately a verbal one. There are all the normal parts of a brain, standing in some but not all of their normal relations, and so not fitting the criteria for being ‘a brain’ as well as they normally would. And there are two overlapping subsystems within the one whole, which individually fit the criteria for being ‘a brain’ moderately well. But there is no further fact about which form of description – calling the whole a brain or calling the two subsystems each a brain – is ultimately correct.

The challenge is to take the same relaxed attitude to the question ‘how many people?’ Here is what I would like to say: the two hemispheres are conscious, and the one brain that they compose is conscious in virtue of their consciousness and the relations between them. Under normal circumstances their interactions ensure that the composite consciousness of the whole brain is well-unified; in the split-brain experiments, their interactions are different and establish a lesser degree of unity. And each hemisphere is itself a composite of smaller conscious parts. This amounts to embracing what Nagel views as a reductio.

There is something very difficult to think through about the composite consciousness view. It seems as though if each hemisphere is someone, that’s one thing, and if the whole brain is someone, that’s another: they cannot be just two equivalent ways of describing the same state of affairs. And this intuitive resistance to seeing conscious minds as composed of others (call it the ‘Anti-Combination intuition’) goes well beyond the split-brain phenomenon. It has a long history in the form of the ‘simplicity argument’, which anti-materialist philosophers from Plotinus (1956, pp.255–258, 342–356) to Descartes (1985, Volume 2, p.59) to Brentano (1987, pp.290–301) have used to show the immateriality of the soul. In a nutshell, this argument says that since minds cannot be thought of as composite, they must be indivisible, and since all material things are divisible, the mind cannot be material (for further analysis see Mijuskovic 1984, Schachter 2002, Lennon & Stainton 2008). Nor is the significance of this difficulty just historical: many recent materialist theories either stipulate that no conscious being can be part of another (Putnam 1965, p.163, Tononi 2012, pp.59–68), or else advance arguments based on the intuitive absurdity of consciousness in a being composed of other conscious beings (Block 1978, cf. Barnett 2008, Schwitzgebel 2015).

All of the just-cited authors take the Anti-Combination intuition as a datum, and draw conclusions from it about the nature of consciousness – conclusions up to and including substance dualism. I prefer the opposite approach: to see the Anti-Combination intuition as a fact about humans which impedes our understanding of how consciousness fits into the natural world, and thus as something which philosophers should seek to analyse, understand, and ultimately move beyond. As it happens, there is a group of contemporary philosophers engaged in just this task: constitutive panpsychists. Panpsychists think that the best explanation for human consciousness is that consciousness is a general feature of matter, and constitutive panpsychists see human consciousness as constituted out of simpler consciousnesses just as the human brain is constituted out of simpler physical structures. The most pressing objection to this view, which has received extensive recent discussion, is the ‘combination problem’: can multiple simple consciousnesses really compose a single complex consciousness (Seager 1995, p.280, Goff 2009, Coleman 2014, Mørch 2014, Roelofs 2014, Forthcoming-a, Forthcoming-b, Chalmers Forthcoming)? And this is at bottom the same issue as we have been grappling with concerning the split-brain phenomenon. In my research, I try to explore the Anti-Combination intuition, its basis, and how to move past it, with an eye both to the general metaphysical questions raised by constitutive panpsychism, and to particular neuroscientific phenomena like the split-brain.

 

References:

Barnett, David. 2008. ‘The Simplicity Intuition and Its Hidden Influence on Philosophy of Mind.’ Nous 42(2): 308–335

Bayne, Timothy. 2008. ‘The Unity of Consciousness and the Split-Brain Syndrome.’ The Journal of Philosophy 105(6): 277–300.

Bayne, Timothy. 2010. The Unity of Consciousness. Oxford: Oxford University Press

Bayne, Timothy, & Chalmers, David. 2003. ‘What is the Unity of Consciousness?’ In Cleeremans, A. (ed.), The Unity of Consciousness: Binding, Integration, Dissociation, Oxford: Oxford University Press: 23–58

Block, Ned. 1978. ‘Troubles with Functionalism.’ In Savage, C. W. (ed.), Perception and Cognition: Issues in the Foundations of Psychology, University of Minnesota Press: 261–325

Brentano, Franz. 1987. The Existence of God: Lectures given at the Universities of Würzburg and Vienna, 1868–1891. Ed. and trans. Krantz, S., Nijhoff International Philosophy Series

Chalmers, David. Forthcoming. ‘The Combination Problem for Panpsychism.’ In Bruntrup, G. and Jaskolla, L. (eds.), Panpsychism, Oxford: Oxford University Press

Coleman, Sam. 2014. ‘The Real Combination Problem: Panpsychism, Micro-­Subjects, and Emergence.’ Erkenntnis 79:19–44

Descartes, René. 1985. ‘Meditations on First Philosophy.’ Originally pub­lished 1641. In Cottingham, John, Stoothoff, Robert, and Murdoch, Dugald, (trans and eds.) The Philosophical Writings of Descartes, 2 vols., Cambridge: Cambridge University Press

Gazzaniga, Michael, Bogen, Joseph, and Sperry, Roger. 1962. ‘Some Functional Effects of Sectioning the Cerebral Commissures in Man.’ Proceedings of the National Academy of Sciences 48(10): 1765–1769

Goff, Philip. 2009. ‘Why Panpsychism doesn’t Help us Explain Consciousness.’ Dialectica 63(3): 289–311

Hurley, Susan. 1998. Consciousness in Action. Harvard University Press.

Lennon, Thomas, and Stainton, Robert. (eds.) 2008. The Achilles of Rationalist Psychology. Studies In The History Of Philosophy Of Mind, V7, Springer

Marks, Charles. 1980. Commissurotomy, Consciousness, and Unity of Mind. MIT Press

Mijuskovic, Benjamin. 1984. The Achilles of Rationalist Arguments: The Simplicity, Unity, and Identity of Thought and Soul From the Cambridge Platonists to Kant: A Study in the History of an Argument. Martinus Nijhoff.

Mørch, Hedda Hassel. 2014. Panpsychism and Causation: A New Argument and a Solution to the Combination Problem. Doctoral Dissertation, University of Oslo

Nagel, Thomas. 1971. ‘Brain Bisection and the Unity of Consciousness.’ Synthese 22:396–413.

Plotinus. 1956. Enneads. Trans. and eds. Mackenna, Stephen, and Page, B. S. London: Faber and Faber Ltd.

Putnam, Hilary. 1965. ‘Psychological predicates’. In Capitan, William, and Merrill, Daniel. (eds.), Art, Mind, and Religion. Pittsburgh: University of Pittsburgh Press

Roelofs, Luke. 2014. ‘Phenomenal Blending and the Palette Problem.’ Thought 3:59–70.

Roelofs, Luke. Forthcoming‑a. ‘The Unity of Consciousness, Within and Between Subjects.’ Philosophical Studies.

Roelofs, Luke. Forthcoming-b. ‘Can We Sum Subjects? Evaluating Panpsychism’s Hard Problem.’ In Seager, William (ed.), The Routledge Handbook of Panpsychism, Routledge.

Schachter, Jean-Pierre. 2002. ‘Pierre Bayle, Matter, and the Unity of Consciousness.’ Canadian Journal of Philosophy 32(2): 241–265

Seager, William. 1995. ‘Consciousness, Information and Panpsychism.’ Journal of Consciousness Studies 2(3): 272–288

Sperry, Roger. 1964. ‘Brain Bisection and Mechanisms of Consciousness.’ In Eccles, John (ed.), Brain and Conscious Experience. Springer-Verlag

Tye, Michael. 2003. Consciousness and Persons: Unity and Identity. MIT Press

Tononi, Giulio. 2012. ‘Integrated information theory of consciousness: an updated account.’ Archives Italiennes de Biologie 150(2-3): 56–90

Investigating the Stream of Consciousness

Oliver Rashbrook-Cooper, British Academy Postdoctoral Fellow in Philosophy at the University of Oxford

There are a number of different ways in which we can fruitfully study our streams of consciousness. We might try to provide a detailed characterisation of how conscious experience seems ‘from the inside’, and closely scrutinize the phenomenology. We might try to uncover the structure of consciousness by focussing upon our temporal acuity, and examining when and how we are subject to temporal illusions. Or we might focus upon investigating the neural mechanisms upon which conscious experience depends.

Sometimes, these different approaches appear to yield contradictory results. In particular, the deliverances of introspection sometimes appear to be at odds with what is revealed both by certain temporal illusions and by research into neural mechanisms. When this occurs, what should we do? We can begin by considering two features of how consciousness phenomenologically seems.

It is natural to think of experience as unfolding in step with its objects. Over a ten second interval, for instance, I might watch someone sprint 100 metres. If I watch this event, my experience will unfold over a ten second interval. First I will hear the pistol fire, see the race begin, and so on, until I see the leader cross the finish line. My experience of the race has two features. Firstly, it seems to unfold in step with the race itself; secondly, it seems to unfold smoothly – it seems as if I am continuously aware of the race, rather than my awareness of it being fragmented into discrete episodes.

Can this characterisation of how things seem be reconciled with what we learn from other ways of investigating the stream of consciousness? To answer this question we can consider two different cases: the case of the colour phi phenomenon, and the case of discrete neural processing.

The colour phi phenomenon is a case in which the presentation of two static stimuli gives rise to an illusory experience of motion. When two coloured dots that are sufficiently close to one another are illuminated successively in a sufficiently brief window of time, one is left with the impression that there is a single dot moving from one location to the other.

This phenomenon generates a puzzle about whether experience really unfolds in step with its objects. In order for us to experience apparent motion between the two locations, we need to register the occurrence of the second dot. This makes it seem as if the experience of motion can only occur after the second dot has flashed, for without registering the second dot, we wouldn’t experience motion at all. So it seems that, in this case, the experience of motion doesn’t unfold in step with its apparent object at all. If this is right, then we have reason to doubt that experience normally unfolds in step with its objects, for if we can be wrong about this in the colour phi case, perhaps we are wrong about it in all cases.

The second kind of case is the case of discrete neural processing. There is reason to think that the neural mechanisms underpinning conscious perception are discrete (see, for example, VanRullen and Koch, 2003). This looks to be in tension with the second feature we noted earlier – that our awareness of things appears to be continuous. As in the case of colour phi, it might be tempting to think that this tells us that our impression of how things seem ‘from the inside’ is mistaken.

However, when we consider how things really strike us phenomenologically, it becomes clear that there is an alternative way to reconcile these apparently contradictory results. We can begin by noting that when we introspect, it isn’t possible for us to focus our attention upon conscious experience without focussing upon a temporally extended portion of experience – there is always a minimal interval upon which we are able to focus.

The claims that experience seems to unfold in step with its objects and seems continuous apply to these temporally extended portions of experience that we are able to focus upon when we introspect. If this is right, then we have a different way of thinking about the colour phi case. On this approach, over an interval, we have an experience of apparent motion that unfolds over the time it takes the two dots to flash. The phenomenology is, however, neutral about what occurs over the sub-intervals of this experience.

The claim that this experience unfolds over an extended interval of time isn’t inconsistent with what goes on in the colour phi case. The apparent inconsistency only arises if we think that the claim that experience seems to unfold in step with its object applies to all of the sub-intervals of this experience, no matter how short (for development and discussion of this point, see Hoerl (2013), Phillips (2014), and Rashbrook (2013a)).

Likewise, in the case of discrete neural processing, in order for the case to generate a clash with how experience appears ‘from the inside’, our characterisation of how consciousness seems must apply not only to some temporally extended portions of consciousness, but to all of them, no matter how brief. Again, we might question whether this is really how things seem.

While experience doesn’t seem to be fragmented into discrete episodes, this certainly doesn’t mean that it seems to fill every interval for which we are conscious, no matter how brief (for discussion, see Rashbrook, 2013b). As in the case of the colour phi, perhaps our characterisation of how things seem applies only to temporally extended portions of experience – so the deliverances of introspection are simply neutral about whether conscious experience fills every instant of the interval it occupies.

There is more than one way, then, to reconcile the psychological and the phenomenological strategies of enquiring about conscious experience. Rather than taking non-phenomenological investigation to reveal the phenomenology to be misleading, perhaps we should take it as an invitation to think more carefully about how things seem ‘from the inside’.

 

References:

Hoerl, Christoph. 2013. ‘A Succession of Feelings, in and of Itself, is Not a Feeling of Succession’. Mind 122:373–417.

Phillips, Ian. 2014. ‘The Temporal Structure of Experience.’ In Subjective Time: The Philosophy, Psychology, and Neuroscience of Temporality, ed. Dan Lloyd and Valtteri Arstila, 139–159. MIT Press.

Rashbrook, Oliver. 2013a. ‘An Appearance of Succession Requires a Succession of Appearances.’ Philosophy and Phenomenological Research 87:584–610.

Rashbrook, Oliver. 2013b. ‘The Continuity of Consciousness.’ European Journal of Philosophy 21:611–640.

VanRullen, Rufin, and Koch, Christof. 2003. ‘Is Perception Discrete or Continuous?’ Trends in Cognitive Sciences 7:207–13.

Infant Number Knowledge: Analogue Magnitude Reconsidered

Alexander Green, MPhil Candidate, Department of Philosophy, University of Warwick

Following Stanislas Dehaene’s The Number Sense (1997) there has been a surge in interest in number knowledge, especially the development of number knowledge in infants. This research has broadly focused on answering the following questions: What numerical abilities do infants possess, and how do these work? How are they different from the numerical abilities of adults, and how is the gap bridged in cognitive development?

The aim of this post is to provide a general introduction to infant number knowledge by focusing on the first two of these questions. There is much evidence indicating that there are two distinct systems by which infants are able to track and represent numerosity – parallel individuation and analogue magnitude. I will begin by briefly explaining what these numerical capacities are. I will then focus my discussion on the analogue magnitude system, and raise some doubts about the way in which this system is commonly understood to work.

Firstly, consider parallel individuation. This system allows infants to differentiate between sets of different quantities by tracking multiple individual objects at the same time (see Feigenson & Carey 2003; Feigenson et al. 2002; Hyde 2011). For example, if an infant were presented with three objects, parallel individuation would allow the tracking of the individual objects ({object 1, object 2, object 3}) rather than the tracking of total set-size ({three objects}). There are two further points of interest about parallel individuation. First, it only represents numerosity indirectly, because it tracks individuals rather than total set-size. Second, it is limited to sets of fewer than four individuals.
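To make the contrast concrete, here is a minimal Python sketch of the distinction. It is purely illustrative: the three-file capacity limit, the function name, and the failure behaviour are assumptions of mine, not a model from the cited work.

```python
# Minimal sketch: parallel individuation maintains one 'object file' per
# individual rather than a summary count. The three-file limit and all
# names here are illustrative assumptions only.

OBJECT_FILE_LIMIT = 3

def parallel_individuation(objects):
    """Return one object file per individual, or None when the scene
    exceeds the system's capacity (sets of four or more)."""
    if len(objects) > OBJECT_FILE_LIMIT:
        return None  # the system fails outright rather than approximating
    return {f"object_{i}" for i in range(1, len(objects) + 1)}

print(parallel_individuation(["ball", "duck", "cup"]))
# {'object_1', 'object_2', 'object_3'} -- individuals, not 'three objects'
print(parallel_individuation(["a", "b", "c", "d"]))  # None: over capacity
```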

Secondly, consider analogue magnitude. This system allows infants to discriminate between set sizes provided that the ratio between them is sufficiently large (see Xu & Spelke 2000; Feigenson et al. 2004; Xu et al. 2005). More specifically, analogue magnitude allows infants to differentiate between sets provided that the ratio is at least 2:1. Interestingly, the precise cardinal value of the sets seems to be irrelevant as long as the ratio remains constant (i.e. it applies equally to a case of two versus four as to one of twenty versus forty). Thus the limitations of the analogue magnitude system are determined by ratio, in contrast to the parallel individuation system, whose limitations are determined by specific set-size.
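The ratio signature just described can be stated precisely in a couple of lines. The following Python sketch is again only illustrative; the 2:1 threshold is the infant figure given above, while the function name and parameter are mine:

```python
# Illustrative sketch: analogue magnitude discrimination depends only on
# the ratio between two set sizes, never on their absolute values.

def discriminable(n, m, ratio_limit=2.0):
    """Can the analogue magnitude system tell sets of n and m apart?"""
    larger, smaller = max(n, m), min(n, m)
    return larger / smaller >= ratio_limit

print(discriminable(2, 4))    # True: ratio 2:1
print(discriminable(20, 40))  # True: same ratio, much larger cardinalities
print(discriminable(8, 12))   # False: 1.5:1 falls below the infant threshold
```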

So how does analogue magnitude work? I will argue that the currently dominant answer to this question is incorrect. This is because contemporary authors rightly reject the original characterisation of analogue magnitude (the accumulator model), yet fail to reject its implications.

The accumulator model of analogue magnitude is introduced by Dehaene by way of an analogy with Robinson Crusoe (1997, p.28). Suppose that Crusoe must count coconuts. To do this he might dig a hole next to a river, and dig a trench which links the river to this hole. He might also build a dam, so that he can control when the river flows into the hole. For every coconut Crusoe counts, he diverts some given amount of water into the hole. However, as Crusoe diverts more water into the hole, it becomes more difficult to differentiate between consecutive numbers of coconuts (i.e. the difference between one and two diversions of water is easier to see than that between twenty and twenty-one).

Dehaene supposes that analogue magnitude representations take a similar iconic format, i.e. they represent a physical magnitude proportional to the number of individuals in the set. Consider the following example: one object is represented by ‘_’, two objects are represented by ‘__’, three by ‘___’, and so on. Under this model, analogue magnitude is understood to represent the approximate cardinal value of a set by the use of an iterative counting method (Dehaene 1997, p.29). This partly reflects the empirical data: subjects are able to represent differences in set size (with longer lines indicating larger sets), and the importance of ratio for differentiation is accounted for (because it is more difficult to differentiate between sets which differ by smaller ratios).
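The iterative character of the model can be made explicit in code. Below is a rough Python rendering of the accumulator, under assumptions of my own (one noisy increment per counted object; the noise level is arbitrary). Note that the number of sequential steps grows with set size, a feature that matters in what follows:

```python
# Rough sketch of the accumulator model: one noisy 'diversion of water'
# per counted object. Noise level and step counting are illustrative
# assumptions, not parameters from Dehaene (1997).

import random

def accumulate(objects, noise=0.15):
    """Iteratively register each object; return the resulting magnitude
    and the number of sequential steps the process took."""
    magnitude, steps = 0.0, 0
    for _ in objects:
        magnitude += 1.0 + random.gauss(0, noise)
        steps += 1
    return magnitude, steps

_, small_steps = accumulate(range(3))
_, large_steps = accumulate(range(30))
print(small_steps, large_steps)  # 3 30: the larger set takes ten times as long
```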

More recently, however, this accumulator model of analogue magnitude has come to be rejected. The model entails that each object in a set must be individually represented in turn (the first object produces the representation ‘_’, the second produces the representation ‘__’, etc.). This implies that it would take longer for a larger number to be represented than a smaller one (as the quantity of objects to be individually represented differs). However, there are empirical reasons to reject this.

For example, there is evidence suggesting that the speed of forming analogue magnitude representations doesn’t vary between different set sizes (Wood & Spelke 2005). Additionally, infants are still able to discriminate between different set sizes in cases where they are unable to attend to the individual objects of a set in sequence (Intriligator & Cavanagh 2001). These findings suggest that it is incorrect to claim that analogue magnitude representations are formed by responding to individual objects in turn.

Despite these observations, many authors continue to advocate the implications of the accumulator model even though there isn’t empirical evidence to support them. The implications that I am referring to are that analogue magnitude represents approximate cardinal value, and that it does so by the aforementioned iconic format. For example, consider Carey’s discussions of analogue magnitude (2001, 2009). Carey takes analogue magnitude to enable infants to ‘represent the approximate cardinal value of sets’ (2009, p.127). As a result, the above iconic format (in which infants represent a physical magnitude proportional to the number of relevant objects) is still advocated (Carey 2001, p.38). This characterisation of analogue magnitude is typical of many authors (e.g. Feigenson et al. 2004; Slaughter et al. 2006; Feigenson et al. 2002; Lipton & Spelke 2003; Condry & Spelke 2008).

Given the rejection of the accumulator method, this characterisation seems difficult to justify. Analogue magnitude allows infants to differentiate between two sets of different quantities, but there seems no reason why this would require anything over and above the representation of ordinal value (i.e. ‘greater than’ and ‘less than’). Consequently the claim that analogue magnitude represents approximate cardinal value seems to be both unjustified and unnecessary. Given this, there also seems to be no justification for the iconic format of the Crusoe analogy, because it contributes nothing other than allowing analogue magnitude to represent approximate cardinal value, which, as we have seen, is empirically undermined.

In this post I have dis­cussed the abil­it­ies of par­al­lel indi­vidu­ation and ana­logue mag­nitude, in answer to the ques­tion: what numer­ic­al abil­it­ies do infants pos­sess, and how do these work? Parallel indi­vidu­ation allows infants to dif­fer­en­ti­ate between small quant­it­ies of objects (few­er than four), and ana­logue mag­nitude allows dif­fer­en­ti­ation between quant­it­ies if the ratio is suf­fi­ciently large. I have also advanced a neg­at­ive argu­ment against the dom­in­ant under­stand­ing of ana­logue mag­nitude. Many authors have rejec­ted the iter­at­ive accu­mu­lat­or mod­el without reject­ing its implic­a­tions (ana­logue mag­nitude as rep­res­ent­ing approx­im­ate car­din­al value, and its doing so by icon­ic format). This sug­gests that the lit­er­at­ure requires a new under­stand­ing of how the ana­logue mag­nitude sys­tem works.

 

References:

Carey, S. 2001. ‘Cognitive Foundations of Arithmetic: Evolution and Ontogenesis’. Mind & Language. 16(1): 37–55.

Carey, S. 2009. The Origin of Concepts. New York: OUP.

Condry, K., & Spelke, E. 2008. ‘The Development of Language and Abstract Concepts: The Case of Natural Number.’ Journal of Experimental Psychology: General. 137(1): 22–38.

Dehaene, S. 1997. The Number Sense: How the Mind Creates Mathematics. Oxford: OUP.

Feigenson, L., Carey, S., & Hauser, M. 2002. ‘The Representations Underlying Infants’ Choice of More: Object Files versus Analog Magnitudes’. Psychological Science. 13(2): 150–156.

Feigenson, L., & Carey, S. 2003. ‘Tracking Individuals via Object-Files: Evidence from Infants’ Manual Search’. Developmental Science. 6(5): 568–584.

Feigenson, L., Dehaene, S., & Spelke, E. 2004. ‘Core Systems of Number’. Trends in Cognitive Sciences. 8(7): 307–314.

Hyde, D. 2011. ‘Two Systems of Non-Symbolic Numerical Cognition’. Frontiers in Human Neuroscience. 5: 150.

Intriligator, J., & Cavanagh, P. 2001. ‘The Spatial Resolution of Visual Attention’. Cognitive Psychology. 43: 171–216.

Lipton, J., & Spelke, E. 2003. ‘Origins of Number Sense: Large-Number Discrimination in Human Infants’. Psychological Science. 14(5): 396–401.

Slaughter, V., Kamppi, D., & Paynter, J. 2006. ‘Toddler Subtraction with Large Sets: Further Evidence for an Analog-Magnitude Representation of Number’. Developmental Science. 9(1): 33–39.

Wagner, J., & Johnson, S. 2011. ‘An Association between Understanding Cardinality and Analog Magnitude Representations in Preschoolers’. Cognition. 119(1): 10–22.

Wood, J., & Spelke, E. 2005. ‘Chronometric Studies of Numerical Cognition in Five-Month-Old Infants’. Cognition. 97(1): 23–29.

Xu, F., & Spelke, E. 2000. ‘Large Number Discrimination in 6‑Month-Old Infants’. Cognition. 74(1): B1-B11.

Xu, F., Spelke, E., & Goddard, S. 2005. ‘Number Sense in Human Infants’. Developmental Science. 8(1): 88–101.

The Mental Causation Question and Emergence

Dr. Umut Baysan–University Teacher in Philosophy at the University of Glasgow

How can the mind causally influence a world that is, ultimately, made up of physical stuff? This is one way of asking the mental causation question, where mental causation is the type of causation in which either the cause or effect is a mental event or property. The question can also be put this way: How can mental events or properties (such as beliefs, desires, sensations, and so on) cause other events? Discussion of the mental causation question dates back at least to Princess Elizabeth of Bohemia’s challenge to Descartes, who took the mind to be a non-physical substance. Elizabeth’s question to Descartes was how one can make sense of the idea that the mind could move the body, or the body could influence the mind, if they are two distinct substances as such.

We take mental causation to be real. The reality of mental causation is so central to our philosophical thinking that the view that there is no such thing as mental causation, namely epiphenomenalism, has a crucial dialectical role in philosophical argumentation in the metaphysics of mind. As with Elizabeth’s criticism of Descartes, views in the metaphysics of mind are sometimes evaluated on this basis. In terms of their roles in philosophical argumentation, I find epiphenomenalism and radical scepticism to be very similar. In epistemology, radical scepticism is the view that there is no such thing as knowledge of the external world. Although pretty much everyone takes radical scepticism to be false, some epistemologists still devote time to showing why this is the case, as a view’s implication of radical scepticism is taken to be reason enough to dispense with it. Likewise in the metaphysics of mind, nearly everyone thinks that epiphenomenalism is false, but there is a very sizable literature trying to show how this is so. For this reason, we often find charges of epiphenomenalism in reductio arguments.

Although there may have been ways of tack­ling Princess Elizabeth’s chal­lenge to Descartes, the dif­fi­culty of doing so moved many con­tem­por­ary philo­soph­ers towards an onto­lo­gic­ally phys­ic­al­ist view accord­ing to which, at least in the actu­al world, there are only phys­ic­al sub­stances. Now, once we get rid of all non-physical sub­stances from our onto­logy (sub­stance phys­ic­al­ism) and yet still hold on to the exist­ence of minds (real­ism about the mind), the next set of ques­tions is: What should we do with the prop­er­ties of such minds? What are men­tal prop­er­ties? Can men­tal prop­er­ties be reduced to phys­ic­al prop­er­ties?

For the sake of brev­ity, I shall not recite the reas­ons why such a reduc­tion can­not be main­tained, so let’s just assume that men­tal prop­er­ties are not phys­ic­al prop­er­ties. (For sem­in­al work on this point, see Putnam 1967.) In a world with purely phys­ic­al sub­stances, some of which have irre­du­cibly men­tal prop­er­ties, it might look as if the men­tal caus­a­tion ques­tion can be answered eas­ily. Mental events can cause phys­ic­al events (or vice versa); such a caus­al rela­tion doesn’t require the inter­ac­tion of phys­ic­al and non-physical sub­stances, so the prob­lem of caus­al inter­ac­tion evap­or­ates.

Emergentism is a view, or rather a group of views, according to which substance physicalism is true and mental properties are irreducibly mental. There are (at least) two varieties of emergentism. The weak variety, which sometimes goes by the name “non-reductive physicalism”, takes mental properties to be realized by physical properties. (For my work on what it is for a property to be realized by another property, see Baysan 2015.) The strong variety, which goes by the name (surprise surprise!) “strong emergentism”, holds that (at least some) mental properties are as fundamental as physical properties, to the extent that they need not be realized by physical properties. (See Barnes 2012 for an account of strong emergence along these lines. For joint discussions of weak and strong emergence, see Chalmers 2006 and Wilson 2015.)

Some contemporary metaphysicians of mind, most notably Jaegwon Kim (2005), think that epiphenomenalism is still a threat to emergentism. It is thought to be a problem for the weak, non-reductive physicalist variety because of the following line of thought. The physical world is supposed to be causally closed, in the sense that if a physical event has a cause at any time, then at that time it has a sufficient physical cause. Thus, if a physical event is caused by a mental event (or property), it must be fully caused by a physical event (or property) too. If all this is true, then every physical event that has a mental cause must be causally overdetermined. (Here, the idea is that causation implies determination, and having more than one fully sufficient cause implies overdetermination.) The acceptance of such systematic causal overdetermination is taken to be absurd; the world can’t have that much redundant causation. Therefore, the combination of non-reductive physicalism and the reality of mental causation is not tenable. That is the charge, anyway.

Now, what about strong emer­gen­t­ism? In a nut­shell, defend­ers of this view can reject the idea that the phys­ic­al domain is caus­ally closed in the way that non-reductive phys­ic­al­ists typ­ic­ally assume. Given its anti-physicalist assump­tion that some prop­er­ties oth­er than the phys­ic­al ones can be fun­da­ment­al too, reject­ing the caus­al clos­ure prin­ciple is def­in­itely a live option for strong emer­gen­t­ism. However, accord­ing to some, that is pre­cisely the prob­lem with this view. From a sci­entif­ic or nat­ur­al­ist­ic point of view, how can we defend such a view if its best way of accom­mod­at­ing men­tal caus­a­tion is through reject­ing the caus­al clos­ure of the phys­ic­al domain?

The picture that I have portrayed thus far seems to suggest that unless we go all the way and reduce mental properties to physical properties, there isn’t any room for mental causation. This is what Kim and others have been trying to persuade us of over the years. But is the reasoning that has led us here really solid? Should all of the argumentative steps briefly sketched above be accepted? I have some doubts.

First, there is an emerging (pun intended) consensus that the causal argument against non-reductive physicalism sketched above has some flaws. Some philosophers aren’t convinced that non-reductive physicalism, as Kim portrays it, really implies causal overdetermination (see Yablo 1992 for a seminal account). Very roughly, the idea is that this sort of causal overdetermination appears to obtain when a whole event and its parts both cause a further event; but counting a whole and its parts as separate causes is surely “double counting”. Also, in presentations of the causal argument against non-reductive physicalism, we often come across the idea that if an event has two distinct sufficient causes, it must be genuinely causally overdetermined—this is known as “the exclusion principle”. But a principle with such a crucial dialectical role needs some backing up, and some authors have noted that there doesn’t seem to be any positive argument for the truth of the exclusion principle. (For a criticism along these lines, see Árnadóttir and Crane 2013.)

Second, the reason sketched above for resisting strong emergentism can be questioned too. Do we really have good reasons to think that the physical domain is causally closed? I don’t think that we can play the causal closure card unless we carefully study the reasons that are given in its favour. Considering its importance in argumentation in the metaphysics of mind, it would be fair to say that not enough attention has been given to the positive reasons for holding it. I am aware of three arguments for the causal closure principle: (1) Lycan’s (1987) argument that it is absurd to think that laws of conservation hold everywhere in the universe with the exception of the human skull; (2) McLaughlin’s (1992) suggestion that the failure of the causal closure principle was a scientific hypothesis in chemistry which was eventually falsified (in chemistry!); and (3) Papineau’s (2002) argument that the principle is inductively verified by the practice of twentieth-century physiologists. This is not the place to examine these three arguments in detail, but I think it is fair to say that they don’t even attempt to be conclusive. The closure principle may turn out to be true, but whether that is the case will be an empirical matter of fact, and until we have somehow established it empirically, we need to devise more solid philosophical arguments for it.

I hope this short dis­cus­sion has per­suaded you that whichever view in the meta­phys­ics of mind turns out to be true, the men­tal caus­a­tion ques­tion will play some role in determ­in­ing its plaus­ib­il­ity.

References:

Árnadóttir, S. and Crane, T. (2013). ‘There is no Exclusion Problem’, in Mental Causation and Ontology, eds. S. C. Gibb, E. J. Lowe, and R. D. Ingthorsson (Oxford: Oxford University Press).

Barnes, E. (2012). ‘Emergence and Fundamentality’. Mind, 121, pp. 873–901.

Baysan, U. (2015) ‘Realization Relations in Metaphysics’, Minds and Machines 25, pp. 247–60.

Chalmers, D. (2006). ‘Strong and Weak Emergence’, in The Re-Emergence of Emergence, eds. P. Clayton & P. Davies (Oxford: Oxford University Press).

Kim, J. (2005) Physicalism or Something Near Enough. Princeton, NJ: Princeton University Press.

Lycan, W. (1987). Consciousness. Cambridge, MA: MIT Press.

McLaughlin, B. (1992). ‘The Rise and Fall of British Emergentism’, in Emergence or Reduction?: Prospects for Nonreductive Physicalism, eds. A. Beckermann, H. Flohr, & J. Kim (Berlin: De Gruyter).

Papineau, D. (2002). Thinking about Consciousness. Oxford: Oxford University Press.

Putnam, H. (1967). ‘Psychological Predicates’, in Art, Mind, and Religion, eds. W.H. Capitan & D.D. Merrill (Pittsburgh: University of Pittsburgh Press).

Wilson, J. (2015). ‘Metaphysical Emergence: Weak and Strong’, in Metaphysics in Contemporary Physics: Poznan Studies in the Philosophy of the Sciences and the Humanities, eds. T. Bigaj and C. Wuthrich (Leiden: Brill).

Yablo, S. (1992) ‘Mental Causation’. Philosophical Review, 101, pp. 245–280.

The Cognitive Impenetrability of Recalcitrant Emotions

Dr. Raamy Majeed —Postdoctoral Research Fellow on the John Templeton Foundation pro­ject, ‘New Directions in the Study of Mind’ in the Faculty of Philosophy, University of Cambridge and By-Fellow, Churchill College, University of Cambridge

Consider the fol­low­ing emo­tion­al epis­odes. You fear Fido, your neighbour’s dog you judge to be harm­less. You are angry with your col­league, even though you know his remark wasn’t really offens­ive. You are jeal­ous of your partner’s friend, des­pite believ­ing that she does­n’t fancy him. D’Arms and Jacobson (2003) call these recal­cit­rant emo­tions: emo­tions that exist “des­pite the agent’s mak­ing a judg­ment that is in ten­sion with it” (pg. 129). The phe­nomen­on of emo­tion­al recal­cit­rance is said to raise a chal­lenge for the­or­ies of emo­tions. Drawing on the work of Greenspan (1981) and Helm (2001), Brady argues that this chal­lenge is “to explain the sense in which recal­cit­rant emo­tions involve ration­al con­flict or ten­sion” (2009: 413).

Whether we require rational conflict to account for emotional recalcitrance is debatable. Indeed, much of the present controversy involves spelling out the precise nature of this conflict. But conflict, rational or otherwise, isn’t the only feature that is pertinent to the phenomenon. What tends to get neglected is precisely what gives these emotions their name, viz. their recalcitrance: their persistent nature. To elaborate, emotional episodes are, by their very nature, episodic, and we shouldn’t expect recalcitrant emotions to last any longer than non-recalcitrant ones. Nevertheless, it is in the very nature of recalcitrant emotions that they are mulish, that they don’t succumb to our judgements, for as long as these emotional episodes last.

Here is an example. Suppose I judge that fly­ing is safe, but feel instantly afraid as soon as my plane starts to take off. But sup­pose, also, that once I real­ize that my fear is irra­tion­al, or at least, that it is in ten­sion with my judge­ment, my fear dis­sip­ates. This, argu­ably, won’t count as an instance of emo­tion­al recal­cit­rance. By con­trast, say I remain fear­ful des­pite my judge­ment. I keep think­ing to myself, ‘I know this is safe’, and yet I con­tin­ue to feel afraid. This, I ven­ture, bet­ter cap­tures what we mean by emo­tion­al recal­cit­rance. Mutatis mutandis for being afraid of Fido, being jeal­ous of your partner’s friend etc. All famil­i­ar cases of emo­tion­al recal­cit­rance seem to share this per­sist­ent fea­ture. The ques­tion is, what accounts for it?

My hypo­thes­is is this: emo­tions are recal­cit­rant to the extent that they are cog­nit­ively impen­et­rable. According to Goldie, “someone’s emo­tion or emo­tion­al exper­i­ence is cog­nit­ively pen­et­rable only if it can be affected by his rel­ev­ant beliefs” (2000: 76). So far as I can tell, the first to dis­cuss the cog­nit­ive (im)penetrability of emo­tions is Griffiths (1990, 1997), who takes one of the advant­ages of his the­ory to be pre­cisely that it accounts for recal­cit­rant emo­tions, or what he calls ‘irra­tion­al emo­tions’.

Griffiths’s explan­a­tion of emo­tion­al recal­cit­rance is neg­lected by much of the cur­rent lit­er­at­ure on the phe­nomen­on. This is war­ran­ted in one respect. Griffiths doesn’t account for the sense in which recal­cit­rant emo­tions involve ration­al con­flict, which, as men­tioned earli­er, is one of the cent­ral con­tro­ver­sies. But there is a way in which the neg­lect is unwar­ran­ted. This has to do with the charge that his account makes emo­tions too piece­meal.

To elab­or­ate, one of the most con­tro­ver­sial fea­tures of Griffiths’s account of emo­tions more gen­er­ally is that it div­vies up emo­tions into three broad types, only one of which forms a nat­ur­al kind. These are the set of evolved adapt­ive ‘affect-program’ responses, which are, more or less, cog­nit­ively impen­et­rable. They are sur­prise, fear, anger, dis­gust, sad­ness and joy. The rest are ‘high­er cog­nit­ive emo­tions’, which are cog­nit­ively pen­et­rable, like jeal­ousy, shame etc., or social con­struc­tions that are ‘essen­tially pre­tences’, e.g. romantic love.

This account, argu­ably, does make emo­tions too piece­meal, but to reject the hypo­thes­is that recal­cit­rant emo­tions are cog­nit­ively impen­et­rable for this reas­on is to throw the baby out with the bathwa­ter. Let us be neut­ral as to what emo­tions actu­ally are, as well as to the kinds of emo­tions that can be cog­nit­ively impen­et­rable. I think we can remain thus neut­ral, and still bor­row some of Griffiths’s insights con­cern­ing the cog­nit­ive impen­et­rab­il­ity of recal­cit­rant emo­tions to explain their recal­cit­rance.

Leaving aside the Ekman-esque notion that there is a set of basic emotions from which all other emotions arise, we can follow Griffiths in supposing that emotions, indeed the very same kinds of emotion, can be brought about in distinct ways. Take, for instance, the affect-program responses. The processes that typically give rise to them, as well as these responses themselves, are what Griffiths claims to be cognitively impenetrable. But he notes that they can also be triggered by processes that are cognitively penetrable. In fact, he is clear that the former doesn’t rule out the latter: “[t]he existence of a relatively unintelligent, dedicated mechanism does not imply that higher-level cognitive processes cannot initiate the same events” (1990: 187).

Griffiths exploits this account to explain emo­tion­al recal­cit­rance. In brief, the phe­nomen­on occurs when an affect-program response is triggered without the cog­nit­ive pro­cess of belief-fixation that gives rise to judge­ment. For example, “[if] only the affect-program sys­tem classes the stim­u­lus as a danger, the sub­ject will exhib­it the symp­toms of fear, but will deny mak­ing the judge­ments which folk the­ory sup­poses to be impli­cit in the emo­tion” (1990: 191).

This explanation isn’t supposed to provide us with an account of what recalcitrant emotions are, of what picks them out as a type. Rather, for Griffiths, it gives us a ‘theory’ of them: an explanation of their occurrence. Regardless of whether this theory is adequate, my view is that such an explanation can be put to further work, namely explaining the recalcitrant nature of recalcitrant emotions. While the affect-program responses don’t always run in tandem with the cognitive processes involved in belief-fixation, what explains the persistent nature of these responses is that they, as well as the processes that give rise to them, are cognitively impenetrable. Moreover, cognitive penetrability admits of degrees. Thus, the extent to which such responses are recalcitrant will depend on the extent to which they, and the processes that give rise to them, are cognitively impenetrable.

One of the advant­ages of his the­ory, accord­ing to Griffiths, is that “[t]he occur­rence of emo­tions in the absence of suit­able beliefs is con­ver­ted from a philo­soph­ers’ para­dox into a prac­tic­al sub­ject for psy­cho­lo­gic­al invest­ig­a­tion” (1990: 192). The present explan­a­tion is sim­il­arly advant­age­ous in that it provides an explan­a­tion of emo­tion­al recal­cit­rance that is empir­ic­ally veri­fi­able. But by the same token, the explan­a­tion is only of interest to the extent that it is empir­ic­ally plaus­ible. The evid­ence is far from con­clus­ive, but there is good reas­on to think we are on the right track.

McRae et al. (2012) sought to test “wheth­er the way an emo­tion is gen­er­ated influ­ences the impact of sub­sequent emo­tion reg­u­lat­ory efforts” (pg. 253). Emotions can be triggered ‘bot­tom up’, i.e. in response to per­cept­ible prop­er­ties of a stim­u­lus, or ‘top down’, i.e. in response to cog­nit­ive apprais­als of an event. They took their find­ings to “sug­gest that top-down gen­er­ated emo­tions are more suc­cess­fully down-regulated by reapprais­al than bottom-up emo­tions” (pg. 259). Emotions gen­er­ated bottom-up, then, appear to behave as if they are cog­nit­ively impen­et­rable; or at least, as if they are less pen­et­rable than ones gen­er­ated top-down. Insofar as any of the emo­tions thus gen­er­ated con­flict (in the rel­ev­ant sense) with an eval­u­at­ive judge­ment, we have an instance of emo­tion­al recal­cit­rance. Run these thoughts togeth­er, and they imply that recal­cit­rant emo­tions are recal­cit­rant to the extent that they are cog­nit­ively impen­et­rable.

 

References:

Brady, M. S. (2009). ‘The Irrationality of Recalcitrant Emotions’. Philosophical Studies 145: 413–30.

D’Arms, J., & Jacobson, D. (2003). ‘The Significance of Recalcitrant Emotion’. In A. Hatzimoysis (Ed.), Philosophy and the Emotions. Cambridge: Cambridge University Press.

Goldie, P. (2000). The Emotions: A Philosophical Exploration. Oxford University Press.

Greenspan, P. S. (1981). ‘Emotions as Evaluations’. Pacific Philosophical Quarterly 62: 158–69.

Griffiths, P. E. (1990). ‘Modularity, and the Psychoevolutionary Theory of Emotion’. Biology and Philosophy 5: 175–96.

— (1997). What Emotions Really Are. Chicago: University of Chicago Press.

Helm, B. (2001). Emotional Reason. Cambridge University Press.

McRae, K., Misra, S., Prasad, A. K., Pereira, S. C., Gross, J. J. (2012). ‘Bottom-up and Top-down Emotion Generation: Implications for Emotion Regulation’. Social Cognitive and Affective Neuroscience 7: 253–62.

 

Resisting nativism about mindreading

Marco Fenici–Independent research­er

My flat­mate, Sam, returns home from cam­pus, and tells me he is thirsty. We always have beer in the fridge, and I know he likes it, but I have already drunk the last one. What will Sam do? I pre­dict that he will go to the kit­chen look­ing for beer. At least, this is what I should do if I con­sider his reas­on­able (but incor­rect) belief that there is beer in the fridge.

As philosophers often put it, such situations rely on mindreading—our capacity to attribute mental states such as beliefs, desires, and intentions to others. Indeed, this capacity is often deemed vital for the prediction and explanation of others’ behaviour in a wide variety of situations (Dennett, 1987; Fodor, 1987); a view that has influenced much empirical research. Extended investigation of children’s capacity to predict others’ actions using elicited-response false belief tasks (Baron-Cohen, Leslie, & Frith, 1985; Wimmer & Perner, 1983), which apparently require children to perform inferential reasoning of the above kind, was, until recently, widely taken to show that it is not until age four or later that children correctly understand others to have false beliefs (Wellman, Cross, & Watson, 2001).

These findings led to a large debate between so-called simulation theorists and theory theorists, but this debate has proven largely orthogonal to the concerns of psychologists (see Apperly, 2008, 2009 for discussion). Thus, I will not discuss it further here. Instead, I will focus on a further controversy raised by the above findings: namely, the question of how infants and children acquire these socio-cognitive abilities. According to the child-as-scientist view (Bartsch & Wellman, 1995; Carey & Spelke, 1996; Gopnik & Meltzoff, 1996), children acquire a Theory of Mind (ToM) by forming, testing and revising hypotheses about the relations between mental states and observed behaviour. In contrast, proponents of modularism about mindreading (Baron-Cohen, 1995) contend that children have an innately endowed ToM provided by a domain-specific cognitive module, which developed as our species evolved (Cosmides & Tooby, 1992; Humphrey, 1976; Krebs & Dawkins, 1984).

In recent years, the nativist view has been gaining consensus following the finding that infants look longer—indicating their surprise—when they see an actor acting against a (false) belief that it would be rational to attribute to her (see Baillargeon, Scott, & He, 2010 for a review). These results are taken to indicate that infants can attribute true and false beliefs to other agents, and expect them to act coherently with these attributed mental states. Because of the very young age of the infants assessed, it has been claimed that they must possess, from birth, a predisposition to identify others’ mental states, thereby implying nativism about mindreading.

I have always been concerned about this conclusion, which seems to me a capitulation to an inference-to-the-best-explanation argument. Indeed, infants’ selective response in a spontaneous-response task does not by itself specify which properties of the agent infants are sensitive to. It is not at all clear that the infants are responding to mental properties of the agents they observe rather than to other observed features of the actor’s behaviour or of the scene (Fenici & Zawidzki, in press; Hutto, Herschbach, & Southgate, 2011; Rakoczy, 2012). Furthermore, embracing nativism about mindreading excludes the possibility that infants may learn to attribute mental states in their first years of life (see Mazzone, 2015).

Moreover, the nat­iv­ist inter­pret­a­tion of infants’ look­ing beha­viour in spontaneous-response false belief tasks mani­fests an “adulto­centric” bias. Indeed, what seems to us a full-fledged abil­ity to inter­pret oth­ers’ actions by attrib­ut­ing men­tal states may have an inde­pend­ent explan­a­tion when mani­fes­ted in the look­ing beha­viour of young­er infants. But, as it so hap­pens, there are vari­ous reas­ons to doubt that infants’ social cog­nit­ive capa­cit­ies mani­fes­ted in spontaneous-response false belief tasks are devel­op­ment­ally con­tinu­ous with later belief attri­bu­tion capa­cit­ies such as those appar­ently mani­fes­ted by four-year-olds when suc­ceed­ing in elicited-response false belief tasks (see Fenici, 2013, sec. 4 for full dis­cus­sion).

First, three-year-olds are sensitive to false beliefs in spontaneous- but not in elicited-response false belief tasks (Clements & Perner, 1994; Garnham & Ruffman, 2001), in contrast to autistic subjects, who succeed in elicited- (Happé, 1995) but not in spontaneous-response false belief tasks (Senju, 2012; Senju et al., 2010). These opposed patterns suggest that the two capacities can be decoupled.

Furthermore, the activation of the ToM module is supposed to be automatic. Looking at the empirical evidence, however, adults’ ability for perspective taking is automatic (Surtees, Butterfill, & Apperly, 2011) while their capacity to consider others’ beliefs is not (Apperly, Riggs, Simpson, Chiavarino, & Samson, 2006; Back & Apperly, 2010; but see Cohen & German, 2009 for discussion).

Finally, if infants’ ToM mechanism were mostly responsible for their later success in elicited-response false belief tasks, one would expect alleged mindreading abilities in infancy to be a strong predictor of four-year-olds’ belief attribution capacities. However, longitudinal studies have found only isolated and task-specific predictive correlations between infants’ performance in a variety of spontaneous-response false belief tasks at 15–18 months and the same children’s success in elicited-response false belief tasks at age four (Thoermer, Sodian, Vuori, Perst, & Kristen, 2012).

These con­sid­er­a­tions make it import­ant to explore altern­at­ive non-nativist explan­a­tions of the same data. In Fenici (2014), I under­took this chal­lenge and argued that infants can pro­gress­ively refine their capa­city to form an expect­a­tion about the next course of an observed action without attrib­ut­ing a men­tal state to the act­or.

In detail, extended investigation has by now demonstrated that, from 5–6 months on, infants can track the (motor) goals of others’ actions, such as grasping (Woodward, 1998, 2003). By one year, this capacity is quite sophisticated (Biro, Verschoor, & Coenen, 2011; Sommerville & Woodward, 2005; Woodward & Sommerville, 2000). These studies demonstrate that infants associate cognitive agents with the outcomes of their actions, and rely on these associations to form expectations about the agent’s future behaviour. Although this is normally taken to be equivalent to the idea that infants attribute goals, these capacities may depend on neural processes of covert (motor) imitation (Iacoboni, 2003; Wilson & Knoblich, 2005; Wolpert, Doya, & Kawato, 2003), which become progressively attuned to more abstract features of the observed action through associative learning (Cooper, Cook, Dickinson, & Heyes, 2013; Ruffman, Taumoepeau, & Perkins, 2012).

Computing the stat­ist­ic­al reg­u­lar­it­ies in observed pat­terns of action may lead infants to form expect­a­tions not only about oth­ers’ motor beha­viour but also about their gaze. Indeed, infants find it more dif­fi­cult to track target-directed gaze than target-directed motor beha­viour because the former but not the lat­ter lacks phys­ic­al con­tact between the act­or and the tar­get. They can nev­er­the­less begin form­ing asso­ci­ations between act­ors and the tar­get of their gaze by noti­cing that cog­nit­ive agents reg­u­larly act upon the objects they gaze at. This hypo­thes­is is coher­ent with empir­ic­al data attest­ing that the abil­ity to fol­low oth­ers’ gaze sig­ni­fic­antly improves around the ninth month (Johnson, Ok, & Luo, 2007; Luo, 2010; Senju, Csibra, & Johnson, 2008), and that this capa­city may merely depend on infants’ abil­ity to detect con­tin­gent pat­terns of inter­ac­tion with the gaz­ing agent (Deligianni, Senju, Gergely, & Csibra, 2011).
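One very simple way to render this statistical-learning proposal in code, offered only as an illustration and not as a model from the cited studies, is a delta-rule update in which an actor-to-gaze-target association strengthens whenever looking at an object is followed by acting on it:

```python
# Illustrative delta-rule sketch of the statistical-learning proposal:
# the gaze->action association strengthens when gaze predicts action.
# The learning rate and outcome coding are assumptions for exposition.

def update(weight, looked_at, acted_on, rate=0.1):
    """Nudge the association toward 1 when gaze is followed by action,
    and toward 0 when it is not."""
    outcome = 1.0 if (looked_at and acted_on) else 0.0
    return weight + rate * (outcome - weight)

w = 0.0
for _ in range(50):  # the infant repeatedly sees agents act on gazed-at objects
    w = update(w, looked_at=True, acted_on=True)
print(round(w, 2))  # ~0.99: gaze alone now generates an expectation of action
```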

The ana­lys­is above may also account for infants’ attested sens­it­iv­ity to goal-directed beha­viour and gaz­ing. Significantly, it may also explain the cog­nit­ive capa­cit­ies mani­fes­ted in spontaneous-response false belief tasks. In fact, sev­er­al stud­ies found that, around 12–14 months, infants do not asso­ci­ate an agent with a pos­sible tar­get of action when a bar­ri­er is pre­vent­ing her from see­ing the tar­get (Butler, Caron, & Brooks, 2000; Caron, Kiel, Dayton, & Butler, 2002; Sodian, Thoermer, & Metz, 2007). Statistical learn­ing may well account for this nov­el capa­city just as it appar­ently explains 9‑month-olds’ acquired sens­it­iv­ity to gaze dir­ec­tion from their pre­vi­ous sens­it­iv­ity to target-directed beha­viour.

Indeed, once they have learnt to associate actors with the targets of their gaze, infants can start noticing that agents do not behave similarly in the presence and in the absence of barriers in their line of gaze. Significantly, this sensitivity to the modifying role that barriers play in others’ future gazing and acting emerges right before infants start manifesting sensitivity to false beliefs in spontaneous-response false belief tasks. This may well be because developing this sensitivity is the last developmental step that infants need to achieve in order to manifest looking behaviour that is selective to others’ false beliefs in spontaneous-response false belief tasks.

In conclusion, despite the wide consensus that nativism about mindreading enjoys among philosophers and developmental psychologists, the evidence actually speaks against continuity in the development of social cognition from infancy to early childhood. Therefore, the capacities manifested in spontaneous-response tasks seem not to be the forerunners of our mature capacity to attribute mental states, and could have evolved in other ways (Fenici, in press, subm., 2012; Fenici & Carpendale, in prep.). Future research should explore the possibility that infants’ alleged mindreading capacities actually indicate some more basic tendency to form and update expectations about others’ future actions, a capacity which progressively develops over time to reflect a growing appreciation of which objects others can and cannot gaze at (Fenici, 2014; Ruffman, 2014).

References

Apperly, I. A. (2008). Beyond Simulation-theory and Theory-theory: why social cog­nit­ive neur­os­cience should use its own con­cepts to study “the­ory of mind.” Cognition, 107(1), 266–283. http://doi.org/10.1016/j.cognition.2007.07.019

Apperly, I. A. (2009). Alternative routes to perspective-taking: Imagination and rule-use may be bet­ter than sim­u­la­tion and the­or­ising. British Journal of Developmental Psychology, 27(3), 545–553. http://doi.org/10.1348/026151008X400841

Apperly, I. A., Riggs, K. J., Simpson, A., Chiavarino, C., & Samson, D. (2006). Is belief reas­on­ing auto­mat­ic? Psychological Science, 17(10), 841–844. http://doi.org/10.1111/j.1467–9280.2006.01791.x

Back, E., & Apperly, I. A. (2010). Two sources of evid­ence on the non-automaticity of true and false belief ascrip­tion. Cognition, 115(1), 54–70.

Baillargeon, R., Scott, R. M., & He, Z. (2010). False-belief under­stand­ing in infants. Trends in Cognitive Sciences, 14(3), 110–118.

Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: The MIT Press.

Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the aut­ist­ic child have a “Theory of Mind”? Cognition, 21(1), 37–46.

Bartsch, K., & Wellman, H. M. (1995). Children Talk About the Mind. New York: Oxford University Press.

Biro, S., Verschoor, S., & Coenen, L. (2011). Evidence for a unit­ary goal concept in 12-month-old infants. Developmental Science, 14(6), 1255–1260.

Butler, S. C., Caron, A. J., & Brooks, R. (2000). Infant under­stand­ing of the ref­er­en­tial nature of look­ing. Journal of Cognition and Development, 1(4), 359–377.

Carey, S., & Spelke, E. S. (1996). Science and core know­ledge. Philosophy of Science, 63(4), 515–533.

Caron, A. J., Kiel, E. J., Dayton, M., & Butler, S. C. (2002). Comprehension of the ref­er­en­tial intent of look­ing and point­ing between 12 and 15 months. Journal of Cognition and Development, 3(4), 445–464. http://doi.org/10.1080/15248372.2002.9669677

Clements, W. A., & Perner, J. (1994). Implicit under­stand­ing of belief. Cognitive Development, 9(4), 377–395.

Cohen, A. S., & German, T. C. (2009). Encoding of oth­ers’ beliefs without overt instruc­tion. Cognition, 111(3), 356–363. http://doi.org/10.1016/j.cognition.2009.03.004

Cooper, R. P., Cook, R., Dickinson, A., & Heyes, C. M. (2013). Associative (not Hebbian) learn­ing and the mir­ror neur­on sys­tem. Neuroscience Letters, 540, 28–36. http://doi.org/10.1016/j.neulet.2012.10.002

Cosmides, L., & Tooby, J. (1992). Cognitive adapt­a­tions for social exchange. In J. Barkow, L. Cosmides, & J. Tooby, The Adapted Mind: Evolutionary Psychology and the Generation of Culture (pp. 163–228). Oxford: Oxford University Press.

Deligianni, F., Senju, A., Gergely, G., & Csibra, G. (2011). Automated gaze-contingent objects eli­cit ori­ent­a­tion fol­low­ing in 8‑month-old infants. Developmental Psychology, 47(6), 1499–1503.

Dennett, D. C. (1987). The Intentional Stance. Cambridge, MA: The MIT Press.

Fenici, M. (subm.). How chil­dren approach the false belief test: Social devel­op­ment, prag­mat­ics, and the assembly of Theory of Mind. Cognition.

Fenici, M. (in press). What is the role of exper­i­ence in children’s suc­cess in the false belief test: mat­ur­a­tion, facil­it­a­tion, attun­e­ment, or induc­tion? Mind & Language.

Fenici, M. (2012). Embodied social cog­ni­tion and embed­ded the­ory of mind. Biolinguistics, 6(3–4), 276–307.

Fenici, M. (2013). Social cog­nit­ive abil­it­ies in infancy: is mindread­ing the best explan­a­tion? Philosophical Psychology. http://doi.org/10.1080/09515089.2013.865096

Fenici, M. (2014). A simple explan­a­tion of appar­ent early mindread­ing: infants’ sens­it­iv­ity to goals and gaze dir­ec­tion. Phenomenology and the Cognitive Sciences, 14, 1–19. http://doi.org/10.1007/s11097-014‑9345‑3

Fenici, M., & Carpendale, J. I. M. (in prep.). Solving the false belief test puzzle: A con­struct­iv­ist approach to the devel­op­ment of social under­stand­ing.

Fenici, M., & Zawidzki, T. W. (in press). Do infant inter­pret­ers attrib­ute endur­ing men­tal states or track rela­tion­al prop­er­ties of tran­si­ent bouts of beha­vi­or? Studia Philosophica Estonica, 9(2).

Fodor, J. A. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: The MIT Press.

Garnham, W. A., & Ruffman, T. (2001). Doesn’t see, doesn’t know: is anti­cip­at­ory look­ing really related to under­stand­ing or belief? Developmental Science, 4(1), 94–100.

Gopnik, A., & Meltzoff, A. N. (1996). Words, Thoughts, and Theories. Cambridge, Mass: The MIT Press.

Happé, F. G. E. (1995). The role of age and verbal abil­ity in the the­ory of mind task per­form­ance of sub­jects with aut­ism. Child Development, 66(3), 843–855.

Humphrey, N. K. (1976). The social func­tion of intel­lect. In P. P. G. Bateson & J. R. Hinde (Eds.), Growing Points in Ethology (pp. 303–317). Cambridge: Cambridge University Press.

Hutto, D. D., Herschbach, M., & Southgate, V. (2011). Social cog­ni­tion: mindread­ing and altern­at­ives. Review of Philosophy and Psychology, 2(3), 375–395. http://doi.org/10.1007/s13164-011‑0073‑0

Iacoboni, M. (2003). Understanding inten­tions through imit­a­tion. In S. H. Johnson-Frey (Ed.), Taking Action: Cognitive Neuroscience Perspectives on Intentional Acts (pp. 107–138). Cambridge, MA: The MIT Press.

Johnson, S. C., Ok, S., & Luo, Y. (2007). The attri­bu­tion of atten­tion: 9‑month-olds’ inter­pret­a­tion of gaze as goal-directed action. Develop­ment­al Science, 10(5), 530–537. http://doi.org/10.1111/j.1467–7687.2007.00606.x

Krebs, J. R., & Dawkins, R. (1984). Animal sig­nals: mind-reading and manip­u­la­tion. Behavioural Ecology: An Evolutionary Approach, 2, 380–402.

Luo, Y. (2010). Do 8‑month-old infants con­sider situ­ation­al con­straints when inter­pret­ing oth­ers’ gaze as goal-directed action? Infancy, 15(4), 392–419.

Mazzone, M. (2015). Being nat­iv­ist about mind read­ing: More demand­ing than you might think. In Proceedings of the EuroAsianPacific Joint Conference on Cognitive Science (EAPCogSci 2015) (Vol. 1419, pp. 288–293).

Rakoczy, H. (2012). Do infants have a the­ory of mind? British Journal of Developmental Psychology, 30(1), 59–74. http://doi.org/10.1111/j.2044–835X.2011.02061.x

Ruffman, T. (2014). To belief or not belief: Children’s the­ory of mind. Developmental Review, 34(3), 265–293. http://doi.org/10.1016/j.dr.2014.04.001

Ruffman, T., Taumoepeau, M., & Perkins, C. (2012). Statistical learn­ing as a basis for social under­stand­ing in chil­dren. British Journal of Developmental Psychology, 30(1), 87–104.

Senju, A. (2012). Spontaneous the­ory of mind and its absence in aut­ism spec­trum dis­orders. The Neuroscientist: A Review Journal Bringing Neurobiology, Neurology and Psychiatry, 18(2), 108–113. http://doi.org/10.1177/1073858410397208

Senju, A., Csibra, G., & Johnson, M. H. (2008). Understanding the ref­er­en­tial nature of look­ing: Infants’ pref­er­ence for object-directed gaze. Cognition, 108(2), 303–319. http://doi.org/10.1016/j.cognition.2008.02.009

Senju, A., Southgate, V., Miura, Y., Matsui, T., Hasegawa, T., Tojo, Y., … Csibra, G. (2010). Absence of spon­tan­eous action anti­cip­a­tion by false belief attri­bu­tion in chil­dren with aut­ism spec­trum dis­order. Development and Psychopathology, 22(02), 353–360. http://doi.org/10.1017/S0954579410000106

Sodian, B., Thoermer, C., & Metz, U. (2007). Now I see it but you don’t: 14-month-olds can rep­res­ent anoth­er person’s visu­al per­spect­ive. Developmental Science, 10(2), 199–204.

Sommerville, J. A., & Woodward, A. L. (2005). Pulling out the inten­tion­al struc­ture of action: the rela­tion between action pro­cessing and action pro­duc­tion in infancy. Cognition, 95(1), 1–30.

Surtees, A. D. R., Butterfill, S. A., & Apperly, I. A. (2011). Direct and indir­ect meas­ures of level-2 perspective-taking in chil­dren and adults. British Journal of Developmental Psychology, 30, 75–86.

Thoermer, C., Sodian, B., Vuori, M., Perst, H., & Kristen, S. (2012). Continuity from an impli­cit to an expli­cit under­stand­ing of false belief from infancy to preschool age. British Journal of Developmental Psychology, 30(1), 172–187. http://doi.org/10.1111/j.2044–835X.2011.02067.x

Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory-of-mind devel­op­ment: the truth about false belief. Child Devel­op­ment, 72(3), 655–684.

Wilson, M., & Knoblich, G. (2005). The case for motor involve­ment in per­ceiv­ing con­spe­cif­ics. Psychological Bulletin, 131(3), 460–473.

Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: rep­res­ent­a­tion and con­strain­ing func­tion of wrong beliefs in young children’s under­stand­ing of decep­tion. Cognition, 13(1), 103–128.

Wolpert, D. M., Doya, K., & Kawato, M. (2003). A uni­fy­ing com­pu­ta­tion­al frame­work for motor con­trol and social inter­ac­tion. Philosophical Transactions of the Royal Society B: Biological Sciences, 358(1431), 593–602. http://doi.org/10.1098/rstb.2002.1238

Woodward, A. L., & Sommerville, J. A. (2000). Twelve-month-old infants inter­pret action in con­text. Psychological Science, 11(1), 73–77.

 

The Experience of Trying

Josh Shepherd– Junior Research Fellow in Philosophy, Jesus College; Postdoctoral Research Fellow, Oxford Centre for Neuroethics; James Martin Fellow, Oxford Martin School

What kinds of con­scious exper­i­ences accom­pany (and per­haps assist) the exer­cise of con­trol over bod­ily and men­tal action?

For answers to this and related questions, one might turn to the rapidly growing literature on the so-called ‘sense of agency.’ The sense of agency is supposed to be something experiential and related to action, but I think it is fair to say that there is little unity in the ways scientists deploy the term. Andreas Kalckert (2014) writes that the sense of agency is ‘the experience of being able to voluntarily control limb movement.’ Hauser et al. (2011) write that the sense of agency is ‘the experience of controlling one’s own actions and their consequences.’ Damen et al. (2015) write that ‘The sense of agency refers to the ability to recognize oneself as the controller of one’s own actions and to distinguish these from actions caused or controlled by other sources.’ Chambon et al. (2013) write that the ‘sense of agency’ refers to ‘the feeling of controlling an external event through one’s own action.’ David et al. (2008) write ‘The sense of agency is a central aspect of human self-consciousness and refers to the experience of oneself as the agent of one’s own actions.’ These glosses variously emphasize voluntary control of limb movement, controlling actions and consequences, controlling consequences through action, abilities of recognition and discrimination, and an experience of oneself as agent. While these glosses might share a neighborhood, they differ in details that are, arguably, quite important if one wants to understand the kinds of experience at issue in bodily (and mental) action control.

In my own work, then, I have eschewed use of the term ‘sense of agency’, preferring instead to start with a more detailed account of the phenomenology. Consider, for example, what I have called the experience of trying.

To get a grip on this kind of exper­i­ence, con­sider lift­ing a heavy weight with one’s arm. Doing so, one will often exper­i­ence ten­sion in the elbow, strain or effort in the muscles, heav­i­ness or pull on the wrist, and so on. In addi­tion, there is an aspect of this exper­i­ence that is not to be iden­ti­fied with any of these haptic ele­ments, or with any con­junc­tion of them. When lift­ing the heavy weight, one has an exper­i­ence of try­ing to do so. Put gen­er­ally, then, we might say that the exper­i­ence of try­ing is an exper­i­ence as of dir­ect­ing activ­ity towards the sat­is­fac­tion of an inten­tion (this is not to say that pos­sess­ing a concept of inten­tion or of an intention’s sat­is­fac­tion is neces­sary for the capa­city to have such exper­i­ences). In the example at hand, it is a phe­nom­en­al char­ac­ter as of dir­ect­ing the move­ments of the arm.

With this much, many appear to agree. David Hume speaks of the ‘intern­al impres­sion’ of ‘know­ingly giv­ing rise to’ some motion of the body or per­cep­tion of the mind. His lan­guage sug­gests that he regards the ‘giv­ing rise to’ as fun­da­ment­ally dir­ect­ive.

It may be said, that we are every moment con­scious of intern­al power; while we feel, that, by the simple com­mand of our will, we can move the organs of our body, or dir­ect the fac­ulties of our mind. An act of voli­tion pro­duces motion in our limbs, or raises a new idea in our ima­gin­a­tion. This influ­ence of the will we know by con­scious­ness. (2000, 52)

 Further evid­ence for this point is that Hume thought of this experience-type as whatever is shared in both suc­cess­ful and failed actions:

A man, sud­denly struck with palsy in the leg or arm, or who had newly lost those mem­bers, fre­quently endeav­ours, at first to move them, and employ them in their usu­al offices. Here he is as much con­scious of power to com­mand such limbs, as a man in per­fect health is con­scious of power to actu­ate any mem­ber which remains in its nat­ur­al state and con­di­tion. (53)

More recently Carl Ginet has asser­ted a very sim­il­ar view.

It could seem to me that I vol­un­tar­ily exert a force for­ward with my arm without at the same time its seem­ing to me that I feel the exer­tion hap­pen­ing: the arm feels kin­es­thet­ic­ally anes­thet­ized. (Sometimes, after an injec­tion of anes­thet­ic at the dentist’s office, my tongue seems to me thus kin­es­thet­ic­ally dead as I vol­un­tar­ily exer­cise it: I then have an illu­sion that my will fails to engage my tongue.) (1990, 28)

Are these philo­soph­ers right? In a recent paper (Shepherd 2015) I argue for a pos­i­tion that seems (to me) to indic­ate the answer is yes. This is the view:

Constitutive view. The neur­al activ­ity that real­izes an exper­i­ence of try­ing is just a part of the neur­al activ­ity that dir­ects real-time action con­trol.

 My argu­ment – very briefly – is this. There is no good empir­ic­al reas­on to deny this view. And there is some empir­ic­al reas­on to adopt it. In what fol­lows I’ll offer a shortened ver­sion of the first part of this argu­ment.

Why do I say there is no good empir­ic­al reas­on to deny the view? The best empir­ic­al reas­on would stem from applic­a­tion of a cer­tain kind of the­ory of the ‘sense of agency’ to exper­i­ences of try­ing. This the­ory seeks to estab­lish that some ver­sion of a com­par­at­or mod­el of the sense of agency is cor­rect. According to the com­par­at­or mod­el:

an inten­tion pro­duces overt action by inter­act­ing with a tangled series of mod­el­ing mech­an­isms that take the intention’s rel­at­ively abstract spe­cific­a­tion of a goal-state and trans­form it into vari­ous fine-grained, func­tion­ally spe­cif­ic com­mands and pre­dic­tions. An inverse mod­el (or ‘con­trol­ler’) takes the goal state as input and out­puts a motor com­mand designed to drive the agent towards the goal-state. A for­ward mod­el receives a copy of the motor com­mand as input and out­puts a pre­dic­tion con­cern­ing its likely sens­ory con­sequences. Throughout action pro­duc­tion, the inverse mod­el receives updates from vari­ous com­par­at­or mech­an­isms. On stand­ard expos­i­tions of the mod­el (e.g., Synofzik et al. 2008), three types of com­par­at­or mech­an­ism are pos­ited. One com­pares the goal-state with feed­back from the envir­on­ment, and informs the inverse mod­el of any errors; a second com­pares the goal-state with the for­ward model’s pre­dic­tions, and informs the inverse mod­el of any errors; a third com­pares the for­ward model’s pre­dic­tion with feed­back from the envir­on­ment, and informs the for­ward mod­el (so as to devel­op a more accur­ate for­ward mod­el). (Shepherd 2015, 5)

 On a com­par­at­or account of agen­t­ive exper­i­ence, when pre­dicted and desired (or, at slower time scales, pre­dicted and actu­al) states match, the giv­en com­par­at­or ‘codes’ the activ­ity as self-generated. This code is then sent to a sys­tem hypo­thes­ized to use it in gen­er­at­ing the sense of agency. Proponents of the com­par­at­or account recog­nize that this is not a com­plete explan­a­tion of agen­t­ive exper­i­ence, but they main­tain that this match­ing pro­cess “lies at the heart of the phe­nomen­on” (Bayne 2011, 357).
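The architecture just described can be caricatured in a few lines of Python. The sketch below is mine, not the cited authors’: it compresses the inverse model, forward model, and a single comparator into toy one-dimensional functions, simply to show where the ‘self-generated’ code is supposed to come from; all numerical details are assumptions.

```python
# Toy one-dimensional rendering of the comparator architecture: an
# inverse model issues a command, a forward model predicts its sensory
# consequence, and a comparator checks prediction against feedback.
# A match is what the account 'codes' as self-generated activity.

def inverse_model(goal, current):
    return goal - current              # command driving toward the goal

def forward_model(command, current):
    return current + command           # predicted sensory consequence

def comparator(predicted, feedback, tolerance=0.01):
    return abs(predicted - feedback) < tolerance

current, goal = 0.0, 1.0
command = inverse_model(goal, current)
predicted = forward_model(command, current)

print(comparator(predicted, current + command))  # True: coded self-generated
print(comparator(predicted, current - command))  # False: mismatch, no 'agency' code
```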

Notice that according to the comparator account, the neural activities that realize agentive experience are not directly involved with action generation and control. If a comparator model can account for the experience of trying, then the constitutive view is likely false. Of course, this account was not designed to explain experiences of trying, but rather the sense of agency. Can a comparator account be extended to experiences of trying?

I argue from self-paralysis studies that the answer is no. In these studies, experimenters paralyzed themselves with neuromuscular blocks that left them conscious, and then attempted to perform various actions. Regarding the resultant experiences, here is what Simon Gandevia and colleagues report.

All reported strong sensations of effort accompanying attempted movement of the limb, as if trying to move an object of immense weight. Subjective difficulty in sustaining a steady level of effort for more than a few seconds was experienced, partly because there was no visual or auditory feedback that the effort was appropriate, and because all subjects experienced unexpected illusions of movement. As examples, attempted flexion of the fingers produced a feeling of slight but distinct extension which subsided in spite of continued effort, and attempted dorsiflexion of the ankle led to the sensation of slow plantar flexion. Further increases in effort repeatedly caused the same illusory movements. (Gandevia et al. 1993, 97)

As I note in (Shepherd 2015):

[P]articipants had experiences of trying to move a finger or ankle in a certain direction. And participants had experiences of the relevant finger or ankle moving in the other direction. This indicates that the experience of trying is both causally linked with and distinct from the experience of the body moving. (7)

This also looks like confirmation of the claims made by Hume and Ginet, and an indication that a comparator model does not work for experiences of trying. Nothing like a matching process appears to underlie these experiences.
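In the toy terms used above, the trouble can be put bluntly: in the self-paralysis case the limb never moves as commanded, so prediction and feedback persistently mismatch, and a matching-based generator should produce no agentive experience; yet subjects report vivid experiences of trying. The numbers below are invented purely for illustration.

```python
# Self-paralysis, schematically: the subject tries to flex (goal = 1.0),
# the forward model predicts movement, but the blocked limb gives no
# confirming feedback (and here even drifts the other way, as reported).
predicted_position = 1.0   # what the forward model expects
actual_position = -0.1     # illusory slight movement in the other direction

print(codes_as_self_generated(predicted_position, actual_position))
# -> False: no match, so a pure comparator account predicts no agentive
# experience here. Subjects nonetheless report strong experiences of
# trying, which is the pressure point for extending the model to trying.
```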

This leaves open a number of interesting questions. Do we have positive empirical reason to adopt the constitutive view? How do experiences of trying relate to other agentive experiences – experiences of action, perceptual experiences in action, experiences of control or of error in action, and so on? I deal with some of these questions in my (2015). I deal with others in work in progress. Dealing with all of them is more than enough work for much more than one person.

 

References:

Bayne, T. (2011). The sense of agency. In F. Macpherson (ed.), The Senses. Oxford: Oxford University Press, 355–374.

Chambon, V., Wenke, D., Fleming, S. M., Prinz, W., & Haggard, P. (2013). An online neural substrate for sense of agency. Cerebral Cortex, 23(5), 1031–1037.

Damen, T. G., Müller, B. C., van Baaren, R. B., & Dijksterhuis, A. (2015). Re-examining the agentic shift: The sense of agency influences the effectiveness of (self) persuasion. PLoS ONE, 10(6), e0128635.

Ginet, C. (1990). On Action. Cambridge: Cambridge University Press.

Hauser, M., Moore, J. W., de Millas, W., Gallinat, J., Heinz, A., Haggard, P., & Voss, M. (2011). Sense of agency is altered in patients with a putative psychotic prodrome. Schizophrenia Research, 126(1), 20–27.

Hume, D. (2000). An Enquiry Concerning Human Understanding: A Critical Edition, ed. T. L. Beauchamp. Oxford: Oxford University Press.

Kalckert, A. (2014). Moving a rubber hand: The sense of ownership and agency in bodily self-recognition.

Shepherd, J. (2015). Conscious action/Zombie action. Noûs.

Synofzik, M., Vosgerau, G., & Newen, A. (2008). Beyond the comparator model: A multifactorial two-step account of agency. Consciousness and Cognition, 17(1), 219–239.