iCog Blog

How can I credibly commit to others?

Francesca Bonalumi — PhD candidate, Department of Cognitive Science, Central European University



Imagine that you plan to go to the gym with your friend Kate. You decide together to meet in the locker room at 6pm. Why would you expect Kate to honour this agreement to meet you at the gym? Now imagine that at 5.30pm you discover that some other friends are gathering at 6pm, and you would love to join them. What restrains you from joining them, even if this is now your preferred option? Your answers to these kinds of everyday dilemmas will probably involve some reference to the fact that a commitment was in place between you and Kate.

The notion of a commitment is worth investigating, in part, because it applies to such a wide variety of cases: we are committed to our partners, our faith, our work, our promises, our goals, and even ourselves. Although there is an obvious similarity between all these situations, I will restrict this post to instances of interpersonal commitment, namely those commitments that are made by one individual to another individual (cf. Clark 2006). According to a standard philosophical definition of interpersonal commitment, a commitment is a relation among one committed agent, one agent to whom the commitment has been made, and an action which the committed agent is obligated to perform (Searle 1969; Scanlon 1998).

The ability to make and assess interpersonal commitments is crucial in supporting our prosocial behaviour: being motivated to comply with the courses of action that we have committed to, and being able to assess whether we can rely on others' commitments, enables us to perform a wide range of jointly coordinated and interpersonal activities that wouldn't otherwise be feasible (Michael & Pacherie, 2015). This ability requires psychological mechanisms that induce individuals to follow rules or plans even when it is not in their short-term interest to do so: such mechanisms can sustain phenomena ranging from the inhibition of short-term self-interested actions to the motivation for moral behaviour. I will focus on one key, yet underappreciated, aspect of this relation which sustains the whole act of committing: how the committed agent gives assurance to the other agent that she will perform the relevant action. That is, how she makes her commitment credible.

Making a commitment can be defined as an act that aims to influence another agent's behaviour by changing her expectations (e.g. my committing to help a friend influences my friend's behaviour, insofar as she can now rely on my help), and by this act the committer gains additional motivation to perform the action that she committed to (Nesse 2001; Schelling 1980). The key element in all of this is credibility: how do I credibly persuade someone that I will do something that I wouldn't do otherwise? And why would I remain motivated to do something that is no longer in my interest to do? Indeed, a dilemma faced by recipients in any communicative interaction is determining whether they can rely on the sender's signal (i.e. how to rule out the possibility that the sender is sending a fake signal) (Sperber et al., 2010). Likewise, in a cooperative context the problem for any agent is how to distinguish between a credible commitment and a fake one, and how to signal a credible commitment without being mistaken for a defector (Schelling, 1980).

The most persuasive way to make my commitment credible is to discard alternative options in order to change my future incentives, such that compliance with my commitment will remain in my best interest (or be my only possible choice). Odysseus instructing his crew to tie him to the mast of the vessel and to ignore his future orders is one strong example of committing to resist the Sirens' call in this manner; avoiding coffee while trying to quit smoking (when having a cigarette after a coffee was a well-established habit) is another.

How can we persuade others that our commitments are credible when incentives are less tangible, and alternative options cannot be completely removed? Consider a marriage, in which both partners rely on the fact that the other will remain faithful even if future incentives change. Emotions might be one way of signalling my willingness to guarantee the execution of the commitment (Frank 1988; Hirshleifer 2001). If two individuals decide to commit to a relationship, the emotional ties that they form ensure that neither will reconsider the costs and benefits of the relationship[1]. Likewise, if, during a fight, one individual displays uncontrollable rage, she gives her audience reason to believe that she won't give up the fight even if continuing to fight is to her disadvantage. One reason that emotions are taken to be credible is that they are allegedly hard to fake convincingly: some studies suggest that humans are intuitively able to recognize the appropriate emotions when observing a face (Elfenbein & Ambady, 2002), and to some extent humans can effectively discriminate between genuine and fake emotional expressions (Ekman, Davidson, & Friesen, 1990; Song, Over, & Carpenter, 2016).

Formalising a commitment by making promises, oaths or vows is another way of increasing its credibility. Interestingly, with such formalised declarations people not only manifest an emotional attachment to the object of the commitment; they also signal a willingness to put their reputation at risk. This is because the more public the commitment is (and the more people are aware of it), the higher the reputational stakes are for the committed individual.

Securing a commitment by altering your incentives, by risking your reputation, and by expressing it via emotional displays are importantly similar strategies: in each case the original set of material payoffs for performing each action changes, because now the costs of smoking or of untying yourself from the mast of a vessel are too high (if paying these costs is even still possible). We can likewise think of the emotional costs paid in case of failure (e.g. the disappointment of slipping back into our undesirable smoking habit), as well as the social costs (e.g. damage to our reputation as a reliable individual), as incentives to comply with the action that was committed to (Fessler & Quintelier 2014).



                        Cheating            Non-cheating
Before the commitment   p                   -p
After the commitment    p - (m + r + e)     -p

Fig. 1 Payoff matrix for the decision to cheat on your partner: p is the pleasure you get out of cheating, whereas m is the material cost paid in such cases (e.g. a costly divorce), r is the reputational cost and e is the emotional burden. When p is not higher than the sum of m, r and e, and the individual accurately predicts the likelihood of these outcomes, we have a situation in which breaking the commitment is not worthwhile.
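To make the matrix concrete, here is a minimal Python sketch of the two rows of Fig. 1. The function names and the numeric values of p, m, r and e are hypothetical, chosen only for illustration:

```python
# The two rows of Fig. 1, as functions of the pleasure p of cheating and the
# material (m), reputational (r) and emotional (e) costs of breaking the
# commitment.

def payoff_cheating(p, m, r, e, committed):
    """Payoff for cheating, before or after the commitment is in place."""
    return p - (m + r + e) if committed else p

def payoff_not_cheating(p):
    """Payoff for honouring the commitment: the pleasure p is forgone."""
    return -p

# Hypothetical values, for illustration only.
p, m, r, e = 10.0, 12.0, 6.0, 5.0

# Comparing the two columns after the commitment: cheating is worthwhile only
# if its payoff exceeds the payoff of honouring the commitment.
worthwhile = payoff_cheating(p, m, r, e, committed=True) > payoff_not_cheating(p)
```

With these illustrative costs, cheating after the commitment pays 10 - 23 = -13 against -10 for honouring it, so breaking the commitment is not worthwhile.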


Consistent with the idea that commitments change your payoff matrix (see Fig. 1), several studies have shown that commitments facilitate coordination and cooperation in multiple economic games. Promises were found to increase an agent's trustworthy behaviour as well as her partner's predictions about her behaviour in a trust game (Charness and Dufwenberg 2006), and to increase rates of donation in a dictator game (Sally 1995; Vanberg 2008). Spontaneous promises have also been found to be predictive of cooperative choices in a Prisoner's Dilemma game (Belot, Bhaskar & van de Ven 2010). The willingness to be bound to a specific course of action (e.g. like Odysseus) has also been found to be highly beneficial in Hawk-Dove and Battle-of-the-Sexes games, as committed players are more likely to obtain their preferred outcomes (Barclay 2017).

Interestingly, the payoff structure that an agent faces when making a commitment is similar to the payoff structure of a threat. If you are involved in a drivers' game of chicken, the outcome you want is the one in which you don't swerve. But your partner prefers the outcome in which she does not swerve, and the worst outcome is the one in which the two cars crash because neither of you swerved. The key factor is, again, whether you can credibly signal to the other driver that you won't turn the wheel, no matter what.

Some of the same means by which credibility can be conveyed in cases of commitment apply to threats as well. For instance, one efficacious way to credibly persuade the other driver is to remove the steering wheel and throw it out of the window, thereby physically preventing yourself from changing the direction of your car (Kahn 1965); another is to play a war of nerves, conveying the idea that you are so emotionally connected to your goal that you would be willing to pay the highest cost if necessary.
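The chicken-game logic can be sketched as a toy payoff matrix. The numeric payoffs below are hypothetical and only their ordering matters; the function name is mine:

```python
# Toy game of chicken. Each entry is (your payoff, other driver's payoff).
# Ordering is what matters: crashing is far worse than swerving, and driving
# straight while the other swerves is the best outcome for you.
PAYOFFS = {
    ("swerve", "swerve"): (0, 0),
    ("straight", "swerve"): (1, -1),
    ("swerve", "straight"): (-1, 1),
    ("straight", "straight"): (-10, -10),
}

def other_drivers_best_reply(your_action):
    """What the other driver does, given what she believes you will do."""
    return max(("swerve", "straight"),
               key=lambda hers: PAYOFFS[(your_action, hers)][1])

# If your commitment to driving straight is credible (say, the steering wheel
# has gone out of the window), her best reply is to swerve.
reply_to_commitment = other_drivers_best_reply("straight")
```

Once you demonstrably cannot swerve, her best reply flips from driving straight to swerving, which is exactly what makes the credible commitment (or threat) worth making.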

Threat is an interesting phenomenon to consider when investigating the role of credibility in commitment, because it might help us to understand how commitment works, and how threat and commitment might have evolved in similar fashion. What leads a non-human animal to credibly signal an intention to behave in a certain way to its audience, and what leads its audience to rely on this signal, is highly relevant for investigating commitment. It is still uncertain just how threat signals have stabilized evolutionarily, given that a selective pressure for faking the threat would also be evolutionarily advantageous (Adams & Mesterton-Gibbons 1995). The same selective pressures apply to human threats and commitments: if the goal is to signal future compliance with an action in order to change the audience's behaviour (by changing her expectations), what motivates us to then comply with that signal instead of, say, simply taking advantage of the change in our audience's behaviour?

In other words, the phenomenon of commitment is intrinsically tied to the problem of recognising (and maybe even producing) fake signals, and of deceiving others, just as in the case of making a threat. That being said, it is worth keeping in mind that threat differs importantly from commitment, insofar as the former does not entail any motivation for prosocial behaviour. In this respect, the phenomena of quiet calls and natal attraction, in which animals signal potential cooperation or a disposition not to engage in a fight, are also worth investigating further for the sake of better understanding how credibility can be established in the case of commitment (Silk 2001).

Most of our social life is built upon commitments that are either implicit or explicitly expressed. We expect people to do things even in the absence of a verbal agreement to do so, and we act in accordance with these expectations. Investigating the factors that carry this motivational force, such as credibility, is the next big challenge in better grasping the complexities of this important notion, and would help us to better understand its ontogenetic and phylogenetic development.



Adams, E. S., & Mesterton-Gibbons, M. (1995). The cost of threat displays and the stability of deceptive communication. Journal of Theoretical Biology, 175(4), 405–421.

Barclay, P. (2017). Bidding to Commit. Evolutionary Psychology, 15(1), 1474704917690740.

Belot, M., Bhaskar, V., & van de Ven, J. (2010). Promises and cooperation: Evidence from a TV game show. Journal of Economic Behavior & Organization, 73(3), 396–405.

Charness, G., & Dufwenberg, M. (2006). Promises and Partnership. Econometrica, 74, 1579–1601.

Clark, H. H. (2006). Social actions, social commitments. In S. C. Levinson & N. J. Enfield (Eds.), Roots of human sociality: Culture, cognition and interaction (pp. 126–150). New York: Bloomsbury.

Ekman, P., Davidson, R. J., & Friesen, W. V. (1990). The Duchenne smile: Emotional expression and brain physiology: II. Journal of Personality and Social Psychology, 58(2), 342–353.

Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128(2), 203–235.

Fessler, D. M. T., & Quintelier, K. (2014). Suicide bombers, weddings, and prison tattoos: An evolutionary perspective on subjective commitment and objective commitment. In R. Joyce, K. Sterelny, & B. Calcott (Eds.), Cooperation and its evolution (pp. 459–484). Cambridge, MA: The MIT Press.

Frank, R. H. (1988). Passion within reason. New York, NY: W. W. Norton & Company.

Hirshleifer, J. (2001). On the emotions as guarantors of threats and promises. In The Dark Side of the Force: Economic Foundations of Conflict Theory (pp. 198–219). Cambridge: Cambridge University Press.

Kahn, H. (1965). On Escalation: Metaphors and Scenarios. New York, NY: Praeger Publ. Co.

Michael, J., & Pacherie, E. (2015). On Commitments and Other Uncertainty Reduction Tools in Joint Action. Journal of Social Ontology, 1(1).

Nesse, R. M. (2001). Natural Selection and the Capacity for Subjective Commitment. In R. M. Nesse (Ed.), Evolution and the Capacity for Commitment (pp. 1–43). New York, NY: Russell Sage Foundation.

Sally, D. (1995). Conversation and cooperation in social dilemmas: A meta-analysis of experiments from 1958 to 1992. Rationality and Society, 7(1), 58–92.

Scanlon, T. M. (1998). What We Owe to Each Other. Cambridge, MA: Harvard University Press.

Schelling, T. C. (1980). The Strategy of Conflict. Cambridge, MA: Harvard University Press.

Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.

Silk, J. B. (2001). Grunts, Girneys, and Good Intentions: The Origins of Strategic Commitment in Nonhuman Primates. In R. M. Nesse (Ed.), Evolution and the Capacity for Commitment (pp. 138–157). New York, NY: Russell Sage Foundation.

Song, R., Over, H., & Carpenter, M. (2016). Young chil­dren dis­crim­in­ate genu­ine from fake smiles and expect people dis­play­ing genu­ine smiles to be more proso­cial. Evolution and Human Behavior, 37(6), 490–501.

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind and Language, 25(4), 359–393.

Vanberg, C. (2008). Why Do People Keep Their Promises? An Experimental Test of Two Explanations. Econometrica, 76(6), 1467–1480.


[1] Indeed, marriage itself may be a way of increasing the likelihood that a commitment will be respected in the future, because formalising the relationship in this manner increases the exit costs of the relationship.

Is the Future More Valuable than the Past?

Alison Fernandes — Post-Doctoral Fellow on the AHRC project ‘Time: Between Metaphysics and Psychology’, Department of Philosophy, University of Warwick


We differ markedly in our attitudes towards the future and past. We look forward in anticipation to tonight's tasty meal or next month's sunny holiday. While we might fondly remember these pleasant experiences, we don't happily anticipate them once they're over. Conversely, while we might dread tomorrow's meeting, or doing this year's taxes, we feel a distinct sort of relief when they're done. We also seem to prefer pleasant experiences to be in the future, and unpleasant experiences to be in the past. While we can't swap tomorrow's meeting and make it have happened yesterday, we might prefer that it had happened yesterday and was over and done with.

Asymmetries like these in how we care about the past and future can seem to make a lot of sense. After all, what's done is done, and can't be changed. Surely we're right to focus our care, effort and attention on what's to come. But do we sometimes go too far in valuing past and future events differently? In this post I'll consider one particular temporal asymmetry of value that doesn't look so rational, and how its apparent irrationality speaks against certain metaphysical ways of explaining the asymmetry.

Eugene Caruso, Daniel Gilbert, and Timothy Wilson investigated a temporal asymmetry in how we value past and future events (2008). Suppose that I ask you how much compensation would be fair to receive for undertaking 5 hours of data entry work. The answer that you give seems to depend crucially on when the work is described as taking place. Subjects judged that they should receive 101% more money if the work is described as taking place one month in the future ($125.04 USD on average) compared to one month in the past ($62.20 USD on average). Even for purely hypothetical scenarios, where no one actually expects the work to take place, we judge future work to be worth much more than past work.
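The size of this gap can be checked directly from the two reported means. The variable names below are mine; the figures are those quoted above from Caruso et al. (2008):

```python
# Mean compensation judged fair for 5 hours of data entry (Caruso et al., 2008).
future_mean = 125.04  # work described as one month in the future (USD)
past_mean = 62.20     # the same work described as one month in the past (USD)

# Percentage by which future work is valued above past work.
premium = (future_mean - past_mean) / past_mean * 100
print(round(premium))  # roughly 101, matching the "101% more" figure
```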

This value asymmetry appears in other scenarios as well (Caruso et al., 2008). Say your friend is letting you borrow their vacation home for a week. How expensive a bottle of wine do you buy as a thank-you gift? If the holiday is described as taking place in the future, subjects select wine that is 37% more expensive. Suppose that you help your neighbour move. What would be an appropriate thank-you gift for you to receive? Subjects judge that they should receive 71% more expensive bottles of wine for helping in the future, compared to the past. Say you're awarding damages for the suffering of an accident victim. Subjects judge that victims should be awarded 42% more compensation when they imagine their suffering as taking place in the future, compared to the past.

Philosophers like Craig Callender (2017) have become increasingly interested in the value asymmetry studied by Caruso and his colleagues. This is partly because there has been a long history of using asymmetries in how we care about past and future events to argue for particular metaphysical views about time (Prior, 1959). For example, say you hold a ‘growing block’ view of time, according to which the present and past exist (and are therefore ‘fixed’) while future events are not yet real (so the future is unsettled and ‘open’). One might argue that a metaphysical picture with an open future like this is needed to make sense of why we care about future events more than past events. If past events are fixed, they're not worth spending our time over, so we value them less. But because future events are up for grabs, we reasonably place greater value in them in the present.

Can one argue from the value asymmetry Caruso and his team studied to a metaphysical view about time? Much depends on what features the asymmetry has, and how these might be explained. When it comes to explaining the temporal value asymmetry, Caruso and his team discovered that it is closely aligned with another asymmetry: a temporal emotional asymmetry. More specifically, we tend to feel stronger emotions when contemplating future events than when contemplating past events.

These asymmetries are correlated in such a way as to suggest that the emotional asymmetry is a cause of the value asymmetry. Part of the evidence comes from the fact that the emotional and value asymmetries share other features. For example, we tend to feel stronger emotions when contemplating our own misfortunes, or those of others close to us, than we do when contemplating the misfortunes of strangers. The value asymmetry shares this feature: it, too, is much more strongly pronounced for events that concern oneself than for events that concern others. Subjects judge their own 5 hours of data entry work to be worth nearly twice as much money when it takes place in the future, compared to the past. But they judge the equivalent work of a stranger to be worth similar amounts of money, independently of whether the work is described as taking place in the future or in the past.

The same features that point towards an emotional explanation of the value asymmetry also point away from a metaphysical explanation. The value asymmetry is, in a certain sense, ‘perspectival’: it is strongest concerning oneself. But if metaphysical facts were to explain why future events are more valuable than past ones, it would make little sense for the asymmetry to be perspectival. After all, on metaphysical views of time like the growing block view, events are either future or not. If future events being ‘open’ is to explain why we value them more, the asymmetry in value shouldn't depend on whether they concern oneself or others. Future events are not only open when they concern me; they are also open when they concern you. So the metaphysical explanation of the value asymmetry does not look promising.

If we instead explain the value asymmetry by appeal to an emotional asymmetry, we can also trace the value asymmetry back to further asymmetries. Philosophers and psychologists have given evolutionary explanations of why we feel stronger emotions towards future events than past events (Maclaurin & Dyke, 2002; van Boven & Ashworth, 2007). Emotions help focus our energies and attention. If we generally need to align our efforts and attention towards the future (which we can control) rather than being overly concerned with the past (which we can't do anything about), then it makes sense that we're geared to feel stronger emotions when contemplating future events than past ones. Note that this evolutionary explanation requires that our emotional responses to future and past events ‘overgeneralise’. Even when we're asked about future events we can't control, or purely hypothetical future events, we still feel more strongly about them than comparable past events, because feeling more strongly about the future in general is so useful when the future events are ones that we can control.

A final nail in the coffin for a metaphysical explanation of the value asymmetry comes from thinking about whether subjects take the value asymmetry to be rational. I began with some examples of asymmetries that do seem rational. It seems rational to prefer past pains to future ones, and to feel relief when unpleasant experiences are over. Whether asymmetries like these are in fact rational is a topic of controversy in philosophy (Sullivan, forthcoming; Dougherty, 2015). Regardless, there is strong evidence that the value asymmetry Caruso studied is taken to be irrational, even by subjects whose judgements display it.

The methodology Caruso used involved ‘counterbalancing’: some subjects were asked about the future event first, some were asked about the past event first. When the results within any single group were considered, no value asymmetry was found. That is, when you ask a single person how they value an event (say, using a friend's vacation home for a week), they think its value now shouldn't depend on whether the event is in the past or future. It is only when you compare results across the two groups that the asymmetry emerges (see Table 1). It's as if we apply a consistency judgement and think that future and past events should be worth the same. But when we can't make the comparison, we value them differently. This strongly suggests that the asymmetry is not being driven by a conscious judgement that the future really is worth more than the past, or by a metaphysical picture according to which it is. If it were, we would expect the asymmetry to be more pronounced when subjects were asked about both the past and the future. Instead, the asymmetry disappears.


Use of a friend's vacation home, by order of evaluation:

Event judged    Past event first    Future event first
Past            $89.17              $129.06
Future          $91.73              $121.98

Table 1: Average amount of money (USD) that subjects judge they would spend on a thank-you gift for using a friend's vacation home in the past or future (Caruso et al., 2008).
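The pattern in Table 1 can be verified from the four cell means (values as in Table 1; the variable names are mine): within each group the past/future gap is small, while the gap across groups is large:

```python
# Cell means (USD) from Table 1: gift value for use of a friend's vacation
# home, keyed by (order of evaluation, event judged).
means = {
    ("past_first", "past"): 89.17,
    ("past_first", "future"): 91.73,
    ("future_first", "past"): 129.06,
    ("future_first", "future"): 121.98,
}

# Within each group, past and future events get nearly the same value.
within_past_first = abs(means[("past_first", "future")] - means[("past_first", "past")])
within_future_first = abs(means[("future_first", "future")] - means[("future_first", "past")])

# Across groups, the event judged first is a future event in one group and a
# past event in the other, and here a large gap opens up.
between = means[("future_first", "future")] - means[("past_first", "past")]
```

The within-group differences come out at a few dollars, while the between-group difference is over thirty dollars, which is the asymmetry emerging only when subjects cannot compare past and future directly.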


Investigations into how temporal asymmetries in value arise are allowing philosophers and psychologists to build up a much more detailed picture of how we think about time. It can seem intuitive to think of the past as fixed, and the future as open. Such intuitions have long been used to support certain metaphysical views about time. But, while metaphysical views might seem to rationalise asymmetries in our attitudes, their actual explanation seems to lie elsewhere, in much deeper evolution-driven responses. We may even be adopting metaphysical views as rationalisers of our much more basic emotional responses. If this is right, the value asymmetry not only provides a case study of how we can explain asymmetric features of our experience without appeal to metaphysics; it also suggests that psychology can help explain why we're so tempted towards certain metaphysical views in the first place.



Callender, Craig. 2017. What Makes Time Special. Oxford: Oxford University Press.

Caruso, Eugene M., Gilbert, D. T., and Wilson, T. D. 2008. A wrinkle in time: Asymmetric valuation of past and future events. Psychological Science 19(8): 796–801.

Dougherty, Tom. 2015. Future-Bias and Practical Reason. Philosophers’ Imprint. 15(30): 1−16.

Maclaurin, James & Dyke, Heather. 2002. ‘Thank Goodness That’s Over’: The Evolutionary Story. Ratio 15 (3): 276–292.

Prior, Arthur. N. 1959. Thank Goodness That’s Over. Philosophy. 34(128): 12−17.

Sullivan, Meghan. forth. Time Biases: A Theory of Rational Planning and Personal Persistence. New York: Oxford University Press.

Van Boven, Leaf & Ashworth, Laurence. 2007. Looking Forward, Looking Back: Anticipation Is More Evocative Than Retrospection. Journal of Experimental Psychology. 136(2): 289–300.

What hand gestures tell us about the evolution of language

Suzanne Aussems — Post-Doctoral Fellow/Early Career Fellow, Language & Learning Group, Department of Psychology, University of Warwick

Imagine that you are visiting a food market abroad and you want to buy a slice of cake. You know how to say “hello” in the native language, but otherwise your knowledge of the language is limited. When it is your turn to order, you greet the vendor and point at the cake of your choice. The vendor then places his knife on the cake and looks at you to see if you approve of the size of the slice. You quickly shake both of your hands and indicate a smaller width for the slice using your thumb and index finger. The vendor then cuts a smaller piece for you and you happily pay for your cake. In this example, you achieved successful communication with the help of three gestures: a pointing gesture, a conventional gesture, and an iconic gesture.

As humans, we are the only species that engages in the communication of complex and abstract ideas. This abstractness is present even in a seemingly simple example such as indicating the size of the slice of cake you desire. After all, size concepts such as ‘small’ and ‘large’ are learnt during development. What makes this sort of communication possible are the language and gestures that we have at our disposal. How is it that we came to develop language when other animals did not, and what is the role of gesture in this? In this blog post, I introduce one historically dominant theory about the origins of human language – the gesture-primacy hypothesis (see Hewes, 1999 for a historical overview).

According to the gesture-primacy hypothesis, humans first communicated in a symbolic way using gesture (e.g. movement of the hands and body to express meaning). Symbolic gestures are, for example, pantomimes that signify actions (e.g., shooting an arrow) or emblems (e.g., raising an index finger to your lips to indicate “be quiet”) that facilitate social interactions (McNeill, 1992; 2000). The gesture-primacy hypothesis suggests that spoken language emerged through adaptation of gestural communication (Corballis, 2002; Hewes, 1999). Central to this view is the idea that gesture and speech emerged sequentially.

Much of the evidence in favour of the gesture-primacy hypothesis comes from studies on nonhuman primates and great apes. Within each monkey or ape species, individuals seem to have the same basic vocal repertoire. For instance, individuals raised in isolation and individuals raised by another species still produce calls that are typical for their own species, but not calls that are typical for the foster species (Tomasello, 2008, p. 16). This suggests that these vocal calls are not learned, but are innate in nonhuman primates and great apes. Researchers believe that controlled, complex verbal communication (such as that found in humans) could not have evolved from these limited innate communicative repertoires (Kendon, 2017). This line of thinking is partly confirmed by failed attempts to teach apes how to speak, and failed attempts to teach them to produce their own calls on command (Tomasello, 2008, p. 17).

However, the repertoire of ape gestures seems to vary much more between individuals than the vocal repertoire does (Pollick & de Waal, 2007), and researchers have succeeded in teaching chimpanzees manual actions with the help of symbolic gestures derived from American Sign Language (Gardner & Gardner, 1969). Moreover, bonobos have been observed to use gestures to communicate more flexibly than they can use calls (Pollick & de Waal, 2007). The degree of flexibility in the production and understanding of gestures, especially in great apes, makes this communicative tool seem a more plausible medium than vocalisation through which language could have first emerged.

In this regard, it is notable that great apes that have been raised by humans point at food, objects, or toys they desire. For example, some human-raised apes point to a locked door when they want access to what's behind it, so that the human will open it for them (Tomasello, 2008). It is thus clear that human-raised apes understand that humans can be led to act in beneficial ways via attention-directing communicative gestures. Admittedly, there does seem to be an important type of pointing that apes seem incapable of; namely, declarative pointing (i.e., pointing for the sake of sharing attention, rather than merely directing attention) (Kendon, 2017). Nonetheless, gesture seems to be a flexible and effective communicative medium that is available to non-human primates. This fact, and the fact that vocalisations seem to be relatively inflexible in these species, play a significant role in making the gesture-primacy hypothesis a compelling theory of the origins of human language.

What about human evidence that might support the gesture-primacy hypothesis? Studies on the emergence of speech and gesture in human infants show that babies produce pointing gestures before they produce their first words (Butterworth, 2003). Shortly after their first birthday, when most children have already started to produce some words, they produce combinations of pointing gestures (point at bird) and one-word utterances (“eat”). These gesture-and-speech combinations occur roughly three months before children produce two-word utterances (“bird eats”). From an ontogenetic standpoint, then, referential behaviour appears in pointing gestures before it shows in speech. Many researchers therefore consider gesture to pave the way for early language development in babies (Butterworth, 2003; Iverson & Goldin-Meadow, 2005).

Further evid­ence con­cerns the spon­tan­eous emer­gence of sign lan­guage in deaf com­munit­ies (Senghas, Kita, & Özyürek, 2004). When sign lan­guage is passed on to new gen­er­a­tions, chil­dren use rich­er and more com­plex struc­tures than adults from the pre­vi­ous gen­er­a­tion, and so they build upon the exist­ing sign lan­guage. This phe­nomen­on has led some research­ers to believe that the devel­op­ment of sign lan­guage over gen­er­a­tions could be used as a mod­el for the evol­u­tion of human lan­guage more gen­er­ally (Senghas, Kita, & Özyürek, 2004). The fact that deaf com­munit­ies spon­tan­eously devel­op fully func­tion­al lan­guages using their hands, face, and body, fur­ther sup­ports the gesture-primacy hypo­thes­is.

Converging evidence also comes from the field of neuroscience. Xu and colleagues (2009) used functional MRI to investigate whether symbolic gesture and spoken language are processed by the same system in the human brain. They showed participants meaningful gestures and the spoken-language equivalents of these gestures. In both cases, the same specific areas in the left hemisphere were activated, mapping symbolic gestures and spoken words onto common, corresponding conceptual representations. Their findings suggest that the core of the brain's language system is not exclusively used for language processing, but functions as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols, whether these are spoken words or symbolic gestures.

In this post, I have dis­cussed com­pel­ling evid­ence in sup­port of the gesture-primacy hypo­thes­is. An intriguing ques­tion that remains unanswered is why our closest evol­u­tion­ary rel­at­ives, chim­pan­zees and bonobos, can flex­ibly use ges­ture, but not speech, for com­mu­nic­a­tion. Further com­par­at­ive stud­ies could shed light on the evol­u­tion­ary his­tory of the rela­tion between ges­ture and speech. One thing is cer­tain: ges­ture plays an import­ant com­mu­nic­at­ive role in our every­day lives, and fur­ther study­ing the phylo­geny and onto­geny of ges­ture is import­ant for under­stand­ing how human lan­guage emerged. And it may also come in handy when order­ing some cake on your next hol­i­day!



Butterworth, G. (2003). Pointing is the roy­al road to lan­guage for babies. In S. Kita (Ed.) Pointing: Where Language, Culture, and Cognition Meet (pp. 9–34). Mahwah, NJ: Lawrence Erlbaum Associates.

Corballis, M. C. (2002). From hand to mouth: The ori­gins of lan­guage. Princeton, NJ: Princeton University Press.

Gardner, R. A., & Gardner, B. (1969). Teaching sign lan­guage to a chim­pan­zee. Science, 165, 664–672.

Hewes, G. (1999). A his­tory of the study of lan­guage ori­gins and the ges­tur­al primacy hypo­thes­is. In: A. Lock, & C.R. Peters (Eds.), Handbook of human sym­bol­ic evol­u­tion (pp. 571–595). Oxford, UK: Oxford University Press, Clarendon Press.

Iverson, J. M., & Goldin-Meadow, S. (2005). Gesture paves the way for language development. Psychological Science, 16(5), 367–371. doi: 10.1111/j.0956-7976.2005.01542.x

Kendon, A. (2017). Reflections on the "gesture-first" hypothesis of language origins. Psychonomic Bulletin & Review, 24(1), 163–170. doi: 10.3758/s13423-016-1117-3

McNeill, D. (1992). Hand and mind. Chicago, IL: Chicago University Press.

McNeill, D. (Ed.). (2000). Language and ges­ture. Cambridge, UK: Cambridge University Press.

Pollick, A., & de Waal, F. (2007). Ape gestures and language evolution. PNAS, 104(19), 8184–8189. doi: 10.1073/pnas.0702624104

Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305(5691), 1779–1782. doi: 10.1126/science.1100199

Tomasello, M. (2008). The ori­gins of human com­mu­nic­a­tion. Cambridge, MA: MIT Press.

Xu, J., Gannon, P. J., Emmorey, K., Smith, J. F., & Braun, A. R. (2009). Symbolic gestures and spoken language are processed by a common neural system. PNAS, 106(49), 20664–20669. doi: 10.1073/pnas.0909197106

Conceptual short-term memory: a new tool for understanding perception, cognition, and consciousness

Henry Shevlin, Research Associate, Leverhulme Centre for the Future of Intelligence at The University of Cambridge

The notion of memory, as used in ordinary language, may seem to have little to do with perception or conscious experience. While perception informs us about the world as it is now, memory almost by definition tells us about the past. Similarly, whereas conscious experience seems like an ongoing, occurrent phenomenon, it's natural to think of memory as being more like an inert store of information, accessible when we need it but capable of lying dormant for years at a time.

However, in con­tem­por­ary cog­nit­ive sci­ence, memory is taken to include almost any psy­cho­lo­gic­al pro­cess that func­tions to store or main­tain inform­a­tion, even if only for very brief dur­a­tions (see also James, 1890). In this broad­er sense of the term, con­nec­tions between memory, per­cep­tion, and con­scious­ness are appar­ent. After all, some mech­an­ism for the short-term reten­tion of inform­a­tion will be required for almost any per­cep­tu­al or cog­nit­ive pro­cess, such as recog­ni­tion or infer­ence, to take place: as one group of psy­cho­lo­gists put it, “stor­age, in the sense of intern­al rep­res­ent­a­tion, is a pre­requis­ite for pro­cessing” (Halford, Phillips, & Wilson, 2001). Assuming, then, as many the­or­ists do, that per­cep­tion con­sists at least partly in the pro­cessing of sens­ory inform­a­tion, short-term memory is likely to have an import­ant role to play in a sci­entif­ic the­ory of per­cep­tion and per­cep­tu­al exper­i­ence.

In this latter sense of memory, two major forms of short-term store have been widely discussed in relation to perception and consciousness. The first comprises the various forms of sensory memory, and in particular iconic memory. Iconic memory was first described by George Sperling, who in 1960 demonstrated that large amounts of visually presented information were retained for brief intervals, far more than subjects were able to actually utilize for behaviour during the short window in which they were available (Figure 1). This phenomenon, dubbed partial report superiority, was brought to the attention of philosophers of mind via the work of Fred Dretske (1981) and Ned Block (1995, 2007). Dretske suggested that the rich but incompletely accessible nature of information presented in Sperling's paradigm was a marker of perceptual rather than cognitive processes. Block similarly argued that sensory memory might be closely tied to perception and, further, suggested that such sensory forms of memory could serve as the basis for rich phenomenal consciousness that 'overflowed' the capacity for cognitive access.

A second form of short-term store that has been widely discussed by both psychologists and philosophers is working memory. Very roughly, working memory is a short-term informational store that is more robust than sensory memory but also more limited in capacity. Unlike information in sensory memory, which must be cognitively accessed in order to be deployed for voluntary action, information in working memory is immediately poised for use in such behaviour, and is closely linked to notions such as cognition and cognitive access. For reasons such as these, Dretske seemed inclined to treat this kind of capacity-limited process as closely tied or even identical to thought, a suggestion followed by Block.[1] Psychologists such as Nelson Cowan (2001: 91) and Alan Baddeley (2003: 836) take encoding in working memory to be a criterion of consciousness, while global workspace theorists such as Stanislas Dehaene (2014: 63) have regarded working memory as intimately connected, if not identical, with global broadcast.[2]

The foregoing summary is over-simplistic, but hopefully serves to motivate the claim that scientific work on short-term memory mechanisms may have important roles to play in understanding both the relation between perception and cognition and conscious experience. With this idea in mind, I'll now discuss some recent evidence for a third important short-term memory mechanism, namely Molly Potter's proposed Conceptual Short-Term Memory (CSTM). This is a form of short-term memory that serves to encode not merely the sensory properties of objects (like sensory memory), but also higher-level semantic information such as categorical identity. Unlike sensory memory, it seems somewhat resistant to interference from the presentation of new sensory information: whereas iconic memory can be effaced by the presentation of new visual information, CSTM seems somewhat robust. In these respects, it is similar to working memory. Unlike working memory, however, it seems to have both a high capacity and a brief duration; information in CSTM that is not rapidly accessed by working memory is lost after a second or two (for a more detailed discussion, see Potter 2012).

Evidence for CSTM comes from a range of paradigms, only two of which I dis­cuss here (inter­ested read­ers may wish to con­sult Potter, Staub, & O’Connor, 2004; Grill-Spector and Kanwisher, 2005; and Luck, Vogel, & Shapiro, 1996). The first par­tic­u­larly impress­ive demon­stra­tion is a 2014 exper­i­ment examin­ing sub­jects’ abil­ity to identi­fy the pres­ence of a giv­en semant­ic tar­get (such as “wed­ding” or “pic­nic”) in a series of rap­idly presen­ted images (see Figure 2).

A number of features of this experiment are worth emphasizing. First, subjects in some trials were cued to identify the presence of a target only after presentation of the images, suggesting that their performance did indeed rely on memory rather than merely, for example, effective search strategies. Second, a relatively large number of images were displayed in quick succession, either 6 or 12, in both cases larger than the normal capacity of working memory. Subjects' performance in the 12-item trials was not drastically worse than in the 6-item trials, suggesting that they were not relying on normal capacity-limited working memory alone. Third, because the images were displayed one after another in the same location in quick succession, it seems unlikely that subjects were relying on sensory memory alone; as noted earlier, sensory memory is vulnerable to overwriting effects. Finally, the fact that subjects were able to identify not merely the presence of certain visual features but the presence or absence of specific semantic targets suggests that they were not merely encoding low-level sensory information about the images, but also their specific categorical identities, again telling against the idea that subjects' performance relied on sensory memory alone.

Another relevant experiment for the CSTM hypothesis is that of Belke et al. (2008). In this experiment, subjects were presented with a single array of either 4 or 8 items, and asked whether a given category of picture (such as a motorbike) was present. In some trials in which the target was absent, a semantically related distractor (such as a motorbike helmet) was present instead. The surprising result of this experiment, which involved an eye-tracking camera, was that subjects reliably fixated upon either targets or semantically related distractors with their initial eye movements, and were just as likely to do so whether the arrays contained 4 or 8 items, and even when assigned a cognitive load task beforehand (see Figure 3).

Again, these results arguably point to the existence of some further memory mechanism beyond sensory memory and working memory. If subjects were relying on working memory to direct their eye movements, then one would expect such movements to be subject to interference from the cognitive load. The hypothesis that subjects were relying exclusively on sensory mechanisms, meanwhile, runs into the problem that such mechanisms do not seem to be sensitive to high-level semantic properties of stimuli, such as their specific category identity; yet in this experiment, subjects' eye movements were sensitive to just such semantic properties of the items in the array.[3]

Interpretation of exper­i­ments such as these is a tricky busi­ness, of course (for a more thor­ough dis­cus­sion, see Shevlin 2017). However, let us pro­ceed on the assump­tion that the CSTM hypo­thes­is is at least worth tak­ing ser­i­ously, and that there may be some high-capacity semant­ic buf­fer in addi­tion to more widely accep­ted mech­an­isms such as icon­ic memory and work­ing memory. What rel­ev­ance might this have for debates in philo­sophy and cog­nit­ive sci­ence? I will now briefly men­tion three such top­ics. Again, I will be over­sim­pli­fy­ing some­what, but my goal will be to out­line some areas where the CSTM hypo­thes­is might be of interest.

The first such debate con­cerns the nature of the con­tents of per­cep­tion. Do we merely see col­ours, shapes, and so on, or do we per­ceive high-level kinds such as tables, cats, and Donald Trump (Siegel, 2010)? Taking our cue from the data on CSTM, we might sug­gest that this ques­tion can be reframed in terms of which forms of short-term memory are genu­inely per­cep­tu­al. If we take there to be good grounds for con­fin­ing per­cep­tu­al rep­res­ent­a­tion to the kinds of rep­res­ent­a­tions in sens­ory memory, then we might be inclined to take an aus­tere view of the con­tents of exper­i­ence. By con­trast, if the kind of pro­cessing involved in encod­ing in CSTM is taken to be a form of late-stage per­cep­tion, then we might have evid­ence for the pres­ence of high-level per­cep­tu­al con­tent. It might reas­on­ably be objec­ted that this move is merely ‘kick­ing the can down the road’ to ques­tions about the perception-cognition bound­ary, and does not by itself resolve the debate about the con­tents of per­cep­tion. However, more pos­it­ively, this might provide a way of ground­ing largely phe­nomen­o­lo­gic­al debates in the more con­crete frame­works of memory research.

A second key debate where CSTM may play a role con­cerns the pres­ence of top-down effects on per­cep­tion. A copi­ous amount of exper­i­ment­al data (dat­ing back to early work by psy­cho­lo­gists such as Perky, 1910, but pro­lif­er­at­ing espe­cially in the last two dec­ades) has been pro­duced in sup­port of the idea that there are indeed ‘top-down’ effects on per­cep­tion, which in turn has been taken to sug­gest that our thoughts, beliefs, and desires can sig­ni­fic­antly affect how the world appears to us. Such claims have been force­fully chal­lenged by the likes of Firestone and Scholl (2015), who have sug­ges­ted that the rel­ev­ant effects can often be explained in terms of, for example, post­per­cep­tu­al judg­ment rather than per­cep­tion prop­er.

However, the CSTM hypothesis may again offer a compromise position. By distinguishing core perceptual processes (namely, those that rely on sensory buffers such as iconic memory) from the kind of later categorical processing performed by CSTM, other positions become available in the interpretation of alleged cases of top-down effects on perception. For example, Firestone and Scholl claim that many such results fail to properly distinguish perception from judgment, suggesting that, in many cases, experimentalists' results can be interpreted purely in terms of strictly cognitive effects rather than as involving effects on perceptual experience. However, if CSTM is a distinct psychological process operative between core perceptual processes and later central cognitive processes, then appeals to things such as 'perceptual judgments' may be better founded than Firestone and Scholl seem to think. This would allow us to claim that at least some putative cases of top-down effects went beyond mere postperceptual judgments, while also respecting the hypothesis that early vision is encapsulated (see Pylyshyn, 1999).

A final debate in which CSTM may be of interest is the question of whether perceptual experience is richer than (or 'overflows') what is cognitively accessed. As noted earlier, Ned Block has argued that information in sensory forms of memory may be conscious even if it is not accessed, or even accessible, to working memory (Block, 2007). This would explain phenomena such as the apparent 'richness' of experience: if we imagine standing in Times Square, surrounded by chaos and noise, it is phenomenologically tempting to think we can only focus on and access a tiny fraction of our ongoing experiences at any one moment. A common challenge to this kind of claim is that it threatens to divorce consciousness from personal-level cognitive processing, leaving us open to extreme possibilities such as the 'panpsychic disaster' of perpetually inaccessible conscious experience in very early processing areas such as the LGN (Prinz, 2007). Again, CSTM may offer a compromise position. As noted earlier, the capacity of CSTM does indeed seem to overflow the sparse resources of working memory. However, it also seems to rely on personal-level processing, such as an individual's store of learned categories. One new position, for example, might claim that information must at least reach the stage of CSTM to be conscious, allowing that perceptual experience may indeed overflow working memory while ruling it out in early sensory areas.

These are all bold sug­ges­tions in need of extens­ive cla­ri­fic­a­tion and argu­ment, but it is my hope that I have at least demon­strated to the read­er how CSTM may be a hypo­thes­is of interest not merely to psy­cho­lo­gists of memory, but also those inter­ested in broad­er issues of men­tal archi­tec­ture and con­scious­ness. And while I should also stress that CSTM remains a work­ing hypo­thes­is in the psy­cho­logy of memory, it is one that I think is worth explor­ing on grounds of both sci­entif­ic and philo­soph­ic­al interest.



Baddeley, A. D. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829–839.

Belke, E., Humphreys, G., Watson, D., Meyer, A., & Telling, A. (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Perception & Psychophysics, 70(8), 1444–1458.

Bergström, F., & Eriksson, J. (2014). Maintenance of non-consciously presen­ted inform­a­tion engages the pre­front­al cor­tex. Frontiers in Human Neuroscience 8:938.

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–247.

Block, N. (2007). Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30, 481–499.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114.

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking Press.

Dretske, F. (1981). Knowledge and the Flow of Information. MIT Press.

Firestone, C., & Scholl, B. J. (2015). Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, 1–77.

Grill-Spector, K., & Kanwisher, N. (2005). Visual Recognition. Psychological Science, 16(2), 152–160.

Halford, G. S., Phillips, S., & Wilson, W. H. (2001). Processing capacity limits are not explained by storage limits. Behavioral and Brain Sciences, 24(1), 123–124.

James, W. (1890). The Principles of Psychology. Dover Publications.

Luck, S. J., Vogel, E. K., & Shapiro, K. L. (1996). Word mean­ings can be accessed but not repor­ted dur­ing the atten­tion­al blink. Nature, 383(6601), 616–618.

Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing con­cepts of work­ing memory. Nature Neuroscience, 17(3), 347–356.

Potter, M. C. (2012). Conceptual Short Term Memory in Perception and Thought. Frontiers in Psychology, 3:113.

Potter, M. C., Staub, A., & O’Connor, D. H. (2004). Pictorial and con­cep­tu­al rep­res­ent­a­tion of glimpsed pic­tures. Journal of Experimental Psychology: Human Perception and Performance, 30, 478–489.

Prinz, J. (2007). Accessed, accessible, and inaccessible: Where to draw the phenomenal line. Behavioral and Brain Sciences, 30(5–6).

Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3).

Shevlin, H. (2017). Conceptual Short-Term Memory: A Missing Part of the Mind? Journal of Consciousness Studies, 24, No. 7–8.

Siegel, S. (2010). The Contents of Visual Experience. Oxford University Press.

Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs: General and Applied, 74, 1–29.


[1] Note that Dretske does not use the term work­ing memory in this con­text, but clearly has some such pro­cess in mind, as made clear by his ref­er­ence to capacity-limited mech­an­isms for extract­ing inform­a­tion.

[2] A com­plic­at­ing factor in dis­cus­sion of work­ing memory comes from the recent emer­gence of vari­able resource mod­els of work­ing memory (Ma et al., 2014) and the dis­cov­ery that some forms of work­ing memory may be able to oper­ate uncon­sciously (see, e.g., Bergström & Eriksson, 2014).

[3] Given that the arrays remained vis­ible to sub­jects through­out the exper­i­ment, one might won­der why this exper­i­ment has rel­ev­ance for our under­stand­ing of memory. However, as noted earli­er, I take it that any short-term pro­cessing of inform­a­tion pre­sumes some kind of under­ly­ing tem­por­ary encod­ing mech­an­ism.

Functional Localization—Complicated and Context-Sensitive, but Still Possible

Dan Burnston—Assistant Professor, Philosophy Department, Tulane University, Member Faculty in the Tulane Brain Institute

The ques­tion of wheth­er func­tions are loc­al­iz­able to dis­tinct parts of the brain, aside from its obvi­ous import­ance to neur­os­cience, bears on a wide range of philo­soph­ic­al issues—reductionism and mech­an­ist­ic explan­a­tion in philo­sophy of sci­ence; cog­nit­ive onto­logy and men­tal rep­res­ent­a­tion in philo­sophy of mind, among many oth­ers. But philo­soph­ic­al interest in the ques­tion has only recently begun to pick up (Bergeron, 2007; Klein, 2012; McCaffrey, 2015; Rathkopf, 2013).

I am a “con­tex­tu­al­ist” about loc­al­iz­a­tion: I think that func­tions are loc­al­iz­able to dis­tinct parts of the brain, and that dif­fer­ent parts of the brain can be dif­fer­en­ti­ated from each oth­er on the basis of their func­tions (Burnston, 2016a, 2016b). However, I also think that what a par­tic­u­lar part of the brain does depends on beha­vi­or­al and envir­on­ment­al con­text. That is, a giv­en part of the brain might per­form dif­fer­ent func­tions depend­ing on what else is hap­pen­ing in the organism’s intern­al or extern­al envir­on­ment.

Embracing contextualism, as it turns out, involves questioning some deeply held assumptions within neuroscience, and connects the question of localization with other debates in philosophy. In neuroscience, localization is generally construed in what I call absolutist terms. Absolutism is a form of atomism: it suggests that localization can be successful only if 1–1 mappings between brain areas and functions can be found. Since genuine multifunctionality is antithetical to atomist assumptions, it has historically not been closely analyzed as a concept in systems or cognitive neuroscience.

In philo­sophy, con­tex­tu­al­ism takes us into ques­tions about what con­sti­tutes good explan­a­tion—in this case, func­tion­al explan­a­tion. Debates about con­tex­tu­al­ism in oth­er areas of philo­sophy, such as semantics and epi­stem­o­logy (Preyer & Peter, 2005), usu­ally shape up as fol­lows. Contextualists are impressed by data sug­gest­ing con­tex­tu­al vari­ation in the phe­nomen­on of interest (usu­ally the truth val­ues of state­ments or of know­ledge attri­bu­tions). In response, anti-contextualists worry that there are neg­at­ive epi­stem­ic con­sequences to embra­cing this vari­ation. The res­ult­ing explan­a­tions will not, on their view, be suf­fi­ciently power­ful or sys­tem­at­ic (Cappelen & Lepore, 2005). We end up with explan­a­tions that do not gen­er­al­ize bey­ond indi­vidu­al cases. Hence, accord­ing to anti-contextualists, we should be motiv­ated to come up with the­or­ies that deny or explain away the data that seem­ingly sup­port con­tex­tu­al vari­ation.

In order to argue for con­tex­tu­al­ism in the neur­al case, then, one must first estab­lish the data that sug­gests con­tex­tu­al vari­ation, then artic­u­late a vari­ety of con­tex­tu­al­ism that (i) suc­ceeds at dis­tin­guish­ing brain areas in terms of their dis­tinct func­tions, and (ii) describes genu­ine gen­er­al­iz­a­tions.

Usually, in sys­tems neur­os­cience, the goal is to cor­rel­ate physiolo­gic­al responses in par­tic­u­lar brain areas with par­tic­u­lar types of inform­a­tion in the world, sup­port­ing the claim that the responses rep­res­ent that inform­a­tion. I have pur­sued a detailed case study of per­cep­tu­al area MT (also known as “V5” or the “middle tem­por­al” area). The text­book descrip­tion of MT is that it rep­res­ents motion—it has spe­cif­ic responses to spe­cif­ic pat­terns of motion, and vari­ations amongst its cel­lu­lar responses rep­res­ent dif­fer­ent dir­ec­tions and velo­cit­ies. Hence, MT has the uni­vocal func­tion of rep­res­ent­ing motion: an abso­lut­ist descrip­tion.

However, MT research in the last 20 years has uncovered data which strongly sug­gests that MT is not just a motion detect­or. I will only list some of the rel­ev­ant data here, which I dis­cuss exhaust­ively in oth­er places. Let’s con­sider a per­cep­tu­al “con­text” as a com­bin­a­tion of per­cep­tu­al features—including shape/orientation, depth, col­or, luminance/brightness, and motion. On the tra­di­tion­al hier­archy, each of these fea­tures has its own area ded­ic­ated to rep­res­ent­ing it. Contextualism, altern­at­ively, starts from the assump­tion that dif­fer­ent com­bin­a­tions of these fea­tures might res­ult in a giv­en area rep­res­ent­ing dif­fer­ent inform­a­tion.

  • Despite the traditional view that MT is "color blind" (Livingstone & Hubel, 1988), MT in fact responds to the identity of colors when color is useful in disambiguating a motion stimulus. In this case, MT still arguably represents motion, but it uses color as a contextual cue for doing so.
  • Over 93% of MT cells represent coarse depth (the rough distance of an object away from the perceiver). Their tuning for depth is independent of their tuning for motion, and many cells represent depth even in stationary stimuli. These depth signals are predictive of psychophysical results.
  • A majority of MT cells also have specific response properties for the fine-depth features of tilt and slant (depth signals resulting from the 3-D shape and orientation of objects), and these can be cued by a variety of distinct features, including binocular disparity and relative velocity.

How do these res­ults sup­port con­tex­tu­al­ism? Consider a par­tic­u­lar physiolo­gic­al response to a stim­u­lus in MT. If the data is cor­rect, then this sig­nal might rep­res­ent motion, or it might rep­res­ent depth—and indeed, either coarse or fine depth—depending on the con­text. Or, it might rep­res­ent a com­bin­a­tion of those influ­ences.[1]

The contextualism I advocate focuses on the type of descriptions we should invoke in theorizing about the functions of brain areas. First, our descriptions should be open conjunctions: the function of an area should be described as a conjunction of the different representational functions it serves and the contexts in which it serves those functions. So, MT represents motion in a particular range of contexts, but also represents other types of information in different contexts, including absolute depth in both stationary and moving stimuli, and fine depth in contexts involving tilt and slant, as defined by either relative disparity or relative velocity.

When I say that a con­junc­tion is “open,” what I mean is that we shouldn’t take the func­tion­al descrip­tion as com­plete. We should see it as open to amend­ment as we study new con­texts. This open­ness is vital—it is an induc­tion on the fact that the func­tion­al descrip­tion of MT has changed as new con­texts have been explored—but also leads us pre­cisely into what both­ers anti-contextualists (Rathkopf, 2013). The worry is that open-descriptions do not have the the­or­et­ic­al strength that sup­ports good explan­a­tions. I argue that this worry is mis­taken.

First, note that con­tex­tu­al­ist descrip­tions can still func­tion­ally decom­pose brain areas. The key to this is the index­ing of func­tions to con­texts. Compare MT to V4. While V4 also rep­res­ents “motion” con­strued broadly (in “kin­et­ic” or mov­ing edges), col­or, and fine depth, the con­texts in which V4 does so dif­fer from MT. For instance, V4 rep­res­ents col­or con­stan­cies which are not present in MT responses. V4’s spe­cif­ic com­bin­a­tion of sens­it­iv­it­ies to fine depth and curvature allows it to rep­res­ent pro­tuber­ances—curves in objects that extend towards the perceiver—which MT can­not rep­res­ent. So, the types of inform­a­tion that these areas rep­res­ent, along with the con­texts in which they rep­res­ent them, tease apart their func­tions.

Indexing to contexts also points the way to solving the problem of generalization, so long as we appropriately moderate our expectations. For instance, on contextualism it is still a powerful generalization that MT represents motion. This is substantiated by the wide range of contexts in which it represents motion, including moving dots, moving bars, and color-segmented patterns. It's just that representing motion is not a universal generalization about its function; it is a generalization with more limited scope. Similarly, MT represents fine depth in some contexts (tilt and slant, defined by disparity or velocity), but not in all of them (protuberances). Of course, if the function of MT is genuinely context-sensitive, then universal generalizations about its function will not be possible. Hence, insisting on universal generalizations is not an open strategy for the absolutist, at least not without begging the question.

The real crux of the debate, I believe, is about the notion of pro­ject­ab­il­ity. We want our the­or­ies not just to describe what has occurred, but to inform future hypo­thes­iz­ing about nov­el situ­ations. Absolutists hope for a power­ful form of law-like pro­ject­ab­il­ity, on which a suc­cess­ful func­tion­al descrip­tion tells us for cer­tain what that area will do in new con­texts. The “open” struc­ture of con­tex­tu­al­ism pre­cludes this, but this doesn’t both­er the con­tex­tu­al­ist. This situ­ation might seem remin­is­cent of sim­il­ar stale­mates regard­ing con­tex­tu­al­ism in oth­er areas of philo­sophy.

There are two ways I have sought to break the stale­mate. First is to define a notion of pro­ject­ab­il­ity that informs sci­entif­ic prac­tice, but is com­pat­ible with con­tex­tu­al­ism. Second is to show that even very gen­er­al abso­lut­ist descrip­tions fail to deliv­er on the sup­posed explan­at­ory advant­ages of abso­lut­ism. The key to a con­tex­tu­al­ist notion of pro­ject­ab­il­ity, in my view, is to look for a form of pro­ject­ab­il­ity that struc­tures invest­ig­a­tion, rather than giv­ing law­ful pre­dic­tions. The basic idea is this: giv­en a new con­text, the null hypo­thes­is for an area’s func­tion in that con­text should be that it per­forms its pre­vi­ously known func­tion (or one of its known func­tions). I call this role a min­im­al hypo­thes­is, and the idea is that cur­rently known func­tion­al prop­er­ties struc­ture hypo­thes­iz­ing and invest­ig­a­tion in nov­el con­texts, by provid­ing three options: (i) either the area does not func­tion at all in the nov­el con­text (per­haps MT does not make any func­tion­al con­tri­bu­tion to, say, pro­cessing emo­tion­al valence); (ii) it func­tions in one of its already known ways, in which case anoth­er con­text gets indexed to, and gen­er­al­izes, an already known con­junct, or (iii) it per­forms a new func­tion in that con­text, for­cing a new con­junct to be added to the over­all descrip­tion of the area (indexed to the nov­el con­text, of course). While I won’t go into details here, I argue in (Burnston, 2016a) that this kind of reas­on­ing has shaped the pro­gress of under­stand­ing MT func­tion.
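The logic of the minimal hypothesis can be made concrete with a toy sketch in Python. This is purely illustrative: the contexts and functions below are simplified stand-ins, not real neuroscientific data, and the dictionary representation is my own device for modeling an "open conjunction" of context-indexed functions.

```python
# Toy sketch of the "minimal hypothesis" reasoning described above.
# All contexts and functions are simplified, hypothetical stand-ins.

# An area's functional description as an open conjunction:
# a mapping from contexts to the information represented in them.
mt_description = {
    "moving dots": "motion",
    "moving bars": "motion",
    "color-segmented patterns": "motion",
    "stationary stimuli": "coarse depth",
    "tilt/slant via disparity": "fine depth",
}

def minimal_hypothesis(description, novel_context, observed_function):
    """Amend an open conjunctive description given data from a novel context.

    Mirrors the three options in the text:
      (i)   the area makes no functional contribution in the context;
      (ii)  a known function generalizes to the new context;
      (iii) a new conjunct is added, indexed to the novel context.
    """
    if observed_function is None:                      # option (i)
        return description, "no contribution in this context"
    updated = dict(description)                        # keep the conjunction open
    updated[novel_context] = observed_function         # index function to context
    if observed_function in description.values():      # option (ii)
        return updated, "known function generalizes"
    return updated, "new conjunct added"               # option (iii)

# Example: suppose MT turns out to represent coarse depth in a new context.
new_desc, outcome = minimal_hypothesis(
    mt_description, "random-dot stereograms", "coarse depth")
```

Note how options (ii) and (iii) fall out of whether the observed function already figures somewhere in the existing description; either way the description only grows, reflecting the induction that MT's functional description has been amended as new contexts were explored.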

One option open to a defend­er of abso­lut­ism is to attempt to explain away the data sug­gest­ing con­tex­tu­al vari­ation by chan­ging the type of func­tion­al descrip­tion that is sup­posed to gen­er­al­ize over all con­texts (Anderson, 2010; Bergeron, 2007; Rathkopf, 2013). For instance, rather than say­ing that a part of the brain rep­res­ents a spe­cif­ic type of inform­a­tion, maybe we should say that it per­forms the same type of com­pu­ta­tion, whatever inform­a­tion it is pro­cessing. I have called this kind of approach “com­pu­ta­tion­al abso­lut­ism” (Burnston, 2016b).

While com­pu­ta­tion­al neur­os­cience is an import­ant the­or­et­ic­al approach, it can’t save abso­lut­ism. My argu­ment against the view starts from an empir­ic­al premise—in mod­el­ing MT, there is not one com­pu­ta­tion­al descrip­tion that describes everything MT does. Instead, there are a range of the­or­et­ic­al mod­els that each provide good descrip­tions of aspects of MT func­tion. Given this lack of uni­ver­sal gen­er­al­iz­a­tion, the com­pu­ta­tion­al abso­lut­ist has some options. They might move towards more gen­er­al levels of com­pu­ta­tion­al descrip­tion, hop­ing to sub­sume more spe­cif­ic mod­els. The prob­lem with this is that the most gen­er­al com­pu­ta­tion­al descrip­tions in neur­os­cience are what are called canon­ic­al com­pu­ta­tions (Chirimuuta, 2014)—descriptions that can apply to vir­tu­ally all brain areas. But if this is the case, then these descrip­tions won’t suc­cess­fully dif­fer­en­ti­ate brain areas based on their func­tion. Hence, they don’t con­trib­ute to func­tion­al loc­al­iz­a­tion.

On the other hand, suggesting that it is something about the way these computations are applied in particular contexts runs right into the problem of contextual variation. Producing a model that predicts what, say, MT will do in cases of pattern motion or reverse-phi phenomena simply does not predict what functional responses MT will have to depth—not, at least, without investigating and building in knowledge about its physiological responses to those stimuli. So, even if general models are helpful in generating predictions in particular instances, they don't explain what goes on in them. If this description is right, then the supposed explanatory gain of CA is an empty promise, and contextual analysis of function is necessary. My view of the role of highly general models mirrors the views offered by Cartwright (1999) and Morrison (2007) in the physical sciences.

Some caveats are in order here. I’ve only talked about one brain area, and as McCaffrey (2015) points out, dif­fer­ent areas might be amen­able to dif­fer­ent kinds of func­tion­al ana­lys­is. Perceptual areas are import­ant, how­ever, because they are paradigm suc­cess cases for func­tion­al loc­al­iz­a­tion. If con­tex­tu­al­ism works here, it can work else­where, as well as for oth­er units of ana­lys­is, such as cell pop­u­la­tions and net­works (Rentzeperis, Nikolaev, Kiper, & van Leeuwen, 2014). I share McCaffrey’s plur­al­ist lean­ings, but I think that a place for con­tex­tu­al­ist func­tion­al ana­lys­is must be made if func­tion­al decom­pos­i­tion is to suc­ceed. The con­tex­tu­al­ist approach is also com­pat­ible with oth­er frame­works, such as Klein’s (2017) focus on “difference-making” in under­stand­ing the func­tion of brain areas.

I’ll end with a teas­er about my cur­rent pro­ject on these top­ics (Burnston, in prep). Note that, if the func­tion of brain areas can genu­inely shift with con­text, this is not just a the­or­et­ic­al prob­lem, but a prob­lem for the brain. Other parts of the brain must inter­act with MT dif­fer­ently depend­ing on wheth­er it is cur­rently rep­res­ent­ing motion, coarse depth, fine depth, or some com­bin­a­tion. If this is the case, we can expect there to be mech­an­isms in the brain that medi­ate these shift­ing func­tions. Unsurprisingly, I am not the first to note this prob­lem. Neuroscientists have begun to employ con­cepts from com­mu­nic­a­tion and inform­a­tion tech­no­logy to show how physiolo­gic­al activ­ity from the same brain area might be inter­preted dif­fer­ently in dif­fer­ent con­texts, for instance by encod­ing dis­tinct inform­a­tion in dis­tinct dynam­ic prop­er­ties of the sig­nal (Akam & Kullmann, 2014). Contextualism informs the need for this kind of approach. I am cur­rently work­ing on explic­at­ing these frame­works and show­ing how they allow for func­tion­al decom­pos­i­tion even in dynam­ic and context-sensitive neur­al net­works.


[1] The high pro­por­tion and reg­u­lar organ­iz­a­tion of depth-representing cells in MT res­ists the tempta­tion to try to save inform­a­tion­al spe­cificity by sub­divid­ing MT into smal­ler units, as is nor­mally done for V1. V1 is stand­ardly sep­ar­ated into dis­tinct pop­u­la­tions of ori­ent­a­tion, wavelength, and displacement-selective cells, but this kind of move is not avail­able for MT.



Akam, T., & Kullmann, D. M. (2014). Oscillatory mul­ti­plex­ing of pop­u­la­tion codes for select­ive com­mu­nic­a­tion in the mam­mali­an brain. Nature Reviews Neuroscience, 15(2), 111–122.

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33(4), 245–266; discussion 266–313. doi: 10.1017/S0140525X10000853

Bergeron, V. (2007). Anatomical and func­tion­al mod­u­lar­ity in cog­nit­ive sci­ence: Shifting the focus. Philosophical Psychology, 20(2), 175–195.

Burnston, D. C. (2016a). Computational neuroscience and localized neural function. Synthese, 1–22. doi: 10.1007/s11229-016-1099-8

Burnston, D. C. (2016b). A contextualist approach to functional localization in the brain. Biology & Philosophy, 1–24. doi: 10.1007/s10539-016-9526-2

Burnston, D. C. (In pre­par­a­tion). Getting over atom­ism: Functional decom­pos­i­tion in com­plex neur­al sys­tems.

Cappelen, H., & Lepore, E. (2005). Insensitive semantics: A defense of semantic minimalism and speech act pluralism. John Wiley & Sons.

Cartwright, N. (1999). The dappled world: A study of the boundaries of science. Cambridge University Press.

Chirimuuta, M. (2014). Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience. Synthese, 191(2), 127–153. doi: 10.1007/s11229-013-0369-y

Klein, C. (2012). Cognitive Ontology and Region- versus Network-Oriented Analyses. Philosophy of Science, 79(5), 952–960.

Klein, C. (2017). Brain regions as difference-makers. Philosophical Psychology, 30(1–2), 1–20.

Livingstone, M., & Hubel, D. (1988). Segregation of form, col­or, move­ment, and depth: Anatomy, physiology, and per­cep­tion. Science, 240(4853), 740–749.

McCaffrey, J. B. (2015). The brain’s het­ero­gen­eous func­tion­al land­scape. Philosophy of Science, 82(5), 1010–1022.

Morrison, M. (2007). Unifying scientific theories: Physical concepts and mathematical structures. Cambridge University Press.

Preyer, G., & Peter, G. (2005). Contextualism in philosophy: Knowledge, meaning, and truth. Oxford University Press.

Rathkopf, C. A. (2013). Localization and Intrinsic Function. Philosophy of Science, 80(1), 1–21.

Rentzeperis, I., Nikolaev, A. R., Kiper, D. C., & van Leeuwen, C. (2014). Distributed pro­cessing of col­or and form in the visu­al cor­tex. Frontiers in Psychology, 5.

A Deflationary Approach to the Cognitive Penetration Debate

Dan Burnston—Assistant Professor, Philosophy Department, Tulane University, Member Faculty in the Tulane Brain Institute

I construe the debate about cognitive penetration (CP) in the following way: are there causal relations between cognition and perception, such that the processing of the latter is systematically sensitive to the content of the former? Framing the debate in this way imparts some pragmatic commitments. We need to make clear what distinguishes perception from cognition, and what resources each brings to the table. And we need to clarify what kind of causal relationship exists, and whether it is strong enough to be considered "systematic."

I think that cur­rent debates about cog­nit­ive pen­et­ra­tion have failed to be clear enough on these vital prag­mat­ic con­sid­er­a­tions, and have become muddled as a res­ult. My view is that once we under­stand per­cep­tion and cog­ni­tion aright, we should recog­nize as an empir­ic­al fact that there are caus­al rela­tion­ships between them—however, these rela­tions are gen­er­al, dif­fuse, and prob­ab­il­ist­ic, rather than spe­cif­ic, tar­geted, and determ­in­ate. Many sup­port­ers of CP cer­tainly seem to have the lat­ter kind of rela­tion­ship in mind, and it is not clear that the former kind sup­ports the con­sequences for epi­stem­o­logy and cog­nit­ive archi­tec­ture that these sup­port­ers sup­pose. My primary goal, then, rather than deny­ing cog­nit­ive pen­et­ra­tion per se, is to de-fuse it (Burnston, 2016, 2017a, in prep).

The view of perception that, I believe, informs most debates about CP is that perception consists in a set of strictly bottom-up, mutually encapsulated feature detectors, perhaps along with some basic mechanisms for binding these features into distinct "proto" objects (Clark, 2004). Anything categorical, anything that involves inter-featural (to say nothing of intermodal) association, anything that involves top-down influence or assumptions about the nature of the world, and anything that is learned or involves memory, must strictly be due to cognition.

To those of this the­or­et­ic­al per­sua­sion, evid­ence for effects of some sub­set of these types in per­cep­tion is prima facie evid­ence for CP.[1] Arguments in favor of CP move from the sup­posed pres­ence of these effects, along with argu­ments that they are not due to either pre-perceptual atten­tion­al shifts or post-perceptual judg­ments, to the con­clu­sion that CP occurs.

On reflection, however, this is a somewhat odd, or at least non-obvious, move. We start out from a presupposition that perception cannot involve X. Then we observe evidence that perception does in fact involve X. In response, instead of modifying our view of perception, we insist that some other faculty, like cognition, must intervene and do for perception what it cannot do on its own. My arguments in this debate are meant to undermine this kind of intuition by showing that, given a better understanding of perception, not only is positing CP not required, it is also (in its stronger forms, anyway) simply unlikely.

Consider the fol­low­ing example, the Cornsweet illu­sion (also called the Craik‑O’Brien-Cornsweet illu­sion).

Figure 1. The Cornsweet illu­sion.

In this kind of stimulus, subjects almost universally perceive the patch on the left as darker than the patch on the right, despite the fact that the two have exactly the same luminance, aside from the dark-to-light gradient on the left of the center line (the "Cornsweet edge") and the light-to-dark gradient on the right. The standard view of the illusion in perceptual science is that perception assumes the object is extended towards the perceiver in depth, with light coming from the left, such that the panel on the left would be more brightly illuminated and the panel on the right more dimly illuminated. Thus, in order for the left panel to produce the same luminance value at the retina as the right panel, it must in fact be darker, and the visual system represents it so: such effects are the result of "an extraordinarily powerful strategy of vision" (Purves, Shimpi, & Lotto, 1999, p. 8549).[2]
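The physical structure of the stimulus can be made concrete with a small numerical sketch: two panels of identical base luminance, joined by opposing gradients that meet in a sharp discontinuity at the border. This is a hypothetical rendering only; the width, base luminance, and gradient parameters are invented for illustration.

```python
import numpy as np

def cornsweet_profile(width=200, base=0.5, amp=0.25, edge=20):
    """Luminance profile of a Cornsweet stimulus: two physically identical
    panels, with a dark lobe just left of the border and a light lobe just
    right of it, each fading back to the shared base luminance."""
    x = np.full(width, base)
    mid = width // 2
    fade = np.linspace(1.0, 0.0, edge)           # strongest at the border
    x[mid - edge:mid] = base - amp * fade[::-1]  # left: darkest at the border
    x[mid:mid + edge] = base + amp * fade        # right: lightest at the border
    return x

profile = cornsweet_profile()
# Away from the edge the two panels are physically identical...
assert profile[10] == profile[-10]
# ...yet observers see the entire left panel as darker than the right.
```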

Why construe the strategy as visual? There are a number of related considerations. First, the phenomenon involves fine-grained associations between particular features (luminance, discontinuity, and contrast, in particular configurations) that vary systematically and continuously with the amount of evidence for the interpretation. If one strengthens the depth interpretation by foreshortening or "bowing" the figure, the effect is enhanced, and with further modulation one can get quite pronounced effects. It is unclear, at best, when we would have come by such fine-grained beliefs about these stimuli. Moreover, the effects are mandatory, and operate insensitively to changes in our occurrent beliefs. Fodor (1984) is (still) right, in my view, that this kind of mandatoriness supports a perceptual reading.

According to Jonathan Cohen and me (Burnston & Cohen, 2012, 2015), cur­rent per­cep­tu­al sci­ence reveals effects like this to be the norm, at all levels of per­cep­tion. If this “integ­rat­ive” view of per­cep­tion is true, then embody­ing assump­tions in com­plex asso­ci­ations is no evid­ence for CP—in fact it is part-and-parcel of what per­cep­tion does.

What about cat­egor­ic­al per­cep­tion? Consider the fol­low­ing example from Gureckis and Goldstone (2008), of what is com­monly referred to as a morph­space.

Figure 2. Categories for facial per­cep­tion.

According to cur­rent views (Gauthier & Tarr, 2016; Goldstone & Hendrickson, 2010), cat­egor­ic­al per­cep­tion involves higher-order asso­ci­ations between cor­rel­ated low-level fea­tures. So, recog­niz­ing a par­tic­u­lar cat­egory of faces (for instance, an individual’s face, a gender, or a race) involves being able to notice cor­rel­a­tions between a num­ber of low-level facial fea­tures such as light­ness, nose or eye shape, etc., as well as their spa­tial con­fig­ur­a­tions (e.g., the dis­tance between the eyes or between the nose and the upper lip). A wide range of per­cep­tu­al cat­egor­ies have been shown to oper­ate sim­il­arly.

Interestingly, forming a category can morph these spaces, grouping exemplars together along the relevant dimensions. In Gureckis and Goldstone's example, once subjects learn to discriminate A from B faces (defined by the arbitrary center line), novel examples of A faces will be judged to be more similar to each other along diagnostic dimension A than they were prior to learning. Despite these effects being categorical, I suggest that they are strongly analogous to the cases above: they involve featural associations that are fine-grained (a dimension is "morphed" a particular amount during the course of learning) and mandatory (it is hard not to see, e.g., your brother's face as your brother). Moreover, subjects are often simply bad at describing their perceptual categories. In studies such as Gureckis and Goldstone's, subjects have trouble saying much about the dimensional associations that inform their percepts. As such, and given the resources of the integrative view, a way is opened to seeing these categorical effects as occurring within perception.[3]
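One simple way to picture such morphing is as a compression of distances along the diagnostic dimension for exemplars falling on the same side of the learned boundary. The sketch below is purely illustrative and is not Gureckis and Goldstone's actual model: the warp factor, boundary location, and face coordinates are all invented.

```python
import numpy as np

def warped_distance(a, b, boundary=0.5, diag_dim=0, compress=0.5):
    """Post-learning perceptual distance: differences along the diagnostic
    dimension are compressed for within-category (same-side) exemplars."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.abs(a - b)
    same_side = (a[diag_dim] - boundary) * (b[diag_dim] - boundary) > 0
    if same_side:
        d[diag_dim] *= compress   # within-category compression
    return float(np.linalg.norm(d))

face1, face2 = [0.2, 0.7], [0.4, 0.3]   # two novel "A" faces (diag dim < 0.5)
before = float(np.linalg.norm(np.subtract(face1, face2)))
after = warped_distance(face1, face2)
assert after < before   # post-learning, A faces are judged more alike
```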

If being associative, assumption-involving, or categorical doesn't distinguish a perceptual from a cognitive representation, what does? While there are issues cashing out the distinction in detail, I suggest that the best way to mark the perception/cognition distinction is in terms of representational form. Cognitive representations are discrete and language-like, while perceptual representations represent structural dimensions of their referents—these might include shape dimensions (tilt, slant, orientation, curvature, etc.), the dimensions that define the phenomenal color space, or higher-order dimensions such as the ones in the face case above. The form distinction captures the kinds of considerations I've advanced here, as well as being compatible with a wide range of related ways of drawing the distinction in philosophy and cognitive science.

With these distinctions in place, we can talk about the kinds of cases that proponents of CP take as evidence. Consider Macpherson's (2012) example: Delk and Fillenbaum's studies purporting to show that "heart" shapes are perceived as a more saturated red than non-heart shapes. Let's put aside for a moment the prevalent methodological critiques of these kinds of studies (Firestone & Scholl, 2016). Even so, there is no reason to read the effect as one of cognitive penetration. The belief "hearts are red," according to the form distinction, simply does not represent the structural properties of the color space, and thus has no resources to inform perception to modify itself in any particular way. Of course, one might posit a more specific belief—say, that this particular heart is a particular shade of red—but this belief would have to be based on perceptual evidence about the stimulus. If perception couldn't represent this stimulus as this shade on its own, we wouldn't come by the belief. Moreover, on the integrative view, this is the kind of thing perception does anyway. Hence, there is no reason to see the percept as the result of cognitive intervention.

In categorical contexts, one strong motivation for cognitive penetration is the idea that perceptual categories are learned, and that this learning is often informed by prior beliefs and instructions (Churchland, 1988; Siegel, 2013; Stokes, 2014). There are problems with these views, however, both empirical and conceptual. The empirical problem is that learning can occur without any cognitive influence whatsoever. Perceivers can become attuned to diagnostic dimensions for entirely novel categories simply by viewing exemplars (Folstein, Gauthier, & Palmeri, 2010). Here, subjects have no prior beliefs or instructions for how to perceive the stimulus, but perceptual learning occurs anyway. In many cases, moreover, even when beliefs are employed in learning a category, it is obvious that the belief does not encode any content useful for informing the specific percept. In Gureckis and Goldstone's case above, subjects were shown exemplar faces and told "this is an A" or "this is a B". But this indexical belief does not describe anything about the category they actually learn.

One might expect that more detailed instruc­tions or pri­or beliefs can inform more detailed categories—for instance Siegel’s sug­ges­tion that noviti­ate arbor­ists be told to look at the shape of leaves in order to dis­tin­guish (say) pines from birches. However, this runs dir­ectly into the con­cep­tu­al prob­lem. Suppose that pine leaves are pointy while birch leaves are broad. Learners already know what pointy and broad things look like. If these beliefs are all that’s required, then sub­jects don’t need to learn any­thing per­cep­tu­ally in order to make the dis­crim­in­a­tion. However, if the beliefs are not suf­fi­cient to make the discrimination—either because it is a very fine-grained dis­crim­in­a­tion of shape, or because pine versus birch per­cep­tions in fact require the kind of higher-order dimen­sion­al struc­ture dis­cussed above—then their con­tent does not describe what per­cep­tion learns when sub­jects do learn to make the dis­tinc­tion per­cep­tu­ally.[4] In either case, there is a gap between the con­tent of the belief and the con­tent of the learned perception—a gap that is sup­por­ted by stud­ies of per­cep­tu­al learn­ing and expert­ise (for fur­ther dis­cus­sion, see Burnston, 2017a, in prep). So, while beliefs might be import­ant caus­al pre­curs­ors to per­cep­tu­al learn­ing, they do not pen­et­rate the learn­ing pro­cess.

So, the situation is this: we have seen that, on the integrative view and the form distinction, cognition does not have the resources to determine the kind of perceptual effects that are of interest in debates about CP. In both synchronic and diachronic cases, perception can do much of the heavy lifting itself, thus rendering CP unnecessary to explain the effects. A final advantage of this viewpoint, especially the form distinction, is that it brings particular forms of evidence to bear on the debate—particularly evidence about what happens when processing of lexical/amodal symbols does in fact interact with processing of modal ones. The details are too much to go through here, but I argue that the key to understanding the relationship between perception and cognition is to give up the notion that there are ever direct relationships between the tokening of a particular cognitive content and a specific perceptual outcome (Burnston, 2016, 2017b). Instead, I suggest that tokening a cognitive concept biases perception towards a wide range of possible outcomes. Here, rather than determinate causal relationships, we should expect highly probabilistic, highly general, and highly flexible interactions, where cognition does not force perception to act a certain way, but can shift the baseline probability that we'll perceive something consistent with the cognitive content. This brings priming, attentional, and modulatory effects under a single rubric, but not one on which cognition tinkers with the internal workings of specific perceptual processes to determine how they work in a given instance. I thus call it the "external effect" view of the cognition-perception interface.
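The contrast between determining a percept and biasing one can be made concrete with a toy probability model. In this hypothetical sketch (the outcome labels, baseline probabilities, and boost factor are all invented), tokening a concept multiplicatively boosts concept-consistent percepts without forcing any particular one of them:

```python
def bias(baseline, consistent, boost=2.0):
    """Reweight outcome probabilities toward concept-consistent outcomes,
    then renormalize: a baseline shift, not a determinate mapping."""
    weighted = {o: p * (boost if o in consistent else 1.0)
                for o, p in baseline.items()}
    total = sum(weighted.values())
    return {o: p / total for o, p in weighted.items()}

percepts = {"red_heart": 0.2, "pink_heart": 0.5, "grey_heart": 0.3}
biased = bias(percepts, consistent={"red_heart"})
assert biased["red_heart"] > percepts["red_heart"]   # probability shifted...
assert max(biased, key=biased.get) == "pink_heart"   # ...but not determined
```

The design choice mirrors the "external effect" idea: cognition adjusts the odds across a whole range of outcomes rather than reaching inside a perceptual process and fixing its result.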

Now it is open to the defend­er of cog­nit­ive pen­et­ra­tion to define this dif­fuse inter­ac­tion as an instance of penetration—penetration is a the­or­et­ic­al term one may define as one likes. I think, how­ever, that this notion is not what most cog­nit­ive pen­et­ra­tion the­or­ists have in mind, and it does not obvi­ously carry any of the sup­posed con­sequences for mod­u­lar­ity, the­or­et­ic­al neut­ral­ity, or the epi­stem­ic role of per­cep­tion that pro­ponents of CP assume (Burnston, 2017a; cf. Lyons, 2011). The kind of view I’ve offered cap­tures, in the best avail­able empir­ic­al and prag­mat­ic way, the range of phe­nom­ena at issue, and does so very dif­fer­ently than stand­ard dis­cus­sions of pen­et­ra­tion.



Burnston, D. C. (2016). Cognitive penetration and the cognition–perception interface. Synthese, 1–24. doi: 10.1007/s11229-016-1116-y

Burnston, D. C. (2017a). Is aesthetic experience evidence for cognitive penetration? New Ideas in Psychology. doi: 10.1016/j.newideapsych.2017.03.012

Burnston, D. C. (2017b). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242–258. doi: 10.1080/13869795.2017.1312504

Burnston, D. C. (In pre­par­a­tion). There is no dia­chron­ic cog­nit­ive pen­et­ra­tion.

Burnston, D., & Cohen, J. (2012). Perception of fea­tures and per­cep­tion of objects. Croatian Journal of Philosophy (36), 283–314.

Burnston, D. C., & Cohen, J. (2015). Perceptual integration, modularity, and cognitive penetration. In Cognitive Influences on Perception: Implications for Philosophy of Mind, Epistemology, and Philosophy of Action. Oxford: Oxford University Press.

Churchland, P. M. (1988). Perceptual plas­ti­city and the­or­et­ic­al neut­ral­ity: A reply to Jerry Fodor. Philosophy of Science, 55(2), 167–187.

Clark, A. (2004). Feature-placing and proto-objects. Philosophical Psychology, 17(4), 443–469. doi: 10.1080/0951508042000304171

Firestone, C., & Scholl, B. J. (2016). Cognition does not affect per­cep­tion: Evaluating the evid­ence for “top-down” effects. Behavioral and Brain Sciences, 39, 1–77.

Fodor, J. (1984). Observation recon­sidered. Philosophy of Science, 51(1), 23–43.

Folstein, J. R., Gauthier, I., & Palmeri, T. J. (2010). Mere expos­ure alters cat­egory learn­ing of nov­el objects. Frontiers in Psychology, 1, 40.

Gauthier, I., & Tarr, M. J. (2016). Object Perception. Annual Review of Vision Science, 2(1).

Goldstone, R. L., & Hendrickson, A. T. (2010). Categorical per­cep­tion. Wiley Interdisciplinary Reviews: Cognitive Science, 1(1), 69–78. doi: 10.1002/wcs.26

Gureckis, T. M., & Goldstone, R. L. (2008). The effect of the internal structure of categories on perception. In Proceedings of the 30th Annual Conference of the Cognitive Science Society.

Lyons, J. (2011). Circularity, reli­ab­il­ity, and the cog­nit­ive pen­et­rab­il­ity of per­cep­tion. Philosophical Issues, 21(1), 289–311.

Macpherson, F. (2012). Cognitive pen­et­ra­tion of col­our exper­i­ence: rethink­ing the issue in light of an indir­ect mech­an­ism. Philosophy and Phenomenological Research, 84(1), 24–62.

Nanay, B. (2014). Cognitive pen­et­ra­tion and the gal­lery of indis­cern­ibles. Frontiers in Psychology, 5.

Purves, D., Shimpi, A., & Lotto, R. B. (1999). An empir­ic­al explan­a­tion of the Cornsweet effect. The Journal of Neuroscience, 19(19), 8542–8551.

Pylyshyn, Z. (1999). Is vis­ion con­tinu­ous with cog­ni­tion? The case for cog­nit­ive impen­et­rab­il­ity of visu­al per­cep­tion. The Behavioral and Brain Sciences, 22(3), 341–365; dis­cus­sion 366–423.

Raftopoulos, A. (2009). Cognition and per­cep­tion: How do psy­cho­logy and neur­al sci­ence inform philo­sophy? Cambridge: MIT Press.

Rock, I. (1983). The logic of per­cep­tion. Cambridge: MIT Press.

Siegel, S. (2013). The epi­stem­ic impact of the eti­ology of exper­i­ence. Philosophical Studies, 162(3), 697–722.

Stokes, D. (2014). Cognitive pen­et­ra­tion and the per­cep­tion of art. Dialectica, 68(1), 1–34.

Yuille, A., & Kersten, D. (2006). Vision as Bayesian infer­ence: ana­lys­is by syn­thes­is? Trends in Cognitive Sciences, 10(7), 301–308.


[1] Different the­or­ists stress dif­fer­ent prop­er­ties. Macpherson (2012) stresses effects being cat­egor­ic­al and asso­ci­ation­al, Nanay (2014) and Churchland (1988) their being top-down. Raftopoulos (2009) cites the role of memory in cat­egor­ic­al effects and Stokes (2014) and Siegel (2013) the import­ance of learn­ing in such con­texts.

[2] This kind of read­ing of intra-perceptual pro­cessing is extremely com­mon across a range of the­or­ists and per­spect­ives in per­cep­tu­al psy­cho­logy (e.g., Pylyshyn, 1999; Rock, 1983; Yuille & Kersten, 2006).

[3] This view also rejects the attempt to make these effects cog­nit­ive by defin­ing them as tacit beliefs. The prob­lem with tacit beliefs is that they simply dic­tate that any­thing cor­res­pond­ing to a cat­egory or infer­ence must be cog­nit­ive, which is exactly what’s under dis­cus­sion here. The move thus doesn’t add any­thing to the debate.

[4] This requires assum­ing a “spe­cificity” con­di­tion on the con­tent of a pur­por­ted pen­et­rat­ing belief—namely that a can­did­ate pen­et­rat­or must have the con­tent that per­cep­tion learns to rep­res­ent. I argue in more detail else­where that giv­ing this con­di­tion up trivi­al­izes the pen­et­ra­tion thes­is (Burnston, in prep).

Enactivism, Computation, and Autonomy

Joe Dewhurst—Teaching Assistant at The University of Edinburgh

Enactivism has historically rejected computational characterisations of cognition, at least in its more traditional versions. This has led to the perception that enactivist approaches to cognition must be opposed to more mainstream computationalist approaches, which offer a computational characterisation of cognition. However, the conception of computation which enactivism rejects is in some senses quite old-fashioned, and it is not so clear that enactivism need be opposed to computation understood in a more modern sense. Demonstrating that there could be compatibility, or at least no necessary opposition, between enactivism and computationalism (in some sense) would open the door to a possible reconciliation or cooperation between the two approaches.

In a recently published paper (Villalobos & Dewhurst 2017), my collaborator Mario Villalobos and I have focused on elucidating some of the reasons why enactivism has rejected computation, and have argued that these do not necessarily apply to more modern accounts of computation. In particular, we have demonstrated that a physically instantiated Turing machine, which we take to be a paradigmatic example of a computational system, can meet the autonomy requirements that enactivism uses to characterise cognitive systems. This demonstration goes some way towards establishing that enactivism need not be opposed to computational characterisations of cognition, although there may be other reasons for this opposition, distinct from the autonomy requirements.

The enact­ive concept of autonomy first appears in its mod­ern guise in Varela, Thompson, & Rosch’s 1991 book The Embodied Mind, although it has import­ant his­tor­ic­al pre­curs­ors in Maturana’s autopoi­et­ic the­ory (see his 1970, 1975, 1981; see also Maturana & Varela 1980) and cyber­net­ic work on homeo­stas­is (see e.g. Ashby 1956, 1960). There are three dimen­sions of autonomy that we con­sider in our ana­lys­is of com­pu­ta­tion. Self-determination requires that the beha­viour of an autonom­ous sys­tem must be determ­ined by that system’s own struc­ture, and not by extern­al instruc­tion. Operational clos­ure requires that the func­tion­al organ­isa­tion of an autonom­ous sys­tem must loop back on itself, such that the sys­tem pos­sesses no (non-arbitrary) inputs or out­puts. Finally, an autonom­ous sys­tem must be pre­cari­ous, such that the con­tin­ued exist­ence of the sys­tem depends on its own func­tion­al organ­isa­tion, rather than on extern­al factors out­side of its con­trol. In this post I will focus on demon­strat­ing that these cri­ter­ia can be applied to a phys­ic­al com­put­ing sys­tem, rather than address­ing why or how enact­iv­ism argues for them in the first place.

All three criteria have traditionally been used to disqualify computational systems from being autonomous systems, and hence to deny that cognition (which for enactivists requires autonomy) can be computational (see e.g. Thompson 2007: chapter 3). Here it is important to recognise that the enactivists have a particular account of computation in mind, one that they have inherited from traditional computationalists. According to this 'semantic' account, a physical computer is defined as a system that performs systematic transformations over content-bearing (i.e. representational) states or symbols (see e.g. Sprevak 2010). With such an account in mind, it is easy to see why the autonomy criteria might rule out computational systems. We typically think of such a system as consuming symbolic inputs, which it transforms according to programmed instructions, before producing further symbolic outputs. Already this system has failed to meet the self-determination and operational closure criteria. Furthermore, as artefactual computers are typically reliant on their creators for maintenance, etc., they also fail to meet the precariousness criterion. So, given this quite traditional understanding of computation, it is easy to see why enactivists have typically denied that computational systems can be autonomous.

Nonetheless, under­stood accord­ing to more recent, ‘mech­an­ist­ic’ accounts of com­pu­ta­tion, there is no reas­on to think that the autonomy cri­ter­ia must neces­sar­ily exclude com­pu­ta­tion­al sys­tems. Whilst they dif­fer in some details, all of these accounts deny that com­pu­ta­tion is inher­ently semant­ic, and instead define phys­ic­al com­pu­ta­tion in terms of mech­an­ist­ic struc­tures. We will not rehearse these accounts in any detail here, but the basic idea is that phys­ic­al com­pu­ta­tion can be under­stood in terms of mech­an­isms that per­form sys­tem­at­ic trans­form­a­tions over states that do not pos­sess any intrins­ic semant­ic con­tent (see e.g. Miłkowski 2013; Fresco 2014; Piccinini 2015). With this rough frame­work in mind, we can return to the autonomy cri­ter­ia.

Even under the mech­an­ist­ic account, com­pu­ta­tion is usu­ally under­stood in terms of map­pings between inputs and out­puts, where there is a clear sense of the begin­ning and end of the com­pu­ta­tion­al oper­a­tion. A sys­tem organ­ised in this way can be described as ‘func­tion­ally open’, mean­ing that its func­tion­al organ­isa­tion is open to the world. A func­tion­ally closed sys­tem, on the oth­er hand, is one whose func­tion­al organ­isa­tion loops back through the world, such that the envir­on­ment­al impact of the system’s ‘out­puts’ con­trib­utes to the ‘inputs’ that it receives.

A simple example of this dis­tinc­tion can be found by con­sid­er­ing two dif­fer­ent ways that a ther­mo­stat could be used. In the first case the sensor, which detects ambi­ent tem­per­at­ure, is placed in one house, and the effect­or, which con­trols a radi­at­or, is placed in anoth­er (see fig­ure 1). This sys­tem is func­tion­ally open, because there is only a one-way con­nec­tion between the sensor and the effect­or, allow­ing us to straight­for­wardly identi­fy inputs and out­puts to the sys­tem.

A more conventional way of setting up a thermostat is with both the sensor and the effector in the same house (see figure 2). In this case the apparent 'output' (i.e. control of the radiator) loops back round to the apparent 'input' (i.e. ambient temperature), forming a functionally closed system. The ambient air temperature in the house is effectively part of the system, meaning that we could just as well treat the effector as providing input and the sensor as producing output – there is no non-arbitrary beginning or end to this system.
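The contrast can be made concrete with a toy simulation (our own illustrative sketch; the set-point, update rules, and function names are assumptions, not part of the original example):

```python
def sensor(temp):
    """Detect ambient temperature; heating is needed below 20 degrees."""
    return temp < 20.0

def effector(heat_on, temp):
    """The radiator warms the room it is in; otherwise the room cools."""
    return temp + 1.0 if heat_on else temp - 0.5

# Functionally open: sensor in house A, effector in house B.
# House B's temperature never feeds back into what the sensor reads.
temp_a, temp_b = 15.0, 15.0
for _ in range(20):
    heat = sensor(temp_a)            # input comes from house A...
    temp_b = effector(heat, temp_b)  # ...output goes to house B
    temp_a -= 0.5                    # house A just keeps cooling

# Functionally closed: sensor and effector in the same house.
# The 'output' (radiator heat) loops back into the 'input' (ambient temperature).
temp = 15.0
for _ in range(20):
    heat = sensor(temp)
    temp = effector(heat, temp)      # feedback: the loop settles near the set-point
```

In the open arrangement the sensor's house cools indefinitely while the effector's house heats indefinitely; in the closed arrangement the temperature settles around the set-point, and there is no non-arbitrary point at which to say the system's operation 'begins'.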

Whilst it is typical to treat a computing mechanism more like the first thermostat, with a clear input and output, we do not think that this perspective is essential to the mechanistic understanding of computation. There are two possible ways that we could arrange a computing mechanism. The functionally open mechanism (figure 3) reads from one tape and writes onto another, whilst the functionally closed mechanism (figure 4) reads and writes onto the same tape, creating a closed system analogous to the thermostat with its sensor and effector in the same house. As Wells (1998) suggests, a conventional Turing machine is actually arranged in the second way, providing an illustration of a functionally closed computing mechanism. Whether or not this is true of other computational systems is a distinct question, but it is clear that at least some physically implemented computers can exhibit operational closure.
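Wells's point can be illustrated with a minimal single-tape machine (our own sketch; the binary-increment program is a standard textbook example, not something from the original post). Because the machine writes onto the same tape that it later reads, its 'output' loops back into its 'input', just like the second thermostat:

```python
def run(tape, head, state, rules):
    """Run a single-tape Turing machine until it halts; return the tape contents."""
    tape = dict(enumerate(tape))  # sparse tape: cell index -> symbol
    while state != 'halt':
        symbol = tape.get(head, '_')              # read the cell under the head
        write, move, state = rules[(state, symbol)]
        tape[head] = write                        # write back onto the SAME tape:
        head += move                              # later reads see earlier writes
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

# Binary increment, head starting on the least significant (rightmost) bit.
rules = {
    ('carry', '1'): ('0', -1, 'carry'),   # 1 + carry -> 0, carry propagates left
    ('carry', '0'): ('1', -1, 'halt'),    # 0 + carry -> 1, done
    ('carry', '_'): ('1', -1, 'halt'),    # ran off the left end: new leading digit
}

print(run('1011', head=3, state='carry', rules=rules))  # 1011 + 1 = 1100
```

There is no separate input tape and output tape here: every cell the machine reads may be a cell it wrote a moment earlier, which is the sense in which the conventional Turing machine is functionally closed.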

The self-determination criterion requires that a system's operations are determined by its own structure, rather than by external instructions. This criterion applies straightforwardly to at least some computing mechanisms. Whilst many computers are programmable, their basic operations are nonetheless determined by their own physical structure, such that the 'instructions' provided by the programmer only make sense in the context of the system itself. To another system, with a distinct physical structure, those 'instructions' would be meaningless. Just as the enactive automaton 'Bittorio' brings meaning to a meaningless sea of 1s and 0s (see Varela 1988; Varela, Thompson, & Rosch 1991: 151–5), so the structure of a computing mechanism brings meaning to the world that it encounters.

Finally, we can turn to the precariousness criterion. Whilst the computational systems that we construct are typically reliant upon us for continued maintenance and a supply of energy, and play no direct role in their own upkeep, this is more a pragmatic feature of our design of those systems than anything essential to computation. We could easily imagine a computing mechanism designed so that it seeks out its own source of energy and is able to maintain its own physical structure. Such a system would be precarious in just the same sense that enactivism conceives of living systems as being precarious. So there is no in-principle reason why a computing system should not be able to meet the precariousness criterion.

In this post we have very briefly argued that the enactivist autonomy criteria can be applied to (some) physically implemented computing mechanisms. Of course, enactivists may have other reasons for thinking that cognitive systems cannot be computational. Nonetheless, we think this analysis could be interesting for a couple of reasons. Firstly, insofar as computational neuroscience and computational psychology have been successful research programs, enactivists might be interested in adopting some aspects of computational explanation for their own analyses of cognitive systems. Secondly, we think that the enactivist characterisation of autonomous systems might help to elucidate the senses in which a computational system might be cognitive. Now that we have established the basic possibility of autonomous computational systems, we hope to develop future work along both of these lines, and invite others to do so too.

We will leave you with this short and amusing video of the autonomous robotic creations of the British cyberneticist W. Grey Walter, which we hope might serve as a source of inspiration for future cooperation between enactivism and computationalism.



  • Ashby, R. (1956). An intro­duc­tion to cyber­net­ics. London: Chapman and Hall.
  • Ashby, R. (1960). Design for a Brain. London: Chapman and Hall.
  • Fresco, N. (2014). Physical com­pu­ta­tion and cog­nit­ive sci­ence. Berlin, Heidelberg: Springer-Verlag.
  • Maturana, H. (1970). Biology of cog­ni­tion. Biological Computer Laboratory, BCL Report 9, University of Illinois, Urbana.
  • Maturana, H. (1975). The organization of the living: A theory of the living organization. International Journal of Man-Machine Studies, 7, 313–332.
  • Maturana, H. (1981). Autopoiesis. In M. Zeleny (Ed.), Autopoiesis: a the­ory of liv­ing organ­iz­a­tion (pp. 21–33). New York; Oxford: North Holland.
  • Maturana, H. and Varela, F. (1980). Autopoiesis and cog­ni­tion: The real­iz­a­tion of the liv­ing. Dordrecht, Holland: Kluwer Academic Publisher.
  • Miłkowski, M. (2013). Explaining the com­pu­ta­tion­al mind. Cambridge, MA: MIT Press.
  • Piccinini, G. (2015). Physical Computation. Oxford: OUP.
  • Sprevak, M. (2010). Computation, Individuation, and Received View on Representations. Studies in History and Philosophy of Science, 41: 260–70.
  • Thompson, E. (2007). Mind in Life: Biology, phe­nomen­o­logy, and the sci­ences of mind. Cambridge, MA: Harvard University Press.
  • Varela, F. (1988). Structural coupling and the origin of meaning in a simple cellular automaton. In E. E. Sercarz, F. Celada, N. A. Mitchison, & T. Tada (Eds.), The Semiotics of Cellular Communication in the Immune System (pp. 151–61). New York: Springer-Verlag.
  • Varela, F., Thompson, E., and Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.
  • Villalobos, M. & Dewhurst, J. (2017). Enactive autonomy in computational systems. Synthese, doi:10.1007/s11229-017-1386-z
  • Wells, A. J. (1998). Turing's analysis of computation and theories of cognitive architecture. Cognitive Science, 22(3), 269–94.


Are olfactory objects spatial?

by Solveig Aasen — Associate Professor of Philosophy at the University of Oslo

On sev­er­al recent accounts of orthonas­al olfac­tion, olfact­ory exper­i­ence does (in some sense) have a spa­tial aspect. These views open up nov­el ways of think­ing about the spa­ti­al­ity of what we per­ceive. For while olfact­ory exper­i­ence may not qual­i­fy as spa­tial in the way visu­al exper­i­ence does, it may nev­er­the­less be spa­tial in a dif­fer­ent way. What way? And how does it dif­fer from visu­al spa­ti­al­ity?

It is often noted that, by con­trast to what we see, what we smell is neither at a dis­tance nor at a dir­ec­tion from us. Unlike anim­als such as rats and the ham­mer­head shark, which have their nos­trils placed far enough apart that they can smell in ste­reo (much like we can see and hear in ste­reo), we humans are not able to tell which dir­ec­tion a smell is com­ing from (except per­haps under spe­cial con­di­tions (Radil and Wysocki 1998; Porter et al. 2005), or if we indi­vidu­ate olfac­tion so as to include the tri­gem­in­al nerve (Young et al. 2014)). Nor are we able to tell how a smell is dis­trib­uted around where we are sit­ting (Batty 2010a p. 525; 2011, p. 166). Nevertheless, it can be argued that what we smell can be spa­tial in some sense. Several sug­ges­tions to this effect are on offer.

Batty (2010a; 2010b; 2011; 2014) holds that what we smell (olfactory properties, according to her) is presented as 'here'. This is not a location like any other. It is the only location at which olfactory properties are ever presented, for olfactory experience, on Batty's view, lacks spatial differentiation. Moreover, she emphasises that, if we are to make room for a certain kind of non-veridical olfactory experience, 'here' cannot be a location in our environment; it is not to be understood as 'out there' (Batty 2010b, pp. 20–21). This latter point contrasts with Richardson's (2013) view. She observes that, because olfactory experience involves sniffing, it is part of the phenomenology of olfactory experience that something (odours, according to Richardson) seems to be brought into the nostrils from outside the body. Thus, the object of olfactory experience seems spatial in the sense that what we smell is coming from without, although it is not coming from any particular location. It is interesting that although Batty's and Richardson's claims contrast, they both seem to think that they are pointing out a spatial aspect of olfactory experience when claiming that what we smell is, respectively, 'here' or coming from without.

Another view, compatible with the claim that what we smell is neither at a distance nor at a direction from us, is presented by Young (2016). He emphasises the fact that the molecular structure of chemical compounds determines which olfactory quality subjects experience. It is precisely this structure within an odour plume, he argues, that is the object of olfactory experience. Would an olfactory experience of the molecular structure have a spatial aspect? Young does not specify this. But since the structure of the molecule is spatial, one can at least envisage that experiencing molecular structure is, in part, to experience the spatial relations between molecules. If so, we can envisage spatiality without perspective. For, presumably, the spatial orientation the molecules have relative to each other and to the perceiver would not matter to the experience. Presumably, it would be their internal spatial structure that is experienced, regardless of their orientation relative to other things.

The claim that what we smell is neither at a distance nor at a direction from us can, however, be disputed. As Young (2016) notes, this claim neglects the possibility of tracking smells over time. Although the boundaries of the cloud of odours are less clear than those of visual objects, the extension of the cloud in space and the changes in its intensity seem to be spatial aspects of our olfactory experiences when we move around over time. Perhaps one would object that the more fundamental type of olfactory experience is synchronic and not diachronic. The synchronic variety has certainly received the most attention in the literature. But if one is interested in an investigation of our ordinary olfactory experiences, it is not clear why diachronic experiences should be less worthy of consideration.

Perhaps one would think that an obvi­ous spa­tial aspect of olfact­ory exper­i­ence is the spa­tial prop­er­ties of the source, i.e. the phys­ic­al object from which the chem­ic­al com­pounds in the air ori­gin­ate. But there is a sur­pris­ingly wide­spread con­sensus in the lit­er­at­ure that the source is not part of what we per­ceive in olfac­tion. Lycan’s (1996; 2014) lay­er­ing view may be an excep­tion. He claims that we smell sources by smelling odours. But, as Lycan him­self notes, there is a ques­tion as to wheth­er the ‘by’-relation is an infer­ence rela­tion. If it is, his claim is not neces­sar­ily sub­stan­tially dif­fer­ent from Batty’s (2014, pp. 241–243) claim that olfact­ory prop­er­ties are locked onto source objects at the level of belief, but that sources are not per­ceived.

Something that makes evaluation of the abovementioned ideas about olfactory spatiality complicated is that there is a variety of facts about olfaction that can be taken to inform an account of olfactory experience. As Stevenson and Wilson (2006) note, chemical structure has been much studied. But even though the nose has about 300 receptors 'which allow the detection of a nearly endless combination of different odorants' (ibid., p. 246), how relevant is chemical structure to the question of what we can perceive, when the discriminations we as perceivers report are much less detailed? What is the relevance of facts about the workings and individuation of the olfactory system? Is it a serious flaw if our conclusions about olfactory experience contradict the phenomenology? Different contributors to the debate seem to provide or presuppose different answers to questions like these. This makes comparison complicated. Comparison aside, however, some interesting ideas about olfactory spatiality can, as briefly shown, be appreciated on their own terms.




Batty, C. 2014. ‘Olfactory Objects’. In D. Stokes, M. Matthen and S. Biggs (eds.), Perception and Its Modalities. Oxford: Oxford University Press.

Batty, C. 2011. ‘Smelling Lessons’. Philosophical Studies 153: 161–174.

Batty, C. 2010a. 'A Representationalist Account of Olfactory Experience'. Canadian Journal of Philosophy 40(4): 511–538.

Batty, C. 2010b. ‘What the Nose Doesn’t Know: Non-veridicality and Olfactory Experience’. Journal of Consciousness Studies 17: 10–27.

Lycan, W. G. 2014. ‘The Intentionality of Smell’. Frontiers in Psychology 5: 68–75.

Lycan, W. G. 1996. Consciousness and Experience. Cambridge, MA: Bradford Books/MIT Press.

Radil, T. and C. J. Wysocki. 1998. ‘Spatiotemporal mask­ing in pure olfac­tion’. Olfaction and Taste 12(855): 641–644.

Richardson, L. 2013. ‘Sniffing and Smelling’. Philosophical Studies 162: 401–419.

Porter, J., Anand, T., Johnson, B. N., Kahn, R. M., and N. Sobel. 2005. 'Brain mechanisms for extracting spatial information from smell'. Neuron 47: 581–592.

Young, B. D. 2016. ‘Smelling Matter’. Philosophical Psychology 29(4): 520–534.

Young, B. D., A. Keller and D. Rosenthal. 2014. ‘Quality-space Theory in Olfaction’. Frontiers in Psychology 5: 116–130.

Wilson, D. A. and R. J. Stevenson. 2006. Learning to Smell: Olfactory Perception from Neurobiology to Behaviour. Baltimore, MD: The Johns Hopkins University Press.

How stereotypes shape our perceptions of other minds

by Evan Westra — Ph.D. Candidate, University of Maryland

[Figure: Ambiguous Pictures Task]

McGlothlin & Killen (2006) showed groups of (predominantly white) American elementary school children from ages 6 to 10 a series of vignettes depicting children in ambiguous situations. For instance, one picture (above) showed two children by a swing set, with one on the ground frowning, and one behind the swing with a neutral expression. Two things might be going on in this picture: i) the child on the ground may have fallen off by accident (neutral scenario), or ii) the child on the ground may have been intentionally pushed by the one standing behind the swing (harmful scenario). Crucially, McGlothlin and Killen varied the race of the children depicted in the image, such that some children saw a white child standing behind the swing (left), and some saw a black child (right). Children were asked to explain what had just happened in the scenario, to predict what would happen next, and to evaluate the action that had just happened. Overwhelmingly, children were more likely to give the harmful scenario interpretation — that the child behind the swing intentionally pushed the other child — when the child behind the swing was black than when she was white. The race of the child depicted, it seems, influenced whether or not participants made an inference to harmful intentions.

This is yet another depressing example of how racial bias can warp our perceptions of others. But this study (and others like it: Sagar & Schofield 1990; Burnham & Harris 1992; Condry et al. 1985) also hints at a relationship between two forms of social cognition that are not often studied together: mindreading and stereotyping. The stereotyping component is clear enough. The mindreading component comes from the fact that race didn't just affect kids' attitudes towards the target — it affected what they thought was going on in the target's mind. Although these two ways of thinking about other people — mindreading and stereotyping — both seem to play an important role in how we navigate the social world, curiously little attention has been paid to understanding the way they relate to one another. In this post, I want to explore this relationship. I'll first briefly explain what I mean by "mindreading" and "stereotyping." Next, I'll discuss one existing proposal about the relationship between mindreading and stereotyping, and raise some problems for it. Then I will lay out the beginnings of a different way of cashing out this relationship.

*          *          *

First, let's get clear on what I mean by "mindreading" and "stereotyping."


In order to achieve our goals in highly social environments, we need to be able to accurately predict what other people will do, and how they will react to us. To do this, our brains generate complex models of other people's beliefs, desires, and intentions, which we use to predict and interpret their behavior. This capacity to represent other minds is known variously as theory of mind, mindreading, mentalizing, and folk psychology. In human beings, this ability begins to emerge very early in development. As adults, we use it constantly, in a fast, flexible, and unconscious fashion. We use it in many important social activities, including communication, social coordination, and moral judgment.


Stereotypes are ways of storing generic information about social groups (including races, genders, sexual orientations, age-groups, nationalities, professions, political affiliations, physical or mental abilities, and so on) (Amodio 2014). A particularly important aspect of stereotypes is that they often contain information about stable personality traits. Unfortunately, it is all too easy for us to think of stereotypes about how certain social groups are lazy, or greedy, or aggressive, or submissive, and so on. According to Susan Fiske and colleagues' Stereotype Content Model (SCM), there is an underlying pattern to the way we attribute personality traits to groups (Cuddy et al. 2009; Cuddy et al. 2007; Fiske et al. 2002; Fiske 2015). Personality trait attribution, on this view, varies along two primary dimensions: warmth and competence. The warmth dimension includes traits like (dis-)honesty, (un-)trustworthiness, and (un-)friendliness. These are traits that tell you whether or not someone is liable to help you or harm you. The competence dimension contains traits like (un-)intelligence, skillfulness, persistence, laziness, clumsiness, etc. These traits tell you how effective someone is at achieving their goals.

Together, these two dimen­sions com­bine to yield four dis­tinct clusters of traits, each of which picks out a dif­fer­ent kind of ste­reo­type:

[Figure: the Stereotype Content Model]

*          *          *

So what do ste­reo­typ­ing and mindread­ing have to do with one anoth­er? There are some obvi­ous dif­fer­ences, of course: ste­reo­types are mainly about groups, while mindread­ing is mainly about indi­vidu­als. But intu­it­ively, it seems like know­ing about some­body’s social group mem­ber­ship could tell you a lot about what they think: if I tell you that I am a lib­er­al, for instance, that should tell you a lot about my beliefs, val­ues, and social pref­er­ences — valu­able inform­a­tion, when it comes to pre­dict­ing and inter­pret­ing my beha­vi­or.

Some philosophers and psychologists, such as Kristin Andrews, Anika Fiebich and Mark Coltheart, have suggested that stereotypes and mindreading may actually be alternative strategies for predicting and interpreting behavior (Andrews 2012; Fiebich & Coltheart 2015). That is, it may be that sometimes we use stereotypes instead of mindreading to figure out what a person is going to do. According to one such proposal (Fiebich & Coltheart 2015), stereotypes allow us to predict behavior because they encode associations between social categories, situations, and behaviors. Thus, one might form a three-way association between the social category police, the situation donut shops, and the behavior eating donuts, which would lead one to predict that, when one sees a police officer in a donut shop, he or she will likely be eating a donut. A more complex version of this associationist approach would be to associate social groups with particular trait labels (as per the SCM), and would thus consist in four-way associations between social categories, trait labels, situations, and behaviors (Fiebich & Coltheart 2015; Andrews 2012). Thus, one might associate the trait of generosity with leaving large tips in restaurants, and associate the social category of uncles with generosity, and thereby come to expect uncles to leave large tips in restaurants. One might then explain this behavior by referring to the uncle's generosity. The key thing to notice about these accounts is that their predictions do not rely at all upon mental-state attributions. This is by design: these proposals are meant to show that we often don't need mindreading to predict or interpret behavior.
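The associationist proposal can be sketched as a simple lookup structure (my own hypothetical illustration; the entries and function names are not from Fiebich & Coltheart). Notice that nothing in it represents a belief, desire, or intention:

```python
# Four-way associations: social category -> trait label, and
# (trait label, situation) -> expected behavior.
trait_of = {
    'uncle': 'generous',
    'miser': 'stingy',
}

behavior_of = {
    ('generous', 'restaurant'): 'leaves a large tip',
    ('stingy', 'restaurant'): 'leaves no tip',
}

def predict(category, situation):
    """Predict behavior by pure association, with no mental-state attribution."""
    trait = trait_of[category]              # category -> trait label
    return behavior_of[(trait, situation)]  # (trait, situation) -> behavior

print(predict('uncle', 'restaurant'))  # leaves a large tip
```

The prediction is generated entirely by table lookup; the question raised below is whether picking out the right 'situation' key can itself be done without mindreading.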

One problem for this sort of view comes from its invocation of "situations." What information, one might wonder, is contained within the scope of a particular "situation"? Surely, a situation does not include everything about the state of the world at a given moment. Situations are probably meant to pick out local states of affairs. But not all the facts about a local state of affairs will be relevant to behavior prediction. The presence of mice in the kitchen of a restaurant, for instance, will not affect your predictions about the size of your uncle's tip. It might, however, affect your predictions about the behavior of the health inspector, should one suddenly arrive. Which local facts are predictively useful will ultimately depend upon their relevance to the agent whose behavior we are predicting. But whether or not a fact is relevant to an agent will depend upon that agent's beliefs about the local state of affairs, as well as her goals and desires. If this is how representations of predictively useful situations are computed, then the purportedly non-mentalistic proposal given above really includes a tacit appeal to mindreading. If this is not how situations are computed, then we are owed an explanation for how the non-mentalistic behavior-predictor arrives at predictively useful representations of situations that do not depend upon considerations of relevance.

*          *          *

Instead of treating mindreading and stereotypes as separate forms of behavior-prediction and interpretation, we might instead explore the ways in which stereotypes might inform mindreading. The key to this approach, I suggest, lies in the fact that stereotypes encode information about personality traits. In many ways, personality traits are like mental states: they are unobservable mental properties of individuals, and they are causally related to behavior. But they also differ in one key respect: their temporal stability. Beliefs and desires are inherently unstable: a belief that P can be changed by the observation of not-P; a desire for Q can be extinguished by the attainment of Q. Personality traits, in contrast, cannot be extinguished or abandoned based on everyday events. Rather, they tend to last throughout a person's lifetime, and manifest themselves in many different ways across many different situations. A unique feature of personality traits, in other words, is that they are highly stable mental entities (Doris 2002). So when stereotypes ascribe traits to groups, they are ascribing a property that one could reasonably expect to remain consistent across many different situations.

The tem­por­al prop­er­ties of men­tal states are extremely rel­ev­ant for mindread­ing, espe­cially in mod­els that employ Bayesian Predictive Coding (Kilner & Frith 2007; Koster-Hale & Saxe 2013; Hohwy & Palmer 2014; Hohwy 2013; Clark 2015). To see why, let’s start with an example:

Suppose that we believe that Laura is thirsty, and have attributed to her the goal of getting a drink (G). As goals go, this one is relatively short-term (unlike, say, the goal of getting a PhD). But we predict that, in order to achieve (G), Laura must form a number of even shorter-term sub-goals: (G1) get the juice from the fridge, and (G2) pour herself a glass of juice. But each of these requires the formation of still shorter-term sub-sub-goals: (G1a) walk over to kitchen, (G1b) open fridge door, (G1c) remove juice container, (G2a) remove cup from cupboard, (G2b) pour juice into cup. Predicting Laura's behavior in this context thus begins with the ascription of a longer-duration mental state (G), followed by the ascription of successively shorter-term mental-state attributions (G1, G2, G1a, G1b, G1c, G2a, G2b).

As mindread­ers, we can use attri­bu­tions of more abstract, tem­por­ally exten­ded men­tal states to make top-down infer­ences about more tran­si­ent men­tal states. At each level in this action-prediction hier­archy, we use higher-level goal-attributions to con­strain the space of pos­sible sub-goals that the agent might form. We then use our pri­or exper­i­ence to select the most likely sub-goal from the hypo­thes­is space, and the pro­cess repeats itself. Ultimately, this yields fairly fine-grained expect­a­tions about motor-intentions, which mani­fest them­selves as mirror-neuron activ­ity (Kilner & Frith 2007; Csibra 2008). Action-prediction thus plays out as a des­cent from more stable mental-state attri­bu­tions to more tran­si­ent ones, which ulti­mately bot­tom out in highly con­crete expect­a­tions about beha­vi­or.
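As a rough illustration (my own toy sketch, with made-up goals and probabilities — not the authors' model), the descent through the action-prediction hierarchy might look like this:

```python
# Each goal constrains a hypothesis space of sub-goals with prior probabilities;
# leaves (motor-level goals) have no entry.
hypotheses = {
    'get a drink': [('get juice from fridge', 0.7), ('get water from tap', 0.3)],
    'get juice from fridge': [('walk to kitchen', 0.9), ('ask someone else', 0.1)],
}

def descend(goal):
    """Follow the most probable sub-goal at each level down to a motor-level leaf."""
    chain = [goal]
    while goal in hypotheses:
        # the higher-level goal constrains the space; pick the likeliest sub-goal
        goal = max(hypotheses[goal], key=lambda h: h[1])[0]
        chain.append(goal)
    return chain

print(descend('get a drink'))
# ['get a drink', 'get juice from fridge', 'walk to kitchen']
```

A fuller predictive-coding story would also propagate prediction errors back up the hierarchy to revise the higher-level attributions; this sketch shows only the top-down descent from stable to transient states described above.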

Personality traits, which are dis­tin­guished by their high degree of tem­por­al sta­bil­ity, fit nat­ur­ally into the upper levels of this action-prediction hier­archy. Warmth traits, for instance, can tell us about the gen­er­al pref­er­ences of an agent: a gen­er­ous per­son prob­ably has a gen­er­al pref­er­ence for help­ing oth­ers, while a greedy per­son prob­ably has a gen­er­al desire to enrich her­self. These broad preference-attributions can in turn inform more imme­di­ate goal-attributions, which can then be used to pre­dict beha­vi­or.

This role for representations of personality traits in mental-state inference fits well with what we know about how we reason about traits more generally. For instance, we often make extremely rapid judgments about the warmth and competence traits of individuals based on fairly superficial evidence, such as facial features (Todorov et al. 2008); we also tend to over-attribute the causes of behavior to personality traits, rather than situational factors — a phenomenon commonly known as the "fundamental attribution error" or the "correspondence bias" (Gawronski 2004; Ross 1977; Gilbert et al. 1995). Prioritizing personality traits makes a lot of sense if they form the inferential basis for more complex forms of behavior prediction. It also makes sense that this aspect of mindreading would need to rely on fast, rough-and-ready heuristics, since personality trait information would need to be inferred very quickly in order to be useful in action-planning.

From a computational perspective, then, using personality traits to make inferences about behavior makes a lot of sense, and might make mindreading more efficient. But in exchange for this efficiency, we make a very disturbing trade. Stereotypes, which can be activated rapidly based on easily available perceptual cues, provide the mindreading system with a rapid means for storing trait information (Mason et al. 2006; Macrae et al. 1994). With this speed comes one of the most morally pernicious forms of human social cognition, one that helps to perpetuate discrimination and social inequality.

*          *          *

 The pic­ture I’ve painted in this post is, admit­tedly, rather pess­im­ist­ic. But just because the roots of dis­crim­in­a­tion are cog­nit­ively deep, we should not con­clude that it is inev­it­able. More recent work from McGlothlin and Killen (2010) should give us some hope: while chil­dren from racially homo­gen­eous schools (who had little dir­ect con­tact with mem­bers of oth­er races) ten­ded to show signs of biased intention-attribution, McGlothlin and Killen also found that chil­dren from racially het­ero­gen­eous schools (who had reg­u­lar con­tact with mem­bers of oth­er races) did not dis­play such signs of bias. Evidently, inter­group con­tact is effect­ive in curb­ing the devel­op­ment of ste­reo­types — and, by exten­sion, biased mindread­ing.



Amodio, D.M., 2014. The neur­os­cience of pre­ju­dice and ste­reo­typ­ing. Nature Reviews: Neuroscience, 15(10), pp.670–682.

Andrews, K., 2012. Do apes read minds?: Toward a new folk psy­cho­logy, Cambridge, MA: MIT Press.

Burnham, D.K. & Harris, M.B., 1992. Effects of Real Gender and Labeled Gender on Adults' Perceptions of Infants. Journal of Genetic Psychology, 153(2), pp.165–183.

Clark, A., 2015. Surfing uncer­tainty: Prediction, action, and the embod­ied mind, Oxford: Oxford University Press.

Condry, J.C. et al., 1985. Sex and Aggression: The Influence of Gender Label on the Perception of Aggression in Children. Child Development, 56(1), pp.225–233.

Csibra, G., 2008. Action mir­ror­ing and action under­stand­ing: an altern­at­ive account. In P. Haggard, Y. Rossetti, & M. Kawato, eds. Sensorymotor Foundations of Higher Cognition. Attention and Performance XXII. Oxford: Oxford University Press, pp. 435–459.

Cuddy, A.J.C. et al., 2009. Stereotype con­tent mod­el across cul­tures: Towards uni­ver­sal sim­il­ar­it­ies and some dif­fer­ences. British Journal of Social Psychology, 48(1), pp.1–33.

Cuddy, A.J.C., Fiske, S.T. & Glick, P., 2007. The BIAS map: beha­vi­ors from inter­group affect and ste­reo­types. Journal of per­son­al­ity and social psy­cho­logy, 92(4), pp.631–48.

Doris, J.M., 2002. Lack of char­ac­ter: Personality and mor­al beha­vi­or, Cambridge, UK: Cambridge University Press.

Fiebich, A. & Coltheart, M., 2015. Various Ways to Understand Other Minds: Towards a Pluralistic Approach to the Explanation of Social Understanding. Mind and Language, 30(3), pp.235–258.

Fiske, S.T., 2015. Intergroup biases: A focus on ste­reo­type con­tent. Current Opinion in Behavioral Sciences, 3(April), pp.45–50.

Fiske, S.T., Cuddy, A.J.C. & Glick, P., 2002. A Model of (Often Mixed) Stereotype Content: Competence and Warmth Respectively Follow From Perceived Status and Competition. Journal of Personality and Social Psychology, 82(6), pp.878–902.

Gawronski, B., 2004. Theory-based bias cor­rec­tion in dis­pos­i­tion­al infer­ence: The fun­da­ment­al attri­bu­tion error is dead, long live the cor­res­pond­ence bias. European Review of Social Psychology, 15(1), pp.183–217.

Gilbert, D.T. et al., 1995. The Correspondence Bias. Psychological Bulletin, 117(1), pp.21–38.

Hohwy, J., 2013. The pre­dict­ive mind, Oxford: Oxford University Press.

Hohwy, J. & Palmer, C., 2014. Social Cognition as Causal Inference: Implications for Common Knowledge and Autism. In M. Gallotti & J. Michael, eds. Perspectives on Social Ontology and Social Cognition. Dordrecht: Springer Netherlands, pp. 167–189.

Kilner, J.M. & Frith, C.D., 2007. Predictive cod­ing: an account of the mir­ror neur­on sys­tem. Cognitive Processing, 8(3), pp.159–166.

Koster-Hale, J. & Saxe, R., 2013. Theory of Mind: A Neural Prediction Problem. Neuron, 79(5), pp.836–848.

Macrae, C.N., Stangor, C. & Milne, A.B., 1994. Activating Social Stereotypes: A Functional Analysis. Journal of Experimental Social Psychology, 30(4), pp.370–389.

Mason, M.F., Cloutier, J. & Macrae, C.N., 2006. On con­stru­ing oth­ers: Category and ste­reo­type activ­a­tion from facial cues. Social Cognition, 24(5), p.540.

McGlothlin, H. & Killen, M., 2010. How social exper­i­ence is related to children’s inter­group atti­tudes. European Journal of Social Psychology, 40(4), pp.625–634.

McGlothlin, H. & Killen, M., 2006. Intergroup Attitudes of European American Children Attending Ethnically Homogeneous Schools. Child Development, 77(5), pp.1375–1386.

Ross, L., 1977. The Intuitive Psychologist And His Shortcomings: Distortions in the Attribution Process. Advances in Experimental Social Psychology, 10, pp.173–220.

Sagar, H.A. & Schofield, J.W., 1980. Racial and beha­vi­or­al cues in Black and White children’s per­cep­tions of ambigu­ously aggress­ive acts. Journal of Personality and Social Psychology, 39(4), pp.590–598.

Todorov, A. et al., 2008. Understanding eval­u­ation of faces on social dimen­sions. Trends in Cognitive Sciences, 12(12), pp.455–460.


Thanks to Melanie Killen and Joan Tycko for per­mis­sion to use images of exper­i­ment­al stim­uli from McGlothlin & Killen (2006, 2010).


Delusions as Explanations


by Matthew Parrott, Lecturer in the Department of Philosophy at King’s College London

One idea that has been extremely influential within cognitive neuropsychology and neuropsychiatry is that delusions arise as intelligible responses to highly irregular experiences. For example, we might think that the reason a subject adopts the belief that a house has inserted a thought into her head is that she has in fact had an extremely bizarre experience representing a house pushing a thought into her head (the case comes from Saks 2007; see Sollberger 2014 for an account of thought insertion along these lines). If this were to happen, then delusions would arise for reasons familiar from cases of ordinary belief. A delusional subject would simply be endorsing, or taking on board, the content of her experience.

However, the notion that a delusion is an understandable response to an irregular experience need not be construed along the lines of a subject accepting the content of her experience. Over a number of years, Brendan Maher advocated an influential alternative proposal, according to which an individual adopts a delusional belief because it serves as an explanation of her ‘strange’ or ‘significant’ experience (see Maher 1974, 1988, 1999). Crucially, for Maher, the content of the subject’s experience is not identical to the content of her delusional belief. Rather, the latter is determined in part by contextual factors, such as cultural background or what Maher calls ‘general explanatory systems’ (cf. 1974). Maher’s approach is often referred to as the ‘explanationist’ approach to understanding delusions (Bayne and Pacherie 2004).

Explanationist accounts have been especially popular with respect to the Capgras delusion that one’s friend or relative is really an imposter (e.g., Stone and Young 1997) and delusions of alien control (e.g., Blakemore, et. al. 2002). Yet, despite its popularity, the explanationist approach has been called into question by a number of philosophers on the grounds that delusions are quite obviously very bad explanations.

For instance, Davies and col­leagues argue:

‘The sug­ges­tion that delu­sions arise from the nor­mal con­struc­tion and adop­tion of an explan­a­tion for unusu­al fea­tures of exper­i­ence faces the prob­lem that delu­sion­al patients con­struct explan­a­tions that are not plaus­ible and adopt them even when bet­ter explan­a­tions are avail­able. This is a strik­ing depar­ture from the more nor­mal think­ing of non-delusional sub­jects who have sim­il­ar exper­i­ences.’ (Davies, et. al. 2001, pg. 147; but see also Bayne and Pacherie 2004, Campbell 2001, Pacherie, et. al. 2006)

Indeed, since delu­sions strike most of us as highly implaus­ible, it is hard to see how they could explain any exper­i­ence, no mat­ter how unusu­al. So if we want to under­stand delu­sion­al cog­ni­tion along Maher’s lines, we will need to cla­ri­fy the cog­nit­ive trans­ition from anom­al­ous exper­i­ence to delu­sion­al belief in a way that illu­min­ates how it could be a genu­inely explan­at­ory trans­ition.

In what fol­lows, I would like to dis­tin­guish three dis­tinct ways in which a delu­sion­al belief might be thought to be explan­at­or­ily inad­equate, each of which I think poses a dis­tinct chal­lenge for the explan­a­tion­ist approach.

The first concerns the phenomenal character of a delusional subject’s anomalous experience. Maher claims that the strange experiences we find in cases of delusion ‘demand’ explanations. But why is that? If the experiences that give rise to delusions do not themselves represent highly unusual states of affairs (as Maher seems to think), what is it about them that calls for or ‘demands’ an explanation? And does the particular phenomenal character of a ‘strange’ experience ‘demand’ a specific form of explanation, or are all ‘strange’ experiences relatively equal when it comes to their demands? The challenge for the explanationist is to clarify the phenomenal character of a delusional subject’s anomalous experience in a manner that makes clear how it could be the explanandum of a delusion. Let’s call this the Phenomenal Challenge.

I actually think some very influential neuropsychological accounts have difficulty with the Phenomenal Challenge. To briefly take one example, Ellis and Young (1990) proposed that the Capgras delusion arises from a lack of responsiveness to familiar faces in the autonomic nervous system. In non-delusional subjects, an experience of a familiar face is associated with an affective response in the autonomic nervous system, but Capgras subjects fail to have this response. Ellis and Young’s theory predicted that there would be no significant difference in the skin conductance responses of Capgras subjects when they are shown familiar versus unfamiliar faces, a prediction subsequently confirmed by a number of studies. Thus there seems to be good evidence that a typical Capgras subject’s autonomic nervous system is not sensitive to familiar faces.

This seems promising, but I don’t think it answers the Phenomenal Challenge, because it doesn’t tell us anything about what a Capgras subject’s experience of a face is like. As John Campbell notes, ‘the mere lack of affect does not itself constitute the perception’s having a particular content.’ (2001, pg. 96) Moreover, individuals are not normally conscious of their autonomic nervous system (see Coltheart 2005). So it isn’t clear how diminished sensitivity within that system constitutes an experience that ‘demands’ an explanation involving imposters. To really understand why an anomalous experience of a familiar face calls for a delusional explanation, we need to get a better sense of what that experience is like.

A second worry raised in the pre­vi­ous pas­sage is that delu­sion­al sub­jects adopt delu­sion­al explan­a­tions ‘even when bet­ter explan­a­tions are avail­able’. Why does this hap­pen? Why does a delu­sion­al sub­ject select an inferi­or hypo­thes­is from the set of those avail­able to her? Let’s call this the Abductive Challenge.

To illustrate, let’s stick with Capgras. The explanationist proposal is that a subject adopts the belief that her friend has been replaced by an imposter in order to explain some odd experience. But even if we suppose the imposter hypothesis is empirically adequate, it is highly unlikely to be the best explanation available. As Davies and Egan remark, ‘one might ask whether there is an alternative to the imposter hypothesis that provides a better explanation of the patient’s anomalous experience. There is, of course, an obvious candidate for such a proposition.’ (2013, pg. 719) In fact, there seem to be a number of better available hypotheses; for example, that one has suffered a brain injury, or any hypothesis appealing to more familiar changes in facial appearance, such as hairstyle or illness.

Put simply, the Abductive Challenge is that even if we assume the cog­nit­ive trans­ition from unusu­al exper­i­ence to delu­sion involves some­thing like abduct­ive reas­on­ing or infer­ence to the best explan­a­tion, delu­sion­al sub­jects select poor explan­a­tions instead of bet­ter avail­able altern­at­ives. The explan­a­tion­ist needs to tell us why this hap­pens (for some attempts see Coltheart et. al. 2010, Davies and Egan 2013, McKay 2012, Parrott and Koralus, 2015).

The final chal­lenge for explan­a­tion­ism is, in my view, the most prob­lem­at­ic. In the above pas­sage, Davies and col­leagues remark that delu­sions are extremely implaus­ible. Along these lines, we might nat­ur­ally won­der why a sub­ject would even con­sider one to be a can­did­ate explan­a­tion of her unusu­al exper­i­ence. Why would she not instead imme­di­ately rule out a delu­sion­al hypo­thes­is on the grounds that it is far too implaus­ible to be giv­en ser­i­ous con­sid­er­a­tion? This con­cern is echoed by Fine and col­leagues:

‘They explain the anom­al­ous thought in a way that is so far-fetched as to strain the notion of explan­a­tion. The explan­a­tions pro­duced by patients with delu­sions to account for their anom­al­ous thoughts are not just incor­rect; they are non­starters. Appealing to the notion of explan­a­tion, there­fore, does not cla­ri­fy how the delu­sion­al belief comes about in the first place because the explan­a­tions of the delu­sion­al patients are noth­ing like explan­a­tions as we under­stand them.’ (Fine, et. al. 2005, pg. 160)

The task of explaining some target phenomenon demands cognitive resources, and the idea that delusions are explanatory ‘nonstarters’ implies that they would normally be immediately rejected. We know that, when engaged in an explanatory task, a person considers only a restricted set of hypotheses, and it seems quite natural to exclude ones that are inconsistent with one’s background knowledge. Since delusions seem to conflict with our background knowledge, this is perhaps why we find it difficult to understand how someone could think a delusion is even potentially explanatory (for further discussion, see Parrott 2016).

So why do subjects consider delusional explanations as candidate hypotheses? This is the final challenge for the explanationist. Let’s call it the Implausibility Challenge. Notice that whereas the Abductive Challenge asks why a subject eventually adopts one hypothesis instead of another from among a fixed set of available alternatives, the Implausibility Challenge is more general. It asks where these hypotheses, the ones subject to further investigation, come from in the first place.

Can these three chal­lenges be over­come? I am optim­ist­ic and have tried to address them for the case of thought inser­tion (see Parrott forth­com­ing). However, I also think much more work needs to be done.

First, as I men­tioned above, it is not clear that we have a good under­stand­ing of what it is like for an indi­vidu­al to have the sorts of exper­i­ences we think are implic­ated in many cases of delu­sion. Without such under­stand­ing, I think it is hard to see why some exper­i­ences make demands on a person’s cog­nit­ive explan­at­ory resources. I also sus­pect that under­stand­ing what vari­ous anom­al­ous exper­i­ences are like might shed more light on why delu­sion­al indi­vidu­als tend to adopt very sim­il­ar explan­a­tions.

Second, I think that addressing the Implausibility Challenge requires us to obtain a far better understanding of how hypotheses are generated than we currently have. In both delusional and non-delusional cognition, an explanatory task presents a computational problem. Which candidate hypotheses should be selected for further empirical testing? Although I have suggested that epistemically impossible hypotheses are normally ruled out, that doesn’t tell us how candidates are ruled in. Plausibly, there is some selection function (or functions) that chooses candidate explanations of a target phenomenon, but, as Thomas and colleagues note, we have very little sense of how this might work:

‘Hypothesis generation is a fundamental component of human judgment. However, despite hypothesis generation’s importance in understanding judgment, little empirical and even less theoretical work has been devoted to understanding the processes underlying hypothesis generation.’ (Thomas, et. al. 2008, pg. 174)

The Implausibility Challenge strikes me as especially puzzling because I think we can easily see that certain strategies for hypothesis generation would be bad. For instance, it wouldn’t generally be good to consider hypotheses only if they have a prior probability above a certain threshold, because a hypothesis with a low prior probability might best explain a new piece of evidence.
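The point can be made concrete with a toy Bayesian calculation (the hypothesis names and the numbers below are my own illustrative choices, not anything from the delusion literature): a hypothesis with a low prior can still end up with the highest posterior once the evidence strongly favours it, so pruning candidates by prior probability alone risks discarding the best explanation before it is ever considered.

```python
# Toy sketch: a low-prior hypothesis can still win on posterior probability.
# All priors and likelihoods are invented for illustration.

priors = {"H_common": 0.90, "H_rare": 0.10}       # P(H): H_rare starts out implausible
likelihoods = {"H_common": 0.01, "H_rare": 0.80}  # P(E | H): the evidence E strongly favours H_rare

# Bayes' rule: P(H | E) is proportional to P(E | H) * P(H)
unnormalised = {h: likelihoods[h] * priors[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: p / total for h, p in unnormalised.items()}

print(posteriors)
# H_rare's posterior (~0.90) now dwarfs H_common's (~0.10), despite H_rare's low prior.
# A generation strategy that dropped every hypothesis with prior < 0.5
# would never have put the best explanation on the table.
```

A threshold on the prior therefore filters hypotheses before the evidence can speak, which is exactly the failure mode the paragraph above describes.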

Delusional cog­ni­tion raises quite a few deep and inter­est­ing ques­tions, many of which bear on how we think about belief form­a­tion and reas­on­ing. And I have only scratched the sur­face when it comes to the kinds of puzzles that arise when we start think­ing about the ori­gins of delu­sion. But I hope that dis­tin­guish­ing these explan­at­ory chal­lenges will help us in think­ing about the ques­tions which need to be pur­sued if we are to assess the plaus­ib­il­ity of the explan­a­tion­ist strategy.



Bayne, T. and E. Pacherie. 2004. “Bottom-up or Top-down?: Campbell’s Rationalist Account of Monothematic Delusions.” Philosophy, Psychiatry, and Psychology, 11: 1–11.

Blakemore, S., D. Wolpert, and C. Frith. 2002. “Abnormalities in the Awareness of Action.” Trends in Cognitive Sciences, 6: 237–242.

Campbell, J. 2001. “Rationality, Meaning and the Analysis of Delusion.” Philosophy, Psychiatry and Psychology, 8: 89–100.

Coltheart, M., P. Menzies, and J. Sutton. 2010. “Abductive Inference and Delusional Belief.” Cognitive Neuropsychiatry, 15: 261–87.

Coltheart, M. 2005. “Conscious Experience and Delusional Belief.” Philosophy, Psychiatry and Psychology, 12: 153–57.

Davies, M., M. Coltheart, R. Langdon, and N. Breen. 2001. “Monothematic Delusions: Towards a Two-Factor Account.” Philosophy, Psychiatry and Psychology, 8: 133–158.

Davies, M. and Egan, A. 2013. “Delusion: Cognitive Approaches, Bayesian Inference and Compartmentalization.” In K.W.M. Fulford, M. Davies, R.G.T. Gipps, G. Graham, J. Sadler, G. Stanghellini and T. Thornton (eds.), The Oxford Handbook of Philosophy of Psychiatry. Oxford: Oxford University Press.

Ellis, H. and A. Young. 1990. “Accounting for Delusional Misidentifications.” British Journal of Psychiatry, 157: 239–48.

Fine, C., J. Craigie, and I. Gold. 2005. “The Explanation Approach to Delusion.” Philosophy, Psychiatry, and Psychology, 12(2): 159–163.

Maher, B. 1974. “Delusional Thinking and Perceptual Disorder.” Journal of Individual Psychology, 30: 98–113.

Maher, B. 1988. “Anomalous Experience and Delusional Thinking: The Logic of Explanations.” In T. Oltmanns and B. Maher (eds.), Delusional Beliefs. Chichester: John Wiley and Sons, pp. 15–33.

Maher, B. 1999. “Anomalous Experience in Everyday Life: Its Significance for Psychopathology.” The Monist, 82: 547–570.

McKay, R. 2012. “Delusional Inference.” Mind and Language, 27: 330–55.

Pacherie, E., M. Green, and T. Bayne. 2006. “Phenomenology and Delusions: Who Put the ‘Alien’ in Alien Control?” Consciousness and Cognition, 15: 566–577.

Parrott, M. 2016. “Bayesian Models, Delusional Beliefs, and Epistemic Possibilities.” The British Journal for the Philosophy of Science, 67: 271–296.

Parrott, M. forth­com­ing. “Subjective Misidentification and Thought Insertion.” Mind and Language.

Parrott, M. and P. Koralus. 2015. “The Erotetic Theory of Delusional Thinking.” Cognitive Neuropsychiatry, 20 (5): 398–415.

Saks, E. 2007. The Center Cannot Hold. New York: Hyperion.

Sollberger, M. 2014. “Making Sense of an Endorsement Model of Thought Insertion.” Mind and Language, 29: 590–612.

Stone, T. and A. Young. 1997. “Delusions and Brain Injury: the Philosophy and Psychology of Belief.” Mind and Language, 12: 327–364.

Thomas, R., M. Dougherty, A. Sprenger, and J. Harbison. 2008. “Diagnostic Hypothesis Generation and Human Judgment.” Psychological Review, 115(1): 155–185.