iCog research grants selection process

iCog had some legacy funds to disburse in the 2019/20 academic year. In keeping with iCog's remit to support and encourage interdisciplinary research, the iCog steering committee decided that the best use for these funds would be to award grants for the participant costs of empirical research by junior researchers doing interdisciplinary work in cognitive science. Projects which were particularly likely to fall through the gaps of the funding remit of other funding bodies due to their interdisciplinary nature would be given priority.

We received 33 applications: 9 from the UK, 8 from the US, 4 from Germany, 3 from Switzerland, 2 from Finland, and 1 each from Australia, Austria, Belgium, France, Italy, the Netherlands, and Turkey.

The selection process

Usually all that an applicant for research funding is told is that his or her application was or (more often – in this case about ten times more often) was not successful, while the process leading to that result remains opaque. With this post, we wish to bring some transparency to the selection process we used. By explaining in some detail here how we proceeded and why, we hope to show that the process was fair and systematic.

No selection process is perfect. But we also hope that our way of proceeding with this selection will find imitators across the academy, in particular in its strict commitment to blind reviewing, as detailed below. Preparing blinded applications can be a little labour-intensive. But, in the interest of fairness to all applicants, blinding applications as needed was a labour worth doing.

Of the total of 33 applications received, one was ineligible, as its purpose fell outside the remit of the grants, i.e. participant costs in empirical studies. The 32 eligible applications were blind reviewed, initially by two academic reviewers (drawn from a grants review subcommittee of five) with cross-disciplinary expertise but from two different ‘home’ disciplines, who assessed and scored their quality, interdisciplinarity, and feasibility.

From this process four research proposals emerged with very high aggregate scores from both reviewers. These were automatically shortlisted. A further five applications had sufficiently high scores from at least one reviewer to be considered further; these proposals were then each reviewed by a third reviewer. Two of these five proposals thereby reached an aggregate score comparable to that of the four proposals already shortlisted, leading to a final shortlist of six. (The length of the shortlist was dictated by the need to keep it manageable for the grants subcommittee to review all shortlisted proposals.)

Common reasons why proposals were rejected at this stage were that other proposals were qualitatively superior, that they gave insufficient detail on the method or feasibility of the proposed studies, or – in some cases – that they lacked interdisciplinarity.

The six shortlisted research proposals, along with their applicants’ CVs and the two or three initial reviewers’ reports and scores, were then submitted to the full grants subcommittee for final deliberation. This was done by a combination of discussion and, where this proved inconclusive, votes. Taking into account available funds and the amount of funding requested by applicants, the subcommittee’s decision was to award full funding to two research proposals and partial funding to a third. The three shortlisted applicants who were ultimately unsuccessful were offered feedback on their applications; all of them took up that offer.

Blind review

The entire review process up to and including the final decisions on funding was ‘blind’. In the stages up to shortlisting, reviewers were not shown the names or affiliations of the applicants. For the shortlisted applications, affiliations were then visible insofar as they appeared on applicants’ CVs, but applicants’ names were still redacted from these. Why and how did we do that?

The rationale for blind reviewing is simple: it is to prevent reviewers’ implicit biases regarding an applicant’s gender, presumed background, or location from influencing their assessments. Personal names often reveal a person’s gender and ethnic origin, so these were withheld from reviewers throughout the process. The decision also to withhold applicants’ institutional affiliations during the first stages of the review process, up to shortlisting, was likewise to prevent reviewers from being influenced by favourable or unfavourable biases they might have about an applicant’s institution or department or its location. (For the shortlist, it would not have been practicable to continue withholding affiliations, since applicants’ CVs were also being considered at that stage.)

As to the method of ensuring blind reviewing, this too was relatively straightforward. The application form was designed in such a way that applicants’ research proposals could easily be pulled out of the data submitted without the accompanying personal or institutional details (and just as easily matched again at the end of the process, by means of a unique ID allocated to each application).

We had asked for the research proposals submitted to be ‘suitable for blind reviewing’. Many applicants were commendably assiduous in complying with this request, though some were not – we ought perhaps to have been more explicit by e.g. putting a note on the application form requesting ‘No names or affiliations in the text of the proposal, please’. We had likewise asked for uploaded CVs to contain the applicant’s name only once, to speed up the blinding process.

To make sure that the material passed to reviewers was indeed ‘suitable for blind reviewing’, it was necessary to redact any mention of applicants’ names or gendered personal pronouns and institutional affiliations from research proposals and, in the CVs of shortlisted applicants, to redact the one occurrence of their name from the document. We were able to do this because the iCog committee had designated an administrator for this purpose, as well as for organising the review process generally, without himself being a member of the review subcommittee.

Reading Minds & Reading Maps

Do nonhuman animals know they’re not alone? Of course they must know there are lots of things in the world around them – rocks, water, trees, other creatures and what have you. But do they know that they inhabit a world populated by minded creatures – that the animals around them see and know things, that they have beliefs, intentions and desires? Can they attribute mental states to other animals, and use those attributions to predict or explain their behaviour? If so, then they’re what philosophers and psychologists call ‘mindreaders’.

Whether animals are mindreaders has been a contested question in comparative cognition for around forty years (beginning with Premack & Woodruff, 1978), and it remains controversial. My interest in this post is not so much whether animals are mindreaders but rather, if animals are mindreaders, what kind of mindreaders might they be? The motivating thought is this: even if animals do represent and reason about the mental states of others, their understanding of those mental states might be somewhat different from ours.

The idea that animals might have a limited or ‘minimal’ understanding of mental states has been explored in a number of places (see, for instance, Bermúdez, 2011; Butterfill & Apperly, 2013; Call & Tomasello, 2008). These proposals differ, but they have in common the idea that animals don’t construe mental states as representations – that is, as states which represent the world, and which can do so accurately or inaccurately. If these proposals are right, animals might be able to represent others as having factive mental states like seeing or knowing, but would not be able to make sense of another agent having a false belief, or any state that misrepresents the world.

Recent work on mindreading in chimpanzees puts pressure on this sort of proposal. Christopher Krupenye and colleagues (Krupenye, Kano, Hirata, Call, & Tomasello, 2016) found that chimpanzees were able to predict the behaviour of a human with a false belief. It’s not uncontroversial (see Andrews, 2018 for discussion), but for the sake of argument let’s say that this is indeed evidence that chimps understand false beliefs, as states that misrepresent the world. Does that mean that chimps’ understanding of mental states is essentially the same as our own?

I’ve argued that it doesn’t. That’s because there are important ways in which mindreaders might differ from one another, even if they represent mental states as representational. To see that, let’s think a bit more about representations. A representation has a content – how it represents the world as being – which can be accurate or inaccurate. The sentence ‘Santa is in the chimney’ is a representation whose content is that Santa is in the chimney. It’s accurate if Santa is in the chimney, and inaccurate if he’s somewhere else. But a representation also has a format – it exploits a particular representational system in order to represent what it represents. ‘Santa is in the chimney’ is a representation with a sentential, linguistic format. But we could represent the same content in a number of other formats. For instance, we might represent it pictorially by drawing Santa in the chimney, as in Figure 1. Or we might draw up a map representing the same thing, as in Figure 2.

Given that representations may differ with respect to the representational format they exploit, mindreaders might differ with respect to the representational format they take mental states to have. Some might treat beliefs as something like ‘sentences in the head’. Others might treat them as more picture-like. Still others might be what I’ve called ‘mindmappers’ (Boyle, 2019) – they might take literally the idea that a belief is a ‘map of the neighbouring space by which we steer’ (Ramsey, 1931).

This matters, because the representational format one takes mental states to have has a significant impact on one’s mindreading abilities – because different representational formats themselves differ from one another in systematic ways.

Take maps. As I’m using the term, a map makes use of a lexicon of icons, each of which stands for a particular (type of) thing, which it combines according to the principle of spatial isomorphism. Simply put, by placing two icons in a particular spatial relationship on a map, one thereby represents that the two things denoted by the icons stand in an isomorphic spatial relationship in reality. That’s all there is to it.

If you want to represent the spatial layout of a number of objects in a particular region of space, there are lots of advantages to using a map: it’s a very natural and user-friendly way to represent that kind of information. A single map can contain an awful lot of information about the spatial layout of a region. To convey the content of a map in language would usually require a large and unwieldy set of sentences (or a very lengthy sentence). And updating information in a map without introducing inconsistency is easy to do. Updating the represented location of an object by moving an icon thereby also updates the represented relationships between that object and everything else on the map, keeping the whole consistent. If one represented all of this spatial information sententially, it would be easy to introduce inconsistencies. (See Camp, 2007 for a fuller discussion of maps’ representational features.)
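The consistency advantage of the map format can be illustrated with a small sketch (a hypothetical toy in Python, not anything from the literature): if locations are stored as coordinates, every pairwise spatial relation is derived rather than stored, so moving one icon can never leave the map contradicting itself, whereas a sentential store of pairwise relations must be revised entry by entry.

```python
# A toy 'map': each icon is bound to a coordinate. Spatial relations
# are *derived* from coordinates, never stored separately.
toy_map = {"santa": (0.0, 5.0), "sleigh": (3.0, 0.0), "tree": (3.0, 4.0)}

def north_of(map_, a, b):
    """True if icon a is represented as being north of icon b."""
    return map_[a][1] > map_[b][1]

assert north_of(toy_map, "santa", "sleigh")

# Moving one icon automatically updates every relation involving that
# object -- the map stays globally consistent by construction.
toy_map["santa"] = (0.0, -1.0)
assert not north_of(toy_map, "santa", "sleigh")

# A sentential store, by contrast, lists relations one by one; after
# Santa moves, each affected sentence must be revised by hand, and a
# missed revision leaves an inconsistency in the store.
sentences = {("santa", "north-of", "sleigh"), ("santa", "north-of", "tree")}
```

The point of the contrast: in the dictionary-of-coordinates format there is simply no way to represent Santa as both north and south of the sleigh, while the set-of-sentences format permits exactly that error.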

For all that, maps are an extremely limiting representational format: all they can really represent is the spatial layout of objects in a region. If you want to represent that Christmas is coming, that the goose is getting fat, or that Santa is really your dad, a map would be a poor format to choose. These are not the kinds of contents that a map can express. For that kind of thing, you need a more expressively powerful format – like language.

The point is that the distinctive strengths and weaknesses of a mindreader’s representational format will show up in their mindreading abilities and behaviour. Humans can ascribe an apparently unlimited range of beliefs – beliefs about Santa’s true identity, about death and resurrection, about possible presents with no known location. I think this is good evidence that we take mental states to be linguistic, or at least to have a format which mirrors language’s expressive power.

But animals might not be like us in that respect: they might think of beliefs as maps in the head. If they do, they would be able to capture what others think about where things are to be found, but they wouldn’t be able to make sense of beliefs about object identities or about non-spatial properties – and nor could they make sense of someone having a belief about an object whilst having no belief about its location. To my knowledge, whether animals can represent these non-spatial beliefs has not been investigated. So, it remains an open empirical question whether they treat beliefs as map-like, linguistic, or as having some other format. But it’s a question worth investigating. If animals construed mental states as having a non-linguistic format, there would remain a significant sense in which animals’ mindreading abilities differed qualitatively from ours.



Andrews, K. (2018). Do chimpanzees reason about belief? In K. Andrews & J. Beck (Eds.), The Routledge Handbook of Philosophy of Animal Minds. Abingdon: Routledge.

Bermúdez, J. L. (2011). The force-field puzzle and mindreading in non-human primates. Review of Philosophy and Psychology, 2(3), 397–410. https://doi.org/10.1007/s13164-011-0077-9

Boyle, A. (2019). Mapping the minds of others. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-019-00434-z

Butterfill, S. A., & Apperly, I. A. (2013). How to construct a minimal theory of mind. Mind & Language, 28(5), 606–637.

Call, J., & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192.

Camp, E. (2007). Thinking with maps. Philosophical Perspectives, 21, 145–182.

Krupenye, C., Kano, F., Hirata, S., Call, J., & Tomasello, M. (2016). Great apes anticipate that other individuals will act according to false beliefs. Science, 354(6308), 110–114. https://doi.org/10.1126/science.aaf8110

Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526.

Ramsey, F. P. (1931). The Foundations of Mathematics. London: Kegan Paul.

Foraging in the Global Workspace: The Central Executive Reconsidered

David L Barack, Postdoctoral Research Fellow, Salzman Lab, Columbia University

What is the central executive? In cognitive psychology, executive functioning concerns the computational processes that control cognition, including the direction of attention, action selection, decision making, task switching, and other such functions. In cognitive science, the central processor is sometimes modeled after the CPU of a von Neumann architecture: the module of a computational system that makes calls to memory, executes transformations in line with algorithms over the retrieved data, and then writes the results of these transformations back to memory. On my account of the mind, the central processor possesses the psychological functions that are part of executive functioning. I will refer to this combined construct of a central processor that performs executive functions as the central executive.
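The von Neumann picture invoked here – read from memory, transform, write back – can be sketched as a minimal loop. This is an illustrative toy (the names `memory`, `central_processor`, and `program` are mine, not the author's), not a claim about how the brain implements anything:

```python
# Minimal sketch of the von Neumann-style cycle the text describes:
# fetch operands from memory, apply an algorithmic transformation,
# write the result back to memory.
memory = {"x": 2, "y": 5, "sum": None}

def central_processor(mem, program):
    """Run each instruction: (keys to read, transform, key to write)."""
    for read_keys, transform, write_key in program:
        args = [mem[k] for k in read_keys]   # call to memory
        mem[write_key] = transform(*args)    # execute the transformation
    return mem                               # results written back

program = [(("x", "y"), lambda a, b: a + b, "sum")]
central_processor(memory, program)
assert memory["sum"] == 7
```

On the author's account, the psychological functions of the central executive would ride on top of a cycle of roughly this shape, with the "program" supplied by executive processes rather than fixed in advance.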

The central executive has a range of properties, but for this post, I will focus on domain generality, informational accessibility, and inferential richness. By domain general, I mean that the central executive contains information from different modalities (such as vision, audition, etc.). By informationally accessible, I mean both that the central executive’s algorithms have access to information outside of the central executive and that information contained in these algorithms is accessible by other processes, whether also part of the central executive or part of input- or output-specific systems. By inferentially rich, I mean that the information in the central executive is potentially combined with any other piece of information to result in new beliefs. The functions of the central executive may or may not be conscious.

Three concepts at the heart of my model of the central executive will provide the resources to begin to explain these three properties: internal search, a global workspace, and foraging.

The first concept is internal search. Newell famously said that search is at the heart of cognition (Newell 1994), a position with which much modern cognitive neuroscience agrees (Behrens, Muller et al. 2018; Bellmund, Gärdenfors et al. 2018). Search is the process of traveling through some space (physical or abstract, such as concept space or the internet) in order to locate a goal, and internal search refers to a search that occurs within the organism. Executive functions, I contend, are types of search.

The second concept in my analysis is the global workspace. Search requires some space through which to occur. In the case of cognition, search occurs in the global workspace: a computational space in which different data structures are located and compete for computational resources and operations. The global workspace is a notion that originated in cognitive theories of consciousness (Baars 1993) but has recently been applied to cognition (Schneider 2011). The global workspace can be conceptualized in different ways. It could be something like a hard drive that stores data but to which many different other parts of the system (such as the brain) simultaneously have access. Or, it could be something like an arena where different data structures literally roam around and interact with computational operations (like a literal implementation of a production architecture; see Newell 1994; Simon 1999). The central executive is partly constituted by internal search through a global workspace.
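The production-architecture reading of the workspace can be pictured with a small sketch (hypothetical; the data and production names are mine): data structures sit in a shared store, and each production pairs a triggering condition with a transformation that fires only when some datum in the store matches it.

```python
# Toy global workspace: a shared store of data structures over which
# productions compete for the chance to fire.
workspace = [{"kind": "percept", "modality": "vision", "content": "red light"}]

# A production = (triggering condition, transformation). This one turns
# a matching visual percept into a goal representation.
productions = [
    (lambda d: d["kind"] == "percept" and "red" in d["content"],
     lambda d: {"kind": "goal", "content": "stop"}),
]

def cycle(space, prods):
    """One serial cycle: the first production whose trigger matches a
    datum fires, writing its result back into the workspace."""
    for datum in space:
        for trigger, transform in prods:
            if trigger(datum):
                space.append(transform(datum))
                return space
    return space

cycle(workspace, productions)
assert {"kind": "goal", "content": "stop"} in workspace
```

Note that the workspace itself places no constraint on what kinds of content the data structures carry, while each production is triggered only by content of a particular type – the distinction drawn below between two kinds of domain generality.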

The third and final concept in my analysis is foraging. Foraging is a special type of directed search for resources under ignorance. Specifically, foraging is the goal-directed search for resources in non-exclusive, iterated, accept-or-reject decision contexts (Barack and Platt 2017; Barack ms). I contend that central executive processes involve foraging (and hence this third concept is a special case of the first concept, internal search). While central executive processes may not literally make decisions, the analogy is apt. The internal search through the global workspace is directed: a particular goal is sought, which in the case of the central executive is going to be defined by some sort of loss function that the system is attempting to minimize. This search is non-exclusive, as operations on data that are foregone can be executed at a later time. The search is iterated, as the same operation can be performed repeatedly. Finally, the operations of the central executive are accept-or-reject in the sense that computational operations performed on data structures either occur or they do not, in a one-at-a-time, serial fashion.
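The three features just listed – non-exclusive, iterated, accept-or-reject – can be made concrete in a short sketch (purely illustrative; the threshold rule stands in for whatever loss function the system minimizes, and all names are mine): candidate operations are encountered serially, each is accepted or rejected, and rejected operations remain available for later passes.

```python
# Foraging-style serial search: meet candidate operations one at a time
# and accept or reject each against a threshold (a stand-in for the
# loss function mentioned in the text).
def forage(candidates, threshold):
    executed, deferred = [], []
    for name, value in candidates:          # iterated, one-at-a-time
        if value >= threshold:              # accept the operation...
            executed.append(name)
        else:                               # ...or reject it; rejected
            deferred.append((name, value))  # options stay available
    return executed, deferred               # (non-exclusivity)

ops = [("update-belief", 0.9), ("shift-attention", 0.4), ("select-action", 0.7)]
executed, deferred = forage(ops, threshold=0.6)
assert executed == ["update-belief", "select-action"]
assert deferred == [("shift-attention", 0.4)]
```

Because `deferred` is returned rather than discarded, a later pass with a lower threshold could execute the foregone operation – the sense in which the search context is non-exclusive.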

The analysis of the central executive as foraging-type searches through an internal, global workspace may shed light on the three key properties mentioned earlier: domain generality, informational accessibility, and inferential richness.

First, domain generality is provided for by the global workspace and unrestricted search. This workspace is neutral with regard to the subject matter of the data structures it contains, and so is domain general. The search processes that operate in that workspace are also unrestricted in their subject matter – those processes can operate over any data that match the input constraints for the production system. (While they may be unrestricted in their subject matter, they are restricted by the constraints on the data imposed by the production system’s triggering conditions.) The unrestricted subject matter of the global workspace and the unrestricted nature of the production processes both contribute to the domain general nature of the central executive. This analysis of domain generality suggests that two types of such generality should be distinguished: there are constraints on what type of content (perceptual, motoric, general, etc.) can be contained in stored data structures, and there are constraints on the type of content that can trigger a transformation. A domain general workspace can contain domain specific productions, for example.

Second, informational accessibility reflects the global workspace’s structure. In order to be a global workspace, different modality- or domain-specific modules must have access to the workspace. But this access means that there must be connections to the workspace. Other aspects of informational access remain to be explained. In particular, while the global workspace may be widely interconnected, that does not entail that modules have access to information in specific algorithms in the workspace. The presence of a workspace merely ensures that some of the needed architectural features for such access are present.

Third, inferential richness results from this internal foraging through the workspace. Foraging computations are optimal in that they minimize or maximize some function under uncertainty. Such optimality implies that the executed computation reflects the best data at hand, regardless of its content. Any such data can be utilized to determine the operation that is actually executed at a given time. This explanation of inferential richness is not quite the sort described by Quine (1960) or Fodor (1983), who envision inferential richness as the potential for any piece of information to influence any other. But with enough simple foraging-like computations and enough time, this potential widespread influence can be approximated.

These comments have been speculative, but I hope I have provided an outline of a sketch for a new model of the central executive. Obviously much more conceptual and theoretical work needs to be done, and many objections – perhaps most famously those of Fodor, who despaired of a scientific account of such central processes – remain to be addressed. I intend to flesh out these ideas in a series of essays. Regardless, I think that there is much more promise in a scientific explanation of these crucial, central psychological processes than has been previously appreciated.



Baars, B. J. (1993). A cognitive theory of consciousness. Cambridge University Press.

Barack, D. L. and M. L. Platt (2017). Engaging and exploring: cortical circuits for adaptive foraging decisions. In Impulsivity, Springer: 163–199.

Barack, D. L. (ms). “Information Harvesting: Reasoning as Foraging in the Space of Propositions.”

Behrens, T. E., T. H. Muller, J. C. Whittington, S. Mark, A. B. Baram, K. L. Stachenfeld and Z. Kurth-Nelson (2018). “What is a cognitive map? Organizing knowledge for flexible behavior.” Neuron 100(2): 490–509.

Bellmund, J. L., P. Gärdenfors, E. I. Moser and C. F. Doeller (2018). “Navigating cognition: Spatial codes for human thinking.” Science 362(6415): eaat6766.

Fodor, J. A. (1983). The modularity of mind: An essay on faculty psychology. MIT Press.

Newell, A. (1994). Unified Theories of Cognition. Harvard University Press.

Quine, W. V. O. (1960). Word and object. MIT Press.

Schneider, S. (2011). The language of thought. The MIT Press.

Simon, H. (1999). Production systems. In The MIT Encyclopedia of the Cognitive Sciences: 676–677.

Seeking social connection: How children recover from social exclusion

Amanda Mae Woodward, PhD candidate, Department of Psychology, University of Maryland

Think of a time that you met up with a friend at a coffee shop. The two of you sat at a table, drank coffee, and filled each other in on your lives. Over the course of the discussion, you may have experienced positive emotions like happiness, and you left the café with a sense of social connection. Positive social interactions, like the one just described, correspond with our overall well-being and help fulfill a fundamental human need: the need to belong with others (Baumeister & Leary, 1995; Wesselman & Williams, 2013). However, as we all know, not all social interactions are positive. Imagine another scenario. You call one of your friends to make dinner plans. Your friend explains that he already has plans for dinner and will not be able to join you. You ask about his plans and learn that he is going to dinner with all of your mutual friends and no one has extended an invitation to you. How would you feel? You may, expectedly, experience negative emotions and feel lonely.

This interaction, and others like it, are instances of social exclusion. Being excluded negatively impacts social, cognitive, and physiological processing (Baumeister, Twenge, & Nuss, 2002; Blackhart, Eckel, & Tice, 2007; DeWall, Deckman, Pond, & Bonser, 2011). Exclusion leads to experiences of negative affect, decreases in mood, lowered self-esteem, and feelings of isolation (Leary & Cottrell, 2013; Maner, DeWall, Baumeister, & Schaller, 2007). If social exclusion occurs chronically, the repercussions of exclusion compound and become more severe over time (Richman, 2013; Williams, 2007). Even young children are subject to the negative effects of social exclusion. Socially excluded middle school children report more negative emotions and have decreased feelings of belonging when compared to their included counterparts (Abrams, Weick, Thomas, Colbe, & Franklin, 2011; Wölfer & Scheithauer, 2013). Four- to six-year-old children exclude each other frequently, and being excluded has a negative influence on their future social behaviors (Fanger, Frankel, & Hazen, 2012; Stenseng, Belsky, Skalicka, & Wichstrøm, 2014). Due to social exclusion’s documented harmful consequences across the lifespan, it is important for children’s overall wellbeing to find a way to mitigate its effects. This post will explore some of the main strategies children use to mitigate such effects.

How do children ameliorate the consequences of social exclusion? One effective strategy involves the excluded child reestablishing a social connection (Maner et al., 2007). Connecting with others satisfies children’s need to belong and reduces negative affect. To use this strategy, children must find potential social partners with whom they are likely to have positive interactions. If they think future interactions with the person who excluded them are likely, children may seek to reconnect with the excluder through the use of ingratiating behavior (e.g., mimicry or conforming to another’s opinions). In other cases, such as when reconnecting with the excluder is unlikely, children may look for new approachable social partners or contexts with which to form positive relationships (Molden & Maner, 2013).

Young children’s responses to exclusion support the use of both strategies. Five-year-olds who are excluded by group members imitate other in-group members with more fidelity than children who were not excluded (Watson-Jones, Whitehouse, & Legare, 2015). Imitation is a type of flattery, so by mimicking the behavior of potential social partners, children signal that they will be a good person with whom to interact (Over & Carpenter, 2009). Excluded children also demonstrate their openness to new social interaction in other ways. For instance, 5-year-olds who are excluded have been shown to engage in more mentalizing and to attend to the feelings of others more often than included children (White et al., 2016). Even witnessing exclusion leads children to strategically seek social partners. After observing a peer experience exclusion, children have been shown to display behaviors that facilitate social connection, including imitating others more frequently, drawing more affiliative pictures, and sitting physically closer to others (Marinovic, Wahl, & Träuble, 2017; Over & Carpenter, 2009; Song, Over, & Carpenter, 2015).

Less work has examined other strategies children may use to reduce the harmful effects of social exclusion, particularly when they have, or believe that they have, restricted means by which to reestablish a social connection. When the perceived likelihood of social reconnection is low, excluded people may react aggressively in order to establish feelings of control over their own lives (Wesselman & Williams, 2013). For instance, adults respond to social exclusion in antisocial ways when they are unlikely to reconnect with others (Maner & Molden, 2013). Indeed, adults have been shown to behave more aggressively and to engage in less prosocial behavior after being excluded (DeWall & Twenge, 2013; Twenge, Baumeister, Tice, & Stucke, 2001). Some recent research has explored children’s aggressive behavior after exclusion and has found similar evidence for the use of an aggressive strategy: children who were already high in aggression demonstrated increases in aggression following exclusion (Fanger, Frankel, & Hazen, 2012; Ostrov, 2010).

A final strategy to avoid or alleviate the harmful effects of social exclusion involves avoiding social interactions with people who are likely to exclude you. It is reasonable to infer that people who have excluded you in the past are likely to exclude you in the future, so you could circumvent the experience of social exclusion by refraining from interacting with them in the first place. Using this strategy requires excluded children to track social excluders and remember previous interactions. Our lab, the Lab for Early Social Cognition at the University of Maryland, College Park, is currently working on a series of experiments to establish if and when children can use this strategy to effectively reduce the odds of experiencing social exclusion in the future.

Overall, social exclusion is harmful and can lead to devastating effects, the consequences of which apply to both adults and young children. It is thus essential to understand when children begin to experience instances of social exclusion and to establish how they can respond in order to prevent harm to themselves. This work may also have implications for the construction and implementation of interventions designed to help children reduce instances of social exclusion that they may carry with them into adulthood.



Abrams, D., Weick, M., Thomas, D., Colbe, H., & Franklin, K. M. (2011). On-line ostracism affects children differently from adolescents and adults. The British Journal of Developmental Psychology, 29(Pt 1), 110–123. http://doi.org/10.1348/026151010X494089

Baumeister, R.F. & Leary, M.R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117(3), 497–529.

Baumeister, R.F., Twenge, J.M., & Nuss, C.K. (2002). Effects of social exclusion on cognitive processes: Anticipated aloneness reduces intelligent thought. Journal of Personality and Social Psychology, 83(4), 817.

Blackhart, G.C., Eckel, L.A., & Tice, D.M. (2007). Salivary cortisol in response to acute social rejection and acceptance by peers. Biological Psychology, 75(3), 267–276. doi: 10.1016/j.biopsycho.2007.03.005

DeWall, C.N., Deckman, T., Pond, R.S., & Bonser, I. (2011). Belongingness as a core personality trait: How social exclusion influences social functioning and personality expression. Journal of Personality, 79(6), 1281–1314. doi: 10.1111/j.1467-6494.2010.00695.x

DeWall, C.N. & Twenge, J.M. (2013). Rejection and aggression: Explaining the paradox. In C.N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (3–8). Oxford: Oxford University Press.

Fanger, S.M., Frankel, L.A., & Hazen, N. (2012). Peer exclusion in preschool children's play: Naturalistic observations in a playground setting. Merrill-Palmer Quarterly, 58(2), 224–254.

Leary, M.R. & Cottrell, C.A. (2013). Evolutionary perspectives on interpersonal acceptance and rejection. In C.N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (9–19). Oxford: Oxford University Press.

Maner, J.K., DeWall, C.N., Baumeister, R.F., & Schaller, M. (2007). Does social exclusion motivate interpersonal reconnection? Resolving the "porcupine problem." Journal of Personality and Social Psychology, 92(1), 42–55. doi: 10.1037/0022-3514.92.1.42

Marinovic, V. & Träuble, B. (2018). Vicarious social exclusion and memory in young children. Developmental Psychology, 54(11), 2067–2076. doi: 10.1037/dev0000593

Marinovic, V., Wahl, S., & Träuble, B. (2017). "Next to you" – Young children sit closer to a person following vicarious ostracism. Journal of Experimental Child Psychology, 156, 179–185. doi: 10.1016/j.jecp.2016.11.011

Molden, D.C. & Maner, J.K. (2013). How and when exclusion motivates social reconnection. In C.N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (121–131). Oxford: Oxford University Press.

Ostrov, J. (2010). Prospective associations between peer victimization and aggression. Child Development, 81(6), 1670–1677.

Over, H., & Carpenter, M. (2009). Priming third-party ostracism increases affiliative imitation in children. Developmental Science, 12(3), 1–8. doi: 10.1111/j.1467-7687.2008.00820.x

Richman, L.S. (2013). The multi-motive model of responses to rejection-related experiences. In C.N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (9–19). Oxford: Oxford University Press.

Song, R., Over, H., & Carpenter, M. (2015). Children draw more affiliative pictures following priming with third-party ostracism. Developmental Psychology, 51(6), 831–840. doi: 10.1037/a0039176

Stenseng, F., Belsky, J., Skalicka, V., & Wichstrom, L. (2014). Social exclusion predicts impaired self-regulation: A 2-year longitudinal panel study including the transition from preschool to school. Journal of Personality, 83(2), 213–220. doi: 10.1111/jopy.12096

Twenge, J.M., Baumeister, R.F., Tice, D.M., & Stucke, T.S. (2001). If you can't join them, beat them: Effects of social exclusion on aggressive behavior. Journal of Personality and Social Psychology, 81(6), 1058–1069. doi: 10.1037/0022-3514.81.6.1058

Watson-Jones, R.E., Whitehouse, H., & Legare, C.H. (2015). In-group ostracism increases high-fidelity imitation in early childhood. Psychological Science, 27(1), 34–42. doi: 10.1177/0956797615607205

Wesselman, E.D., & Williams, K.D. (2013). Ostracism and stages of coping. In C.N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (20–30). Oxford: Oxford University Press.

White, L.O., Klein, A.M., von Klitzing, K., Graneist, A., Otto, Y., Hill, J., Over, H., Fonagy, P., & Crowley, M.J. (2016). Putting ostracism into perspective: Young children tell more mentalistic stories after exclusion, but not when anxious. Frontiers in Psychology, 7, 1–15. doi: 10.3389/fpsyg.2016.01926

Williams, K.D. (2007). Ostracism. Annual Review of Psychology, 58, 425–452. doi: 10.1146/annurev.psych.58.110405.085641

Wölfer, R., & Scheithauer, H. (2013). Ostracism in childhood and adolescence: Emotional, cognitive, and behavioral effects of social exclusion. Social Influence, 8(4), 217–236. doi: 10.1080/15534510.2012.706233

Understanding others’ minds: Social context matters

Paula Fischer — PhD Candidate, Cognitive Development Centre, Department of Cognitive Science, Central European University

Imagine that you are walking with your friend through the forest, and suddenly you find yourselves next to a bush filled with red berries. Let's suppose that you know a lot about different plants, and you immediately recognise that these berries are not only red berries, but that they are also dangerous. In fact, they are poisonous. However, you can see the sparkle in your friend's eyes, and that he is already reaching towards the berries to replenish his energy levels after the long walk. What do you do? Well, if you would like to save your friend's life, or at least spare him an unpleasant experience, you would warn him. You would do this because you understand that he believes that these berries are good to eat, and you know that he wouldn't go for them if he knew that they were dangerous.

From this example and other everyday experiences, we can see that humans possess highly sophisticated abilities to 'read' others' minds. This ability, called Theory of Mind (ToM), enables us to attribute mental states to others, and to make predictions and draw inferences from their behavior and actions to their mental states. It is therefore essential for social interactions, because it underpins our ability to coordinate and communicate effectively with others. Researchers have been investigating this ability's characteristics for decades, and much of this research has focused on when and how it develops. In this post, I will propose that progress on open questions about the development of ToM can be made by appealing to when we use ToM.

Since Dennett (1978) pointed out that attributing true beliefs to others cannot be empirically distinguished from agents simply making predictions about the actions of others on the basis of their own knowledge and beliefs about the world, the conventional test for ToM became probing false belief (FB) understanding. One typical way to test for the understanding of false beliefs in children is the location-change task (Wimmer & Perner, 1983; Baron-Cohen, Leslie & Frith, 1985). In such a standard false belief task, participants are exposed to a story in which the main character has a false belief regarding the location of an object (a second character having changed its location while she was absent). When asked to explicitly indicate where the first character will look for the object, children typically fail to take into account her false belief before the age of 4, answering (or pointing) towards the new (actual) location of the object (Wimmer & Perner, 1983; Perner, Leekam & Wimmer, 1987).

There has been an ongoing debate as to whether the ability to understand others' (false) beliefs is early developing, or whether it develops only from around the age of 4 with the emergence of other abilities, e.g. executive function and language (see for example Slade & Ruffman, 2005). Two main lines of research have collected evidence either for or against these statements. One line of research, which uses implicit measures of false-belief understanding and is mostly influenced by Leslie's theory of pretence (Leslie, 1987), suggests that infants are sensitive to others' beliefs from very early on. For example, Onishi and Baillargeon (2005) found evidence of false-belief understanding in 15-month-olds using a violation-of-expectation paradigm (see Scott & Baillargeon, 2017 for a review of this research). The other line of research instead suggests that full-blown ToM develops only after the age of 4. This line of research attempts to explain positive findings with younger infants by appealing either to low-level cues (e.g. Heyes, 2014), or to a minimal ToM account (Apperly & Butterfill, 2009), which proposes that an early developing system is rich enough to represent belief-like states only (but not beliefs per se).

How can this puzzle regarding early mind reading be solved? One may ask: if there is a conceptual change around the age of 4, then what exactly happens around that time that allows or triggers such change? I will suggest that focusing on why ToM is crucial in several aspects of our everyday social lives (from language development and communication, to cooperation and altruistic behaviour) may provide a means of answering this question.

Can the basic ability to track others' mental states contribute to language acquisition? Some experimental evidence supports the hypothesis that, from a relatively early age, infants are sensitive to semantic incongruity. That is, they understand when an object is labelled incongruently with its actual referent (e.g. Friedrich & Friederici, 2005; 2008). A study by Forgács and colleagues (2018) investigated whether infants would track such semantic incongruities from others' perspectives. They measured 14-month-olds' event-related potential (ERP) signals, and found that infants show an N400 response (a well-established neuropsychological indicator of semantic incongruity) not only when objects are incongruently labelled from their own viewpoint, but also when they are incongruently labelled from their communicative partner's point of view (see also Kutas & Federmeier, 2011; Kutas & Hillyard, 1980). These findings suggest that infants track the mental states of social partners, keep such attributed representations updated, and use them to assess others' semantic processing. This study can further be taken as indicating that representational capacities (such as those required for belief ascription) are present in 14-month-olds in a communicative context.

Such belief attribution in similarly young infants can also be observed in ostensive-communicative inferential contexts. In a study by Tauzin and Gergely (2018), infants' looking time was measured during the observation of unfamiliar communicative agents; children needed to interpret the turn-taking exchange of variable tone sequences, which was indicative of the communicative transfer of goal-relevant information from a knowledgeable to a naïve agent. In their experiments, infants observed the following interaction: one of the agents placed a ball in a certain location, and later saw the ball moving to a different location. The other agent, who had not observed the location-switch, later tried to retrieve the ball. Based on their looking times, infants only expected the ball-retrieving agent to go to where the ball really was if the first agent (who observed the location-switch) communicated the transfer. Based on these findings, the authors suggested that 13-month-old infants recognised these turn-taking exchanges as communicative information transfer, suggesting that they can attribute communication-based beliefs to other agents if they can infer the relevant information that is being transmitted.

Besides playing a role in children coming to understand important aspects of communication, ToM may play a crucial part in cooperation and altruistic behaviour. The question as to how ToM relates to, for instance, instrumental helping has received relatively little attention. One of the first studies probing the relationship between false belief understanding and helping comes from Buttelmann, Carpenter and Tomasello (2009). During their experiments, infants observed a protagonist struggling to open a box in order to obtain a toy. In the critical part of the experiment the toy was moved by another agent from its initial box to a different box. The protagonist either observed this move, or had left the room. When the protagonist had left the room and then tried to open the box which initially contained the toy, infants spontaneously helped him by indicating that he should try to open the alternative box instead. However, when the protagonist observed the location-switch, infants helped him open the initial box. This suggests that by 18 months of age, helping behaviour is guided by the beliefs of the helpee. This study, amongst others (see also Matsui & Miura, 2008), supports the hypothesis that representing others' mental states is a key feature of helping and cooperating, and that infants are capable of taking into account others' beliefs when helping spontaneously from very early on.

The ability to represent others' mental states plays a crucial part in our social lives. Understanding what others think is important not only for high-level cooperative or competitive problem solving, but even in small day-to-day social interactions when we need to act fast (e.g., preventing our friends from coming to harm during a walk). The studies discussed here suggest that from a relatively early age, humans are able to adjust their helping behaviour on the basis of others' beliefs, and that the beliefs of others may shape children's understanding of communicative episodes. Future research may do well to keep in mind that when it comes to ToM, social context seems to matter.



Apperly, I. A., & Butterfill, S. A. (2009). Do humans have two systems to track beliefs and belief-like states? Psychological Review, 116(4), 953.

Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a "theory of mind"? Cognition, 21(1), 37–46.

Buttelmann, D., Carpenter, M., & Tomasello, M. (2009). Eighteen-month-old infants show false belief understanding in an active helping paradigm. Cognition, 112(2), 337–342.

Dennett, D. C. (1978). Beliefs about beliefs [P&W, SR&B]. Behavioral and Brain Sciences, 1(4), 568–570.

Forgács, B., Parise, E., Csibra, G., Gergely, G., Jacquey, L., & Gervain, J. (2018). Fourteen-month-old infants track the language comprehension of communicative partners. Developmental Science, e12751.

Friedrich, M., & Friederici, A. D. (2005). Lexical priming and semantic integration reflected in the event-related potential of 14-month-olds. Neuroreport, 16(6), 653–656.

Friedrich, M., & Friederici, A. D. (2008). Neurophysiological correlates of online word learning in 14-month-old infants. Neuroreport, 19(18), 1757–1761.

Heyes, C. (2014). Submentalizing: I am not really reading your mind. Perspectives on Psychological Science, 9(2), 131–143.

Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621–647.

Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207(4427), 203–205.

Leslie, A. M. (1987). Pretense and representation: The origins of "theory of mind." Psychological Review, 94(4), 412.

Matsui, T., & Miura, Y. (2008). Pro-social motive promotes early understanding of false belief.

Onishi, K. H., & Baillargeon, R. (2005). Do 15-month-old infants understand false beliefs? Science, 308(5719), 255–258.

Perner, J., Leekam, S. R., & Wimmer, H. (1987). Three-year-olds' difficulty with false belief: The case for a conceptual deficit. British Journal of Developmental Psychology, 5(2), 125–137.

Scott, R. M., & Baillargeon, R. (2017). Early false-belief understanding. Trends in Cognitive Sciences, 21(4), 237–249.

Slade, L., & Ruffman, T. (2005). How language does (and does not) relate to theory of mind: A longitudinal study of syntax, semantics, working memory and false belief. British Journal of Developmental Psychology, 23(1), 117–141.

Tauzin, T., & Gergely, G. (2018). Communicative mind-reading in preverbal infants. Scientific Reports, 8(1), 9534.

Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception. Cognition, 13(1), 103–128.




The Modularity of the Motor System

Myrto Mylopoulos — Department of Philosophy and Institute of Cognitive Science, Carleton University

The extent to which the mind is modular is a foundational concern in cognitive science. Much of this debate has centered on the question of the degree to which input systems, i.e., sensory systems such as vision, are modular (see, e.g., Fodor 1983; Pylyshyn 1999; MacPherson 2012; Firestone & Scholl 2016; Burnston 2017; Mandelbaum 2017). By contrast, researchers have paid far less attention to the question of the extent to which our main output system, i.e., the motor system, qualifies as such.

This is not to say that the latter question has gone without acknowledgement. Indeed, in his classic essay Modularity of Mind, Fodor (1983)—a pioneer in thinking about this topic—writes: "It would please me if the kinds of arguments that I shall give for the modularity of input systems proved to have application to motor systems as well. But I don't propose to investigate that possibility here" (Fodor 1983, p. 42).

I’d like to take some steps towards doing so in this post.

To start, we need to say a bit more about what modularity amounts to. A central feature of modular systems—and the one on which I will focus here—is their informational encapsulation. Informational encapsulation concerns the range of information that is accessible to a module in computing the function that maps the inputs it receives to the outputs it yields. A system is informationally encapsulated to the degree that it lacks access to information stored outside the system in the course of processing its inputs (cf. Robbins 2009, Fodor 1983).

Importantly, informational encapsulation is a relative notion. A system may be informationally encapsulated with respect to some information, but not with respect to other information. When a system is informationally encapsulated with respect to the states of what Fodor called "the central system"—those states familiar to us as propositional attitude states like beliefs and intentions—this is referred to as cognitive impenetrability, or what I will refer to here as cognitive impermeability. In characterizing the notion of cognitive permeability more precisely, one must be careful not to presuppose that it is perceptual systems only that are at issue. For a neutral characterization, I prefer the following: a system is cognitively permeable if and only if the function it computes is sensitive to the content of a subject S's mental states, including S's intentions, beliefs, and desires. In the famous Müller-Lyer illusion, the visual system lacks access to the subject's belief that the two lines are identical in length in computing the relative size of the stimuli, so it is cognitively impermeable relative to that belief.

On this characterization of cognitive permeability, the motor system is clearly cognitively permeable in virtue of its computations, and corresponding outputs, being systematically sensitive to the content of an agent's intentions. The evidence for this is every intentional action you've ever performed. Perhaps the uncontroversial nature of this fact has precluded further investigation of cognitive permeability in the motor system. But there are at least two interesting questions to explore here. First, since cognitive permeability, just like informational encapsulation, comes in degrees, we should ask to what extent the motor system is cognitively permeable. Are there interesting limitations that can be drawn out? (Spoiler: yes.) Second, insofar as there are such limitations, we should ask the extent to which they are fixed. Can they be modulated in interesting ways by the agent? (Spoiler: yes.)

Experimental results suggest that there are indeed interesting limitations to the cognitive permeability of the motor system. This is perhaps most clearly shown by appeal to experimental work employing visuomotor rotation tasks (see also Shepherd 2017 for an important discussion of such work with which I am broadly sympathetic). In such tasks, the participant is instructed to reach for a target on a computer screen. They do not see their hand, but they receive visual feedback from a cursor that represents the trajectory of their reaching movement. On some trials, the experimenters introduce a bias to the visual feedback from the cursor by rotating it relative to the actual trajectory of their unseen movement during the reaching task. For example, a bias might be introduced such that the visual feedback from the cursor represents the trajectory of their reach as being rotated 45° clockwise from the actual trajectory of their arm movement. This manipulation allows experimenters to determine how the motor system will compensate for the conflict between the visual feedback that is predicted on the basis of the motor commands it is executing, and the visual feedback the agent actually receives from the cursor. The main finding is that the motor system gradually adapts to the bias in a way that results in the recalibration of the movements it outputs, such that they show "drift" in the direction opposite that of the rotation, thus reducing the mismatch between the visual feedback and the predicted feedback.

Figure 1. A: A typical set-up for a visuomotor rotation task. B: Typical error feedback when a counterclockwise directional bias is introduced. (Source: Krakauer 2009)

In the paradigm just described, participants do not form an intention to adopt a compensatory strategy; the adaptation the motor system exhibits is purely the result of implicit learning mechanisms that govern its output. But in a variant of this paradigm (Mazzoni & Krakauer 2006), participants are instructed to adopt an explicit "cheating" strategy—that is, to form intentions—to counter the angular bias introduced by the experimenters. This is achieved by placing a neighbouring target (Tn) at a 45° angle from the proper target (Tp) in the direction opposite the bias (e.g., if the bias is 45° counterclockwise from the Tp, the Tn is placed 45° clockwise from the Tp), such that if participants aim for the Tn, the bias will be compensated for, and the cursor will hit the Tp, thus satisfying the primary task goal.

In this set-up, reaching errors relative to the Tp are almost completely eliminated at first. The agent hits the Tp (according to the feedback from the cursor) as a result of forming the intention to aim for the strategically placed Tn. But as participants continue to perform the task on further trials, something interesting happens: their movements once again gradually start to show drift, but this time towards the Tn and away from the Tp. What this result is thought to reflect is yet another implicit process of adaptation by the motor system, which aims to correct for the difference between the aimed-for location (Tn) and the visual feedback (in the direction of the Tp).
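A common way to think about the implicit adaptation at work in these paradigms is as a simple error-driven update: on every trial, the motor system corrects a fixed fraction of the discrepancy between the aimed-for direction and the rotated cursor feedback. The Python sketch below is only an illustration of that idea under assumed parameters (a 45° rotation and an arbitrary learning rate); it is not a model fitted to the cited experiments, and the function and parameter names are my own.

```python
# Illustrative error-driven model of visuomotor rotation adaptation.
# Movement directions are in degrees; the target Tp sits at 0 degrees.
# On each trial the hand moves toward the aimed-for direction plus an
# implicit compensation term x; the cursor feedback is rotated by
# ROTATION; and x is then nudged by a fraction of the error between
# the cursor and the aimed-for direction.

ROTATION = 45.0       # imposed cursor rotation in degrees (assumed)
LEARNING_RATE = 0.2   # fraction of error corrected per trial (assumed)

def simulate(trials: int, aim: float) -> list[float]:
    """Return the cursor direction (in degrees) on each trial."""
    x = 0.0  # implicit compensation; drifts opposite the rotation
    cursor_dirs = []
    for _ in range(trials):
        hand = aim + x                       # actual movement direction
        cursor = hand + ROTATION             # rotated visual feedback
        cursor_dirs.append(cursor)
        x -= LEARNING_RATE * (cursor - aim)  # implicit recalibration
    return cursor_dirs

# Baseline paradigm: aiming straight at Tp (0 deg). The cursor starts
# 45 deg off target and converges onto Tp as the hand drifts opposite
# the rotation.
baseline = simulate(40, aim=0.0)

# Strategy paradigm: aiming at Tn (-45 deg) lands the cursor on Tp at
# first, but the same implicit process then drags the cursor away from
# Tp and toward Tn, mirroring the drift described above.
strategy = simulate(40, aim=-45.0)
```

With one and the same update rule, the two runs qualitatively reproduce both phases: convergence onto the target in the baseline case, and drift toward the Tn under the aiming strategy.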

Two further details are important for our purposes: First, when participants are instructed to stop using the strategy of aiming for the Tn (in order to hit the Tp) and return their aim to the Tp, "[s]ubstantial and long-lasting" (Mazzoni & Krakauer 2006, p. 3643) aftereffects are observed, meaning the motor system persists in aiming to reduce the difference between the visual feedback and the earlier aimed-for location. Second, in a separate study by Taylor & Ivry (2011) using a very similar paradigm wherein participants had significantly more trials per block (320), participants did eventually correct for the secondary adaptation by the motor system and reverse the direction of their movement, but only gradually, and by means of adopting explicit aiming strategies to counteract the drift.

On the basis of these results, we can draw at least three interesting conclusions about cognitive permeability and the motor system: First, although it is clearly sensitive to the content of the proximal intentions that it takes as input (in this case the intention to aim for the Tn), it is not always sensitive, or is only weakly sensitive, to the distal intentions that those very proximal intentions serve—in this case the intention to hit the Tp. If this is correct, it may be that the motor system lacks sensitivity to the structure of practical reasoning that often guides an agent's present action in the background. In this case, the motor system seems not to register that the agent intends to hit the Tp by way of aiming and reaching for the Tn.

Second, given that aftereffects persist for some time even once the explicit aiming strategy (and therefore the intention to aim for the Tn) has been abandoned, we may conclude that the motor system is only sensitive to the content of proximal intentions to a limited degree, in that it takes time for it to properly update its performance relative to the agent's current proximal intention. The implicit adaptation, indexed to the earlier intention, cannot be overridden immediately.

Third, this degree of sensitivity is not fixed, but rather can vary over time as the result of an agent's interventions, as determined in Taylor & Ivry's study, where the drift was eventually reversed after a sufficiently large number of trials wherein the agent continuously adjusted their aiming strategy.

To close, I'd like to outline what I take to be a couple of important upshots of the preceding discussion for neighbouring philosophical debates:

  1. Recent discussions of skilled action have sought to determine "how far down" action control is intelligent (see, e.g., Fridland 2014, 2017; Levy 2017; Shepherd 2017). And, on at least some views, this is a function of the degree to which the motor system is sensitive to the content of an agent's intentions. Here we see that this sensitivity is sometimes limited, but can also improve over time. In my view, this reveals another important dimension of the motor system's intelligence that goes beyond mere sensitivity, and that pertains to its ability to adapt to an agent's present goals through learning processes that exhibit a reasonable degree of both stability and flexibility.
  2. Recently, action theorists have turned their attention to solving the so-called "interface problem", that is, the problem of how intentions and motor representations successfully coordinate given their (arguably) different representational formats (see, e.g., Butterfill & Sinigaglia 2014; Burnston 2017; Fridland 2017; Mylopoulos & Pacherie 2017, 2018; Shepherd 2017; Ferretti & Caiani 2018). The preceding discussion may suggest a more limited degree of interfacing than one might have thought—obtaining only between an agent's most proximal intentions and the motor system. It may also suggest that successful interfacing depends both on the learning mechanism(s) of the motor system (for maximal smoothness and stability) and on a continuous interplay between its outputs and the agent's own practical reasoning about how best to achieve their goals (for maximal flexibility).


Burnston, D. (2017). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242–258.

Butterfill, S. A. & Sinigaglia, C. (2014). Intention and motor representation in purposive action. Philosophy and Phenomenological Research, 88, 119–145.

Ferretti, G. & Caiani, S.Z. (2018). Solving the interface problem without translation: the same format thesis. Pacific Philosophical Quarterly, doi: 10.1111/papq.12243

Fodor, J. (1983). The modularity of mind: An essay on faculty psychology. Cambridge: The MIT Press.

Fridland, E. (2014). They've lost control: Reflections on skill. Synthese, 191(12), 2729–2750.

Fridland, E. (2017). Skill and motor control: intelligence all the way down. Philosophical Studies, 174(6), 1539–1560.

Krakauer, J. W. (2009). Motor learning and consolidation: the case of visuomotor rotation. Advances in Experimental Medicine and Biology, 629, 405–421.

Levy, N. (2017). Embodied savoir-faire: knowledge-how requires motor representations. Synthese, 194(2), 511–530.

MacPherson, F. (2012). Cognitive penetration of colour experience: Rethinking the debate in light of an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24–62.

Mazzoni, P. & Krakauer, J. W. (2006). An implicit plan overrides an explicit strategy during visuomotor adaptation. The Journal of Neuroscience, 26(14), 3642–3645.

Mylopoulos, M. & Pacherie, E. (2017). Intentions and motor representations: The interface challenge. Review of Philosophy and Psychology, 8(2), 317–336.

Mylopoulos, M. & Pacherie, E. (2018). Intentions: The dynamic hierarchical model revisited. WIREs Cognitive Science, doi: 10.1002/wcs.1481

Shepherd, J. (2017). Skilled action and the double life of intention. Philosophy and Phenomenological Research, doi: 10.1111/phpr.12433

Taylor, J.A. & Ivry, R.B. (2011). Flexible cognitive strategies during motor learning. PLoS Computational Biology, 7(3), e1001096.

The Cinderella of the Senses: Smell as a Window into Mind and Brain?

Ann-Sophie Barwich — Visiting Assistant Professor in the Cognitive Science Program at Indiana University Bloomington

Smell is the Cinderella of our senses. Traditionally dismissed as communicating merely subjective feelings and brutish sensations, the sense of smell never attracted critical attention in philosophy or science. Yet the characteristics of odor perception and its neural basis are key to understanding the mind through the brain.

This claim might sound surprising. Human olfaction acquired a rather poor reputation throughout most of Western intellectual history. "Of all the senses it is the one which appears to contribute least to the cognitions of the human mind," commented the French philosopher of the Enlightenment, Étienne Bonnot de Condillac, in 1754. Immanuel Kant (1798) even called smell "the most ungrateful" and "dispensable" of the senses. Scientists were not more positive in their judgment either. Olfaction, Charles Darwin concluded (1874), was "of extremely slight service" to mankind. Further, statements about people who paid attention to smell frequently mixed with prejudice about sex and race: women, children, and non-white races — essentially all groups long excluded from the rationality of white men — were found to show increased olfactory sensitivity (Classen et al. 1994). Olfaction, therefore, did not appear to be a topic of reputable academic investment — until recently.

Scientific research on smell was catapulted into mainstream neuroscience almost overnight with the discovery of the olfactory receptor genes by Linda Buck and Richard Axel in 1991. It turned out that the olfactory receptors constitute the largest protein gene family in most mammalian genomes (except for dolphins), exhibiting a plethora of properties significant for structure-function analysis of protein behavior (Firestein 2001; Barwich 2015). Finally, the receptor gene discovery provided targeted access to probe odor signaling in the brain (Mombaerts et al. 1996; Shepherd 2012). Excitement soon kicked in, and hopes rose high to crack the coding principles of the olfactory system in no time. For the olfactory pathway has a notable characteristic, one that Ramon y Cajal highlighted as early as 1901/02: olfactory signals require only two synapses to go straight into the core cortex (forming almost immediate connections with the amygdala and hypothalamus)! To put this into perspective, in vision two synapses won’t even get you out of the retina. You can follow the rough trajectory of an olfactory signal in Figure 1 below.

Three decades later, the big revelation is still on hold. A lot of prejudice and negative opinion about the human sense of smell has been debunked (Shepherd 2004; Barwich 2016; McGann 2017). But the olfactory brain remains a mystery to date. It appears to differ markedly in its neural principles of signal integration from vision, audition, and somatosensation (Barwich 2018; Chen et al. 2014). The background to this insight is a remarkable piece of contemporary history of science. (Almost all actors key to the modern molecular development of research on olfaction are still alive and actively conducting research.)

Olfaction — unlike other sensory systems — does not maintain a topographic organization of stimulus representation in its primary cortex (Stettler and Axel 2009; Sosulski et al. 2011). That’s neuralese for: We actually do not know how the brain organizes olfactory information so that it can tell what kind of perceptual object or odor image an incoming signal encodes. You won’t find a map of stimulus representation in the brain, such that chemical groups like ketones would sit next to aldehydes, or perceptual categories like rose right next to lavender. Instead, axons from the mitral cells in the olfactory bulb (the first neural station of olfactory processing at the frontal lobe of the brain) project to all kinds of areas in the piriform cortex (the largest domain of the olfactory cortex, previously assumed to be involved in odor object formation). In place of a map, you find a mosaic (Figure 1).

What does this tell us about smell perception and the brain in general? Theories of perception, in effect, have always been theories of vision. Concepts originally derived from vision were made fit to apply to what’s usually sidelined as “the other senses.” This tendency permeates neuroscience as well as philosophy (Matthen 2005). However, it is a deeply problematic strategy for two reasons. First, other sensory modalities (smell, taste, and touch, but also the hidden senses of proprioception and interoception) do not resonate entirely with the structure of the visual system (Barwich 2014; Keller 2017; Smith 2017b). Second, we may have narrowed our investigative lens and overlooked important aspects of the visual system itself that could be “rediscovered” if we took a closer look at smell and other modalities. Insight into the complexity of cross-modal interactions, especially in food studies, suggests as much already (Smith 2012; Spence and Piqueras-Fiszman 2014). So the real question we should ask is:

How would theories of perception differ if we extended our perspective on the senses, in particular to include features of olfaction?

Two things stand out already. The first concerns theories of the brain; the second, the permeable border between processes of perception and cognition.

First, when it comes to the prin­ciples of neur­al organ­iz­a­tion, not everything in vis­ion that appears crys­tal clear really is. The corner­stone of visu­al topo­graphy has been called into ques­tion more recently by the prom­in­ent neur­os­cient­ist Margaret Livingstone (who, not coin­cid­ent­ally, trained with David Hubel: one half of the fam­ous duo of Hubel and Wiesel (2004) whose find­ings led to the paradigm of neur­al topo­graphy in vis­ion research in the first place). Livingstone et al. (2017) found that the spa­tially dis­crete activ­a­tion pat­terns in the fusi­form face area of macaques were con­tin­gent upon exper­i­ence — both in their devel­op­ment and, inter­est­ingly, partly also their main­ten­ance. In oth­er words, learn­ing is more fun­da­ment­al to the arrange­ment of neur­al sig­nals in visu­al inform­a­tion pro­cessing and integ­ra­tion than pre­vi­ously thought. The spa­tially dis­crete pat­terns of the visu­al sys­tem may con­sti­tute more of a devel­op­ment­al byproduct than simply a genet­ic­ally pre­de­ter­mined Bauplan. From this per­spect­ive, fig­ur­ing out the con­nectiv­ity that under­pins non-topographic and asso­ci­at­ive neur­al sig­nal­ing, such as in olfac­tion, offers a com­ple­ment­ary mod­el to determ­ine the gen­er­al prin­ciples of brain organization.

Second, emphasis on experience and associative processing in perceptual object formation (e.g., top-down effects in learning) also mirrors current trends in cognitive neuroscience. Smell has long been excluded from mainstream theories of perception precisely because of the characteristic properties that make it subject to strong contextual and cognitive biases. Consider a wine taster, who experiences wine quality differently from a layperson by focusing on distinct criteria of observational likeness. She can point to subtle flavor notes that the layperson may have missed but, after paying attention, is also able to perceive (e.g., a light oak note). Such influence of attention and learning on perception, ranging from normal perception to the acquisition of perceptual expertise, is constitutive of odor and its phenomenology (Wilson and Stevenson 2006; Barwich 2017; Smith 2017a). Notably, the underlying biases (influenced by semantic knowledge and familiarity) are increasingly studied as constitutive determinants of brain processes in recent cognitive neuroscience, especially in forward models or models of predictive coding, where the brain is said to cope with the plethora of sensory data by anticipating stimulus regularities on the basis of prior experience (e.g., Friston 2010; Graziano 2015). While advocates of these theories have centered their work on vision, olfaction now serves as an excellent model to further the premise of the brain as operating on the basis of forecasting mechanisms (Barwich 2018), blurring the boundary between perceptual and cognitive processes with the implicit hypothesis that perception is ultimately shaped by experience.

These are ongo­ing devel­op­ments. Unknown as yet is how the brain makes sense of scents. What is becom­ing increas­ingly clear is that the­or­iz­ing about the senses neces­sit­ates a mod­ern­ized per­spect­ive that admits oth­er mod­al­it­ies and their dimen­sions. We can­not explain the mul­ti­tude of per­cep­tu­al phe­nom­ena with vis­ion alone. To think oth­er­wise is not only hubris but sheer ignor­ance. Smell is less evid­ent in its con­cep­tu­al bor­ders and clas­si­fic­a­tion, its mech­an­isms of per­cep­tu­al con­stancy and vari­ation. It thus requires new philo­soph­ic­al think­ing, one that reex­am­ines tra­di­tion­al assump­tions about stim­u­lus rep­res­ent­a­tion and the con­cep­tu­al sep­ar­a­tion of per­cep­tion and judg­ment. However, a prop­er under­stand­ing of smell — espe­cially in its con­tex­tu­al sens­it­iv­ity to cog­nit­ive influ­ences — can­not suc­ceed without also tak­ing an in-depth look at its neur­al under­pin­nings. Differences in cod­ing, con­cern­ing both recept­or and neur­al levels of the sens­ory sys­tems, mat­ter to how incom­ing inform­a­tion is real­ized as per­cep­tu­al impres­sions in the mind, along with the ques­tion of what these per­cep­tions are and com­mu­nic­ate in the first place.

Olfaction is just one prominent example of how misleading historic intellectual predilections about human cognition can be. Neuroscience has fundamentally expanded its methods and outlook, in particular over the past two decades. It is about time that we adjusted our somewhat older philosophical conjectures of mind and brain accordingly.


Barwich, AS. 2014. “A Sense So Rare: Measuring Olfactory Experiences and Making a Case for a Process Perspective on Sensory Perception.” Biological Theory 9(3): 258–268.

Barwich, AS. 2015. “What is so special about smell? Olfaction as a model system in neurobiology.” Postgraduate Medical Journal 92: 27–33.

Barwich, AS. 2016. “Making Sense of Smell.” The Philosophers’ Magazine 73: 41–47.

Barwich, AS. 2017. “Up the Nose of the Beholder? Aesthetic Perception in Olfaction as a Decision-Making Process.” New Ideas in Psychology 47: 157–165.

Barwich, AS. 2018. “Measuring the World: Towards a Process Model of Perception.” In: Everything Flows: Towards a Processual Philosophy of Biology (D Nicholson and J Dupré, eds). Oxford University Press, pp. 337–356.

Buck, L, and R Axel. 1991. “A novel multigene family may encode odorant receptors: a molecular basis for odor recognition.” Cell 65(1): 175–187.

Cajal, R y. 1988 [1901/02]. “Studies on the Human Cerebral Cortex IV: Structure of the Olfactory Cerebral Cortex of Man and Mammals.” In: Cajal on the Cerebral Cortex. An Annotated Translation of the Complete Writings (J DeFelipe and EG Jones, eds). Oxford University Press.

Chen, CFF, Zou, DJ, Altomare, CG, Xu, L, Greer, CA, and S Firestein. 2014. “Nonsensory target-dependent organization of piriform cortex.” Proceedings of the National Academy of Sciences 111(47): 16931–16936.

Classen, C, Howes, D, and A Synnott. 1994. Aroma: The cultural history of smell. Routledge.

Condillac, EB d. 1930 [1754]. Condillac’s treatise on the sensations (MGS Carr, trans). The Favil Press.

Darwin, C. 1874. The descent of man and selection in relation to sex (Vol. 1). Murray.

Firestein, S. 2001. “How the olfactory system makes sense of scents.” Nature 413(6852): 211.

Friston, K. 2010. “The free-energy principle: a unified brain theory?” Nature Reviews Neuroscience 11(2): 127.

Graziano, MS, and TW Webb. 2015. “The attention schema theory: a mechanistic account of subjective awareness.” Frontiers in Psychology 6: 500.

Hubel, DH, and TN Wiesel. 2004. Brain and visual perception: the story of a 25-year collaboration. Oxford University Press.

Kant, I. 2006 [1798]. Anthropology from a pragmatic point of view (RB Louden, ed). Cambridge University Press.

Keller, A. 2017. Philosophy of Olfactory Perception. Springer.

Livingstone, MS, Vincent, JL, Arcaro, MJ, Srihasam, K, Schade, PF, and T Savage. 2017. “Development of the macaque face-patch system.” Nature Communications 8: 14897.

Matthen, M. 2005. Seeing, doing, and knowing: A philosophical theory of sense perception. Clarendon Press.

McGann, JP. 2017. “Poor human olfaction is a 19th-century myth.” Science 356(6338): eaam7263.

Mombaerts, P, Wang, F, Dulac, C, Chao, SK, Nemes, A, Mendelsohn, M, Edmondson, J, and R Axel. 1996. “Visualizing an olfactory sensory map.” Cell 87(4): 675–686.

Shepherd, GM. 2004. “The human sense of smell: are we better than we think?” PLoS Biology 2(5): e146.

Shepherd, GM. 2012. Neurogastronomy: how the brain creates flavor and why it matters. Columbia University Press.

Smith, BC. 2012. “Perspective: complexities of flavour.” Nature 486(7403): S6.

Smith, BC. 2017a. “Beyond Liking: The True Taste of a Wine?” The World of Wine 58: 138–147.

Smith, BC. 2017b. “Human Olfaction, Crossmodal Perception, and Consciousness.” Chemical Senses 42(9): 793–795.

Sosulski, DL, Bloom, ML, Cutforth, T, Axel, R, and SR Datta. 2011. “Distinct representations of olfactory information in different cortical centres.” Nature 472(7342): 213.

Spence, C, and B Piqueras-Fiszman. 2014. The perfect meal: the multisensory science of food and dining. John Wiley & Sons.

Stettler, DD, and R Axel. 2009. “Representations of odor in the piriform cortex.” Neuron 63(6): 854–864.

Wilson, DA, and RJ Stevenson. 2006. Learning to smell: olfactory perception from neurobiology to behavior. JHU Press.

Representing the Self in Predictive Processing

Elmarie Venter — PhD can­did­ate, Department of Philosophy, Ruhr-Universität Bochum

Who do you think you are? Or, less confrontationally, what ingredients (e.g. memories, beliefs, desires) go into the model of your self? In this post, I explore different conceptions of how the self is represented in the predictive processing (PP) framework. At the core of PP is the notion that the brain is in the business of making predictions about the world, and that the brain is primarily an organ that functions to minimize prediction error (i.e. the difference between predictions about the state of the world and the observed state of the world) (Clark, 2017, p.727). Predictive processing necessitates modeling the causes of our sensory perturbations, and since agents themselves are also such causes, a self-model is required under PP. The internal models of the self will include “…representations of the agent’s own body and its trajectories and interactions with other causes in the world” (Hohwy & Michael, 2017, p.367).
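The core idea of prediction error minimization can be sketched in a few lines of toy code. Everything here is invented for illustration (a made-up linear generative model and a hand-picked learning rate), not anything from the PP literature: an agent holds a belief about a hidden cause, predicts its sensory input from that belief, and nudges the belief until predicted and observed input match.

```python
# Toy sketch of prediction error minimization (illustrative only).
# Assumed generative model: sensory input y is produced as 2 * cause.
# The agent updates its belief mu by gradient descent on the squared
# prediction error 0.5 * (y - 2*mu)**2.

def minimize_prediction_error(y, mu0=0.0, lr=0.05, steps=200):
    mu = mu0
    for _ in range(steps):
        prediction = 2.0 * mu      # what the model expects to sense
        error = y - prediction     # prediction error
        mu += lr * 2.0 * error     # gradient step reducing the error
    return mu

belief = minimize_prediction_error(y=3.0)
# belief converges to 1.5, the hidden cause whose predicted input matches y
```

On this caricature, perception just is the settling of the belief that best explains the incoming signal; the disagreement between Conservative and Radical PP discussed below concerns where this loop runs and what (body, world, action) counts as inside it.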

In this post I will discuss accounts of how the self is modelled under two PP camps: Conservative PP and Radical PP. Broadly speaking, Conservative PP holds that the mind is inferentially secluded from the environment — the body also forms part of the external environment. All prediction error minimization occurs behind an ‘evidentiary boundary’, which implies that the brain reconstructs the state of the world (Hohwy, 2016, p.259). In contrast, Radical PP holds that representations of the world are a matter of embodied and embedded cognition (Dolega, 2017, p.6). Perceiving my self, other agents, and the world is not a process of reconstruction but rather a coupled process between perception and action. How does the view of a self-model align with these versions of predictive processing? I will argue that Radical PP’s account of self-modelling is preferable because it avoids two key concerns that arise from Conservative PP’s modeling of the self.

On the side of Conservative PP, Hohwy & Michael (2017) con­ceive of the self-model as one that cap­tures “…rep­res­ent­a­tions of the agent’s own body…” as well as hid­den, endo­gen­ous causes, such as “…char­ac­ter traits, biases, reac­tion pat­terns, affec­tions, stand­ing beliefs, desires, inten­tions, base-level intern­al states, and so on” (Hohwy & Michael, 2017, p.369). On this view, the self is just anoth­er set of causes that is modeled in order to min­im­ize pre­dic­tion error. This view likens the mod­el of the self to mod­els of the envir­on­ment and oth­er people (and their men­tal states), and is in line with the Conservative PP account advoc­ated by Hohwy (2016) under which there is an ‘evid­en­tiary bound­ary’ between mind and world, behind which pre­dic­tion error min­im­iz­a­tion takes place. Any parts of our body “…that are not func­tion­ally sens­ory organs are bey­ond the bound­ary… [and are] just the kinds of states that should be modeled in intern­al, hier­arch­ic­al mod­els of a (pre­dic­tion error min­im­iz­a­tion) sys­tem.” (Hohwy, 2016, p.269).

As I see it, Conservative PP’s self-modeling (as described by Hohwy & Michael (2017)) is prob­lem­at­ic in two ways:

1) Our access to inform­a­tion about our own body is neg­lected by Conservative PP. Agents typ­ic­ally have access to cer­tain inform­a­tion about their body that is immune to error through misid­en­ti­fic­a­tion; this immunity does not extend to inform­a­tion about the world and oth­er agents.

2) Conservative PP ignores the marked dif­fer­ence in how we rep­res­ent ourselves and oth­er agents. Other agents can only enter our inten­tion­al states as part of the con­tent, where­as we ourselves can also enter our inten­tion­al states in anoth­er way.

In deal­ing with these con­cerns I pro­pose that the self is rep­res­en­ted along two dimen­sions: as-subject and as-object (a dis­tinc­tion that can be traced back to Wittgenstein’s Blue Book, and which can be fleshed out by appeal to debates on ref­er­ence and inten­tion­al­ity). The fun­da­ment­al idea here is that there is a cer­tain kind of error — in identi­fy­ing the per­son that some­thing is true of (e.g. a bod­ily pos­i­tion or a men­tal state) — that can occur when identi­fy­ing the self as-object which can­not occur in identi­fy­ing the self as-subject (Longuenesse, 2017, p.20; Evans 1982). Imagine that I per­ceive a cof­fee mug in front of me, and once I have seen it I reach out my hand to grasp the mug in order to drink from it. Now envi­sion a sim­il­ar situ­ation, in which I am act­ing like this while at the same time look­ing at myself in a mir­ror. In the lat­ter situ­ation I have two sources of inform­a­tion for obtain­ing know­ledge about myself grasp­ing the cup of cof­fee. One source of inform­a­tion is proprio­cept­ive and kin­es­thet­ic, and there­fore provides me with inform­a­tion about myself from the inside. The oth­er source of inform­a­tion is visu­al, and provides me with inform­a­tion from the out­side. The lat­ter source could provide me with inform­a­tion about the actions of oth­er agents as well, where­as the former can only be a source of inform­a­tion about my own self.

Since I am represented in the content of my visual experience in the mirror scenario, I can misrepresent myself as the intentional object of that very visual experience. I could be mistaken with respect to whom I am seeing in the mirror grasping the coffee mug; I may mistakenly believe that I am in fact observing someone else grasping the cup. No such mistake is possible in the contrast case, in which I gain information about grasping the mug from a proprioceptive and kinesthetic source. A more radical example of this distinction between self as-object and as-subject comes from individuals with somatoparaphrenia. Such individuals do not identify some parts of their body as their own, e.g. they may believe that their arm belongs to someone else, but they are not mistaken about who is identifying their arm as belonging to someone else (Kang, 2016; Vallar & Ronchi, 2009). Recanati (2007, pp.147–148) spells out this difference by distinguishing between the content and mode of an intentional state: “The content is a relativized proposition, true at a person, and the internal mode determines the person relative to which that relativized content is evaluated: myself”. With this distinction in mind, the problems with Conservative PP become clear: the agent and their body are not represented in the same way as any other distal state in the world. Instead of the agent and their body only forming part of the content of an intentional state (as Hohwy & Michael’s account would imply), they enter the state through the mode of perception as well.

Clark (2017, p.729) provides an ana­logy that illus­trates the first prob­lem with self-modeling under Conservative PP: “The pre­dict­ing brain seems to be in some­what the same pre­dic­a­ment as the imprisoned agents in Plato’s “allegory of the cave”.” That is, under Conservative PP, distal states can only be inferred by the secluded brain, just as the pris­on­ers in the cave can only infer what the shad­ows on the walls are shad­ows of. The con­sequence of this is that we have no dir­ect (and, there­fore, error-immune) access to our own bod­ies. However, as has been illus­trated above, the self enters inten­tion­al states through mode (per­ceiv­ing, ima­gin­ing, remem­ber­ing, etc.) as well as con­tent, and this provides us with cer­tain inform­a­tion that is immune from error. In con­trast, Radical PP does not con­ceive of the body as a distal object. Instead, the agent’s body plays an act­ive role in determ­in­ing the sens­ory inform­a­tion that we have access to; it plays a fun­da­ment­al role in how we sample, and act in, the world. This act­ive role is such that cer­tain inform­a­tion is avail­able to us error free – even if I am mis­taken about anoth­er agent grasp­ing the cup, I can­not be mis­taken that it is me that is see­ing someone grasp the cup. In this sense, Radical PP provides us with a prefer­able story about how whole embod­ied agents are mod­els of the envir­on­ment and min­im­ize pre­dic­tion error through a vari­ety of adapt­ive strategies (Clark, 2017,  p.742).

 The two dimen­sions of self can also shed light on the second con­cern with Conservative PP because this dis­tinc­tion illus­trates how we per­ceive and inter­act with oth­er agents. As dis­cussed above, the self as-object enters inten­tion­al states as part of the con­tent, and the self as-subject enters such states through mode. The world, includ­ing oth­er agents and their men­tal states, only ever form part of the con­tent of our inten­tion­al states. Referring back to the example spelled out above: anoth­er agent can only ever play the same role in per­cep­tion as I do in the mir­ror case, i.e. as con­tent of the inten­tion­al struc­ture. I do not have access to oth­er agents “from the inside,” how­ever. For instance, I do not have the same access to the reas­ons behind oth­ers’ actions (are they grasp­ing the cup to drink from it, to clear it from the table, to see if there is still cof­fee in it?), nor do I have access to wheth­er the oth­er agent will suc­cess­fully grasp the mug (is their grip wide enough, do they have enough strength in their wrist?). There is thus a dimen­sion of the self to which one has priv­ileged access. We only have access to oth­er agents through per­cep­tu­al infer­ence (i.e. by observing their beha­vi­or and infer­ring its causes), where­as we have both per­cep­tu­al and act­ive infer­en­tial access to our own beha­viours. Though Conservative PP pro­ponents main­tain that the secluded brain only has per­cep­tu­al infer­en­tial access to our own body (Hohwy, 2016, p.276), there is some­thing markedly dif­fer­ent in what enables us to mod­el the causes of our own beha­vi­or and men­tal states to that of oth­er agents. I have proprio­cept­ive, kin­es­thet­ic, and intero­cept­ive access to inform­a­tion about myself; I only have extero­cept­ive inform­a­tion about oth­er agents.

For Conservative PP, the body (and by extension, the self) is just another object in the world that receives commands to act in service of prediction error minimization. I have highlighted two concerns about this view: the body is treated as a distal object, and the body (and self) is placed on the same side of the evidentiary boundary as other agents. This means that the dimension of self which is immune to error through misidentification is not accommodated, and the marked difference in our access to information about our own states and those of other agents is ignored. Radical PP, however, avoids both concerns by taking into account the two representational dimensions of the self and employing an embodied approach to cognition. The Radical PP account therefore provides a more refined version of self-modeling. My beliefs, desires, and bodily shape can all be inferred in the model of self-as-object, but self-as-subject captures the part of the self that is not inferred: it contains information about me and my body from the inside, which is an essential part of who we think we are.


Clark, A., 2017. Busting Out: Predictive Brains, Embodied Minds, and The Puzzle of The Evidentiary Veil. Noûs, 51(4): 727–753.

Dolega, K., 2017. Moderate Predictive Processing. In T. Metzinger & W. Wiese (Eds.) Philosophy and Predictive Processing. Frankfurt Am Main: MIND Group.

Evans, G., 1982. The Varieties of Reference. Oxford: Clarendon Press.

Friston, K. J. and Stephan, K. E., 2007. Free-energy and the Brain. Synthese, 159(3): 417–458.

Hohwy, J., 2016. The Self-Evidencing Brain. Noûs, 50(2): 259–285.

Hohwy, J. and Michael, J., 2017. Why Should Any Body Have A Self? In F. de Vignemont & A. Alsmith (Eds.) The Subject’s Matter: Self-Consciousness And The Body. Cambridge, Massachusetts: MIT Press.

Kang, S. P., 2016. Somatoparaphrenia, the Body Swap Illusion, and Immunity to Error through Misidentification. The Journal of Philosophy, 113(9): 463–471.

Longuenesse, B., 2017. I, Me, Mine: Back To Kant, And Back Again. New York: Oxford University Press.

Michael, J. and De Bruin, L., 2015. How Direct is Social Perception? Consciousness and Cognition, 36: 373–375.

Recanati, F., 2007. Perspectival Thought: A Plea For (Moderate) Relativism. Clarendon Press.

Thompson, E. and Varela, F. J., 2001. Radical Embodiment: Neural Dynamics and Consciousness. Trends in Cognitive Sciences, 5(10): 418–425.

Vallar, G. and Ronchi, R., 2009. Somatoparaphrenia: A Body Delusion. A Review of the Neuropsychological Literature. Experimental Brain Research, 192(3): 533–551.

Wittgenstein, L. 1960. Blue Book. Oxford: Blackwell.


The frustrating family of pain

Sabrina Coninx — PhD can­did­ate, Department of Philosophy, Ruhr-Universität Bochum

What is pain? At first glance this ques­tion seems straight­for­ward — almost every­one knows what it feels like to be in pain. We have all felt that shoot­ing sen­sa­tion when hit­ting the funny bone, or the dull throb of a head­ache after a stress­ful day. There is also much com­mon ground with­in the sci­entif­ic com­munity with respect to this ques­tion. Typically, pain is taken to be best defined as a cer­tain kind of men­tal phe­nomen­on exper­i­enced by sub­jects as pain. For instance, this cor­res­ponds to the (still widely accep­ted) defin­i­tion of pain giv­en by the International Association for the Study of Pain (1986). Most cog­nit­ive sci­ent­ists are not merely inter­ested in know­ing that vari­ous phe­nom­en­al exper­i­ences qual­i­fy as pain from a first-person per­spect­ive, how­ever. Instead, pain research­ers primar­ily focus on search­ing for neces­sary and suf­fi­cient con­di­tions for pain, such that a the­ory can be developed which allows for inform­at­ive dis­crim­in­a­tions and ideally far-reaching gen­er­al­iz­a­tions. Pain has proven to be a sur­pris­ingly frus­trat­ing object of research in this regard. In the fol­low­ing, I will out­line one of the main reas­ons for this frus­tra­tion, namely the lack of a suf­fi­cient and neces­sary neur­al cor­rel­ate for pain. Subsequently, I will briefly review three solu­tions to this chal­lenge, arguing that the third is the most prom­ising option.

Neuroscientifically speak­ing, pain is typ­ic­ally under­stood as an integ­rated phe­nomen­on which emerges with the inter­ac­tion of sim­ul­tan­eously act­ive neur­al struc­tures that are widely dis­trib­uted across cor­tic­al and sub­cor­tic­al areas (e.g. Apkarian et al., 2005; Peyron et al., 1999). Interestingly, and per­haps sur­pris­ingly, the activ­a­tion of these neur­al struc­tures is neither suf­fi­cient nor neces­sary for the exper­i­ence of pain (Wartolowska, 2011). Those neur­al struc­tures that are highly cor­rel­ated with the exper­i­ence of pain are not pain-specific (e.g. Apkarian, Bushnell, & Schweinhardt, 2013), and even the activ­a­tion of the entire neur­al net­work is not suf­fi­cient for pain. For instance, itch and pain are pro­cessed in the same ana­tom­ic­ally defined net­work (Mochizuki & Kakigi, 2015). There also does not seem to be any neur­al struc­ture whose activ­a­tion is neces­sary for pain (Tracey, 2011). Even patients with sub­stan­tial lesions in those neur­al struc­tures that are often regarded as most cent­ral for pain pro­cessing are still able to exper­i­ence pain (e.g. Starr et al., 2009).

Figure 1 Human brain processing pain, retrieved from Apkarian et al. (2005). Original picture caption: Cortical and subcortical regions involved in pain perception, their inter-connectivity and ascending pathways. Location of brain regions involved in pain perception are color-coded in a schematic drawing and in an example MRI. (a) Schematic shows the regions, their inter-connectivity and afferent pathways. The schematic is modified from Price (2000) to include additional brain areas and connections. (b) The areas corresponding to those shown in the schematic are shown in an anatomical MRI, on a coronal slice and three sagittal slices as indicated at the coronal slice. The six areas used in meta-analysis are primary and secondary somatosensory cortices (SI, SII, red and orange), anterior cingulate (ACC, green), insula (blue), thalamus (yellow), and prefrontal cortex (PC, purple). Other regions indicated include: primary and supplementary motor cortices (M1 and SMA), posterior parietal cortex (PPC), posterior cingulate (PCC), basal ganglia (BG, pink), hypothalamus (HT), amygdala (AMYG), parabrachial nuclei (PB), and periaqueductal grey (PAG).

Given the difficulties of characterizing pain by appeal to unique neural structures or a specialized network, some researchers have attempted to characterize pain by appeal to neurosignatures. ‘Neurosignature’ refers to the spatio-temporal activity pattern generated by a network of interacting neural structures (Melzack, 2001). Neurosignatures are thus less about the mere involvement of an anatomically defined neural network than about how the involved structures are activated and how their activity is coordinated (Reddan & Wager, 2017). Most interestingly, it has been shown that the neurosignature of pain differs from the neurosignature of other somatosensory stimulations, such as itch and warmth (Forster & Handwerker, 2014; Wager et al. 2013).

Unfortunately, dif­fer­ent kinds of pain sub­stan­tially dif­fer with respect to their under­ly­ing neur­osig­na­tures. For instance, neur­osig­na­tures found in patients with chron­ic pain sub­stan­tially dif­fer from those of healthy sub­jects exper­i­en­cing acute pain (Apkarian, Baliki, & Geha, 2009), because the cent­ral nervous sys­tem of sub­jects who live in per­sist­ing pain is con­tinu­ously reor­gan­ized as the brain’s mor­pho­logy, plas­ti­city and chem­istry change over time (Kuner & Flor, 2016; Schmidt-Wilcke, 2015). At most, there­fore, we can state that a par­tic­u­lar coordin­a­tion of neur­al activ­ity is suf­fi­cient to dis­tin­guish a par­tic­u­lar kind of pain from cer­tain non-pain phe­nom­ena. However, there seems to be no single neur­osig­na­ture that is neces­sary for pain to emerge.
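The contrast between a shared anatomical network and a distinguishing neurosignature can be made concrete with a toy simulation. All numbers and “regions” below are invented for illustration (nothing here models real imaging data): two conditions activate the same five regions at the same overall level, so mean activation cannot tell them apart, yet a simple nearest-centroid readout of the activity pattern across regions separates them easily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example, not real data: "pain" and "itch" trials activate the
# SAME five regions with the same overall level (mean 0.5), but the pattern
# across regions differs -- a caricature of a multivariate neurosignature.
pain_pattern = np.array([0.8, 0.2, 0.6, 0.4, 0.5])
itch_pattern = np.array([0.2, 0.8, 0.4, 0.6, 0.5])

def simulate(pattern, n_trials=100, noise=0.1):
    """Noisy trials scattered around a fixed regional activity pattern."""
    return pattern + noise * rng.standard_normal((n_trials, pattern.size))

pain_trials = simulate(pain_pattern)
itch_trials = simulate(itch_pattern)

# Overall activation level is uninformative: both conditions average ~0.5 ...
mean_gap = abs(pain_trials.mean() - itch_trials.mean())

# ... but a nearest-centroid readout of the pattern separates them.
centroids = {"pain": pain_trials.mean(axis=0), "itch": itch_trials.mean(axis=0)}

def classify(trial):
    return min(centroids, key=lambda k: np.linalg.norm(trial - centroids[k]))

accuracy = np.mean([classify(t) == "pain" for t in pain_trials])
```

In real studies such a readout would of course be cross-validated on held-out data; the point here is only that pattern information can dissociate conditions that share an anatomical network, which is what the neurosignature proposal trades on.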

We have arrived at the dilemma that makes pain such a frus­trat­ing object of research. On one hand, research­ers mostly agree that all and only pains are best defined by means of them being sub­ject­ively exper­i­enced as pains. On the oth­er hand, cog­nit­ive sci­ent­ists are unable to identi­fy a single set of neur­al pro­cesses that cap­ture the cir­cum­stances under which all and only pains are exper­i­enced as such. Thus, the sci­entif­ic com­munity has been unable to provide an inform­at­ive and gen­er­al­iz­able account of pain. Two solu­tions to this dilemma have been offered in the literature.

The first solution involves relinquishing the notion of pain as a certain kind of phenomenal experience, which most cognitive scientists take as a starting point. Instead, neuroscientific data alone are supposed to be the primary criterion for the identification of pain (e.g. Hardcastle, 2015). This solution therefore eliminates the first part of the dilemma. It faces two main problems. Firstly, neural data do not by themselves reveal the function of neural structures, networks or signatures. The function of these neural characteristics is only revealed by correlating them with some sort of additional data (which, in the case of pain, is typically the subject’s qualification of their own experience as pain). Thus, removing the subjective aspect from pain is analogous to biting the hand that feeds you. Secondly, serious ethical problems arise when subjective experience is no longer treated as the decisive criterion for the identification of pain. Because neural data may diverge from the subjective qualification, this approach may lead to a denial of medical support for patients who undergo a phenomenal experience of pain. This is a consequence that the majority of contemporary researchers are — for good reasons — unwilling to accept (Davis et al., 2018).

As a second solution, one might relinquish the idea that it is possible to develop a single theory of pain. Instead, researchers should focus on developing separate theories for separate kinds of pain (see, for instance, Corns, 2016, 2017). An analogy might illustrate this approach. The gem class ‘jade’ is unified by the apparent properties of the respective stones, such as color and texture. In scientific terms, however, the class of jade is composed of jadeite and nephrite, which have different chemical compositions. Thus, it is possible to develop a theory that enables a distinct characterization with far-reaching generalizations for either jadeite or nephrite, but not for jade itself (which lacks a necessary and sufficient chemical composition). Similarly, this solution to the pain dilemma holds that all and only pains are unified by their phenomenal experience as pain, but that they cannot be captured in terms of a single scientific theory. Instead, we need a multiplicity of theories of pain which refer to those subclasses that reveal a necessary and sufficient neural profile.

This solution avoids the methodological and ethical problems faced by the first because it is compatible with pains being defined as a certain subjective mental phenomenon. However, because it denies that it is possible to develop a single theory of pain, it leaves the phenomenon that the scientific community is interested in studying incompletely accounted for. If we did develop multiple theories of pain (one for acute pain and one for chronic pain, say), it is far from clear that these theories could explain why all and only pains are subjectively experienced as pain. At best, they might explain why certain cases are acute or chronic pains, but not why both are pains. What is missing is a theoretical link connecting the different kinds of pain that, according to this solution, emerge only as independent neural phenomena in separate theories. In terms of the previous analogy, we need something which plays the role of the resemblances in chemical composition between jadeite and nephrite that explain why both of them appear as ‘jade’.

I would like to offer a third solution to the dilemma which avoids the concerns faced by the first solution, and which provides the missing theoretical link required by the more promising second solution. This is to hold a family resemblance theory of pain. The idea of family resemblance comes from Ludwig Wittgenstein (1953) (although he develops this idea with respect to the meaning of concepts rather than the properties of natural phenomena). A family resemblance theory of pain takes the phenomenal character of pain to unify all and only pains; one’s own subjective experience of pain as such is the criterion of identification that picks out members of the ‘family’ of pain. Moreover, the family resemblance theory of pain denies the presence of an underlying sufficient and necessary neural condition for pain; there is no neural process that distinctively and essentially characterizes pain. Thus, the subjective qualification identifies all and only cases of pain, although they do not share any further necessary or sufficient neural feature. Nonetheless, a family resemblance theory further claims that it is still possible to develop a scientifically useful neurally-based theory of pain that accounts for the phenomenon that the scientific community is interested in.

For this third solution, all and only those phenomena that are experienced as pain are connected through a structure of systematic resemblances that hold between their divergent neural profiles. For instance, consider, again, acute and chronic pain. Both are experienced as pain, and they are substantially different from each other from a neural perspective when directly compared. However, the transformation from acute to chronic pain is a gradual process, whereby the respective duration of pain correlates with the extent of differences in their neural profile (Apkarian, Baliki, & Geha, 2009). Thus, the neural process of a pain’s first occurrence is relatively similar to its second occurrence, which itself only slightly differs from its third occurrence, and so forth, until it has transformed into some completely different neural phenomenon. This connection of resemblances over time enables us, however, to explain why subjects experience all of these kinds of pain as pain: acute and chronic pain are bound together under the family resemblance theory through the resemblance relations that hold between the variety of pains that connect them.
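The structure of this argument can be made vivid with a toy sketch. The feature sets below are purely illustrative inventions, not real neural data: each hypothetical stage in the transition from acute to chronic pain is modelled as a set of neural ‘features’, so that adjacent stages overlap substantially while the endpoints share nothing at all.

```python
# Illustrative toy model of family resemblance (invented features, not
# real neural data): adjacent stages of pain resemble each other, but
# the first and last stages share no features whatsoever.

stages = [
    {"a", "b", "c"},  # hypothetical profile: acute pain, first occurrence
    {"b", "c", "d"},
    {"c", "d", "e"},
    {"d", "e", "f"},  # hypothetical profile: fully chronic pain
]

def resemblance(x, y):
    """Fraction of shared features between two profiles (Jaccard index)."""
    return len(x & y) / len(x | y)

# Every stage strongly resembles its neighbour in the chain...
adjacent = [resemblance(stages[i], stages[i + 1]) for i in range(len(stages) - 1)]
print(adjacent)  # [0.5, 0.5, 0.5]

# ...yet the endpoints have no neural feature in common.
print(resemblance(stages[0], stages[-1]))  # 0.0
```

On this picture, what unifies the stages is not a shared core property but the chain of pairwise resemblances between them, which is exactly the theoretical link the second solution was missing.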

Moreover, the family resemblance theory motivates the investigation of pain’s resemblance relations, which might prove theoretically as well as practically useful. In further developing research projects of this kind, it appears plausible that, for instance, pains that are more similar to each other are more responsive to the same kind of treatment, even though they do not share a necessary and sufficient neural core property. Understanding the gradual transition within the resemblance relations that lead from acute to chronic pain might also offer new possibilities for intervention. Thus, instead of developing a separate theory for each kind of pain, this third approach motivates the investigation of the diversity of neural profiles that occur within the family of pain and of the exact structure of their resemblance relations; indeed, first steps in this direction are already being taken (e.g. Roy & Wager, 2017).

In sum, when it comes to mental phenomena such as pain, the underlying neural substrate reaches a complexity and diversity which prevents the identification of necessary and sufficient neural conditions. The family of pain therefore constitutes a frustrating research object. However, we need not throw out the baby with the bathwater and relinquish the definition of pain as a certain kind of mental phenomenon, or the idea of a scientifically useful theory of pain. Of course, a family resemblance theory will be limited with respect to its discriminative and predictive value, since it acknowledges that there is no necessary or sufficient neural substrate for pain. However, it is the most reductive theory of pain that can be developed in accordance with recent empirical data, and that can account for the fact that all and only pains are experienced as pain.



Apkarian, A. V., Bushnell, M. C., Treede, R.-D., & Zubieta, J.-K. (2005). Human brain mechanisms of pain perception and regulation in health and disease. European Journal of Pain, 9(4), 463–484.

Apkarian, A. V., Baliki, M. N., & Geha, P. Y. (2009). Towards a theory of chronic pain. Progress in Neurobiology, 87(2), 81–97.

Apkarian, A. V., Bushnell, M. C., & Schweinhardt, P. (2013). Representation of pain in the brain. In S. B. McMahon, M. Koltzenburg, I. Tracey, & D. C. Turk (Eds.), Wall and Melzack’s Textbook of Pain (6th ed., pp. 111–128). Philadelphia: Elsevier Ltd.

Corns, J. (2016). Pain eliminativism: scientific and traditional. Synthese, 193(9), 2949–2971.

Corns, J. (2017). Introduction: pain research: where we are and why it matters. In J. Corns (Ed.), The Routledge Handbook of Philosophy of Pain (pp. 1–13). London; New York: Routledge.

Davis, K. D., Flor, H., Greely, H. T., Iannetti, G. D., Mackey, S., Ploner, M., Pustilnik, A., Tracey, I., Treede, R.-D., & Wager, T. D. (2018). Brain imaging tests for chronic pain: medical, legal and ethical issues and recommendations. Nature Reviews Neurology, in press.

Forster, C., & Handwerker, H. O. (2014). Central nervous processing of itch and pain. In E. E. Carstens & T. Akiyama (Eds.), Itch: Mechanisms and Treatment (pp. 409–420). Boca Raton (FL): CRC Press/Taylor & Francis.

Hardcastle, V. G. (2015). Perception of pain. In M. Matthen (Ed.), The Oxford Handbook of Philosophy of Perception (pp. 530–542). Oxford: Oxford University Press.

IASP Subcommittee on Classification. (1986). Pain terms: a current list with definitions and notes on usage. Pain, 24(suppl. 1), 215–221.

Kuner, R., & Flor, H. (2016). Structural plasticity and reorganization in chronic pain. Nature Reviews Neuroscience, 18(1), 20–30.

Melzack, R. (2001). Pain and the neuromatrix in the brain. Journal of Dental Education, 65(12), 1378–1382.

Mochizuki, H., & Kakigi, R. (2015). Central mechanisms of itch. Clinical Neurophysiology, 126(9), 1650–1660.

Peyron, R., García-Larrea, L., Grégoire, M. C., Costes, N., Convers, P., Lavenne, F., Maugière, F., Michel, D., & Laurent, B. (1999). Haemodynamic brain responses to acute pain in humans. Sensory and attentional networks. Brain, 122(9), 1765–1779.

Reddan, M. C., & Wager, T. D. (2017). Modeling pain using fMRI: from regions to biomarkers. Neuroscience Bulletin, 34(1), 208–215.

Roy, M., & Wager, T. D. (2017). Neuromatrix theory of pain. In J. Corns (Ed.), The Routledge Handbook of Philosophy of Pain (pp. 87–97). London; New York: Routledge.

Schmidt-Wilcke, T. (2015). Neuroimaging of chronic pain. Best Practice and Research: Clinical Rheumatology, 29(1), 29–41.

Starr, C. J., Sawaki, L., Wittenberg, G. F., Burdette, J. H., Oshiro, Y., Quevedo, A. S., & Coghill, R. C. (2009). Roles of the insular cortex in the modulation of pain: insights from brain lesions. The Journal of Neuroscience, 29(9), 2684–2694.

Tracey, I. (2011). Can neuroimaging studies identify pain endophenotypes in humans? Nature Reviews Neurology, 7(3), 173–181.

Wager, T. D., Atlas, L. Y., Lindquist, M. A., Roy, M., Woo, C.-W., & Kross, E. (2013). An fMRI-based neurologic signature of physical pain. The New England Journal of Medicine, 368(15), 1388–1397.

Wartolowska, K. (2011). How neuroimaging can help us to visualize and quantify pain? European Journal of Pain Supplements, 5(2), 323–327.

Wittgenstein, L. (1953). Philosophical investigations. G. E. M. Anscombe & R. Rhees (Eds.). Oxford: Blackwell Publishing.

Can a visual experience be biased?

by Jessie Munton — Junior Research Fellow at St John’s College, Cambridge

Beliefs and judgements can be biased: my expectations of someone with a London accent might be biased by my previous exposure to Londoners or stereotypes about them; my confidence that my friend will get the job she is interviewing for may be biased by my loyalty; and my suspicion that it will rain tomorrow may be biased by my exposure to weather in Cambridge over the past few days. What about visual experiences? Can visual experiences be biased?

That’s the question I explore in this blog post. In particular, I’ll ask whether a visual experience could be biased in the sense of exemplifying forms of racial prejudice. I’ll suggest that the answer to this question is a tentative “yes”, and that this presents some novel challenges to how we think of both bias and visual perception.

According to a very simplistic way of thinking about visual perception, it presents the world to us just as it is: it puts us directly in touch with our environment, in a manner that allows it to play a unique, possibly foundational epistemic role. Perception in general, and visual experience with it, is sometimes treated as a kind of given: a source of evidence that is immune to the sorts of rational flaws that beset our cognitive responses to evidence. This approach encourages us to think of visual experience as a neutral corrective to the kinds of flaws that can arise in belief, such as bias or prejudice: there is no room in the processes that generate visual experience for the kinds of influence that cause belief to be biased or prejudiced.

But there is a tension between that view and certain facts about the subpersonal processes that support visual perception in creatures like ourselves. In particular, our visual system faces an underdetermination challenge: the light signals received by the retina fail, on their own, to determine a unique external stimulus (Scholl 2005). To resolve the resulting ambiguity, the visual system must rely on prior information about the environment, and likely stimuli within it. But those priors are not fixed and immutable: the visual system updates them in light of previous experience (Chalk et al 2010; Chun & Turk-Browne 2008). In this way, the visual system learns from the idiosyncratic course that the individual takes through the world.

Equally, the visual system is overwhelmed with possible input: the information available from the environment at any one moment far surpasses what the brain can process (Summerfield & Egner 2009). It must selectively attend to certain objects or areas within the visual field, in order to prioritise the highest value information. Preexisting expectations and priorities determine the salience of information within a given scene. The nature and content of the visual experience you are having at any moment in part depends on the relative value you place on the information in your environment.

We perceive the world, then, in light of our prior expectations and past exposure to it. Those processes of learning and adaptation, of developing skills that fit a particular environmental context, leave visual perception vulnerable to a kind of visual counterpart to bias: we do not come to the world each time with fresh eyes. If we did, we would see less accurately and efficiently than we do.

Cognitive biases often emerge as a response to particular environmental pressures: they persist because they lend some advantage in certain circumstances, but come at the expense of sensitivity to certain other information (Kahneman & Tversky 1973). Similarly, the capacity of the visual system to develop an expertise within a particular context can restrict its sensitivity to certain sorts of information. We can see this kind of structure in the specialist abilities we develop to see faces.

You might naturally think that we perceive high-level features of faces, such as the emotion they display or the racial category they belong to, not directly, but only in virtue of, or perhaps via some kind of subpersonal inference from, their lower-level features: the arrangement of facial features, for instance, or the color and shading that let us pick out those features. In fact, there’s good evidence that we perceive the social category of a face, or the emotion it displays, directly. For instance, we demonstrate “visual adaptation” to facial emotion: after seeing a series of angry faces, a neutral face appears happy. And those adaptation effects are specific to the gender and race of the face, suggesting that these categories of faces may be coded by different neural populations (Jaquet, Rhodes, & Hayward 2007, 2008; Jaquet & Rhodes 2005; Little, DeBruine, & Jones 2005).

Moreover, our skills at face perception seem to be systematically arranged along racial lines: most people are better at recognizing own-race and dominant-race faces (Meissner & Brigham 2001), the result of a process of specialisation that emerges over the first 9 months of life as infants gradually lose the capacity to recognize faces of different or non-dominant races (Kelly et al. 2007). A White adult in a majority White society will generally be better at recognizing other White faces than Black or Asian faces, for instance, whereas a Black person living in a majority Black society will conversely be less good at recognizing White than Black faces. This extends to the identification of emotion from faces, as well as their recognition: subjects are more accurate at identifying the emotion displayed on dominant or same-race faces than other-race faces (Elfenbein & Ambady 2003).

One way of understanding this profile of skills is to think of faces as arranged within a multidimensional “face space” depending on their similarity to one another. We hone our perceptual capacities within that area of face space to which we have most exposure. That area of face space becomes, in effect, stretched, allowing for finer-grained distinctions between faces (Valentine 1991; Valentine, Lewis & Hills 2016). The greater “distance” between faces in the area of face space in which we are most specialized renders those faces more memorable and easier to distinguish from one another. Another way of thinking of this is in terms of “norm-based coding” (Rhodes & Leopold 2011): faces are encoded relative to the average face encountered. Faces further from the norm suffer in terms of our visual sensitivity to the information they carry.
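A compressive coding function gives a rough feel for how norm-based coding could produce this asymmetry. Everything below is an illustrative assumption — a one-dimensional face space and a square-root code are stand-ins, not the models used in the cited papers: faces are coded by their distance from the experienced norm, and because the code is compressive, a fixed physical difference between two faces yields a smaller coded difference the further those faces sit from the norm.

```python
import math

def encode(face, norm):
    """Compressive norm-based code (illustrative stand-in): the coded value
    grows with distance from the experienced norm, but ever more slowly."""
    d = face - norm
    return math.copysign(math.sqrt(abs(d)), d)

norm = 0.0  # average of the faces most frequently encountered

# Two pairs of faces, each pair separated by the same physical difference
# (0.2); one pair lies near the norm, the other far from it:
near = abs(encode(0.5, norm) - encode(0.3, norm))
far = abs(encode(5.5, norm) - encode(5.3, norm))

print(near > far)  # True: the same physical difference is coded more
                   # distinctly near the norm
```

The effect mirrors the face-space picture above: the region around the norm is “stretched”, so same-sized physical differences are easier to resolve there than in sparsely experienced regions.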

On the one hand, it isn’t hard to see how this kind of facial expertise could help us extract maximal information from the faces we most frequently encounter. But the impact of this “same-race face effect” more generally is potentially highly problematic: a White person in a majority White society will be less likely to accurately recognise a Black individual, and less able to accurately perceive their emotions from their face. That diminution of sensitivity to faces of different races paves the way for a range of downstream impacts. Since the visual system fails to advertise this differential sensitivity, the individual is liable to reason as though they have read their emotions with equal perspicuity, and to draw conclusions on that basis (that the individual feels less, perhaps, when the emotion in question is simply visually obscure to them). Relatedly, the lack of information extracted perceptually from the face makes it more likely that the individual will fill that shortfall of information by drawing on stereotypes about the relevant group: that Black people are aggressive, for instance (Shapiro et al. 2009; Brooks and Freeman 2017). And restrictions on the ability to accurately recall certain faces will bring with them social costs for those individuals.

Compare this visual bias to someone writing a report about two individuals, one White and one Black. The report about the White person is detailed and accurate, whilst the report on the Black person is much sparser, lacking information relevant to downstream tasks. In such a case, we would reasonably regard the report writer as biased, particularly if their report writing reflected this kind of discrepancy between White and Black targets more generally. If the visual system displays a structurally similar bias in the information it provides us with, should we regard it, too, as biased?

To answer that question, we need an account of what it is for anything to be biased, be it a visual experience, a belief, or a disposition to behave or reason in some way or other. We use ‘bias’ in many different ways. In particular, we need to distinguish here what I call formal bias from prejudicial bias. In certain contexts, a bias may be relatively neutral. A ship might be deliberately given a bias to list towards the port side, for instance, by uneven distribution of ballast. Similarly, any system that resolves ambiguity in incoming signal on the basis of information it has encountered in the past is biased by that prior information. But that’s a bias that, for the most part, enhances rather than detracts from the accuracy of the resulting judgements or representations. We could call biases of this kind formal biases.

Bias also has another, more colloquial usage, according to which it picks out something distinctively negative, because it indicates an unfair or disproportionate judgement, a judgement subject to an influence that is distinctively illegitimate in some way. Bias in this sense often involves undue influence by demographic categories, for instance. We might describe an admissions process as biased in this way if it disproportionately excludes working-class candidates, or women, or people with red hair. We can call bias of this kind prejudicial bias.

The visual system is clearly capable of exhibiting the first kind of bias. As a system that systematically learns from past experiences in order to effectively prioritise and process new information, it is a formally biased system. Similarly, the same-race face effect in face perception involves the systematic neglect of certain information as the result of task-specific expertise. That renders it an instance of formal bias.

To decide whether this also constitutes an instance of prejudicial bias, we need to ask: is that neglect of information illegitimate? And if so, on what grounds? Two difficulties present themselves at this juncture. The first is that we are, for the most part, not used to assessing the processes involved in visual perception as legitimate or illegitimate (though that has come under increasing pressure recently, particularly in Siegel (2017)). We need to develop a new set of tools for this kind of critique. The second difficulty is the way in which formal bias, including the development of perceptual expertise of the kind demonstrated in the same-race face effect, is a virtue of visual perception. It makes visual perception not just efficient, but possible. Acknowledging that can seem to restrict our ability to condemn the bias in question as not just formal, but prejudicial.

This throws us up against the question: what is the relationship between formal and prejudicial bias? Formal bias is often a virtue: it allows for the more efficient extraction of information, by drawing on relevant past information. Prejudicial bias, on the other hand, is a vice: it limits the subject’s sensitivity to relevant information in a way that seems intuitively problematic. What are the circumstances under which the virtue of formal bias becomes the vice of prejudicial bias?

In part, this seems to depend on the context in which the process in question is deployed, and the task at hand. The virtues of formal biases rely on stability in both the individual’s environment and goals: that’s when reliance on past information and expertise developed via consistent exposure to certain stimuli is helpful. The same-race face effect develops as the visual system learns to extract information from those faces it most frequently encounters. The resulting expertise cannot adapt at the same pace as our changing, complex social goals across a range of contexts. As a result, this kind of formal perceptual expertise results in a loss of important information in certain contexts: an instance of prejudicial bias. If that’s right, then the distinction between formal and prejudicial bias isn’t one that can be identified just by looking at a particular cognitive process in isolation, but only by looking at that process across a dynamic set of contexts and tasks.



Brooks, J. A., & Freeman, J. B. (2017). Neuroimaging of person perception: A social-visual interface. Neuroscience Letters. https://doi.org/10.1016/j.neulet.2017.12.046

Chalk, M., Seitz, A. R., & Seriès, P. (2010). Rapidly learned stimulus expectations alter perception of motion. Journal of Vision, 10(8), 1–18.

Chun, M. M., & Turk-Browne, N. B. (2008). Associative learning mechanisms in vision. Oxford University Press. Retrieved from http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195305487.001.0001/acprof-9780195305487-chapter‑7

Elfenbein, H. A., & Ambady, N. (2003). When familiarity breeds accuracy: cultural exposure and facial emotion recognition. Journal of Personality and Social Psychology, 85(2), 276.

Jaquet, E., & Rhodes, G. (2008). Face aftereffects indicate dissociable, but not distinct, coding of male and female faces. Journal of Experimental Psychology: Human Perception and Performance, 34(1), 101–112. https://doi.org/10.1037/0096-1523.34.1.101

Jaquet, E., Rhodes, G., & Hayward, W. G. (2007). Opposite aftereffects for Chinese and Caucasian faces are selective for social category information and not just physical face differences. The Quarterly Journal of Experimental Psychology, 60(11), 1457–1467. https://doi.org/10.1080/17470210701467870

Jaquet, E., Rhodes, G., & Hayward, W. G. (2008). Race-contingent aftereffects suggest distinct perceptual norms for different race faces. Visual Cognition, 16(6), 734–753. https://doi.org/10.1080/13506280701350647

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.

Kelly, D. J., Quinn, P. C., Slater, A. M., Lee, K., Ge, L., & Pascalis, O. (2007). The other-race effect develops during infancy: Evidence of perceptual narrowing. Psychological Science, 18(12), 1084–1089.

Little, A. C., DeBruine, L. M., & Jones, B. C. (2005). Sex-contingent face after-effects suggest distinct neural populations code male and female faces. Proceedings of the Royal Society B: Biological Sciences, 272(1578), 2283–2287. https://doi.org/10.1098/rspb.2005.3220

Meissner, C. A., & Brigham, J. C. (2001). Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review. Psychology, Public Policy, and Law, 7(1), 3–35.

Rhodes, G., & Leopold, D. A. (2011). Adaptive norm-based coding of face identity. https://doi.org/10.1093/oxfordhb/9780199559053.013.0014

Scholl, B. J. (2005). Innateness and (Bayesian) visual perception. In P. Carruthers (Ed.), The Innate Mind: Structure and Contents (p. 34). New York: Oxford University Press.

Shapiro, J. R., Ackerman, J. M., Neuberg, S. L., Maner, J. K., Becker, D. V., & Kenrick, D. T. (2009). Following in the wake of anger: When not discriminating is discriminating. Personality & Social Psychology Bulletin, 35(10), 1356–1367. https://doi.org/10.1177/0146167209339627

Siegel, S. (2017). The Rationality of Perception. Oxford: Oxford University Press.

Summerfield, C., & Egner, T. (2009). Expectation (and attention) in visual cognition. Trends in Cognitive Sciences, 13(9), 403–409.

Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. The Quarterly Journal of Experimental Psychology, 43(2), 161–204.

Valentine, T., Lewis, M. B., & Hills, P. J. (2016). Face-space: A unifying concept in face recognition research. The Quarterly Journal of Experimental Psychology, 69(10), 1996–2019.