Reading Minds & Reading Maps

Do nonhuman animals know they're not alone? Of course they must know there are lots of things in the world around them – rocks, water, trees, other creatures and what have you. But do they know that they inhabit a world populated by minded creatures – that the animals around them see and know things, that they have beliefs, intentions and desires? Can they attribute mental states to other animals, and use those attributions to predict or explain their behaviour? If so, then they're what philosophers and psychologists call 'mindreaders'.

Whether animals are mindreaders has been a contested question in comparative cognition for around forty years (beginning with Premack & Woodruff, 1978), and it remains controversial. My interest in this post is not so much whether animals are mindreaders but rather, if animals are mindreaders, what kind of mindreaders might they be? The motivating thought is this: even if animals do represent and reason about the mental states of others, their understanding of those mental states might be somewhat different from ours.

The idea that animals might have a limited or 'minimal' understanding of mental states has been explored in a number of places (see, for instance, Bermúdez, 2011; Butterfill & Apperly, 2013; Call & Tomasello, 2008). These proposals differ, but they have in common the idea that animals don't construe mental states as representations – that is, as states which represent the world, and which can do so accurately or inaccurately. If these proposals are right, animals might be able to represent others as having factive mental states like seeing or knowing, but would not be able to make sense of another agent having a false belief, or any state that misrepresents the world.

Recent work on mindreading in chimpanzees puts pressure on this sort of proposal. Christopher Krupenye and colleagues (Krupenye, Kano, Hirata, Call, & Tomasello, 2016) found that chimpanzees were able to predict the behaviour of a human with a false belief. It's not uncontroversial (see Andrews, 2018 for discussion), but for the sake of argument let's say that this is indeed evidence that chimps understand false beliefs, as states that misrepresent the world. Does that mean that chimps' understanding of mental states is essentially the same as our own?

I've argued that it doesn't. That's because there are important ways in which mindreaders might differ from one another, even if they represent mental states as representational. To see that, let's think a bit more about representations. A representation has a content – how it represents the world as being – which can be accurate or inaccurate. The sentence 'Santa is in the chimney' is a representation whose content is that Santa is in the chimney. It's accurate if Santa is in the chimney, and inaccurate if he's somewhere else. But a representation also has a format – it exploits a particular representational system in order to represent what it represents. 'Santa is in the chimney' is a representation with a sentential, linguistic format. But we could represent the same content in a number of other formats. For instance, we might represent it pictorially by drawing Santa in the chimney, as in Figure 1. Or we might draw up a map representing the same thing, as in Figure 2.

Given that representations may differ with respect to the representational format they exploit, mindreaders might differ with respect to the representational format they take mental states to have. Some might treat beliefs as something like 'sentences in the head'. Others might treat them as more picture-like. Still others might be what I've called 'mindmappers' (Boyle, 2019) – they might take literally the idea that a belief is a 'map of the neighbouring space by which we steer' (Ramsey, 1931).

This matters, because the representational format one takes mental states to have has a significant impact on one's mindreading abilities – because different representational formats themselves differ from one another in systematic ways.

Take maps. As I'm using the term, a map makes use of a lexicon of icons, each of which stands for a particular (type of) thing, which it combines according to the principle of spatial isomorphism. Simply put, by placing two icons in a particular spatial relationship on a map, one thereby represents that the two things denoted by the icons stand in an isomorphic spatial relationship in reality. That's all there is to it.

If you want to represent the spatial layout of a number of objects in a particular region of space, there are lots of advantages to using a map: it's a very natural and user-friendly way to represent that kind of information. A single map can contain an awful lot of information about the spatial layout of a region. To convey the content of a map in language would usually require a large and unwieldy set of sentences (or a very lengthy sentence). And updating information in a map without introducing inconsistency is easy to do. Updating the represented location of an object by moving an icon thereby also updates the represented relationships between that object and everything else on the map, keeping the whole consistent. If one represented all of this spatial information sententially, it would be easy to introduce inconsistencies. (See Camp, 2007 for a fuller discussion of maps' representational features.)
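The consistency point can be made concrete with a toy sketch (my own illustration, not from the post; the icon names and coordinates are invented). Moving one icon in a map-like structure updates every spatial relation at once, whereas a sentence-like store records each relation separately and can be updated into contradiction:

```python
# A 'map' as a lexicon of icons assigned coordinates. Relations are not stored;
# they are read off the geometry, so moving an icon keeps everything consistent.
map_rep = {"santa": (0, 5), "sleigh": (3, 5), "tree": (0, 0)}

def north_of(a, b, m):
    """True if icon a is represented as north of (higher y than) icon b."""
    return m[a][1] > m[b][1]

# One update to Santa's location changes all of his spatial relations at once.
map_rep["santa"] = (0, -2)
assert not north_of("santa", "tree", map_rep)   # relation updated automatically

# A sentential analogue stores each relation as a separate 'sentence'. Updating
# one sentence while forgetting to retract another yields an inconsistent store:
sentences = {("santa", "north_of", "tree"), ("santa", "north_of", "sleigh")}
sentences.add(("tree", "north_of", "santa"))    # forgot to retract the first claim
assert ("santa", "north_of", "tree") in sentences  # contradiction now coexists
```

The map cannot even express the contradictory state: an icon has exactly one location, so the format itself enforces consistency.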

For all that, maps are an extremely limiting representational format: all they can really represent is the spatial layout of objects in a region. If you want to represent that Christmas is coming, that the goose is getting fat, or that Santa is really your dad, a map would be a poor format to choose. These are not the kinds of contents that a map can express. For that kind of thing, you need a more expressively powerful format – like language.

The point is that the distinctive strengths and weaknesses of representational formats will show up in mindreaders' abilities and behaviour. Humans can ascribe an apparently unlimited range of beliefs – beliefs about Santa's true identity, about death and resurrection, about possible presents with no known location. I think this is good evidence that we take mental states to be linguistic, or at least to have a format which mirrors language's expressive power.

But animals might not be like us in that respect: they might think of beliefs as maps in the head. If they do, they would be able to capture what others think about where things are to be found, but they wouldn't be able to make sense of beliefs about object identities or about non-spatial properties – and nor could they make sense of someone having a belief about an object whilst having no belief about its location. To my knowledge, whether animals can represent these non-spatial beliefs has not been investigated. So, it remains an open empirical question whether they treat beliefs as map-like, linguistic, or having some other format. But it's a question worth investigating. If animals construed mental states as having a non-linguistic format, there would remain a significant sense in which animals' mindreading abilities differed qualitatively from ours.


References

Andrews, K. (2018). Do chimpanzees reason about belief? In K. Andrews & J. Beck (Eds.), The Routledge Handbook of Philosophy of Animal Minds. Abingdon: Routledge.

Bermúdez, J. L. (2011). The force-field puzzle and mindreading in non-human primates. Review of Philosophy and Psychology, 2(3), 397–410. https://doi.org/10.1007/s13164-011-0077-9

Boyle, A. (2019). Mapping the minds of others. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-019-00434-z

Butterfill, S. A., & Apperly, I. A. (2013). How to construct a minimal theory of mind. Mind & Language, 28(5), 606–637.

Call, J., & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192.

Camp, E. (2007). Thinking with maps. Philosophical Perspectives, 21, 145–182.

Krupenye, C., Kano, F., Hirata, S., Call, J., & Tomasello, M. (2016). Great apes anticipate that other individuals will act according to false beliefs. Science, 354(6308), 110–114. https://doi.org/10.1126/science.aaf8110

Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526.

Ramsey, F. P. (1931). The Foundations of Mathematics. London: Kegan Paul.

Foraging in the Global Workspace: The Central Executive Reconsidered

David L Barack, Postdoctoral Research Fellow, Salzman Lab, Columbia University

What is the central executive? In cognitive psychology, executive functioning concerns the computational processes that control cognition, including the direction of attention, action selection, decision making, task switching, and other such functions. In cognitive science, the central processor is sometimes modeled after the CPU of a von Neumann architecture, the module of a computational system that makes calls to memory, executes transformations in line with algorithms over the retrieved data, and then writes back to memory the results of these transformations. On my account of the mind, the central processor possesses the psychological functions that are part of executive functioning. I will refer to this combined construct of a central processor that performs executive functions as the central executive.

The central executive has a range of properties, but for this post, I will focus on domain generality, informational accessibility, and inferential richness. By domain general, I mean that the central executive contains information from different modalities (such as vision, audition, etc.). By informationally accessible, I mean both that the central executive's algorithms have access to information outside of the central executive and that information contained in these algorithms is accessible by other processes, whether also part of the central executive or part of input- or output-specific systems. By inferentially rich, I mean that the information in the central executive is potentially combined with any other piece of information to result in new beliefs. The functions of the central executive may or may not be conscious.

Three concepts at the heart of my model of the central executive will provide the resources to begin to explain these three properties: internal search, a global workspace, and foraging.

The first concept is internal search. Newell famously said that search is at the heart of cognition (Newell 1994), a position with which much modern cognitive neuroscience agrees (Behrens, Muller et al. 2018; Bellmund, Gärdenfors et al. 2018). Search is the process of traveling through some space (physical or abstract, such as concept space or the internet) in order to locate a goal, and internal search refers to a search that occurs within the organism. Executive functions, I contend, are types of search.

The second concept in my analysis is the global workspace. Search requires some space through which to occur. In the case of cognition, search occurs in the global workspace: a computational space in which different data structures are located and compete for computational resources and operations. The global workspace is a notion that originated in cognitive theories of consciousness (Baars 1993) but has recently been applied to cognition (Schneider 2011). The global workspace can be conceptualized in different ways. The global workspace could be something like a hard drive that stores data but to which many different other parts of the system (such as the brain) simultaneously have access. Or, it could be something like an arena where different data structures literally roam around and interact with computational operations (like a literal implementation of a production architecture; see Newell 1994; Simon 1999). The central executive is partly constituted by internal search through a global workspace.

The third and final concept in my analysis is foraging. Foraging is a special type of directed search for resources under ignorance. Specifically, foraging is the goal-directed search for resources in non-exclusive, iterated, accept-or-reject decision contexts (Barack and Platt 2017; Barack ms). I contend that central executive processes involve foraging (and hence this third concept is a special case of the first concept, internal search). While central executive processes may not literally make decisions, the analogy is apt. The internal search through the global workspace is directed: a particular goal is sought, which in the case of the central executive is going to be defined by some sort of loss function that the system is attempting to minimize. This search is non-exclusive, as operations on data that are foregone can be executed at a later time. The search is iterated, as the same operation can be performed repeatedly. Finally, the operations of the central executive are accept-or-reject in the sense that computational operations performed on data structures either occur or they do not in a one-at-a-time, serial fashion.
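As a toy illustration (my own sketch, not Barack's model; the operations, loss function, and goal value are all invented), such an accept-or-reject search might look like this: candidate operations are considered one at a time, accepted only when they reduce the loss, and rejected operations remain available on later passes (non-exclusive) and can fire repeatedly (iterated):

```python
def foraging_search(operations, state, loss, passes=20):
    """Serially accept or reject candidate operations on a state.
    Rejected operations are not discarded; they stay available on
    later passes (non-exclusive), and any operation can be applied
    repeatedly across passes (iterated)."""
    for _ in range(passes):
        for op in operations:                  # one-at-a-time, serial
            candidate = op(state)
            if loss(candidate) < loss(state):  # accept only if loss decreases
                state = candidate              # execute the operation
            # otherwise reject: the operation may still fire on a later pass
    return state

# Toy example: operations nudge a number toward a goal of 42.
ops = [lambda x: x + 5, lambda x: x - 1, lambda x: x + 1]
result = foraging_search(ops, state=0, loss=lambda x: abs(x - 42))
# result converges to 42: once the goal is reached, every operation is rejected
```

The loss function plays the role of the "directedness" in the post: it defines what the search through the workspace is for.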

The analysis of the central executive as foraging-type searches through an internal, global workspace may shed light on the three key properties mentioned earlier: domain generality, informational accessibility, and inferential richness.

First, domain generality is provided for by the global workspace and unrestricted search. This workspace is neutral with regard to the subject matter of the data structures it contains, and so is domain general. The search processes that operate in that workspace are also unrestricted in their subject matter—those processes can operate over any data that matches the input constraints for the production system. (While they may be unrestricted in their subject matter, they are restricted by the constraints on the data imposed by the production system's triggering conditions.) The unrestricted subject matter of the global workspace and the unrestricted nature of the production processes both contribute to the domain general nature of the central executive. This analysis of domain generality suggests two types of such generality should be distinguished. There are constraints on what type of content (perceptual, motoric, general, etc.) can be contained in stored data structures, and there are constraints on the type of content that can trigger a transformation. A domain general workspace can contain domain specific productions, for example.

Second, informational accessibility reflects the global workspace's structure. In order to be a global workspace, different modality- or domain-specific modules must have access to the workspace. But this access means that there must be connections to the workspace. Other aspects of informational access remain to be explained. In particular, while the global workspace may be widely interconnected, that does not entail that modules have access to information in specific algorithms in the workspace. The presence of a workspace merely ensures that some of the needed architectural features for such access are present.

Third, inferential richness results from this internal foraging through the workspace. Foraging computations are optimal in that they minimize or maximize some function under uncertainty. Such optimality implies that the executed computation reflects the best data at hand, regardless of its content. Any such data can be utilized to determine the operation that is actually executed at a given time. This explanation of inferential richness is not quite the sort described by Quine (1960) or Fodor (1983), who envision inferential richness as the potential for any piece of information to influence any other. But with enough simple foraging-like computations and enough time, this potential widespread influence can be approximated.

These comments have been speculative, but I hope I have provided an outline of a sketch for a new model of the central executive. Obviously much more conceptual and theoretical work needs to be done, and many objections—perhaps most famously those of Fodor, who despaired of a scientific account of such central processes—remain to be addressed. I intend to flesh out these ideas in a series of essays. Regardless, I think that there is much more promise in a scientific explanation of these crucial, central psychological processes than has been previously appreciated.


References:

Baars, B. J. (1993). A cognitive theory of consciousness. Cambridge University Press.

Barack, D. L. and M. L. Platt (2017). Engaging and Exploring: Cortical Circuits for Adaptive Foraging Decisions. Impulsivity, Springer: 163–199.

Barack, D. L. (ms). "Information Harvesting: Reasoning as Foraging in the Space of Propositions."

Behrens, T. E., T. H. Muller, J. C. Whittington, S. Mark, A. B. Baram, K. L. Stachenfeld and Z. Kurth-Nelson (2018). "What is a cognitive map? Organizing knowledge for flexible behavior." Neuron, 100(2): 490–509.

Bellmund, J. L., P. Gärdenfors, E. I. Moser and C. F. Doeller (2018). "Navigating cognition: Spatial codes for human thinking." Science, 362(6415): eaat6766.

Fodor, J. A. (1983). The modularity of mind: An essay on faculty psychology. MIT Press.

Newell, A. (1994). Unified Theories of Cognition. Harvard University Press.

Quine, W. V. O. (1960). Word and object. MIT Press.

Schneider, S. (2011). The language of thought. The MIT Press.

Simon, H. (1999). Production systems. The MIT Encyclopedia of the Cognitive Sciences: 676–677.

The Modularity of the Motor System

Myrto Mylopoulos — Department of Philosophy and Institute of Cognitive Science, Carleton University

The extent to which the mind is modular is a foundational concern in cognitive science. Much of this debate has centered on the question of the degree to which input systems, i.e., sensory systems such as vision, are modular (see, e.g., Fodor 1983; Pylyshyn 1999; MacPherson 2012; Firestone & Scholl 2016; Burnston 2017; Mandelbaum 2017). By contrast, researchers have paid far less attention to the question of the extent to which our main output system, i.e., the motor system, qualifies as such.

This is not to say that the latter question has gone without acknowledgement. Indeed, in his classic essay Modularity of Mind, Fodor (1983)—a pioneer in thinking about this topic—writes: “It would please me if the kinds of arguments that I shall give for the modularity of input systems proved to have application to motor systems as well. But I don’t propose to investigate that possibility here” (Fodor 1983, p. 42).

I’d like to take some steps towards doing so in this post.

To start, we need to say a bit more about what modularity amounts to. A central feature of modular systems—and the one on which I will focus here—is their informational encapsulation. Informational encapsulation concerns the range of information that is accessible to a module in computing the function that maps the inputs it receives to the outputs it yields. A system is informationally encapsulated to the degree that it lacks access to information stored outside the system in the course of processing its inputs (cf. Robbins 2009; Fodor 1983).

Importantly, informational encapsulation is a relative notion. A system may be informationally encapsulated with respect to some information, but not with respect to other information. When a system is informationally encapsulated with respect to the states of what Fodor called “the central system”—those states familiar to us as propositional attitude states like beliefs and intentions—this is referred to as cognitive impenetrability or, what I will refer to here as, cognitive impermeability. In characterizing the notion of cognitive permeability more precisely, one must be careful not to presuppose that it is perceptual systems only that are at issue. For a neutral characterization, I prefer the following: a system is cognitively permeable if and only if the function it computes is sensitive to the content of a subject S’s mental states, including S’s intentions, beliefs, and desires. In the famous Müller-Lyer illusion, the visual system lacks access to the subject’s belief that the two lines are identical in length in computing the relative size of the stimuli, so it is cognitively impermeable relative to that belief.

On this characterization of cognitive permeability, the motor system is clearly cognitively permeable in virtue of its computations, and corresponding outputs, being systematically sensitive to the content of an agent’s intentions. The evidence for this is every intentional action you’ve ever performed. Perhaps the uncontroversial nature of this fact has precluded further investigation of cognitive permeability in the motor system. But there are at least two interesting questions to explore here. First, since cognitive permeability, just like informational encapsulation, comes in degrees, we should ask to what extent the motor system is cognitively permeable. Are there interesting limitations that can be drawn out? (Spoiler: yes.) Second, insofar as there are such limitations, we should ask the extent to which they are fixed. Can they be modulated in interesting ways by the agent? (Spoiler: yes.)

Experimental results suggest that there are indeed interesting limitations to the cognitive permeability of the motor system. This is perhaps most clearly shown by appeal to experimental work employing visuomotor rotation tasks (see also Shepherd 2017 for an important discussion of such work with which I am broadly sympathetic). In such tasks, the participant is instructed to reach for a target on a computer screen. They do not see their hand, but they receive visual feedback from a cursor that represents the trajectory of their reaching movement. On some trials, the experimenters introduce a bias to the visual feedback from the cursor by rotating it relative to the actual trajectory of their unseen movement during the reaching task. For example, a bias might be introduced such that the visual feedback from the cursor represents the trajectory of their reach as being rotated 45° clockwise from the actual trajectory of their arm movement. This manipulation allows experimenters to determine how the motor system will compensate for the conflict between the visual feedback that is predicted on the basis of the motor commands it is executing, and the visual feedback the agent actually receives from the cursor. The main finding is that the motor system gradually adapts to the bias in a way that results in the recalibration of the movements it outputs such that they show “drift” in the direction opposite that of the rotation, thus reducing the mismatch between the visual feedback and the predicted feedback.

Figure 1. A: A typical set-up for a visuomotor rotation task. B: Typical error feedback when a counterclockwise directional bias is introduced. (Source: Krakauer 2009)

In the paradigm just described, participants do not form an intention to adopt a compensatory strategy; the adaptation the motor system exhibits is purely the result of implicit learning mechanisms that govern its output. But in a variant of this paradigm (Mazzoni & Krakauer 2006), participants are instructed to adopt an explicit “cheating” strategy—that is, to form intentions—to counter the angular bias introduced by the experimenters. This is achieved by placing a neighbouring target (Tn) at a 45° angle from the proper target (Tp) in the direction opposite the bias (e.g., if the bias is 45° counterclockwise from the Tp, the Tn is placed 45° clockwise from the Tp), such that if participants aim for the Tn, the bias will be compensated for, and the cursor will hit the Tp, thus satisfying the primary task goal.
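The geometry of this compensation strategy can be sketched in a few lines (an illustrative toy model of my own, not the experimenters' code; the target coordinates are invented, and the angles follow the example above): aiming at a point rotated 45° opposite the bias means the rotated cursor lands on the proper target.

```python
import math

def rotate(point, degrees):
    """Rotate a 2D point about the origin (counterclockwise for positive angles)."""
    r = math.radians(degrees)
    x, y = point
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

proper_target = (10.0, 0.0)                    # Tp, an arbitrary reach target
neighbour_target = rotate(proper_target, -45)  # Tn: 45° clockwise from Tp

# The experimental bias rotates cursor feedback 45° counterclockwise relative
# to the actual reach. Aiming the reach at Tn, the biased cursor lands on Tp:
cursor = rotate(neighbour_target, 45)
assert all(abs(c - t) < 1e-9 for c, t in zip(cursor, proper_target))
```

The two opposite rotations cancel exactly, which is why the explicit strategy eliminates reaching error at first; the later drift arises because the motor system then starts implicitly adapting against this very compensation.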

In this set-up, reaching errors relative to the Tp are almost completely eliminated at first. The agent hits the Tp (according to the feedback from the cursor) as a result of forming the intention to aim for the strategically placed Tn. But as participants continue to perform the task on further trials, something interesting happens: their movements once again gradually start to show drift, but this time towards the Tn and away from the Tp. What this result is thought to reflect is yet another implicit process of adaptation by the motor system, which aims to correct for the difference between the aimed-for location (Tn) and the visual feedback (in the direction of the Tp).

Two further details are important for our purposes. First, when participants are instructed to stop using the strategy of aiming for the Tn (in order to hit the Tp) and return their aim to the Tp, “[s]ubstantial and long-lasting” (Mazzoni & Krakauer 2006, p. 3643) aftereffects are observed, meaning the motor system persists in aiming to reduce the difference between the visual feedback and the earlier aimed-for location. Second, in a separate study by Taylor & Ivry (2011) using a very similar paradigm wherein participants had significantly more trials per block (320), participants did eventually correct for the secondary adaptation by the motor system and reverse the direction of their movement, but only gradually, and by means of adopting explicit aiming strategies to counteract the drift.

On the basis of these results, we can draw at least three interesting conclusions about cognitive permeability and the motor system. First, although the motor system is clearly sensitive to the content of the proximal intentions that it takes as input (in this case the intention to aim for the Tn), it is not always sensitive, or is only weakly sensitive, to the distal intentions that those very proximal intentions serve—in this case the intention to hit the Tp. If this is correct, it may be that the motor system lacks sensitivity to the structure of practical reasoning that often guides an agent’s present action in the background. In this case, the motor system seems not to register that the agent intends to hit the Tp by way of aiming and reaching for the Tn.

Second, given that aftereffects persist for some time even once the explicit aiming strategy (and therefore the intention to aim for the Tn) has been abandoned, we may conclude that the motor system is only sensitive to the content of proximal intentions to a limited degree, in that it takes time for it to properly update its performance relative to the agent’s current proximal intention. The implicit adaptation, indexed to the earlier intention, cannot be overridden immediately.

Third, this degree of sensitivity is not fixed, but rather can vary over time as the result of an agent’s interventions, as determined in Taylor & Ivry’s study, where the drift was eventually reversed after a sufficiently large number of trials wherein the agent continuously adjusted their aiming strategy.

To close, I’d like to outline what I take to be a couple of important upshots of the preceding discussion for neighbouring philosophical debates:

  1. Recent discussions of skilled action have sought to determine “how far down” action control is intelligent (see, e.g., Fridland 2014, 2017; Levy 2017; Shepherd 2017). And, on at least some views, this is a function of the degree to which the motor system is sensitive to the content of an agent’s intentions. Here we see that this sensitivity is sometimes limited, but can also improve over time. In my view, this reveals another important dimension of the motor system’s intelligence that goes beyond mere sensitivity, and that pertains to its ability to adapt to an agent’s present goals through learning processes that exhibit a reasonable degree of both stability and flexibility.
  2. Recently, action theorists have turned their attention to solving the so-called “interface problem”, that is, the problem of how intentions and motor representations successfully coordinate given their (arguably) different representational formats (see, e.g., Butterfill & Sinigaglia 2014; Burnston 2017; Fridland 2017; Mylopoulos & Pacherie 2017, 2018; Shepherd 2017; Ferretti & Caiani 2018). The preceding discussion may suggest a more limited degree of interfacing than one might have thought—obtaining only between an agent’s most proximal intentions and the motor system. It may also suggest that successful interfacing depends on both the learning mechanism(s) of the motor system (for maximal smoothness and stability) as well as a continuous interplay between its outputs and the agent’s own practical reasoning for how best to achieve their goals (for maximal flexibility).

References:

Burnston, D. (2017). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242–258.

Butterfill, S. A. & Sinigaglia, C. (2014). Intention and motor representation in purposive action. Philosophy and Phenomenological Research, 88, 119–145.

Ferretti, G. & Caiani, S. Z. (2018). Solving the interface problem without translation: the same format thesis. Pacific Philosophical Quarterly, doi: 10.1111/papq.12243

Fodor, J. (1983). The modularity of mind: An essay on faculty psychology. Cambridge: The MIT Press.

Fridland, E. (2014). They’ve lost control: Reflections on skill. Synthese, 191(12), 2729–2750.

Fridland, E. (2017). Skill and motor control: intelligence all the way down. Philosophical Studies, 174(6), 1539–1560.

Krakauer, J. W. (2009). Motor learning and consolidation: the case of visuomotor rotation. Advances in Experimental Medicine and Biology, 629, 405–421.

Levy, N. (2017). Embodied savoir-faire: knowledge-how requires motor representations. Synthese, 194(2), 511–530.

MacPherson, F. (2012). Cognitive penetration of colour experience: Rethinking the debate in light of an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24–62.

Mazzoni, P. & Krakauer, J. W. (2006). An implicit plan overrides an explicit strategy during visuomotor adaptation. The Journal of Neuroscience, 26(14), 3642–3645.

Mylopoulos, M. & Pacherie, E. (2017). Intentions and motor representations: The interface challenge. Review of Philosophy and Psychology, 8(2), 317–336.

Mylopoulos, M. & Pacherie, E. (2018). Intentions: The dynamic hierarchical model revisited. WIREs Cognitive Science, doi: 10.1002/wcs.1481

Shepherd, J. (2017). Skilled action and the double life of intention. Philosophy and Phenomenological Research, doi: 10.1111/phpr.12433

Taylor, J. A. and Ivry, R. B. (2011). Flexible cognitive strategies during motor learning. PLoS Computational Biology, 7(3), e1001096.

The Cinderella of the Senses: Smell as a Window into Mind and Brain?

Ann-Sophie Barwich — Visiting Assistant Professor in the Cognitive Science Program at Indiana University Bloomington

Smell is the Cinderella of our senses. Traditionally dismissed as communicating merely subjective feelings and brutish sensations, the sense of smell never attracted critical attention in philosophy or science. Yet the characteristics of odor perception and its neural basis are key to understanding the mind through the brain.

This claim might sound surprising. Human olfaction acquired a rather poor reputation throughout most of Western intellectual history. "Of all the senses it is the one which appears to contribute least to the cognitions of the human mind," commented the French philosopher of the Enlightenment, Étienne Bonnot de Condillac, in 1754. Immanuel Kant (1798) even called smell "the most ungrateful" and "dispensable" of the senses. Scientists were no more positive in their judgment. Olfaction, Charles Darwin concluded (1874), was "of extremely slight service" to mankind. Further, statements about people who paid attention to smell frequently mixed with prejudice about sex and race: women, children, and non-white races — essentially all groups long excluded from the rationality of white men — were found to show increased olfactory sensitivity (Classen et al. 1994). Olfaction, therefore, did not appear to be a topic of reputable academic investment — until recently.

Scientific research on smell was catapulted into mainstream neuroscience almost overnight with the discovery of the olfactory receptor genes by Linda Buck and Richard Axel in 1991. It turned out that the olfactory receptors constitute the largest protein gene family in most mammalian genomes (dolphins excepted), exhibiting a plethora of properties significant for structure-function analysis of protein behavior (Firestein 2001; Barwich 2015). Finally, the receptor gene discovery provided targeted access to probe odor signaling in the brain (Mombaerts et al. 1996; Shepherd 2012). Excitement soon kicked in, and hopes rose high to crack the coding principles of the olfactory system in no time. For the olfactory pathway has a notable characteristic, one that Ramón y Cajal highlighted as early as 1901/02: olfactory signals require only two synapses to go straight into the core cortex (forming almost immediate connections with the amygdala and hypothalamus)! To put this into perspective: in vision, two synapses won't even get you out of the retina. You can follow the rough trajectory of an olfactory signal in Figure 1 below.

Three decades later, the big revelation is still on hold. A lot of prejudice and negative opinion about the human sense of smell has been debunked (Shepherd 2004; Barwich 2016; McGann 2017). But the olfactory brain remains a mystery to date. It appears to differ markedly from vision, audition, and somatosensation in its neural principles of signal integration (Barwich 2018; Chen et al. 2014). The background to this insight is a remarkable piece of contemporary history of science. (Almost all actors key to the modern molecular development of research on olfaction are still alive and actively conducting research.)

Olfaction — unlike other sensory systems — does not maintain a topographic organization of stimulus representation in its primary cortex (Stettler and Axel 2009; Sosulski et al. 2011). That's neuralese for: we actually do not know how the brain organizes olfactory information so that it can tell what kind of perceptual object or odor image an incoming signal encodes. You won't find a map of stimulus representation in the brain, such that chemical groups like ketones would sit next to aldehydes, or perceptual categories like rose right next to lavender. Instead, axons from the mitral cells in the olfactory bulb (the first neural station of olfactory processing, at the frontal lobe of the brain) project to all kinds of areas in the piriform cortex (the largest domain of the olfactory cortex, previously assumed to be involved in odor object formation). In place of a map, you find a mosaic (Figure 1).

What does this tell us about smell perception and the brain in general? Theories of perception, in effect, have always been theories of vision. Concepts originally derived from vision were made to fit what's usually sidelined as "the other senses." This tendency permeates neuroscience as well as philosophy (Matthen 2005). However, it is a deeply problematic strategy, for two reasons. First, other sensory modalities (smell, taste, and touch, but also the hidden senses of proprioception and interoception) do not resonate entirely with the structure of the visual system (Barwich 2014; Keller 2017; Smith 2017b). Second, we may have narrowed our investigative lens and overlooked important aspects of the visual system itself that could be "rediscovered" if we took a closer look at smell and other modalities. Insight into the complexity of cross-modal interactions, especially in food studies, suggests as much already (Smith 2012; Spence and Piqueras-Fiszman 2014). So the real question we should ask is:

How would theories of perception differ if we extended our perspective on the senses, in particular to include features of olfaction?

Two things stand out already. The first concerns theories of the brain, the second the permeable border between processes of perception and cognition.

First, when it comes to the principles of neural organization, not everything in vision that appears crystal clear really is. The cornerstone of visual topography has recently been called into question by the prominent neuroscientist Margaret Livingstone (who, not coincidentally, trained with David Hubel: one half of the famous duo of Hubel and Wiesel (2004), whose findings led to the paradigm of neural topography in vision research in the first place). Livingstone et al. (2017) found that the spatially discrete activation patterns in the fusiform face area of macaques were contingent upon experience — both in their development and, interestingly, partly also in their maintenance. In other words, learning is more fundamental to the arrangement of neural signals in visual information processing and integration than previously thought. The spatially discrete patterns of the visual system may constitute more of a developmental byproduct than a genetically predetermined Bauplan. From this perspective, figuring out the connectivity that underpins non-topographic and associative neural signaling, such as in olfaction, offers a complementary model for determining the general principles of brain organization.

Second, emphasis on experience and associative processing in perceptual object formation (e.g., top-down effects in learning) also mirrors current trends in cognitive neuroscience. Smell has long been excluded from mainstream theories of perception precisely because of the characteristic properties that make it subject to strong contextual and cognitive biases. Consider a wine taster, who, by focusing on distinct criteria of observational likeness, experiences wine quality differently from a layperson. She can point to subtle flavor notes that the layperson may have missed but, after paying attention, is also able to perceive (e.g., a light oak note). Such influence of attention and learning on perception, ranging from normal perception to the acquisition of perceptual expertise, is constitutive of odor and its phenomenology (Wilson and Stevenson 2006; Barwich 2017; Smith 2017a). Notably, the underlying biases (influenced by semantic knowledge and familiarity) are increasingly studied as constitutive determinants of brain processes in recent cognitive neuroscience, especially in forward models or models of predictive coding, where the brain is said to cope with the plethora of sensory data by anticipating stimulus regularities on the basis of prior experience (e.g., Friston 2010; Graziano 2015). While advocates of these theories have centered their work on vision, olfaction now serves as an excellent model to further the premise of the brain as operating on the basis of forecasting mechanisms (Barwich 2018), blurring the boundary between perceptual and cognitive processes with the implicit hypothesis that perception is ultimately shaped by experience.
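The forecasting idea behind predictive coding can be illustrated with a toy computation (a minimal sketch only: the stimulus value and learning rate are invented, and nothing here models olfaction specifically). A unit maintains a prediction of its input and nudges it by a fraction of the prediction error, so a repeatedly presented stimulus generates progressively smaller error signals — it becomes "expected":

```python
# Toy predictive-coding loop: the prediction is nudged toward the
# stimulus by a fraction of the prediction error on each exposure.
def update(prediction, stimulus, learning_rate=0.5):
    error = stimulus - prediction      # prediction error
    return prediction + learning_rate * error

prediction = 0.0                       # initial expectation (assumed)
stimulus = 1.0                         # repeatedly presented input (assumed)
errors = []
for _ in range(6):
    errors.append(abs(stimulus - prediction))
    prediction = update(prediction, stimulus)

print([round(e, 3) for e in errors])   # → [1.0, 0.5, 0.25, 0.125, 0.062, 0.031]
```

Each pass halves the mismatch: on this picture, what propagates downstream is not the raw signal but the shrinking discrepancy between signal and expectation.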

These are ongoing developments. How the brain makes sense of scents is as yet unknown. What is becoming increasingly clear is that theorizing about the senses necessitates a modernized perspective that admits other modalities and their dimensions. We cannot explain the multitude of perceptual phenomena with vision alone. To think otherwise is not only hubris but sheer ignorance. Smell is less evident in its conceptual borders and classification, its mechanisms of perceptual constancy and variation. It thus requires new philosophical thinking, thinking that reexamines traditional assumptions about stimulus representation and the conceptual separation of perception and judgment. However, a proper understanding of smell — especially in its contextual sensitivity to cognitive influences — cannot succeed without also taking an in-depth look at its neural underpinnings. Differences in coding, at both the receptor and neural levels of the sensory systems, matter to how incoming information is realized as perceptual impressions in the mind, along with the question of what these perceptions are and communicate in the first place.

Olfaction is just one prominent example of how misleading historic intellectual predilections about human cognition can be. Neuroscience has fundamentally opened up possibilities regarding its methods and outlook, in particular over the past two decades. It is about time that we adjust our somewhat older philosophical conjectures of mind and brain accordingly.

References:

Barwich, AS. 2014. "A Sense So Rare: Measuring Olfactory Experiences and Making a Case for a Process Perspective on Sensory Perception." Biological Theory 9(3): 258–268.

Barwich, AS. 2015. "What is so special about smell? Olfaction as a model system in neurobiology." Postgraduate Medical Journal 92: 27–33.

Barwich, AS. 2016. "Making Sense of Smell." The Philosophers' Magazine 73: 41–47.

Barwich, AS. 2017. "Up the Nose of the Beholder? Aesthetic Perception in Olfaction as a Decision-Making Process." New Ideas in Psychology 47: 157–165.

Barwich, AS. 2018. "Measuring the World: Towards a Process Model of Perception." In: Everything Flows: Towards a Processual Philosophy of Biology (D Nicholson and J Dupré, eds). Oxford University Press, pp. 337–356.

Buck, L, and R Axel. 1991. "A novel multigene family may encode odorant receptors: a molecular basis for odor recognition." Cell 65(1): 175–187.

Cajal, R y. 1988 [1901/02]. "Studies on the Human Cerebral Cortex IV: Structure of the Olfactory Cerebral Cortex of Man and Mammals." In: Cajal on the Cerebral Cortex: An Annotated Translation of the Complete Writings (J DeFelipe and EG Jones, eds). Oxford University Press.

Chen, CFF, Zou, DJ, Altomare, CG, Xu, L, Greer, CA, and S Firestein. 2014. "Nonsensory target-dependent organization of piriform cortex." Proceedings of the National Academy of Sciences 111(47): 16931–16936.

Classen, C, Howes, D, and A Synnott. 1994. Aroma: The cultural history of smell. Routledge.

Condillac, EB d. 1930 [1754]. Condillac's treatise on the sensations (MGS Carr, trans). The Favil Press.

Darwin, C. 1874. The descent of man and selection in relation to sex (Vol. 1). Murray.

Firestein, S. 2001. "How the olfactory system makes sense of scents." Nature 413(6852): 211.

Friston, K. 2010. "The free-energy principle: a unified brain theory?" Nature Reviews Neuroscience 11(2): 127.

Graziano, MS, and TW Webb. 2015. "The attention schema theory: a mechanistic account of subjective awareness." Frontiers in Psychology 6: 500.

Hubel, DH, and TN Wiesel. 2004. Brain and visual perception: the story of a 25-year collaboration. Oxford University Press.

Kant, I. 2006 [1798]. Anthropology from a pragmatic point of view (RB Louden, ed). Cambridge University Press.

Keller, A. 2017. Philosophy of Olfactory Perception. Springer.

Livingstone, MS, Vincent, JL, Arcaro, MJ, Srihasam, K, Schade, PF, and T Savage. 2017. "Development of the macaque face-patch system." Nature Communications 8: 14897.

Matthen, M. 2005. Seeing, doing, and knowing: A philosophical theory of sense perception. Clarendon Press.

McGann, JP. 2017. "Poor human olfaction is a 19th-century myth." Science 356(6338): eaam7263.

Mombaerts, P, Wang, F, Dulac, C, Chao, SK, Nemes, A, Mendelsohn, M, Edmondson, J, and R Axel. 1996. "Visualizing an olfactory sensory map." Cell 87(4): 675–686.

Shepherd, GM. 2004. "The human sense of smell: are we better than we think?" PLoS Biology 2(5): e146.

Shepherd, GM. 2012. Neurogastronomy: how the brain creates flavor and why it matters. Columbia University Press.

Smith, BC. 2012. "Perspective: complexities of flavour." Nature 486(7403): S6.

Smith, BC. 2017a. "Beyond Liking: The True Taste of a Wine?" The World of Wine 58: 138–147.

Smith, BC. 2017b. "Human Olfaction, Crossmodal Perception, and Consciousness." Chemical Senses 42(9): 793–795.

Sosulski, DL, Bloom, ML, Cutforth, T, Axel, R, and SR Datta. 2011. "Distinct representations of olfactory information in different cortical centres." Nature 472(7342): 213.

Spence, C, and B Piqueras-Fiszman. 2014. The perfect meal: the multisensory science of food and dining. John Wiley & Sons.

Stettler, DD, and R Axel. 2009. "Representations of odor in the piriform cortex." Neuron 63(6): 854–864.

Wilson, DA, and RJ Stevenson. 2006. Learning to smell: olfactory perception from neurobiology to behavior. JHU Press.

Can a visual experience be biased?

by Jessie Munton — Junior Research Fellow at St John’s College, Cambridge

Beliefs and judgements can be biased: my expectations of someone with a London accent might be biased by my previous exposure to Londoners or stereotypes about them; my confidence that my friend will get the job she is interviewing for may be biased by my loyalty; and my suspicion that it will rain tomorrow may be biased by my exposure to weather in Cambridge over the past few days. What about visual experiences? Can visual experiences be biased?

That's the question I explore in this blog post. In particular, I'll ask whether a visual experience could be biased in the sense of exemplifying forms of racial prejudice. I'll suggest that the answer is a tentative "yes", and that this presents some novel challenges to how we think of both bias and visual perception.

According to a very simplistic way of thinking about visual perception, it presents the world to us just as it is: it puts us directly in touch with our environment, in a manner that allows it to play a unique, possibly foundational epistemic role. Perception in general, and visual experience with it, is sometimes treated as a kind of given: a source of evidence that is immune to the sorts of rational flaws that beset our cognitive responses to evidence. This approach encourages us to think of visual experience as a neutral corrective to the kinds of flaws that can arise in belief, such as bias or prejudice: there is no room in the processes that generate visual experience for the kinds of influence that cause belief to be biased or prejudiced.

But there is a tension between that view and certain facts about the subpersonal processes that support visual perception in creatures like ourselves. In particular, our visual system faces an underdetermination challenge: the light signals received by the retina fail, on their own, to determine a unique external stimulus (Scholl 2005). To resolve the resulting ambiguity, the visual system must rely on prior information about the environment and the likely stimuli within it. But those priors are not fixed and immutable: the visual system updates them in light of previous experience (Chalk et al. 2010; Chun & Turk-Browne 2008). In this way, the visual system learns from the idiosyncratic course that the individual takes through the world.
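This underdetermination-plus-priors picture can be made concrete with a toy Bayesian calculation (an illustrative sketch only; the two candidate stimuli and all the numbers are invented). When an ambiguous retinal signal is equally consistent with two external stimuli, the likelihoods cancel and the interpretation is settled entirely by the prior — which is exactly what shifts as the system learns from experience:

```python
def posterior_a(prior_a, like_a, like_b):
    """Bayes' rule for two candidate external stimuli, A and B."""
    pa = prior_a * like_a
    pb = (1.0 - prior_a) * like_b
    return pa / (pa + pb)

# A fully ambiguous signal: both stimuli explain it equally well.
like_a = like_b = 0.5

flat = posterior_a(0.5, like_a, like_b)     # naive observer: no verdict
learned = posterior_a(0.9, like_a, like_b)  # prior shaped by past exposure to A
print(flat, learned)
```

With equal likelihoods the posterior simply reproduces the prior (0.5 versus 0.9): two observers receiving the very same signal can settle on different interpretations, purely because of their different perceptual histories.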

Equally, the visual system is overwhelmed with possible input: the information available from the environment at any one moment far surpasses what the brain can process (Summerfield & Egner 2009). It must selectively attend to certain objects or areas within the visual field, in order to prioritise the highest-value information. Preexisting expectations and priorities determine the salience of information within a given scene. The nature and content of the visual experience you are having at any moment depends in part on the relative value you place on the information in your environment.

We perceive the world, then, in light of our prior expectations and past exposure to it. Those processes of learning and adaptation, of developing skills that fit a particular environmental context, leave visual perception vulnerable to a kind of visual counterpart to bias: we do not come to the world each time with fresh eyes. If we did, we would see less accurately and efficiently than we do.

Cognitive biases often emerge as a response to particular environmental pressures: they persist because they lend some advantage in certain circumstances, but come at the expense of sensitivity to certain other information (Kahneman & Tversky 1973). Similarly, the capacity of the visual system to develop an expertise within a particular context can restrict its sensitivity to certain sorts of information. We can see this kind of structure in the specialist abilities we develop to see faces.

You might naturally think that we perceive high-level features of faces, such as the emotion they display or the racial category they belong to, not directly, but only in virtue of, or perhaps via some kind of subpersonal inference from, their lower-level features: the arrangement of facial features, for instance, or the color and shading that let us pick out those features. In fact, there's good evidence that we perceive the social category of a face, or the emotion it displays, directly. For instance, we demonstrate "visual adaptation" to facial emotion: after seeing a series of angry faces, a neutral face appears happy. And those adaptation effects are specific to the gender and race of the face, suggesting that these categories of faces may be coded by different neural populations (Jaquet, Rhodes, & Hayward 2007, 2008; Jaquet & Rhodes 2008; Little, DeBruine, & Jones 2005).

Moreover, our skills at face perception seem to be systematically arranged along racial lines: most people are better at recognizing own-race and dominant-race faces (Meissner & Brigham 2001), the result of a process of specialisation that emerges over the first nine months of life as infants gradually lose the capacity to recognize faces of different or non-dominant races (Kelly et al. 2007). A White adult in a majority-White society will generally be better at recognizing other White faces than Black or Asian faces, for instance, whereas a Black person living in a majority-Black society will conversely be less good at recognizing White than Black faces. This extends to the identification of emotion from faces, as well as their recognition: subjects are more accurate at identifying the emotion displayed on dominant- or same-race faces than on other-race faces (Elfenbein & Ambady 2003).

One way of understanding this profile of skills is to think of faces as arranged within a multidimensional "face space" depending on their similarity to one another. We hone our perceptual capacities within that area of face space to which we have most exposure. That area of face space becomes, in effect, stretched, allowing for finer-grained distinctions between faces (Valentine 1991; Valentine, Lewis, and Hills 2016). The greater "distance" between faces in the area of face space in which we are most specialized renders those faces more memorable and easier to distinguish from one another. Another way of thinking of this is in terms of "norm-based coding" (Rhodes and Leopold 2011): faces are encoded relative to the average face encountered. Faces further from the norm suffer in terms of our visual sensitivity to the information they carry.
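The stretching metaphor can be sketched numerically (a deliberately crude illustration: faces as points in a 2-D "face space", with expertise modelled as a stretch factor on the familiar region; every coordinate and factor here is invented). Two pairs of faces that are equally similar in raw physical terms end up at different perceptual distances, so the pair in the region of expertise is easier to tell apart:

```python
import math

def perceived_distance(face1, face2, stretch=1.0):
    """Euclidean distance in a toy 2-D face space, scaled by a
    'stretch' factor standing in for perceptual expertise."""
    return stretch * math.dist(face1, face2)

# Two pairs of faces, physically equally similar (coordinates invented).
familiar_pair   = ((0.10, 0.20), (0.15, 0.25))   # heavily encountered region
unfamiliar_pair = ((0.80, 0.70), (0.85, 0.75))   # rarely encountered region

d_familiar   = perceived_distance(*familiar_pair, stretch=3.0)   # expertise
d_unfamiliar = perceived_distance(*unfamiliar_pair, stretch=1.0)

print(d_familiar > d_unfamiliar)   # → True: familiar faces are more discriminable
```

Norm-based coding would replace the flat stretch factor with distance from the average encountered face, but the upshot is the same: discriminability tracks exposure.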

On the one hand, it isn't hard to see how this kind of facial expertise could help us extract maximal information from the faces we most frequently encounter. But the impact of this "same-race face effect" more generally is potentially highly problematic: a White person in a majority-White society will be less likely to accurately recognise a Black individual, and less able to accurately perceive their emotions from their face. That diminution of sensitivity to faces of different races paves the way for a range of downstream impacts. Since the visual system fails to advertise this differential sensitivity, the individual is liable to reason as though they have read the emotions of both with equal perspicuity, and to draw conclusions on that basis (that the individual feels less, perhaps, when the emotion in question is simply visually obscure to them). Relatedly, the lack of information extracted perceptually from the face makes it more likely that the individual will fill that shortfall of information by drawing on stereotypes about the relevant group: that Black people are aggressive, for instance (Shapiro et al. 2009; Brooks and Freeman 2017). And restrictions on the ability to accurately recall certain faces will bring with them social costs for those individuals.

Compare this visual bias to someone writing a report about two individuals, one White and one Black. The report about the White person is detailed and accurate, whilst the report on the Black person is much sparser, lacking information relevant to downstream tasks. In such a case, we would reasonably regard the report writer as biased, particularly if their report writing reflected this kind of discrepancy between White and Black targets more generally. If the visual system displays a structurally similar bias in the information it provides us with, should we regard it, too, as biased?

To answer that question, we need an account of what it is for anything to be biased, be it a visual experience, a belief, or a disposition to behave or reason in some way or other. We use 'bias' in many different ways. In particular, we need to distinguish here what I call formal bias from prejudicial bias. In certain contexts, a bias may be relatively neutral. A ship might be deliberately given a bias to list towards the port side, for instance, by uneven distribution of ballast. Similarly, any system that resolves ambiguity in an incoming signal on the basis of information it has encountered in the past is biased by that prior information. But that's a bias that, for the most part, enhances rather than detracts from the accuracy of the resulting judgements or representations. We could call biases of this kind formal biases.

Bias also has another, more colloquial usage, according to which it picks out something distinctively negative, because it indicates an unfair or disproportionate judgement, a judgement subject to an influence that is distinctively illegitimate in some way. Bias in this sense often involves undue influence by demographic categories, for instance. We might describe an admissions process as biased in this way if it disproportionately excludes working-class candidates, or women, or people with red hair. We can call bias of this kind prejudicial bias.

The visual system is clearly capable of exhibiting the first kind of bias. As a system that systematically learns from past experiences in order to effectively prioritise and process new information, it is a formally biased system. Similarly, the same-race face effect in face perception involves the systematic neglect of certain information as the result of task-specific expertise. That renders it an instance of formal bias.

To decide whether this also constitutes an instance of prejudicial bias, we need to ask: is that neglect of information illegitimate? And if so, on what grounds? Two difficulties present themselves at this juncture. The first is that we are, for the most part, not used to assessing the processes involved in visual perception as legitimate or illegitimate (though that has come under increasing pressure recently, in particular in Siegel (2017)). We need to develop a new set of tools for this kind of critique. The second difficulty is the way in which formal bias, including the development of perceptual expertise of the kind demonstrated in the same-race face effect, is a virtue of visual perception. It makes visual perception not just efficient, but possible. Acknowledging that can seem to restrict our ability to condemn the bias in question as not just formal, but prejudicial.

This throws us up against the question: what is the relationship between formal and prejudicial bias? Formal bias is often a virtue: it allows for the more efficient extraction of information, by drawing on relevant past information. Prejudicial bias, on the other hand, is a vice: it limits the subject's sensitivity to relevant information in a way that seems intuitively problematic. Under what circumstances does the virtue of formal bias become the vice of prejudicial bias?

In part, this seems to depend on the context in which the process in question is deployed, and the task at hand. The virtues of formal biases rely on stability in both the individual's environment and goals: that's when reliance on past information, and on expertise developed via consistent exposure to certain stimuli, is helpful. The same-race face effect develops as the visual system learns to extract information from those faces it most frequently encounters. The resulting expertise cannot adapt at the same pace as our changing, complex social goals across a range of contexts. As a result, this kind of formal perceptual expertise results in a loss of important information in certain contexts: an instance of prejudicial bias. If that's right, then the distinction between formal and prejudicial bias isn't one that can be identified just by looking at a particular cognitive process in isolation, but only by looking at that process across a dynamic set of contexts and tasks.


References:

Brooks, J. A., & Freeman, J. B. (2017). Neuroimaging of person perception: A social-visual interface. Neuroscience Letters. https://doi.org/10.1016/j.neulet.2017.12.046

Chalk, M., Seitz, A. R., & Seriès, P. (2010). Rapidly learned stimulus expectations alter perception of motion. Journal of Vision, 10(8), 1–18.

Chun, M. M., & Turk-Browne, N. B. (2008). Associative Learning Mechanisms in Vision. Oxford University Press. Retrieved from http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195305487.001.0001/acprof-9780195305487-chapter‑7

Elfenbein, H. A., & Ambady, N. (2003). When familiarity breeds accuracy: cultural exposure and facial emotion recognition. Journal of Personality and Social Psychology, 85(2), 276.

Jaquet, E., & Rhodes, G. (2008). Face aftereffects indicate dissociable, but not distinct, coding of male and female faces. Journal of Experimental Psychology: Human Perception and Performance, 34(1), 101–112. https://doi.org/10.1037/0096-1523.34.1.101

Jaquet, E., Rhodes, G., & Hayward, W. G. (2007). Opposite aftereffects for Chinese and Caucasian faces are selective for social category information and not just physical face differences. The Quarterly Journal of Experimental Psychology, 60(11), 1457–1467. https://doi.org/10.1080/17470210701467870

Jaquet, E., Rhodes, G., & Hayward, W. G. (2008). Race-contingent aftereffects suggest distinct perceptual norms for different race faces. Visual Cognition, 16(6), 734–753. https://doi.org/10.1080/13506280701350647

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.

Kelly, D. J., Quinn, P. C., Slater, A. M., Lee, K., Ge, L., & Pascalis, O. (2007). The other-race effect develops during infancy: Evidence of perceptual narrowing. Psychological Science, 18(12), 1084–1089.

Little, A. C., DeBruine, L. M., & Jones, B. C. (2005). Sex-contingent face after-effects suggest distinct neural populations code male and female faces. Proceedings of the Royal Society B: Biological Sciences, 272(1578), 2283–2287. https://doi.org/10.1098/rspb.2005.3220

Meissner, C. A., & Brigham, J. C. (2001). Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review. Psychology, Public Policy, and Law, 7(1), 3–35.

Rhodes, G., & Leopold, D. A. (2011). Adaptive Norm-Based Coding of Face Identity. https://doi.org/10.1093/oxfordhb/9780199559053.013.0014

Scholl, B. J. (2005). Innateness and (Bayesian) visual perception. In P. Carruthers (Ed.), The Innate Mind: Structure and Contents (p. 34). New York: Oxford University Press.

Shapiro, J. R., Ackerman, J. M., Neuberg, S. L., Maner, J. K., Becker, D. V., & Kenrick, D. T. (2009). Following in the Wake of Anger: When Not Discriminating Is Discriminating. Personality & Social Psychology Bulletin, 35(10), 1356–1367. https://doi.org/10.1177/0146167209339627

Siegel, S. (2017). The Rationality of Perception. Oxford University Press.

Summerfield, C., & Egner, T. (2009). Expectation (and attention) in visual cognition. Trends in Cognitive Sciences, 13(9), 403–409.

Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. The Quarterly Journal of Experimental Psychology, 43(2), 161–204.

Valentine, T., Lewis, M. B., & Hills, P. J. (2016). Face-space: A unifying concept in face recognition research. The Quarterly Journal of Experimental Psychology, 69(10), 1996–2019.


Conceptual short-term memory: a new tool for understanding perception, cognition, and consciousness

Henry Shevlin, Research Associate, Leverhulme Centre for the Future of Intelligence at The University of Cambridge

The notion of memory, as used in ordinary language, may seem to have little to do with perception or conscious experience. While perception informs us about the world as it is now, memory almost by definition tells us about the past. Similarly, whereas conscious experience seems like an ongoing, occurrent phenomenon, it's natural to think of memory as being more like an inert store of information, accessible when we need it but capable of lying dormant for years at a time.

However, in con­tem­por­ary cog­nit­ive sci­ence, memory is taken to include almost any psy­cho­lo­gic­al pro­cess that func­tions to store or main­tain inform­a­tion, even if only for very brief dur­a­tions (see also James, 1890). In this broad­er sense of the term, con­nec­tions between memory, per­cep­tion, and con­scious­ness are appar­ent. After all, some mech­an­ism for the short-term reten­tion of inform­a­tion will be required for almost any per­cep­tu­al or cog­nit­ive pro­cess, such as recog­ni­tion or infer­ence, to take place: as one group of psy­cho­lo­gists put it, “stor­age, in the sense of intern­al rep­res­ent­a­tion, is a pre­requis­ite for pro­cessing” (Halford, Phillips, & Wilson, 2001). Assuming, then, as many the­or­ists do, that per­cep­tion con­sists at least partly in the pro­cessing of sens­ory inform­a­tion, short-term memory is likely to have an import­ant role to play in a sci­entif­ic the­ory of per­cep­tion and per­cep­tu­al experience.

In this lat­ter sense of memory, two major forms of short-term store have been widely dis­cussed in rela­tion to per­cep­tion and con­scious­ness. The first of these is the vari­ous forms of sens­ory memory, and in par­tic­u­lar icon­ic memory. Iconic memory was first described by George Sperling, who in 1960 demon­strated that large amounts of visu­ally presen­ted inform­a­tion were retained for brief inter­vals, far more than sub­jects were able to actu­ally util­ize for beha­viour dur­ing the short win­dow in which they were avail­able (Figure 1). This phe­nomen­on, dubbed par­tial report superi­or­ity, was brought to the atten­tion of philo­soph­ers of mind via the work of Fred Dretske (1981) and Ned Block (1995, 2007). Dretske sug­ges­ted that the rich but incom­pletely access­ible nature of inform­a­tion presen­ted in Sperling’s paradigm was a mark­er of per­cep­tu­al rather than cog­nit­ive pro­cesses. Block sim­il­arly argued that sens­ory memory might be closely tied to per­cep­tion, and fur­ther, sug­ges­ted that such sens­ory forms of memory could serve as the basis for rich phe­nom­en­al con­scious­ness that ‘over­flowed’ the capa­city for cog­nit­ive access.

A second form of short-term store that has been widely discussed by both psychologists and philosophers is working memory. Very roughly, working memory is a short-term informational store that is more robust than sensory memory but also more limited in capacity. Unlike information in sensory memory, which must be cognitively accessed in order to be deployed for voluntary action, information in working memory is immediately poised for use in such behaviour, and is closely linked to notions such as cognition and cognitive access. For reasons such as these, Dretske seemed inclined to treat this kind of capacity-limited process as closely tied or even identical to thought, a suggestion followed by Block.[1] Psychologists such as Nelson Cowan (2001: 91) and Alan Baddeley (2003: 836) take encoding in working memory to be a criterion of consciousness, while global workspace theorists such as Stanislas Dehaene (2014: 63) have regarded working memory as intimately connected – if not identical – with global broadcast.[2]

The foregoing summary is over-simplistic, but hopefully serves to motivate the claim that scientific work on short-term memory mechanisms may have important roles to play in understanding both the relation between perception and cognition and conscious experience. With this idea in mind, I'll now discuss some recent evidence for a third important short-term memory mechanism, namely Molly Potter's proposed Conceptual Short-Term Memory (CSTM). This is a form of short-term memory that serves to encode not merely the sensory properties of objects (like sensory memory), but also higher-level semantic information such as categorical identity. Unlike sensory memory, it seems somewhat resistant to interference from the presentation of new sensory information; whereas iconic memory can be effaced by the presentation of new visual information, CSTM seems somewhat robust. In these respects, it is similar to working memory. Unlike working memory, however, it seems to have both a high capacity and a brief duration; information in CSTM that is not rapidly accessed by working memory is lost after a second or two (for a more detailed discussion, see Potter 2012).
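The three stores contrasted above can be summarized in a small sketch. This simply restates the qualitative claims of the prose as a data structure; the labels are mine, not precise empirical values.

```python
# Qualitative comparison of the three short-term stores discussed above.
# The entries restate the prose characterization, not measured quantities.
stores = {
    "sensory (iconic) memory": {
        "capacity": "high",
        "duration": "very brief",
        "survives_new_input": False,  # effaced by newly presented visual information
        "encodes": "sensory properties",
    },
    "working memory": {
        "capacity": "low",
        "duration": "sustained while maintained",
        "survives_new_input": True,
        "encodes": "cognitively accessed information",
    },
    "conceptual short-term memory (CSTM)": {
        "capacity": "high",
        "duration": "a second or two unless accessed",
        "survives_new_input": True,  # somewhat resistant to interference
        "encodes": "semantic / categorical identity",
    },
}

# CSTM pairs sensory memory's high capacity with working memory's robustness.
assert stores["conceptual short-term memory (CSTM)"]["capacity"] == "high"
assert stores["conceptual short-term memory (CSTM)"]["survives_new_input"]
```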

Evidence for CSTM comes from a range of paradigms, only two of which I dis­cuss here (inter­ested read­ers may wish to con­sult Potter, Staub, & O’Connor, 2004; Grill-Spector and Kanwisher, 2005; and Luck, Vogel, & Shapiro, 1996). The first par­tic­u­larly impress­ive demon­stra­tion is a 2014 exper­i­ment examin­ing sub­jects’ abil­ity to identi­fy the pres­ence of a giv­en semant­ic tar­get (such as “wed­ding” or “pic­nic”) in a series of rap­idly presen­ted images (see Figure 2).

A num­ber of fea­tures of this exper­i­ment are worth emphas­iz­ing. First, sub­jects in some tri­als were cued to identi­fy the pres­ence of a tar­get only after present­a­tion of the images, sug­gest­ing that their per­form­ance did indeed rely on memory rather than merely, for example, effect­ive search strategies. Second, a rel­at­ively large num­ber of images were dis­played in quick suc­ces­sion, either 6 or 12, in both cases lar­ger than the nor­mal capa­city of work­ing memory. Subjects’ per­form­ance in the 12-item tri­als was not drastic­ally worse than in the 6‑item tri­als, sug­gest­ing that they were not rely­ing on nor­mal capacity-limited work­ing memory alone. Third, because the images were dis­played one after anoth­er in the same loc­a­tion in quick suc­ces­sion, it seems unlikely that they were rely­ing on sens­ory memory alone; as noted earli­er, sens­ory memory is vul­ner­able to over­writ­ing effects. Finally, the fact that sub­jects were able to identi­fy not merely the pres­ence of cer­tain visu­al fea­tures but the pres­ence or absence of spe­cif­ic semant­ic tar­gets sug­gests that they were not merely encod­ing low-level sens­ory inform­a­tion about the images, but also their spe­cif­ic cat­egor­ic­al iden­tit­ies, again telling against the idea that sub­jects’ per­form­ance relied on sens­ory memory alone.

Another relevant experiment for the CSTM hypothesis is that of Belke et al. (2008). In this experiment, subjects were presented with a single array of either 4 or 8 items, and asked whether a given category of picture (such as a motorbike) was present. In some trials in which the target was absent, a semantically related distractor (such as a motorbike helmet) was present instead. The surprising result of this experiment, which involved an eye-tracking camera, was that subjects reliably fixated upon either targets or semantically related distractors with their initial eye movements, and were just as likely to do so whether the arrays contained 4 or 8 items, and even when assigned a cognitive load task beforehand (see Figure 3).

Again, these results arguably point to the existence of some further memory mechanism beyond sensory memory and working memory. If subjects were relying on working memory to direct their eye movements, then one would expect such movements to be subject to interference from the cognitive load. The hypothesis that subjects were relying on exclusively sensory mechanisms, meanwhile, runs into the problem that such mechanisms do not seem to be sensitive to high-level semantic properties of stimuli, such as their specific category identity – yet in this experiment, subjects' eye movements were sensitive to just such semantic properties of the items in the array.[3]

Interpretation of exper­i­ments such as these is a tricky busi­ness, of course (for a more thor­ough dis­cus­sion, see Shevlin 2017). However, let us pro­ceed on the assump­tion that the CSTM hypo­thes­is is at least worth tak­ing ser­i­ously, and that there may be some high-capacity semant­ic buf­fer in addi­tion to more widely accep­ted mech­an­isms such as icon­ic memory and work­ing memory. What rel­ev­ance might this have for debates in philo­sophy and cog­nit­ive sci­ence? I will now briefly men­tion three such top­ics. Again, I will be over­sim­pli­fy­ing some­what, but my goal will be to out­line some areas where the CSTM hypo­thes­is might be of interest.

The first such debate con­cerns the nature of the con­tents of per­cep­tion. Do we merely see col­ours, shapes, and so on, or do we per­ceive high-level kinds such as tables, cats, and Donald Trump (Siegel, 2010)? Taking our cue from the data on CSTM, we might sug­gest that this ques­tion can be reframed in terms of which forms of short-term memory are genu­inely per­cep­tu­al. If we take there to be good grounds for con­fin­ing per­cep­tu­al rep­res­ent­a­tion to the kinds of rep­res­ent­a­tions in sens­ory memory, then we might be inclined to take an aus­tere view of the con­tents of exper­i­ence. By con­trast, if the kind of pro­cessing involved in encod­ing in CSTM is taken to be a form of late-stage per­cep­tion, then we might have evid­ence for the pres­ence of high-level per­cep­tu­al con­tent. It might reas­on­ably be objec­ted that this move is merely ‘kick­ing the can down the road’ to ques­tions about the perception-cognition bound­ary, and does not by itself resolve the debate about the con­tents of per­cep­tion. However, more pos­it­ively, this might provide a way of ground­ing largely phe­nomen­o­lo­gic­al debates in the more con­crete frame­works of memory research.

A second key debate where CSTM may play a role con­cerns the pres­ence of top-down effects on per­cep­tion. A copi­ous amount of exper­i­ment­al data (dat­ing back to early work by psy­cho­lo­gists such as Perky, 1910, but pro­lif­er­at­ing espe­cially in the last two dec­ades) has been pro­duced in sup­port of the idea that there are indeed ‘top-down’ effects on per­cep­tion, which in turn has been taken to sug­gest that our thoughts, beliefs, and desires can sig­ni­fic­antly affect how the world appears to us. Such claims have been force­fully chal­lenged by the likes of Firestone and Scholl (2015), who have sug­ges­ted that the rel­ev­ant effects can often be explained in terms of, for example, post­per­cep­tu­al judg­ment rather than per­cep­tion proper.

However, the CSTM hypothesis may again offer a third, compromise position. By distinguishing core perceptual processes (namely those that rely on sensory buffers such as iconic memory) from the kind of later categorical processing performed by CSTM, there may be other positions available in the interpretation of alleged cases of top-down effects on perception. For example, Firestone and Scholl claim that many such results fail to properly distinguish perception from judgment, suggesting that, in many cases, experimentalists' results can be interpreted purely in terms of strictly cognitive effects rather than as involving effects on perceptual experience. However, if CSTM is a distinct psychological process operative between core perceptual processes and later central cognitive processes, then appeals to things such as 'perceptual judgments' may be better founded than Firestone and Scholl seem to think. This would allow us to claim that at least some putative cases of top-down effects went beyond mere postperceptual judgments, while also respecting the hypothesis that early vision is encapsulated (see Pylyshyn, 1999).

A final debate in which CSTM may be of interest is the question of whether perceptual experience is richer than (or 'overflows') what is cognitively accessed. As noted earlier, Ned Block has argued that information in sensory forms of memory may be conscious even if it is not accessed – or even accessible – to working memory (Block, 2007). This would explain phenomena such as the apparent 'richness' of experience: if we imagine standing in Times Square, surrounded by chaos and noise, it is phenomenologically tempting to think we can only focus on and access a tiny fraction of our ongoing experiences at any one moment. A common challenge to this kind of claim is that it threatens to divorce consciousness from personal-level cognitive processing, leaving us open to extreme possibilities such as the 'panpsychic disaster' of perpetually inaccessible conscious experience in very early processing areas such as the LGN (Prinz, 2007). Again, CSTM may offer a compromise position. As noted earlier, the capacity of CSTM does indeed seem to overflow the sparse resources of working memory. However, it also seems to rely on personal-level processing, such as an individual's store of learned categories. Thus one new position, for example, might claim that information must at least reach the stage of CSTM to be conscious, thus allowing that perceptual experience may indeed overflow working memory while also ruling it out in early sensory areas.

These are all bold sug­ges­tions in need of extens­ive cla­ri­fic­a­tion and argu­ment, but it is my hope that I have at least demon­strated to the read­er how CSTM may be a hypo­thes­is of interest not merely to psy­cho­lo­gists of memory, but also those inter­ested in broad­er issues of men­tal archi­tec­ture and con­scious­ness. And while I should also stress that CSTM remains a work­ing hypo­thes­is in the psy­cho­logy of memory, it is one that I think is worth explor­ing on grounds of both sci­entif­ic and philo­soph­ic­al interest.

 

REFERENCES:

Baddeley, A. D. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829–839.

Belke, E., Humphreys, G., Watson, D., Meyer, A., & Telling, A. (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Perception & Psychophysics, 70(8), 1444–1458.

Bergström, F., & Eriksson, J. (2014). Maintenance of non-consciously presen­ted inform­a­tion engages the pre­front­al cor­tex. Frontiers in Human Neuroscience 8:938.

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–247.
Block, N. (2007). Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30, 481–499.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114.

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking Press.

Dretske, F. (1981). Knowledge and the Flow of Information. MIT Press.

Firestone, C., & Scholl, B. J. (2015). Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, 1–77.

Grill-Spector, K., & Kanwisher, N. (2005). Visual Recognition. Psychological Science, 16(2), 152–160.

Halford, G. S., Phillips, S., & Wilson, W. H. (2001). Processing capa­city lim­its are not explained by stor­age lim­its. Behavioral and Brain Sciences 24 (1), 123–124.

James, W. (1890). The Principles of Psychology. Dover Publications.

Luck, S. J., Vogel, E. K., & Shapiro, K. L. (1996). Word mean­ings can be accessed but not repor­ted dur­ing the atten­tion­al blink. Nature, 383(6601), 616–618.

Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing con­cepts of work­ing memory. Nature Neuroscience, 17(3), 347–356.

Potter, M. C. (2012). Conceptual Short Term Memory in Perception and Thought. Frontiers in Psychology, 3:113.

Potter, M. C., Staub, A., & O’Connor, D. H. (2004). Pictorial and con­cep­tu­al rep­res­ent­a­tion of glimpsed pic­tures. Journal of Experimental Psychology: Human Perception and Performance, 30, 478–489.

Prinz, J. (2007). Accessed, accessible, and inaccessible: Where to draw the phenomenal line. Behavioral and Brain Sciences, 30(5–6).

Pylyshyn, Z. (1999). Is vision continuous with cognition?: The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3).

Shevlin, H. (2017). Conceptual short-term memory: A missing part of the mind? Journal of Consciousness Studies, 24(7–8).

Siegel, S. (2010). The Contents of Visual Experience. Oxford University Press.

Sperling, G. (1960). The Information Available in Brief Visual Presentations, Psychological Monographs: General and Applied 74, pp. 1–29.

 

[1] Note that Dretske does not use the term work­ing memory in this con­text, but clearly has some such pro­cess in mind, as made clear by his ref­er­ence to capacity-limited mech­an­isms for extract­ing information.

[2] A com­plic­at­ing factor in dis­cus­sion of work­ing memory comes from the recent emer­gence of vari­able resource mod­els of work­ing memory (Ma et al., 2014) and the dis­cov­ery that some forms of work­ing memory may be able to oper­ate uncon­sciously (see, e.g., Bergström & Eriksson, 2014).

[3] Given that the arrays remained vis­ible to sub­jects through­out the exper­i­ment, one might won­der why this exper­i­ment has rel­ev­ance for our under­stand­ing of memory. However, as noted earli­er, I take it that any short-term pro­cessing of inform­a­tion pre­sumes some kind of under­ly­ing tem­por­ary encod­ing mechanism.

Functional Localization—Complicated and Context-Sensitive, but Still Possible

Dan Burnston—Assistant Professor, Philosophy Department, Tulane University, Member Faculty in the Tulane Brain Institute

The ques­tion of wheth­er func­tions are loc­al­iz­able to dis­tinct parts of the brain, aside from its obvi­ous import­ance to neur­os­cience, bears on a wide range of philo­soph­ic­al issues—reductionism and mech­an­ist­ic explan­a­tion in philo­sophy of sci­ence; cog­nit­ive onto­logy and men­tal rep­res­ent­a­tion in philo­sophy of mind, among many oth­ers. But philo­soph­ic­al interest in the ques­tion has only recently begun to pick up (Bergeron, 2007; Klein, 2012; McCaffrey, 2015; Rathkopf, 2013).

I am a “con­tex­tu­al­ist” about loc­al­iz­a­tion: I think that func­tions are loc­al­iz­able to dis­tinct parts of the brain, and that dif­fer­ent parts of the brain can be dif­fer­en­ti­ated from each oth­er on the basis of their func­tions (Burnston, 2016a, 2016b). However, I also think that what a par­tic­u­lar part of the brain does depends on beha­vi­or­al and envir­on­ment­al con­text. That is, a giv­en part of the brain might per­form dif­fer­ent func­tions depend­ing on what else is hap­pen­ing in the organism’s intern­al or extern­al environment.

Embracing contextualism, as it turns out, involves questioning some deeply held assumptions within neuroscience, and connects the question of localization with other debates in philosophy. In neuroscience, localization is generally construed in what I call absolutist terms. Absolutism is a form of atomism—it suggests that localization can be successful only if 1–1 mappings between brain areas and functions can be found. Since genuine multifunctionality is antithetical to atomist assumptions, it has historically not been a closely analyzed concept in systems or cognitive neuroscience.

In philo­sophy, con­tex­tu­al­ism takes us into ques­tions about what con­sti­tutes good explan­a­tion—in this case, func­tion­al explan­a­tion. Debates about con­tex­tu­al­ism in oth­er areas of philo­sophy, such as semantics and epi­stem­o­logy (Preyer & Peter, 2005), usu­ally shape up as fol­lows. Contextualists are impressed by data sug­gest­ing con­tex­tu­al vari­ation in the phe­nomen­on of interest (usu­ally the truth val­ues of state­ments or of know­ledge attri­bu­tions). In response, anti-contextualists worry that there are neg­at­ive epi­stem­ic con­sequences to embra­cing this vari­ation. The res­ult­ing explan­a­tions will not, on their view, be suf­fi­ciently power­ful or sys­tem­at­ic (Cappelen & Lepore, 2005). We end up with explan­a­tions that do not gen­er­al­ize bey­ond indi­vidu­al cases. Hence, accord­ing to anti-contextualists, we should be motiv­ated to come up with the­or­ies that deny or explain away the data that seem­ingly sup­port con­tex­tu­al variation.

In order to argue for con­tex­tu­al­ism in the neur­al case, then, one must first estab­lish the data that sug­gests con­tex­tu­al vari­ation, then artic­u­late a vari­ety of con­tex­tu­al­ism that (i) suc­ceeds at dis­tin­guish­ing brain areas in terms of their dis­tinct func­tions, and (ii) describes genu­ine generalizations.

Usually, in sys­tems neur­os­cience, the goal is to cor­rel­ate physiolo­gic­al responses in par­tic­u­lar brain areas with par­tic­u­lar types of inform­a­tion in the world, sup­port­ing the claim that the responses rep­res­ent that inform­a­tion. I have pur­sued a detailed case study of per­cep­tu­al area MT (also known as “V5” or the “middle tem­por­al” area). The text­book descrip­tion of MT is that it rep­res­ents motion—it has spe­cif­ic responses to spe­cif­ic pat­terns of motion, and vari­ations amongst its cel­lu­lar responses rep­res­ent dif­fer­ent dir­ec­tions and velo­cit­ies. Hence, MT has the uni­vocal func­tion of rep­res­ent­ing motion: an abso­lut­ist description.

However, MT research in the last 20 years has uncovered data which strongly sug­gests that MT is not just a motion detect­or. I will only list some of the rel­ev­ant data here, which I dis­cuss exhaust­ively in oth­er places. Let’s con­sider a per­cep­tu­al “con­text” as a com­bin­a­tion of per­cep­tu­al features—including shape/orientation, depth, col­or, luminance/brightness, and motion. On the tra­di­tion­al hier­archy, each of these fea­tures has its own area ded­ic­ated to rep­res­ent­ing it. Contextualism, altern­at­ively, starts from the assump­tion that dif­fer­ent com­bin­a­tions of these fea­tures might res­ult in a giv­en area rep­res­ent­ing dif­fer­ent inform­a­tion.

  • Despite the tra­di­tion­al view that MT is “col­or blind” (Livingstone & Hubel, 1988), MT in fact responds to the iden­tity of col­ors when col­or is use­ful in dis­am­big­u­at­ing a motion stim­u­lus. Now in this case, MT still argu­ably rep­res­ents motion, but it does use col­or as a con­tex­tu­al cue for doing so.
  • Over 93% of MT cells represent coarse depth (the rough distance of an object away from the perceiver). Their tuning for depth is independent of their tuning for motion, and many cells represent depth even in stationary stimuli. These depth signals are predictive of psychophysical results.
  • A majority of MT cells also have specific response properties for the fine-depth features of tilt and slant (depth signals resulting from the 3-D shape and orientation of objects), and these can be cued by a variety of distinct features, including binocular disparity and relative velocity.

How do these res­ults sup­port con­tex­tu­al­ism? Consider a par­tic­u­lar physiolo­gic­al response to a stim­u­lus in MT. If the data is cor­rect, then this sig­nal might rep­res­ent motion, or it might rep­res­ent depth—and indeed, either coarse or fine depth—depending on the con­text. Or, it might rep­res­ent a com­bin­a­tion of those influ­ences.[1]

The con­tex­tu­al­ism I advoc­ate focuses on the type of descrip­tions we should invoke in the­or­iz­ing about the func­tions of brain areas. First, our descrip­tions should be con­junct­ive: the func­tion of an area should be described as a con­junc­tion of the dif­fer­ent rep­res­ent­a­tion­al func­tions it serves and the con­texts in which it serves those func­tions. So, MT rep­res­ents motion in a par­tic­u­lar range of con­texts, but also rep­res­ents oth­er types of inform­a­tion in dif­fer­ent contexts—including abso­lute depth in both sta­tion­ary and mov­ing stim­uli, and fine depth in con­texts involving tilt and slant, as defined by either rel­at­ive dis­par­ity or rel­at­ive velocity.

Second, the conjunction should be "open": we shouldn't take the functional description as complete. We should see it as open to amendment as we study new contexts. This openness is vital—it is an induction on the fact that the functional description of MT has changed as new contexts have been explored—but it also leads us precisely into what bothers anti-contextualists (Rathkopf, 2013). The worry is that open descriptions do not have the theoretical strength that supports good explanations. I argue that this worry is mistaken.

First, note that con­tex­tu­al­ist descrip­tions can still func­tion­ally decom­pose brain areas. The key to this is the index­ing of func­tions to con­texts. Compare MT to V4. While V4 also rep­res­ents “motion” con­strued broadly (in “kin­et­ic” or mov­ing edges), col­or, and fine depth, the con­texts in which V4 does so dif­fer from MT. For instance, V4 rep­res­ents col­or con­stan­cies which are not present in MT responses. V4’s spe­cif­ic com­bin­a­tion of sens­it­iv­it­ies to fine depth and curvature allows it to rep­res­ent pro­tuber­ances—curves in objects that extend towards the perceiver—which MT can­not rep­res­ent. So, the types of inform­a­tion that these areas rep­res­ent, along with the con­texts in which they rep­res­ent them, tease apart their functions.
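The contextualist idea of indexing functions to contexts can be made concrete with a toy sketch. The context labels and information types below are illustrative simplifications of the physiological findings discussed above, and the function name is my own.

```python
# Toy sketch: a contextualist functional description maps each context in
# which an area has been studied to the information it represents there.
# Context labels here are illustrative simplifications, not experimental terms.

MT = {
    "moving dots": "motion",
    "color-segmented patterns": "motion",
    "stationary stimuli": "coarse depth",
    "tilt/slant cued by disparity or velocity": "fine depth",
}

V4 = {
    "kinetic (moving) edges": "motion",
    "surfaces under changing illumination": "color constancy",
    "curvature toward the perceiver": "protuberances",
}

def functionally_distinct(area_a, area_b):
    """Two areas are teased apart if their context-indexed descriptions
    differ in some context or in the information represented there."""
    return area_a != area_b

print(functionally_distinct(MT, V4))  # the two context-indexed profiles differ
```

The point of the sketch is just that decomposition survives multifunctionality: even though both areas represent "motion" in some context, the full context-indexed profiles still distinguish them.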

Indexing to con­texts also points the way to solv­ing the prob­lem of gen­er­al­iz­a­tion, so long as we appro­pri­ately mod­u­late our expect­a­tions. For instance, on con­tex­tu­al­ism it is still a power­ful gen­er­al­iz­a­tion that MT rep­res­ents motion. This is sub­stan­ti­ated by the wide range of con­texts in which it rep­res­ents motion—including mov­ing dots, mov­ing bars, and color-segmented pat­terns. It’s just that rep­res­ent­ing motion is not a uni­ver­sal gen­er­al­iz­a­tion about its func­tion. It is a gen­er­al­iz­a­tion with more lim­ited scope. Similarly, MT rep­res­ents fine depth in some con­texts (tilt and slant, defined by dis­par­ity or velo­city), but not in all of them (pro­tuber­ances). Of course, if the func­tion of MT is genu­inely con­text sens­it­ive, then uni­ver­sal gen­er­al­iz­a­tions about its func­tion will not be pos­sible. Hence, insist­ing on uni­ver­sal gen­er­al­iz­a­tions is not an open strategy for an absolutist—at least not without ques­tion begging.

The real crux of the debate, I believe, is about the notion of pro­ject­ab­il­ity. We want our the­or­ies not just to describe what has occurred, but to inform future hypo­thes­iz­ing about nov­el situ­ations. Absolutists hope for a power­ful form of law-like pro­ject­ab­il­ity, on which a suc­cess­ful func­tion­al descrip­tion tells us for cer­tain what that area will do in new con­texts. The “open” struc­ture of con­tex­tu­al­ism pre­cludes this, but this doesn’t both­er the con­tex­tu­al­ist. This situ­ation might seem remin­is­cent of sim­il­ar stale­mates regard­ing con­tex­tu­al­ism in oth­er areas of philosophy.

There are two ways I have sought to break the stale­mate. First is to define a notion of pro­ject­ab­il­ity that informs sci­entif­ic prac­tice, but is com­pat­ible with con­tex­tu­al­ism. Second is to show that even very gen­er­al abso­lut­ist descrip­tions fail to deliv­er on the sup­posed explan­at­ory advant­ages of abso­lut­ism. The key to a con­tex­tu­al­ist notion of pro­ject­ab­il­ity, in my view, is to look for a form of pro­ject­ab­il­ity that struc­tures invest­ig­a­tion, rather than giv­ing law­ful pre­dic­tions. The basic idea is this: giv­en a new con­text, the null hypo­thes­is for an area’s func­tion in that con­text should be that it per­forms its pre­vi­ously known func­tion (or one of its known func­tions). I call this role a min­im­al hypo­thes­is, and the idea is that cur­rently known func­tion­al prop­er­ties struc­ture hypo­thes­iz­ing and invest­ig­a­tion in nov­el con­texts, by provid­ing three options: (i) either the area does not func­tion at all in the nov­el con­text (per­haps MT does not make any func­tion­al con­tri­bu­tion to, say, pro­cessing emo­tion­al valence); (ii) it func­tions in one of its already known ways, in which case anoth­er con­text gets indexed to, and gen­er­al­izes, an already known con­junct, or (iii) it per­forms a new func­tion in that con­text, for­cing a new con­junct to be added to the over­all descrip­tion of the area (indexed to the nov­el con­text, of course). While I won’t go into details here, I argue in (Burnston, 2016a) that this kind of reas­on­ing has shaped the pro­gress of under­stand­ing MT function.
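The minimal-hypothesis procedure can be sketched as a simple decision rule. The three outcome labels mirror options (i)–(iii) above; the function and variable names are my own invention, not anything from the scientific literature.

```python
def minimal_hypothesis_update(known, context, observed):
    """Given an area's known context-indexed functions, classify a finding
    in a novel context per the three options of the minimal hypothesis:
    (i) no function here, (ii) a known function generalizes to a new
    context, or (iii) a genuinely new conjunct is added."""
    if observed is None:
        return known, "(i) no functional contribution in this context"
    updated = {**known, context: observed}
    if observed in known.values():
        return updated, "(ii) generalizes an already known conjunct"
    return updated, "(iii) new conjunct added to the open description"

# Illustrative run, starting from the textbook description of MT:
mt = {"moving dots": "motion"}
mt, verdict = minimal_hypothesis_update(mt, "stationary stimuli", "coarse depth")
print(verdict)  # a new conjunct, indexed to the novel context
```

Note how the description stays "open": each novel context either generalizes an existing conjunct or appends a new one, which is the sense in which prior functional knowledge structures, without lawfully predicting, investigation of new contexts.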

One option open to a defend­er of abso­lut­ism is to attempt to explain away the data sug­gest­ing con­tex­tu­al vari­ation by chan­ging the type of func­tion­al descrip­tion that is sup­posed to gen­er­al­ize over all con­texts (Anderson, 2010; Bergeron, 2007; Rathkopf, 2013). For instance, rather than say­ing that a part of the brain rep­res­ents a spe­cif­ic type of inform­a­tion, maybe we should say that it per­forms the same type of com­pu­ta­tion, whatever inform­a­tion it is pro­cessing. I have called this kind of approach “com­pu­ta­tion­al abso­lut­ism” (Burnston, 2016b).

While com­pu­ta­tion­al neur­os­cience is an import­ant the­or­et­ic­al approach, it can’t save abso­lut­ism. My argu­ment against the view starts from an empir­ic­al premise—in mod­el­ing MT, there is not one com­pu­ta­tion­al descrip­tion that describes everything MT does. Instead, there are a range of the­or­et­ic­al mod­els that each provide good descrip­tions of aspects of MT func­tion. Given this lack of uni­ver­sal gen­er­al­iz­a­tion, the com­pu­ta­tion­al abso­lut­ist has some options. They might move towards more gen­er­al levels of com­pu­ta­tion­al descrip­tion, hop­ing to sub­sume more spe­cif­ic mod­els. The prob­lem with this is that the most gen­er­al com­pu­ta­tion­al descrip­tions in neur­os­cience are what are called canon­ic­al com­pu­ta­tions (Chirimuuta, 2014)—descriptions that can apply to vir­tu­ally all brain areas. But if this is the case, then these descrip­tions won’t suc­cess­fully dif­fer­en­ti­ate brain areas based on their func­tion. Hence, they don’t con­trib­ute to func­tion­al localization.

On the other hand, the suggestion that what matters is the way these computations are applied in particular contexts runs right into the problem of contextual variation. Producing a model that predicts what, say, MT will do in cases of pattern motion or reverse-phi phenomena simply does not predict what functional responses MT will have to depth—not, at least, without investigating and building in knowledge about its physiological responses to those stimuli. So, even if general models are helpful in generating predictions in particular instances, they don't explain what goes on in them. If this description is right, then the supposed explanatory gain of computational absolutism is an empty promise, and contextual analysis of function is necessary. My view of the role of highly general models mirrors those offered by Cartwright (1999) and Morrison (2007) in the physical sciences.

Some caveats are in order here. I’ve only talked about one brain area, and as McCaffrey (2015) points out, dif­fer­ent areas might be amen­able to dif­fer­ent kinds of func­tion­al ana­lys­is. Perceptual areas are import­ant, how­ever, because they are paradigm suc­cess cases for func­tion­al loc­al­iz­a­tion. If con­tex­tu­al­ism works here, it can work else­where, as well as for oth­er units of ana­lys­is, such as cell pop­u­la­tions and net­works (Rentzeperis, Nikolaev, Kiper, & van Leeuwen, 2014). I share McCaffrey’s plur­al­ist lean­ings, but I think that a place for con­tex­tu­al­ist func­tion­al ana­lys­is must be made if func­tion­al decom­pos­i­tion is to suc­ceed. The con­tex­tu­al­ist approach is also com­pat­ible with oth­er frame­works, such as Klein’s (2017) focus on “difference-making” in under­stand­ing the func­tion of brain areas.

I’ll end with a teas­er about my cur­rent pro­ject on these top­ics (Burnston, in prep). Note that, if the func­tion of brain areas can genu­inely shift with con­text, this is not just a the­or­et­ic­al prob­lem, but a prob­lem for the brain. Other parts of the brain must inter­act with MT dif­fer­ently depend­ing on wheth­er it is cur­rently rep­res­ent­ing motion, coarse depth, fine depth, or some com­bin­a­tion. If this is the case, we can expect there to be mech­an­isms in the brain that medi­ate these shift­ing func­tions. Unsurprisingly, I am not the first to note this prob­lem. Neuroscientists have begun to employ con­cepts from com­mu­nic­a­tion and inform­a­tion tech­no­logy to show how physiolo­gic­al activ­ity from the same brain area might be inter­preted dif­fer­ently in dif­fer­ent con­texts, for instance by encod­ing dis­tinct inform­a­tion in dis­tinct dynam­ic prop­er­ties of the sig­nal (Akam & Kullmann, 2014). Contextualism informs the need for this kind of approach. I am cur­rently work­ing on explic­at­ing these frame­works and show­ing how they allow for func­tion­al decom­pos­i­tion even in dynam­ic and context-sensitive neur­al networks.

 

[1] The high proportion and regular organization of depth-representing cells in MT blocks the temptation to try to save informational specificity by subdividing MT into smaller units, as is normally done for V1. V1 is standardly separated into distinct populations of orientation-, wavelength-, and displacement-selective cells, but this kind of move is not available for MT.

 

REFERENCES

Akam, T., & Kullmann, D. M. (2014). Oscillatory multiplexing of population codes for selective communication in the mammalian brain. Nature Reviews Neuroscience, 15(2), 111–122.

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. The Behavioral and Brain Sciences, 33(4), 245–266; discussion 266–313. doi: 10.1017/S0140525X10000853

Bergeron, V. (2007). Anatomical and functional modularity in cognitive science: Shifting the focus. Philosophical Psychology, 20(2), 175–195.

Burnston, D. C. (2016a). Computational neuroscience and localized neural function. Synthese, 1–22. doi: 10.1007/s11229-016-1099-8

Burnston, D. C. (2016b). A contextualist approach to functional localization in the brain. Biology & Philosophy, 1–24. doi: 10.1007/s10539-016-9526-2

Burnston, D. C. (In preparation). Getting over atomism: Functional decomposition in complex neural systems.

Cappelen, H., & Lepore, E. (2005). Insensitive semantics: A defense of semantic minimalism and speech act pluralism. John Wiley & Sons.

Cartwright, N. (1999). The dappled world: A study of the boundaries of science. Cambridge University Press.

Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness of computational explanation in neuroscience. Synthese, 191(2), 127–153. doi: 10.1007/s11229-013-0369-y

Klein, C. (2012). Cognitive ontology and region- versus network-oriented analyses. Philosophy of Science, 79(5), 952–960.

Klein, C. (2017). Brain regions as difference-makers. Philosophical Psychology, 30(1–2), 1–20.

Livingstone, M., & Hubel, D. (1988). Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240(4853), 740–749.

McCaffrey, J. B. (2015). The brain's heterogeneous functional landscape. Philosophy of Science, 82(5), 1010–1022.

Morrison, M. (2007). Unifying scientific theories: Physical concepts and mathematical structures. Cambridge University Press.

Preyer, G., & Peter, G. (2005). Contextualism in philosophy: Knowledge, meaning, and truth. Oxford University Press.

Rathkopf, C. A. (2013). Localization and intrinsic function. Philosophy of Science, 80(1), 1–21.

Rentzeperis, I., Nikolaev, A. R., Kiper, D. C., & van Leeuwen, C. (2014). Distributed processing of color and form in the visual cortex. Frontiers in Psychology, 5.

A Deflationary Approach to the Cognitive Penetration Debate

Dan Burnston—Assistant Professor, Philosophy Department, Tulane University; Faculty Member, Tulane Brain Institute

I construe the debate about cognitive penetration (CP) in the following way: are there causal relations between cognition and perception, such that the processing of the latter is systematically sensitive to the content of the former? Framing the debate in this way imparts some pragmatic commitments. We need to make clear what distinguishes perception from cognition, and what resources each brings to the table. And we need to clarify what kind of causal relationship exists, and whether it is strong enough to be considered "systematic."

I think that current debates about cognitive penetration have failed to be clear enough on these vital pragmatic considerations, and have become muddled as a result. My view is that once we understand perception and cognition aright, we should recognize as an empirical fact that there are causal relationships between them—however, these relations are general, diffuse, and probabilistic, rather than specific, targeted, and determinate. Many supporters of CP certainly seem to have the latter kind of relationship in mind, and it is not clear that the former kind supports the consequences for epistemology and cognitive architecture that these supporters suppose. My primary goal, then, rather than denying cognitive penetration per se, is to de-fuse it (Burnston, 2016, 2017a, in prep).

The view of perception that, I believe, informs most debates about CP is that perception consists in a set of strictly bottom-up, mutually encapsulated feature detectors, perhaps along with some basic mechanisms for binding these features into distinct "proto"-objects (Clark, 2004). Anything categorical, anything that involves inter-featural (to say nothing of intermodal) association, anything that involves top-down influence or assumptions about the nature of the world, and anything that is learned or involves memory, must strictly be due to cognition.

To those of this theoretical persuasion, evidence for effects of some subset of these types in perception is prima facie evidence for CP.[1] Arguments in favor of CP move from the supposed presence of these effects, along with arguments that they are not due to either pre-perceptual attentional shifts or post-perceptual judgments, to the conclusion that CP occurs.

On reflection, however, this is a somewhat odd, or at least non-obvious, move. We start out from a presupposition that perception cannot involve X. Then we observe evidence that perception does in fact involve X. In response, instead of modifying our view of perception, we insist that some other faculty, like cognition, must intervene and do for perception what it cannot do on its own. My arguments in this debate are meant to undermine this kind of intuition by showing that, given a better understanding of perception, positing CP is not only not required, it is also (in its stronger forms, anyway) simply unlikely.

Consider the following example, the Cornsweet illusion (also called the Craik-O'Brien-Cornsweet illusion).

Figure 1. The Cornsweet illusion.

In this kind of stimulus, subjects almost universally perceive the panel on the left as darker than the panel on the right, despite the fact that the two have exactly the same luminance, aside from the dark-to-light gradient just left of the center line (the "Cornsweet edge") and the light-to-dark gradient just right of it. The standard view of the illusion in perceptual science is that perception assumes that the object is extended towards the perceiver in depth, with light coming from the left, such that the panel on the left would be more brightly illuminated, and the panel on the right more dimly illuminated. Thus, in order for the left panel to produce the same luminance value at the retina as the right panel, it must in fact be darker, and the visual system represents it so: such effects are the result of "an extraordinarily powerful strategy of vision" (Purves, Shimpi, & Lotto, 1999, p. 8549).[2]
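The inference the standard view attributes to vision can be put in one line of arithmetic: luminance reaching the eye is (roughly) reflectance times illumination, so a system that "discounts" an assumed illuminant must assign a darker surface to the more brightly lit panel when the two luminances match. The sketch below is a deliberately crude illustration of that logic, with made-up numbers; it is not the Purves et al. (1999) model.

```python
# Crude 'discounting the illuminant' arithmetic behind the standard reading
# of the Cornsweet illusion. Both panels send identical luminance to the
# retina, but the assumed lighting differs, so the inferred surface
# reflectances differ. All values are illustrative, not fitted to data.

def inferred_reflectance(luminance, assumed_illumination):
    # luminance ≈ reflectance * illumination, so divide out the assumed light
    return luminance / assumed_illumination

retinal_luminance = 50.0  # identical for the left and right panels
left = inferred_reflectance(retinal_luminance, assumed_illumination=100.0)
right = inferred_reflectance(retinal_luminance, assumed_illumination=60.0)
print(left < right)  # → True: the brightly lit left panel is seen as darker
```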

Why construe the strategy as visual? There are a number of related considerations. First, the phenomenon involves fine-grained associations between particular features (luminance, discontinuity, and contrast, in particular configurations) that vary systematically and continuously with the amount of evidence for the interpretation. If one increases the depth interpretation by foreshortening or "bowing" the figure, the effect is enhanced, and with further modulation one can get quite pronounced effects. It is unclear, at best, when we would have come by such fine-grained beliefs about these stimuli. Moreover, the effects are mandatory, and operate insensitively to changes in our occurrent beliefs. Fodor is (still) right, in my view, that this kind of mandatoriness supports a perceptual reading.

According to Jonathan Cohen and me (Burnston & Cohen, 2012, 2015), current perceptual science reveals effects like this to be the norm, at all levels of perception. If this "integrative" view of perception is true, then embodying assumptions in complex associations is no evidence for CP—in fact it is part-and-parcel of what perception does.

What about categorical perception? Consider the following example from Gureckis and Goldstone (2008), of what is commonly referred to as a morphspace.

Figure 2. Categories for facial perception.

According to current views (Gauthier & Tarr, 2016; Goldstone & Hendrickson, 2010), categorical perception involves higher-order associations between correlated low-level features. So, recognizing a particular category of faces (for instance, an individual's face, a gender, or a race) involves being able to notice correlations between a number of low-level facial features such as lightness, nose or eye shape, etc., as well as their spatial configurations (e.g., the distance between the eyes or between the nose and the upper lip). A wide range of perceptual categories have been shown to operate similarly.

Interestingly, forming a category can morph these spaces, grouping exemplars together along the relevant dimensions. In Gureckis and Goldstone's example, once subjects learn to discriminate A from B faces (defined by the arbitrary center line), novel examples of A faces will be judged to be more similar to each other along diagnostic dimension A than they were prior to learning. Despite these effects being categorical, I suggest that they are strongly analogous to the cases above—they involve featural associations that are fine-grained (a dimension is "morphed" a particular amount during the course of learning) and mandatory (it is hard not to see, e.g., your brother's face as your brother's) in a similar way to those above. Moreover, subjects are often simply bad at describing their perceptual categories. In studies such as Gureckis and Goldstone's, subjects have trouble saying much about the dimensional associations that inform their percepts. As such, and given the resources of the integrative view, a way is opened to seeing these categorical effects as occurring within perception.[3]
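One way to picture the "morphing" is as a reweighting of distances in the space: after category learning, differences along the diagnostic dimension count for less within a category, so two novel A-faces end up nearer each other than before. The sketch below is a deliberately simplified illustration of this idea, with invented coordinates and weights; it is not Gureckis and Goldstone's model.

```python
import math

# Toy morphspace: faces are points whose first coordinate is the diagnostic
# dimension. Learning a category compresses that dimension (weight < 1),
# so within-category exemplars become more alike. Numbers are invented.

def distance(face1, face2, diag_weight=1.0):
    dx = diag_weight * (face1[0] - face2[0])  # diagnostic dimension, reweighted
    dy = face1[1] - face2[1]                  # non-diagnostic dimension
    return math.hypot(dx, dy)

face_a1, face_a2 = (0.2, 0.3), (0.4, 0.8)            # two novel A-faces
before = distance(face_a1, face_a2)                  # pre-learning distance
after = distance(face_a1, face_a2, diag_weight=0.5)  # post-learning distance
print(after < before)  # → True: the A-faces now count as more alike
```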

If being associative, assumption-involving, or categorical doesn't distinguish a perceptual from a cognitive representation, what does? While there are issues in cashing out the distinction in detail, I suggest that the best way to mark the perception/cognition distinction is in terms of representational form. Cognitive representations are discrete and language-like, while perceptual representations represent structural dimensions of their referents—these might include shape dimensions (tilt, slant, orientation, curvature, etc.), the dimensions that define the phenomenal color space, or higher-order dimensions such as the ones in the face case above. The form distinction captures the kinds of considerations I've advanced here, as well as being compatible with a wide range of related ways of drawing the distinction in philosophy and cognitive science.

With these distinctions in place, we can talk about the kinds of cases that proponents of CP take as evidence. Take Macpherson's (2012) example: Delk and Fillenbaum's studies purporting to show that "heart" shapes are perceived as a more saturated red than non-heart shapes. Let's put aside for a moment the prevalent methodological critiques of these kinds of studies (Firestone & Scholl, 2016). Even so, there is no reason to read the effect as one of cognitive penetration. The mere belief "hearts are red," according to the form distinction, does not represent the structural properties of the color space, and thus has no resources to inform perception how to modify itself in any particular way. Of course, one might posit a more specific belief—say, that this particular heart is a particular shade of red—but this belief would have to be based on perceptual evidence about the stimulus. If perception couldn't represent this stimulus as this shade on its own, we wouldn't come by the belief. Moreover, on the integrative view this is the kind of thing perception does anyway. Hence, there is no reason to see the percept as being the result of cognitive intervention.

In categorical contexts, one strong motivation for cognitive penetration is the idea that perceptual categories are learned, and often this learning is informed by prior beliefs and instructions (Churchland, 1988; Siegel, 2013; Stokes, 2014). There are problems with these views, however, both empirical and conceptual. The empirical problem is that learning can occur without any cognitive influence whatsoever. Perceivers can become attuned to diagnostic dimensions for entirely novel categories simply by viewing exemplars (Folstein, Gauthier, & Palmeri, 2010). Here, subjects have no prior beliefs or instructions for how to perceive the stimulus, but perceptual learning occurs anyway. In many cases, moreover, even when beliefs are employed in learning a category, it's obvious that the belief does not encode any content that is useful for informing the specific percept. In Gureckis and Goldstone's case above, subjects were shown exemplar faces and told "this is an A" or "this is a B." But this indexical belief does not describe anything about the category they actually learn.

One might expect that more detailed instructions or prior beliefs can inform more detailed categories—for instance, Siegel's suggestion that novitiate arborists be told to look at the shape of leaves in order to distinguish (say) pines from birches. However, this runs directly into the conceptual problem. Suppose that pine leaves are pointy while birch leaves are broad. Learners already know what pointy and broad things look like. If these beliefs are all that's required, then subjects don't need to learn anything perceptually in order to make the discrimination. However, if the beliefs are not sufficient to make the discrimination—either because it is a very fine-grained discrimination of shape, or because pine versus birch perceptions in fact require the kind of higher-order dimensional structure discussed above—then their content does not describe what perception learns when subjects do learn to make the distinction perceptually.[4] In either case, there is a gap between the content of the belief and the content of the learned perception—a gap that is supported by studies of perceptual learning and expertise (for further discussion, see Burnston, 2017a, in prep). So, while beliefs might be important causal precursors to perceptual learning, they do not penetrate the learning process.

So, the situation is this: we have seen that, on the integrative view and the form distinction, cognition does not have the resources to determine the kind of perceptual effects that are of interest in debates about CP. In both synchronic and diachronic cases, perception can do much of the heavy lifting itself, thus rendering CP unnecessary to explain the effects. A final advantage of this viewpoint, especially the form distinction, is that it brings particular forms of evidence to bear on the debate—particularly evidence about what happens when processing of lexical/amodal symbols does in fact interact with processing of modal ones. The details are too much to go through here, but I argue that the key to understanding the relationship between perception and cognition is to give up the notion that there are ever direct relationships between the tokening of a particular cognitive content and a specific perceptual outcome (Burnston, 2016, 2017b). Instead, I suggest that tokening a cognitive concept biases perception towards a wide range of possible outcomes. Here, rather than determinate causal relationships, we should expect highly probabilistic, highly general, and highly flexible interactions, where cognition does not force perception to act a certain way, but can shift the baseline probability that we'll perceive something consistent with the cognitive content. This brings priming, attentional, and modulatory effects under a single rubric, but not one on which cognition tinkers with the internal workings of specific perceptual processes to determine how they work in a given instance. I thus call it the "external effect" view of the cognition-perception interface.
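The contrast between determination and biasing can be made concrete with a toy probability model. Here a cognitive cue multiplies the baseline probabilities of cue-consistent percepts and renormalizes, so every percept remains possible and none is forced; the percept labels and numbers are invented for illustration, not drawn from any study.

```python
# Sketch of the 'external effect' idea: a cognitive cue does not fix a
# specific percept; it only raises the baseline probability of percepts
# consistent with it. All labels and values are made up for illustration.

def biased_probs(baseline, bias):
    # scale baseline percept probabilities by a cue-driven bias, renormalize
    raw = {p: baseline[p] * bias.get(p, 1.0) for p in baseline}
    total = sum(raw.values())
    return {p: v / total for p, v in raw.items()}

baseline = {"red-heart": 0.30, "pink-heart": 0.40, "grey-heart": 0.30}
cue_bias = {"red-heart": 1.5}  # tokening 'hearts are red' nudges, not forces
posterior = biased_probs(baseline, cue_bias)
print(posterior["red-heart"] > baseline["red-heart"])  # → True
```

Note that the other percepts lose probability mass but are never ruled out, which is the sense in which the interaction is diffuse and probabilistic rather than determinate.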

Now, it is open to the defender of cognitive penetration to define this diffuse interaction as an instance of penetration—penetration is a theoretical term, and one may define it as one likes. I think, however, that this notion is not what most cognitive penetration theorists have in mind, and it does not obviously carry any of the supposed consequences for modularity, theoretical neutrality, or the epistemic role of perception that proponents of CP assume (Burnston, 2017a; cf. Lyons, 2011). The kind of view I've offered captures, in the best available empirical and pragmatic way, the range of phenomena at issue, and does so very differently than standard discussions of penetration.

 

REFERENCES

Burnston, D. C. (2016). Cognitive penetration and the cognition–perception interface. Synthese, 1–24. doi: 10.1007/s11229-016-1116-y

Burnston, D. C. (2017a). Is aesthetic experience evidence for cognitive penetration? New Ideas in Psychology. doi: 10.1016/j.newideapsych.2017.03.012

Burnston, D. C. (2017b). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242–258. doi: 10.1080/13869795.2017.1312504

Burnston, D. C. (In preparation). There is no diachronic cognitive penetration.

Burnston, D., & Cohen, J. (2012). Perception of features and perception of objects. Croatian Journal of Philosophy (36), 283–314.

Burnston, D. C., & Cohen, J. (2015). Perceptual integration, modularity, and cognitive penetration. In Cognitive Influences on Perception: Implications for Philosophy of Mind, Epistemology, and Philosophy of Action. Oxford: Oxford University Press.

Churchland, P. M. (1988). Perceptual plasticity and theoretical neutrality: A reply to Jerry Fodor. Philosophy of Science, 55(2), 167–187.

Clark, A. (2004). Feature-placing and proto-objects. Philosophical Psychology, 17(4), 443–469. doi: 10.1080/0951508042000304171

Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for "top-down" effects. Behavioral and Brain Sciences, 39, 1–77.

Fodor, J. (1984). Observation reconsidered. Philosophy of Science, 51(1), 23–43.

Folstein, J. R., Gauthier, I., & Palmeri, T. J. (2010). Mere exposure alters category learning of novel objects. Frontiers in Psychology, 1, 40.

Gauthier, I., & Tarr, M. J. (2016). Object perception. Annual Review of Vision Science, 2(1).

Goldstone, R. L., & Hendrickson, A. T. (2010). Categorical perception. Wiley Interdisciplinary Reviews: Cognitive Science, 1(1), 69–78. doi: 10.1002/wcs.26

Gureckis, T. M., & Goldstone, R. L. (2008). The effect of the internal structure of categories on perception. In Proceedings of the 30th Annual Conference of the Cognitive Science Society.

Lyons, J. (2011). Circularity, reliability, and the cognitive penetrability of perception. Philosophical Issues, 21(1), 289–311.

Macpherson, F. (2012). Cognitive penetration of colour experience: Rethinking the issue in light of an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24–62.

Nanay, B. (2014). Cognitive penetration and the gallery of indiscernibles. Frontiers in Psychology, 5.

Purves, D., Shimpi, A., & Lotto, R. B. (1999). An empirical explanation of the Cornsweet effect. The Journal of Neuroscience, 19(19), 8542–8551.

Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. The Behavioral and Brain Sciences, 22(3), 341–365; discussion 366–423.

Raftopoulos, A. (2009). Cognition and perception: How do psychology and neural science inform philosophy? Cambridge: MIT Press.

Rock, I. (1983). The logic of perception. Cambridge: MIT Press.

Siegel, S. (2013). The epistemic impact of the etiology of experience. Philosophical Studies, 162(3), 697–722.

Stokes, D. (2014). Cognitive penetration and the perception of art. Dialectica, 68(1), 1–34.

Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: Analysis by synthesis? Trends in Cognitive Sciences, 10(7), 301–308.

 

[1] Different theorists stress different properties. Macpherson (2012) stresses effects being categorical and associational, Nanay (2014) and Churchland (1988) their being top-down. Raftopoulos (2009) cites the role of memory in categorical effects, and Stokes (2014) and Siegel (2013) the importance of learning in such contexts.

[2] This kind of reading of intra-perceptual processing is extremely common across a range of theorists and perspectives in perceptual psychology (e.g., Pylyshyn, 1999; Rock, 1983; Yuille & Kersten, 2006).

[3] This view also rejects the attempt to make these effects cognitive by defining them as tacit beliefs. The problem with tacit beliefs is that they simply dictate that anything corresponding to a category or inference must be cognitive, which is exactly what's under discussion here. The move thus doesn't add anything to the debate.

[4] This requires assuming a "specificity" condition on the content of a purported penetrating belief—namely, that a candidate penetrator must have the content that perception learns to represent. I argue in more detail elsewhere that giving this condition up trivializes the penetration thesis (Burnston, in prep).

Enactivism, Computation, and Autonomy

by Joe Dewhurst, Teaching Assistant at the University of Edinburgh

Enactivism has historically rejected computational characterisations of cognition, at least in its more traditional versions. This has led to the perception that enactivist approaches to cognition must be opposed to more mainstream computationalist approaches, which offer a computational characterisation of cognition. However, the conception of computation which enactivism rejects is in some senses quite old-fashioned, and it is not so clear that enactivism need necessarily be opposed to computation, understood in a more modern sense. Demonstrating that there could be compatibility, or at least not a necessary opposition, between enactivism and computationalism (in some sense) would open the door to a possible reconciliation or cooperation between the two approaches.

In a recently published paper (Villalobos & Dewhurst 2017), my collaborator Mario and I have focused on elucidating some of the reasons why enactivism has rejected computation, and have argued that these do not necessarily apply to more modern accounts of computation. In particular, we have demonstrated that a physically instantiated Turing machine, which we take to be a paradigmatic example of a computational system, can meet the autonomy requirements that enactivism uses to characterise cognitive systems. This demonstration goes some way towards establishing that enactivism need not be opposed to computational characterisations of cognition, although there may be other reasons for this opposition, distinct from the autonomy requirements.

The enactive concept of autonomy first appears in its modern guise in Varela, Thompson, & Rosch's 1991 book The Embodied Mind, although it has important historical precursors in Maturana's autopoietic theory (see his 1970, 1975, 1981; see also Maturana & Varela 1980) and in cybernetic work on homeostasis (see e.g. Ashby 1956, 1960). There are three dimensions of autonomy that we consider in our analysis of computation. Self-determination requires that the behaviour of an autonomous system be determined by that system's own structure, and not by external instruction. Operational closure requires that the functional organisation of an autonomous system loop back on itself, such that the system possesses no (non-arbitrary) inputs or outputs. Finally, an autonomous system must be precarious, such that the continued existence of the system depends on its own functional organisation, rather than on external factors outside of its control. In this post I will focus on demonstrating that these criteria can be applied to a physical computing system, rather than on addressing why or how enactivism argues for them in the first place.

All three criteria have traditionally been used to disqualify computational systems from being autonomous systems, and hence to deny that cognition (which for enactivists requires autonomy) can be computational (see e.g. Thompson 2007: chapter 3). Here it is important to recognise that the enactivists have a particular account of computation in mind, one that they have inherited from traditional computationalists. According to this 'semantic' account, a physical computer is defined as a system that performs systematic transformations over content-bearing (i.e. representational) states or symbols (see e.g. Sprevak 2010). With such an account in mind, it is easy to see why the autonomy criteria might rule out computational systems. We typically think of such a system as consuming symbolic inputs, which it transforms according to programmed instructions, before producing further symbolic outputs. Already this system has failed to meet the self-determination and operational closure criteria. Furthermore, as artefactual computers are typically reliant on their creators for maintenance, etc., they also fail to meet the precariousness criterion. So, given this quite traditional understanding of computation, it is easy to see why enactivists have typically denied that computational systems can be autonomous.

Nonetheless, understood according to more recent, 'mechanistic' accounts of computation, there is no reason to think that the autonomy criteria must necessarily exclude computational systems. Whilst they differ in some details, all of these accounts deny that computation is inherently semantic, and instead define physical computation in terms of mechanistic structures. We will not rehearse these accounts in any detail here, but the basic idea is that physical computation can be understood in terms of mechanisms that perform systematic transformations over states that do not possess any intrinsic semantic content (see e.g. Miłkowski 2013; Fresco 2014; Piccinini 2015). With this rough framework in mind, we can return to the autonomy criteria.

Even under the mechanistic account, computation is usually understood in terms of mappings between inputs and outputs, where there is a clear sense of the beginning and end of the computational operation. A system organised in this way can be described as 'functionally open', meaning that its functional organisation is open to the world. A functionally closed system, on the other hand, is one whose functional organisation loops back through the world, such that the environmental impact of the system's 'outputs' contributes to the 'inputs' that it receives.

A simple example of this distinction can be found by considering two different ways that a thermostat could be used. In the first case the sensor, which detects ambient temperature, is placed in one house, and the effector, which controls a radiator, is placed in another (see figure 1). This system is functionally open, because there is only a one-way connection between the sensor and the effector, allowing us to straightforwardly identify inputs and outputs to the system.

A more conventional way of setting up a thermostat is with both the sensor and the effector in the same house (see figure 2). In this case the apparent 'output' (i.e. control of the radiator) loops back round to the apparent 'input' (i.e. ambient temperature), forming a functionally closed system. The ambient air temperature in the house is effectively part of the system, meaning that we could just as well treat the effector as providing input and the sensor as producing output – there is no non-arbitrary beginning or end to this system.
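The closed arrangement can be sketched as a simple simulation: the effector changes the very room temperature that the sensor will read on the next step, so the 'output' feeds the 'input' through the environment. All quantities below are made up for illustration.

```python
# Toy simulation of the functionally closed thermostat (figure 2): the
# radiator heats the same room the sensor reads, so the effector's 'output'
# loops back round to the sensor's 'input' through the ambient air.

def step(room_temp, setpoint=20.0, outside=10.0):
    heater_on = room_temp < setpoint       # sensor reads the room...
    heating = 1.5 if heater_on else 0.0    # ...and the effector heats it
    leak = 0.1 * (room_temp - outside)     # heat escaping to the outside
    return room_temp + heating - leak      # the room the sensor reads next

temp = 15.0
for _ in range(50):
    temp = step(temp)
print(abs(temp - 20.0) < 1.0)  # → True: the loop holds the room near setpoint
```

In the functionally open arrangement (figure 1), by contrast, the heated room and the sensed room would be different variables, and no such loop would form.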

Whilst it is typical to treat a computing mechanism more like the first thermostat, with a clear input and output, we do not think that this perspective is essential to the mechanistic understanding of computation. There are two possible ways that we could arrange a computing mechanism. The functionally open mechanism (figure 3) reads from one tape and writes onto another, whilst the functionally closed mechanism (figure 4) reads from and writes onto the same tape, creating a closed system analogous to the thermostat with its sensor and effector in the same house. As Wells (1998) suggests, a conventional Turing machine is actually arranged in the second way, providing an illustration of a functionally closed computing mechanism. Whether or not this is true of other computational systems is a distinct question, but it is clear that at least some physically implemented computers can exhibit operational closure.
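The single-tape arrangement is easy to exhibit in a few lines: every symbol the head writes is available to be read on a later visit, so the machine's 'outputs' feed back as 'inputs' on the same tape. Below is a minimal sketch (a binary-increment machine, chosen for brevity rather than taken from Wells) illustrating the functionally closed arrangement.

```python
# Toy single-tape Turing machine (binary increment), illustrating functional
# closure: the machine reads from and writes to the same tape, so every
# 'output' (a written symbol) can become a later 'input' (a read symbol).

def run_tm(tape, head, state, transitions, halt_state="halt"):
    tape = dict(enumerate(tape))             # sparse tape; blank is '_'
    while state != halt_state:
        symbol = tape.get(head, "_")
        write, move, state = transitions[(state, symbol)]
        tape[head] = write                   # the write lands on the tape...
        head += {"L": -1, "R": 1}[move]      # ...that the next read comes from
    cells = [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

# Increment a binary number by propagating the carry leftwards.
INCREMENT = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),   # past the left edge: new digit
}

print(run_tm("1011", head=3, state="carry", transitions=INCREMENT))  # → 1100
```

A two-tape variant of the same machine, reading from one tape and writing onto another, would compute the same function while remaining functionally open.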

The self-determination criterion requires that a system's operations are determined by its own structure, rather than by external instructions. This criterion applies straightforwardly to at least some computing mechanisms. Whilst many computers are programmable, their basic operations are nonetheless determined by their own physical structure, such that the 'instructions' provided by the programmer only make sense in the context of the system itself. To another system, with a distinct physical structure, those 'instructions' would be meaningless. Just as the enactive automaton 'Bittorio' brings meaning to a meaningless sea of 1s and 0s (see Varela 1988; Varela, Thompson, & Rosch 1991: 151–5), so the structure of a computing mechanism brings meaning to the world that it encounters.

Finally, we can turn to the precariousness criterion. Whilst the computational systems that we construct are typically reliant upon us for continued maintenance and a supply of energy, and play no direct role in their own upkeep, this is a pragmatic feature of how we happen to design those systems rather than anything essential to computation. We could easily imagine a computing mechanism designed so that it seeks out its own source of energy and is able to maintain its own physical structure. Such a system would be precarious in just the same sense that enactivism conceives of living systems as being precarious. So there is no in-principle reason why a computing system should not be able to meet the precariousness criterion.

In this post I have very briefly argued that the enact­iv­ist autonomy cri­ter­ia can be applied to (some) phys­ic­ally imple­men­ted com­put­ing mech­an­isms. Of course, enact­iv­ists may have oth­er reas­ons for think­ing that cog­nit­ive sys­tems can­not be com­pu­ta­tion­al. Nonetheless, we think this ana­lys­is could be inter­est­ing for a couple of reas­ons. Firstly, inso­far as com­pu­ta­tion­al neur­os­cience and com­pu­ta­tion­al psy­cho­logy have been suc­cess­ful research pro­grams, enact­iv­ists might be inter­ested in adopt­ing some aspects of com­pu­ta­tion­al explan­a­tion for their own ana­lyses of cog­nit­ive sys­tems. Secondly, we think that the enact­iv­ist char­ac­ter­isa­tion of autonom­ous sys­tems might help to elu­cid­ate the senses in which a com­pu­ta­tion­al sys­tem might be cog­nit­ive. Now that we have estab­lished the basic pos­sib­il­ity of autonom­ous com­pu­ta­tion­al sys­tems, we hope to devel­op future work along both of these lines, and invite oth­ers to do so too.

I will leave you with this short and amus­ing video of the autonom­ous robot­ic cre­ations of the British cyber­net­i­cist W. Grey Walter, which I hope might serve as a source of inspir­a­tion for future cooper­a­tion between enact­iv­ism and computationalism.

 

References

  • Ashby, R. (1956). An intro­duc­tion to cyber­net­ics. London: Chapman and Hall.
  • Ashby, R. (1960). Design for a Brain. London: Chapman and Hall.
  • Fresco, N. (2014). Physical com­pu­ta­tion and cog­nit­ive sci­ence. Berlin, Heidelberg: Springer-Verlag.
  • Maturana, H. (1970). Biology of cog­ni­tion. Biological Computer Laboratory, BCL Report 9, University of Illinois, Urbana.
  • Maturana, H. (1975). The organization of the living: A theory of the living organization. International Journal of Man-Machine Studies, 7, 313–332.
  • Maturana, H. (1981). Autopoiesis. In M. Zeleny (Ed.), Autopoiesis: a the­ory of liv­ing organ­iz­a­tion (pp. 21–33). New York; Oxford: North Holland.
  • Maturana, H. and Varela, F. (1980). Autopoiesis and cog­ni­tion: The real­iz­a­tion of the liv­ing. Dordrecht, Holland: Kluwer Academic Publisher.
  • Miłkowski, M. (2013). Explaining the com­pu­ta­tion­al mind. Cambridge, MA: MIT Press.
  • Piccinini, G. (2015). Physical Computation. Oxford: OUP.
  • Sprevak, M. (2010). Computation, individuation, and the received view on representation. Studies in History and Philosophy of Science, 41, 260–270.
  • Thompson, E. (2007). Mind in Life: Biology, phe­nomen­o­logy, and the sci­ences of mind. Cambridge, MA: Harvard University Press.
  • Varela, F. (1988). Structural coupling and the origin of meaning in a simple cellular automaton. In E. E. Sercarz, F. Celada, N. A. Mitchison, & T. Tada (Eds.), The Semiotics of Cellular Communication in the Immune System (pp. 151–161). New York: Springer-Verlag.
  • Varela, F., Thompson, E., and Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.
  • Villalobos, M., & Dewhurst, J. (2017). Enactive autonomy in computational systems. Synthese. doi:10.1007/s11229-017-1386-z
  • Wells, A. J. (1998). Turing’s analysis of computation and theories of cognitive architecture. Cognitive Science, 22(3), 269–294.

 

Are olfactory objects spatial?

by Solveig Aasen — Associate Professor of Philosophy at the University of Oslo

On sev­er­al recent accounts of orthonas­al olfac­tion, olfact­ory exper­i­ence does (in some sense) have a spa­tial aspect. These views open up nov­el ways of think­ing about the spa­ti­al­ity of what we per­ceive. For while olfact­ory exper­i­ence may not qual­i­fy as spa­tial in the way visu­al exper­i­ence does, it may nev­er­the­less be spa­tial in a dif­fer­ent way. What way? And how does it dif­fer from visu­al spatiality?

It is often noted that, by con­trast to what we see, what we smell is neither at a dis­tance nor at a dir­ec­tion from us. Unlike anim­als such as rats and the ham­mer­head shark, which have their nos­trils placed far enough apart that they can smell in ste­reo (much like we can see and hear in ste­reo), we humans are not able to tell which dir­ec­tion a smell is com­ing from (except per­haps under spe­cial con­di­tions (Radil and Wysocki 1998; Porter et al. 2005), or if we indi­vidu­ate olfac­tion so as to include the tri­gem­in­al nerve (Young et al. 2014)). Nor are we able to tell how a smell is dis­trib­uted around where we are sit­ting (Batty 2010a p. 525; 2011, p. 166). Nevertheless, it can be argued that what we smell can be spa­tial in some sense. Several sug­ges­tions to this effect are on offer.

Batty (2010a; 2010b; 2011; 2014) holds that what we smell (olfactory properties, according to her) is presented as ‘here’. This is not a location like any other. It is the only location at which olfactory properties are ever presented, for olfactory experience, on Batty’s view, lacks spatial differentiation. Moreover, she emphasises that, if we are to make room for a certain kind of non-veridical olfactory experience, ‘here’ cannot be a location in our environment; it is not to be understood as ‘out there’ (Batty 2010b, pp. 20–21). This latter point contrasts with Richardson’s (2013) view. She observes that, because olfactory experience involves sniffing, it is part of the phenomenology of olfactory experience that something (odours, according to Richardson) seems to be brought into the nostrils from outside the body. Thus, the object of olfactory experience seems spatial in the sense that what we smell comes from without, although it does not come from any particular location. It is interesting that although Batty’s and Richardson’s claims contrast, both seem to think that they are pointing out a spatial aspect of olfactory experience when claiming that what we smell is, respectively, ‘here’ or coming from without.

Another view, compatible with the claim that what we smell is neither at a distance nor at a direction from us, is presented by Young (2016). He emphasises the fact that the molecular structure of chemical compounds determines which olfactory quality subjects experience. It is precisely this structure within an odour plume, he argues, that is the object of olfactory experience. Would an olfactory experience of molecular structure have a spatial aspect? Young does not specify. But since the structure of a molecule is spatial, one can at least envisage that experiencing molecular structure is, in part, to experience the spatial relations between molecules. If so, we can envisage spatiality without perspective. For, presumably, the spatial orientation the molecules have relative to each other and to the perceiver would not matter to the experience. Presumably, it would be their internal spatial structure that is experienced, regardless of their orientation relative to other things.

The claim that what we smell is neither at a direction nor at a distance from us can, however, be disputed. As Young (2016) notes, this claim neglects the possibility of tracking smells over time. Although the boundaries of a cloud of odours are less clear than those of visual objects, the extension of the cloud in space and the changes in its intensity seem to be spatial aspects of our olfactory experiences when we move around over time. Perhaps one would object that the more fundamental type of olfactory experience is synchronic and not diachronic. The synchronic variety has certainly received the most attention in the literature. But if one is interested in an investigation of our ordinary olfactory experiences, it is not clear why diachronic experiences should be less worthy of consideration.

Perhaps one would think that an obvi­ous spa­tial aspect of olfact­ory exper­i­ence is the spa­tial prop­er­ties of the source, i.e. the phys­ic­al object from which the chem­ic­al com­pounds in the air ori­gin­ate. But there is a sur­pris­ingly wide­spread con­sensus in the lit­er­at­ure that the source is not part of what we per­ceive in olfac­tion. Lycan’s (1996; 2014) lay­er­ing view may be an excep­tion. He claims that we smell sources by smelling odours. But, as Lycan him­self notes, there is a ques­tion as to wheth­er the ‘by’-relation is an infer­ence rela­tion. If it is, his claim is not neces­sar­ily sub­stan­tially dif­fer­ent from Batty’s (2014, pp. 241–243) claim that olfact­ory prop­er­ties are locked onto source objects at the level of belief, but that sources are not perceived.

Something that complicates evaluation of the abovementioned ideas about olfactory spatiality is that there is a variety of facts about olfaction that can be taken to inform an account of olfactory experience. As Stevenson and Wilson (2006) note, chemical structure has been much studied. But even though the nose has about 300 receptors ‘which allow the detection of a nearly endless combination of different odorants’ (ibid., p. 246), how relevant is chemical structure to the question of what we can perceive, when the discriminations we as perceivers report are much less detailed? What is the relevance of facts about the workings and individuation of the olfactory system? Is it a serious flaw if our conclusions about olfactory experience contradict the phenomenology? Different contributors to the debate seem to provide or presuppose different answers to questions like these. This makes comparison complicated. Comparison aside, however, some interesting ideas about olfactory spatiality can, as briefly shown, be appreciated on their own terms.

 

 

References:

Batty, C. 2014. ‘Olfactory Objects’. In D. Stokes, M. Matthen and S. Biggs (eds.), Perception and Its Modalities. Oxford: Oxford University Press.

Batty, C. 2011. ‘Smelling Lessons’. Philosophical Studies 153: 161–174.

Batty, C. 2010a. ‘A Representationalist Account of Olfactory Experience’. Canadian Journal of Philosophy 40(4): 511–538.

Batty, C. 2010b. ‘What the Nose Doesn’t Know: Non-veridicality and Olfactory Experience’. Journal of Consciousness Studies 17: 10–27.

Lycan, W. G. 2014. ‘The Intentionality of Smell’. Frontiers in Psychology 5: 68–75.

Lycan, W. G. 1996. Consciousness and Experience. Cambridge, MA: Bradford Books/MIT Press.

Radil, T. and C. J. Wysocki. 1998. ‘Spatiotemporal mask­ing in pure olfac­tion’. Olfaction and Taste 12(855): 641–644.

Richardson, L. 2013. ‘Sniffing and Smelling’. Philosophical Studies 162: 401–419.

Porter, J., Anand, T., Johnson, B. N., Kahn, R. M., and N. Sobel. 2005. ‘Brain mechanisms for extracting spatial information from smell’. Neuron 47: 581–592.

Young, B. D. 2016. ‘Smelling Matter’. Philosophical Psychology 29(4): 520–534.

Young, B. D., A. Keller and D. Rosenthal. 2014. ‘Quality-space Theory in Olfaction’. Frontiers in Psychology 5: 116–130.

Wilson, D. A. and R. J. Stevenson. 2006. Learning to Smell: Olfactory Perception from Neurobiology to Behaviour. Baltimore, MD: The Johns Hopkins University Press.