Do nonhuman animals know they’re not alone? Of course they must know there are lots of things in the world around them – rocks, water, trees, other creatures and what have you. But do they know that they inhabit a world populated by minded creatures – that the animals around them see and know things, that they have beliefs, intentions and desires? Can they attribute mental states to other animals, and use those attributions to predict or explain their behaviour? If so, then they’re what philosophers and psychologists call ‘mindreaders’.
Whether animals are mindreaders has been a contested question in comparative cognition for around forty years (beginning with Premack & Woodruff, 1978), and it remains controversial. My interest in this post is not so much whether animals are mindreaders but rather, if animals are mindreaders, what kind of mindreaders might they be? The motivating thought is this: even if animals do represent and reason about the mental states of others, their understanding of those mental states might be somewhat different from ours.
The idea that animals might have a limited or ‘minimal’ understanding of mental states has been explored in a number of places (see, for instance, Bermúdez, 2011; Butterfill & Apperly, 2013; Call & Tomasello, 2008). These proposals differ, but they have in common the idea that animals don’t construe mental states as representations – that is, as states which represent the world, and which can do so accurately or inaccurately. If these proposals are right, animals might be able to represent others as having factive mental states like seeing or knowing, but would not be able to make sense of another agent having a false belief, or any state that misrepresents the world.
Recent work on mindreading in chimpanzees puts pressure on this sort of proposal. Christopher Krupenye and colleagues (Krupenye, Kano, Hirata, Call, & Tomasello, 2016) found that chimpanzees were able to predict the behaviour of a human with a false belief. It’s not uncontroversial (see Andrews, 2018 for discussion), but for the sake of argument let’s say that this is indeed evidence that chimps understand false beliefs, as states that misrepresent the world. Does that mean that chimps’ understanding of mental states is essentially the same as our own?
I’ve argued that it doesn’t. That’s because there are important ways in which mindreaders might differ from one another, even if they represent mental states as representational. To see that, let’s think a bit more about representations. A representation has a content – how it represents the world as being – which can be accurate or inaccurate. The sentence ‘Santa is in the chimney’ is a representation whose content is that Santa is in the chimney. It’s accurate if Santa is in the chimney, and inaccurate if he’s somewhere else. But a representation also has a format – it exploits a particular representational system in order to represent what it represents. ‘Santa is in the chimney’ is a representation with a sentential, linguistic format. But we could represent the same content in a number of other formats. For instance, we might represent it pictorially by drawing Santa in the chimney, as in Figure 1. Or we might draw up a map representing the same thing, as in Figure 2.
Given that representations may differ with respect to the representational format they exploit, mindreaders might differ with respect to the representational format they take mental states to have. Some might treat beliefs as something like ‘sentences in the head’. Others might treat them as more picture-like. Still others might be what I’ve called ‘mindmappers’ (Boyle, 2019) – they might take literally the idea that a belief is a ‘map of the neighbouring space by which we steer’ (Ramsey, 1931).
This matters, because the representational format one takes mental states to have has a significant impact on one’s mindreading abilities – because different representational formats themselves differ from one another in systematic ways.
Take maps. As I’m using the term, a map makes use of a lexicon of icons, each of which stands for a particular (type of) thing, which it combines according to the principle of spatial isomorphism. Simply put, by placing two icons in a particular spatial relationship on a map, one thereby represents that the two things denoted by the icons stand in an isomorphic spatial relationship in reality. That’s all there is to it.
If you want to represent the spatial layout of a number of objects in a particular region of space, there are lots of advantages to using a map: it’s a very natural and user-friendly way to represent that kind of information. A single map can contain an awful lot of information about the spatial layout of a region. To convey the content of a map in language would usually require a large and unwieldy set of sentences (or a very lengthy sentence). And updating information in a map without introducing inconsistency is easy to do. Updating the represented location of an object by moving an icon thereby also updates the represented relationships between that object and everything else on the map, keeping the whole consistent. If one represented all of this spatial information sententially, it would be easy to introduce inconsistencies. (See Camp, 2007 for a fuller discussion of maps’ representational features.)
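The contrast can be made concrete with a toy sketch (mine, with invented object names, not the author's example): a map-like store holds only icon coordinates, so moving one icon thereby updates every spatial relation it enters into, whereas a sentence-like store holds each relation as a separate item and must be patched entry by entry, on pain of inconsistency.

```python
# Toy contrast between map-like and sentence-like spatial representation.
# Object names and coordinates are illustrative only.

def spatial_relations(spatial_map):
    """Read off every pairwise left-of relation from icon coordinates."""
    relations = set()
    for a, (ax, _) in spatial_map.items():
        for b, (bx, _) in spatial_map.items():
            if ax < bx:
                relations.add((a, "left-of", b))
    return relations

# Map-like format: a lexicon of icons, each placed at a coordinate.
spatial_map = {"tree": (0, 0), "rock": (5, 0), "pond": (9, 0)}
before = spatial_relations(spatial_map)

# Updating the map = moving one icon; every relation involving it is
# thereby updated, and the whole stays consistent for free.
spatial_map["rock"] = (12, 0)
after = spatial_relations(spatial_map)

# A sentential store holds each relation separately; after the move,
# stale sentences must be found and rewritten one by one. Missing one
# ("rock left-of pond") would leave the store inconsistent.
sentential_store = set(before)
sentential_store.discard(("rock", "left-of", "pond"))
sentential_store.add(("pond", "left-of", "rock"))
assert sentential_store == after
```

The single coordinate update in the map does the work that three separate sentence edits would otherwise require, which is the consistency advantage Camp (2007) discusses.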
For all that, maps are an extremely limiting representational format: all they can really represent is the spatial layout of objects in a region. If you want to represent that Christmas is coming, that the goose is getting fat, or that Santa is really your dad, a map would be a poor format to choose. These are not the kinds of contents that a map can express. For that kind of thing, you need a more expressively powerful format – like language.
The point is that the distinctive strengths and weaknesses of a representational format will show up in the abilities and behaviour of mindreaders who take mental states to have that format. Humans can ascribe an apparently unlimited range of beliefs – beliefs about Santa’s true identity, about death and resurrection, about possible presents with no known location. I think this is good evidence that we take mental states to be linguistic, or at least to have a format which mirrors language’s expressive power.
But animals might not be like us in that respect: they might think of beliefs as maps in the head. If they do, they would be able to capture what others think about where things are to be found, but they wouldn’t be able to make sense of beliefs about object identities or about non-spatial properties – and nor could they make sense of someone having a belief about an object whilst having no belief about its location. To my knowledge, whether animals can represent these non-spatial beliefs has not been investigated. So, it remains an open empirical question whether they treat beliefs as map-like, linguistic, or having some other format. But it’s a question worth investigating. If animals construed mental states as having a non-linguistic format, there would remain a significant sense in which animals’ mindreading abilities differed qualitatively from ours.
Andrews, K. (2018). Do chimpanzees reason about belief? In K. Andrews & J. Beck (Eds.), The Routledge Handbook of Philosophy of Animal Minds. Abingdon: Routledge.
Bermúdez, J. L. (2011). The force-field puzzle and mindreading in non-human primates. Review of Philosophy and Psychology, 2(3), 397–410. https://doi.org/10.1007/s13164-011-0077-9
Boyle, A. (2019). Mapping the minds of others. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-019-00434-z
Butterfill, S. A., & Apperly, I. A. (2013). How to construct a minimal theory of mind. Mind & Language, 28(5), 606–637.
Call, J., & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192.
Camp, E. (2007). Thinking with maps. Philosophical Perspectives, 21, 145–182.
Krupenye, C., Kano, F., Hirata, S., Call, J., & Tomasello, M. (2016). Great apes anticipate that other individuals will act according to false beliefs. Science, 354(6308), 110–114. https://doi.org/10.1126/science.aaf8110
Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526.
Ramsey, F. P. (1931). The Foundations of Mathematics. London: Kegan Paul.
What is the central executive? In cognitive psychology, executive functioning concerns the computational processes that control cognition, including the direction of attention, action selection, decision making, task switching, and other such functions. In cognitive science, the central processor is sometimes modeled after the CPU of a von Neumann architecture, the module of a computational system that makes calls to memory, executes transformations in line with algorithms over the retrieved data, and then writes back to memory the results of these transformations. On my account of the mind, the central processor possesses the psychological functions that are part of executive functioning. I will refer to this combined construct of a central processor that performs executive functions as the central executive.
The central executive has a range of properties, but for this post, I will focus on domain generality, informational accessibility, and inferential richness. By domain general, I mean that the central executive contains information from different modalities (such as vision, audition, etc.). By informationally accessible, I mean both that the central executive’s algorithms have access to information outside of the central executive and that information contained in these algorithms is accessible by other processes, whether also part of the central executive or part of input or output specific systems. By inferentially rich, I mean that the information in the central executive is potentially combined with any other piece of information to result in new beliefs. The functions of the central executive may or may not be conscious.
Three concepts at the heart of my model of the central executive will provide the resources to begin to explain these three properties: internal search, a global workspace, and foraging.
The first concept is internal search. Newell famously said that search is at the heart of cognition (Newell 1994), a position with which much modern cognitive neuroscience agrees (Behrens, Muller et al. 2018; Bellmund, Gärdenfors et al. 2018). Search is the process of traveling through some space (physical or abstract, such as concept space or the internet) in order to locate a goal, and internal search refers to a search that occurs within the organism. Executive functions, I contend, are types of search.
The second concept in my analysis is the global workspace. Search requires some space through which to occur. In the case of cognition, search occurs in the global workspace: a computational space in which different data structures are located and compete for computational resources and operations. The global workspace is a notion that originated in cognitive theories of consciousness (Baars 1993) but has recently been applied to cognition (Schneider 2011). The global workspace can be conceptualized in different ways. The global workspace could be something like a hard drive that stores data but to which many different other parts of the system (such as the brain) simultaneously have access. Or, it could be something like an arena where different data structures literally roam around and interact with computational operations (like a literal implementation of a production architecture; see Newell 1994; Simon 1999). The central executive is partly constituted by internal search through a global workspace.
The third and final concept in my analysis is foraging. Foraging is a special type of directed search for resources under ignorance. Specifically, foraging is the goal-directed search for resources in non-exclusive, iterated, accept-or-reject decision contexts (Barack and Platt 2017; Barack ms). I contend that central executive processes involve foraging (and hence this third concept is a special case of the first concept, internal search). While central executive processes may not literally make decisions, the analogy is apt. The internal search through the global workspace is directed: a particular goal is sought, which in the case of the central executive is going to be defined by some sort of loss function that the system is attempting to minimize. This search is non-exclusive, as operations on data that are foregone can be executed at a later time. The search is iterated, as the same operation can be performed repeatedly. Finally, the operations of the central executive are accept-or-reject in the sense that computational operations performed on data structures either occur or they do not, in a one-at-a-time, serial fashion.
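The four features just listed can be sketched as a serial sampling loop over a toy workspace. This is my own illustrative sketch, not the author's formal model: the loss function, the operations, and the numerical "workspace state" are all invented for the example.

```python
import random

# Toy sketch of foraging-style internal search: candidate operations are
# considered one at a time (serial), each is accepted or rejected by a
# loss function the system tries to minimize (directed), rejected
# operations remain available later (non-exclusive), and the same
# operation can be accepted repeatedly (iterated).

def forage(state, operations, loss, steps=2000, seed=0):
    """Serially sample candidate operations; accept one iff it lowers loss."""
    rng = random.Random(seed)
    for _ in range(steps):
        op = rng.choice(operations)        # one candidate at a time
        candidate = op(state)
        if loss(candidate) < loss(state):  # accept-or-reject
            state = candidate
        # A rejected operation is not excluded from future sampling.
    return state

# Invented example: the goal is a target value; loss is distance to it;
# operations are simple transformations of the workspace state.
target = 42
operations = [lambda s: s + 1, lambda s: s - 1, lambda s: s * 2]
result = forage(0, operations, lambda s: abs(s - target))
```

Nothing here is meant to capture the richness of real executive processing; the point is only that "directed, non-exclusive, iterated, accept-or-reject" picks out a perfectly definite class of search procedures.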
The analysis of the central executive as foraging-type searches through an internal, global workspace may shed light on the three key properties mentioned earlier: domain generality, informational accessibility, and inferential richness.
First, domain generality is provided for by the global workspace and unrestricted search. This workspace is neutral with regard to the subject matter of the data structures it contains, and so is domain general. The search processes that operate in that workspace are also unrestricted in their subject matter—those processes can operate over any data that matches the input constraints for the production system. (While they may be unrestricted in their subject matter, they are restricted by the constraints on the data imposed by the production system’s triggering conditions.) The unrestricted subject matter of the global workspace and the unrestricted nature of the production processes both contribute to the domain general nature of the central executive. This analysis of domain generality suggests two types of such generality should be distinguished. There are constraints on what type of content (perceptual, motoric, general, etc.) can be contained in stored data structures, and there are constraints on the type of content that can trigger a transformation. A domain general workspace can contain domain specific productions, for example.
Second, informational accessibility reflects the global workspace’s structure. In order to be a global workspace, different modality- or domain-specific modules must have access to the workspace. But this access means that there must be connections to the workspace. Other aspects of informational access remain to be explained. In particular, while the global workspace may be widely interconnected, that does not entail that modules have access to information in specific algorithms in the workspace. The presence of a workspace merely ensures that some of the needed architectural features for such access are present.
Third, inferential richness results from this internal foraging through the workspace. Foraging computations are optimal in that they minimize or maximize some function under uncertainty. Such optimality implies that the executed computation reflects the best data at hand, regardless of its content. Any such data can be utilized to determine the operation that is actually executed at a given time. This explanation of inferential richness is not quite the sort described by Quine (1960) or Fodor (1983), who envision inferential richness as the potential for any piece of information to influence any other. But with enough simple foraging-like computations and enough time, this potential widespread influence can be approximated.
These comments have been speculative, but I hope I have provided an outline of a sketch for a new model of the central executive. Obviously much more conceptual and theoretical work needs to be done, and many objections—perhaps most famously those of Fodor, who despaired of a scientific account of such central processes—remain to be addressed. I intend to flesh out these ideas in a series of essays. Regardless, I think that there is much more promise in a scientific explanation of these crucial, central psychological processes than has been previously appreciated.
Baars, B. J. (1993). A cognitive theory of consciousness, Cambridge University Press.
Barack, D. L. and M. L. Platt (2017). Engaging and Exploring: Cortical Circuits for Adaptive Foraging Decisions. Impulsivity, Springer: 163–199.
Barack, D. L. (ms). “Information Harvesting: Reasoning as Foraging in the Space of Propositions.”
Behrens, T. E., T. H. Muller, J. C. Whittington, S. Mark, A. B. Baram, K. L. Stachenfeld and Z. Kurth-Nelson (2018). “What is a cognitive map? Organizing knowledge for flexible behavior.” Neuron 100(2): 490–509.
Bellmund, J. L., P. Gärdenfors, E. I. Moser and C. F. Doeller (2018). “Navigating cognition: Spatial codes for human thinking.” Science 362(6415): eaat6766.
Fodor, J. A. (1983). The modularity of mind: An essay on faculty psychology, MIT press.
Newell, A. (1994). Unified Theories of Cognition, Harvard University Press.
Quine, W. V. O. (1960). Word and object, MIT press.
Schneider, S. (2011). The language of thought, The MIT Press.
Simon, H. (1999). Production systems. The MIT Encyclopedia of the Cognitive Sciences: 676–677.
The extent to which the mind is modular is a foundational concern in cognitive science. Much of this debate has centered on the question of the degree to which input systems, i.e., sensory systems such as vision, are modular (see, e.g., Fodor 1983; Pylyshyn 1999; MacPherson 2012; Firestone & Scholl 2016; Burnston 2017; Mandelbaum 2017). By contrast, researchers have paid far less attention to the question of the extent to which our main output system, i.e., the motor system, qualifies as such.
This is not to say that the latter question has gone without acknowledgement. Indeed, in his classic essay Modularity of Mind, Fodor (1983)—a pioneer in thinking about this topic—writes: “It would please me if the kinds of arguments that I shall give for the modularity of input systems proved to have application to motor systems as well. But I don’t propose to investigate that possibility here” (Fodor 1983, p.42).
I’d like to take some steps towards doing so in this post.
To start, we need to say a bit more about what modularity amounts to. A central feature of modular systems—and the one on which I will focus here—is their informational encapsulation. Informational encapsulation concerns the range of information that is accessible to a module in computing the function that maps the inputs it receives to the outputs it yields. A system is informationally encapsulated to the degree that it lacks access to information stored outside the system in the course of processing its inputs (cf. Robbins 2009; Fodor 1983).
Importantly, informational encapsulation is a relative notion. A system may be informationally encapsulated with respect to some information, but not with respect to other information. When a system is informationally encapsulated with respect to the states of what Fodor called “the central system”—those states familiar to us as propositional attitude states like beliefs and intentions—this is referred to as cognitive impenetrability or, as I will call it here, cognitive impermeability. In characterizing the notion of cognitive permeability more precisely, one must be careful not to presuppose that it is perceptual systems only that are at issue. For a neutral characterization, I prefer the following: A system is cognitively permeable if and only if the function it computes is sensitive to the content of a subject S’s mental states, including S’s intentions, beliefs, and desires. In the famous Müller-Lyer illusion, the visual system lacks access to the subject’s belief that the two lines are identical in length in computing the relative size of the stimuli, so it is cognitively impermeable relative to that belief.
On this characterization of cognitive permeability, the motor system is clearly cognitively permeable in virtue of its computations, and corresponding outputs, being systematically sensitive to the content of an agent’s intentions. The evidence for this is every intentional action you’ve ever performed. Perhaps the uncontroversial nature of this fact has precluded further investigation of cognitive permeability in the motor system. But there are at least two interesting questions to explore here. First, since cognitive permeability, just like informational encapsulation, comes in degrees, we should ask to what extent is the motor system cognitively permeable. Are there interesting limitations that can be drawn out? (Spoiler: yes.) Second, insofar as there are such limitations, we should ask the extent to which they are fixed. Can they be modulated in interesting ways by the agent? (Spoiler: yes.)
Experimental results suggest that there are indeed interesting limitations to the cognitive permeability of the motor system. This is perhaps most clearly shown by appeal to experimental work employing visuomotor rotation tasks (see also Shepherd 2017 for an important discussion of such work with which I am broadly sympathetic). In such tasks, the participant is instructed to reach for a target on a computer screen. They do not see their hand, but they receive visual feedback from a cursor that represents the trajectory of their reaching movement. On some trials, the experimenters introduce a bias to the visual feedback from the cursor by rotating it relative to the actual trajectory of their unseen movement during the reaching task. For example, a bias might be introduced such that the visual feedback from the cursor represents the trajectory of their reach as being rotated 45° clockwise from the actual trajectory of their arm movement. This manipulation allows experimenters to determine how the motor system will compensate for the conflict between the visual feedback that is predicted on the basis of the motor commands it is executing, and the visual feedback the agent actually receives from the cursor. The main finding is that the motor system gradually adapts to the bias in a way that results in the recalibration of the movements it outputs such that they show “drift” in the direction opposite that of the rotation, thus reducing the mismatch between the visual feedback and the predicted feedback.
Figure 1. A: A typical set-up for a visuomotor rotation task. B: Typical error feedback when a counterclockwise directional bias is introduced. (Source: Krakauer 2009)
In the paradigm just described, participants do not form an intention to adopt a compensatory strategy; the adaptation the motor system exhibits is purely the result of implicit learning mechanisms that govern its output. But in a variant of this paradigm (Mazzoni & Krakauer 2006), participants are instructed to adopt an explicit “cheating” strategy—that is, to form intentions—to counter the angular bias introduced by the experimenters. This is achieved by placing a neighbouring target (Tn) at a 45° angle from the proper target (Tp) in the direction opposite the bias (e.g., if the bias is 45° counterclockwise from the Tp, the Tn is placed 45° clockwise from the Tp), such that if participants aim for the Tn, the bias will be compensated for, and the cursor will hit the Tp, thus satisfying the primary task goal.
In this set-up, reaching errors related to the Tp are almost completely eliminated at first. The agent hits the Tp (according to the feedback from the cursor) as a result of forming the intention to aim for the strategically placed Tn. But as participants continue to perform the task on further trials, something interesting happens: their movements once again gradually start to show drift, but this time towards the Tn and away from the Tp. What this result is thought to reflect is yet another implicit process of adaptation by the motor system, which aims to correct for the difference between the aimed-for location (Tn) and the visual feedback (in the direction of the Tp).
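Both drift patterns fall out of a very simple error-driven model. The sketch below is my own toy illustration, not the authors' model: the learning rate and units are invented, and it assumes, in line with the standard reading of Mazzoni & Krakauer's result, that implicit adaptation is driven by the discrepancy between the visual feedback and the aimed-for location, rather than by task error relative to the Tp.

```python
# Minimal state-space sketch of implicit visuomotor adaptation.
# Angles in degrees; 0 = the proper target Tp. Learning rate is a toy value.

def simulate(aim, rotation, trials=200, lr=0.1):
    """Return per-trial cursor positions under an internal adaptation state."""
    adaptation = 0.0
    cursor_history = []
    for _ in range(trials):
        hand = aim + adaptation      # where the arm actually goes
        cursor = hand + rotation     # visual feedback is rotated
        cursor_history.append(cursor)
        # Implicit learning reduces the gap between the feedback and the
        # aimed-for location -- not the gap between the feedback and Tp.
        adaptation -= lr * (cursor - aim)
    return cursor_history

# No strategy: aim straight at Tp (0 deg) under a 45 deg rotation.
no_strategy = simulate(aim=0.0, rotation=45.0)

# Explicit strategy: aim at Tn (-45 deg) so the cursor starts on Tp.
with_strategy = simulate(aim=-45.0, rotation=45.0)
```

Under these assumptions, in the no-strategy condition the cursor starts 45° off target and converges on the Tp as the hand drifts opposite the rotation; in the strategy condition the cursor starts on the Tp and then drifts away from it, towards the Tn. (In the experiments, of course, the drift is only partial.)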
Two further details are important for our purposes: First, when participants are instructed to stop using the strategy of aiming for the Tn (in order to hit the Tp) and return their aim to the Tp, “[s]ubstantial and long-lasting” (Mazzoni & Krakauer 2006, p. 3643) aftereffects are observed, meaning the motor system persists in aiming to reduce the difference between the visual feedback and the earlier aimed-for location. Second, in a separate study by Taylor & Ivry (2011) using a very similar paradigm wherein participants had significantly more trials per block (320), participants did eventually correct for the secondary adaptation by the motor system and reverse the direction of their movement, but only gradually, and by means of adopting explicit aiming strategies to counteract the drift.
On the basis of these results, we can draw at least three interesting conclusions about cognitive permeability and the motor system: First, although the motor system is clearly sensitive to the content of the proximal intentions that it takes as input (in this case the intention to aim for the Tn), it is not always sensitive, or is only weakly sensitive, to the distal intentions that those very proximal intentions serve—in this case the intention to hit the Tp. If this is correct, it may be that the motor system lacks sensitivity to the structure of practical reasoning that often guides an agent’s present action in the background. In this case, the motor system seems not to register that the agent intends to hit the Tp by way of aiming and reaching for the Tn.
Second, given that aftereffects persist for some time even once the explicit aiming strategy (and therefore the intention to aim for the Tn) has been abandoned, we may conclude that the motor system is only sensitive to the content of proximal intentions to a limited degree in that it takes time for it to properly update its performance relative to the agent’s current proximal intention. The implicit adaptation, indexed to the earlier intention, cannot be overridden immediately.
Third, this degree of sensitivity is not fixed, but rather can vary over time as the result of an agent’s interventions, as determined in Taylor & Ivry’s study, where the drift was eventually reversed after a sufficiently large number of trials wherein the agent continuously adjusted their aiming strategy.
To close, I’d like to outline what I take to be a couple of important upshots of the preceding discussion for neighbouring philosophical debates:
Recent discussions of skilled action have sought to determine “how far down” action control is intelligent (see, e.g., Fridland 2014, 2017; Levy 2017; Shepherd 2017). And, on at least some views, this is a function of the degree to which the motor system is sensitive to the content of an agent’s intentions. Here we see that this sensitivity is sometimes limited, but can also improve over time. In my view, this reveals another important dimension of the motor system’s intelligence that goes beyond mere sensitivity, and that pertains to its ability to adapt to an agent’s present goals through learning processes that exhibit a reasonable degree of both stability and flexibility.
Recently, action theorists have turned their attention to solving the so-called “interface problem”, that is, the problem of how intentions and motor representations successfully coordinate given their (arguably) different representational formats (see, e.g., Butterfill & Sinigaglia 2014; Burnston 2017; Fridland 2017; Mylopoulos & Pacherie 2017, 2018; Shepherd 2017; Ferretti & Caiani 2018). The preceding discussion may suggest a more limited degree of interfacing than one might have thought—obtaining only between an agent’s most proximal intentions and the motor system. It may also suggest that successful interfacing depends on both the learning mechanism(s) of the motor system (for maximal smoothness and stability) as well as a continuous interplay between its outputs and the agent’s own practical reasoning for how best to achieve their goals (for maximal flexibility).
Burnston, D. (2017). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242–258.
Butterfill, S. A. & Sinigaglia, C. (2014). Intention and motor representation in purposive action. Philosophy and Phenomenological Research, 88, 119–145.
Ferretti, G. & Caiani, S. Z. (2018). Solving the interface problem without translation: the same format thesis. Pacific Philosophical Quarterly, doi: 10.1111/papq.12243
Fodor, J. (1983). The modularity of mind: An essay on faculty psychology. Cambridge: The MIT Press.
Fridland, E. (2014). They’ve lost control: Reflections on skill. Synthese, 191(12), 2729–2750.
Fridland, E. (2017). Skill and motor control: intelligence all the way down. Philosophical Studies, 174(6), 1539–1560.
Krakauer J. W. (2009). Motor learning and consolidation: the case of visuomotor rotation. Advances in experimental medicine and biology, 629, 405–21.
Levy, N. (2017). Embodied savoir-faire: knowledge-how requires motor representations. Synthese, 194(2), 511–530.
MacPherson, F. (2012). Cognitive penetration of colour experience: Rethinking the debate in light of an indirect mechanism. Philosophy and Phenomenological Research,84(1). 24–62.
Mazzoni, P. & Krakauer, J. W. (2006). An implicit plan overrides an explicit strategy during visuomotor adaptation. The Journal of Neuroscience, 26(14): 3642–3645.
Mylopoulos, M. & Pacherie, E. (2017). Intentions and motor representations: The interface challenge. Review of Philosophy and Psychology, 8(2), pp. 317–336.
Mylopoulos, M. & Pacherie, E. (2018). Intentions: The dynamic hierarchical model revisited. WIREs Cognitive Science, doi: 10.1002/wcs.1481
Shepherd, J. (2017). Skilled action and the double life of intention. Philosophy and Phenomenological Research, doi:10.1111/phpr.12433
Taylor, J. A. & Ivry, R. B. (2011). Flexible cognitive strategies during motor learning. PLoS Computational Biology, 7(3), e1001096.
Smell is the Cinderella of our senses. Traditionally dismissed as communicating merely subjective feelings and brutish sensations, the sense of smell never attracted critical attention in philosophy or science. The characteristics of odor perception and its neural basis are key to understanding the mind through the brain, however.
This claim might sound surprising. Human olfaction acquired a rather poor reputation throughout most of Western intellectual history. “Of all the senses it is the one which appears to contribute least to the cognitions of the human mind,” commented the French philosopher of the Enlightenment, Étienne Bonnot de Condillac, in 1754. Immanuel Kant (1798) even called smell “the most ungrateful” and “dispensable” of the senses. Scientists were no more positive in their judgment. Olfaction, Charles Darwin concluded (1874), was “of extremely slight service” to mankind. Further, statements about people who paid attention to smell were frequently mixed with prejudice about sex and race: Women, children, and non-white races — essentially all groups long excluded from the rationality of white men — were said to show increased olfactory sensitivity (Classen et al. 1994). Olfaction, therefore, did not appear to be a topic worthy of reputable academic investment — until recently.
Scientific research on smell was catapulted into mainstream neuroscience almost overnight with the discovery of the olfactory receptor genes by Linda Buck and Richard Axel in 1991. It turned out that the olfactory receptors constitute the largest protein gene family in most mammalian genomes (except for dolphins), exhibiting a plethora of properties significant for structure-function analysis of protein behavior (Firestein 2001; Barwich 2015). Finally, the receptor gene discovery provided targeted access to probe odor signaling in the brain (Mombaerts et al. 1996; Shepherd 2012). Excitement soon kicked in, and hopes rose high to crack the coding principles of the olfactory system in no time. For the olfactory pathway has a notable characteristic, one that Ramón y Cajal highlighted as early as 1901/02: olfactory signals require only two synapses to go straight into the core cortex (forming almost immediate connections with the amygdala and hypothalamus)! To put this into perspective: in vision, two synapses won’t even get you out of the retina. You can follow the rough trajectory of an olfactory signal in Figure 1 below.
Three decades later, the big revelation is still on hold. Much of the prejudice and negative opinion about the human sense of smell has been debunked (Shepherd 2004; Barwich 2016; McGann 2017), but the olfactory brain remains a mystery to date. It appears to differ markedly in its neural principles of signal integration from vision, audition, and somatosensation (Barwich 2018; Chen et al. 2014). The background to this insight is a remarkable piece of contemporary history of science. (Almost all of the actors key to the modern molecular development of research on olfaction are still alive and actively conducting research.)
Olfaction — unlike other sensory systems — does not maintain a topographic organization of stimulus representation in its primary cortex (Stettler and Axel 2009; Sosulski et al. 2011). That's neuralese for: we actually do not know how the brain organizes olfactory information so that it can tell what kind of perceptual object or odor image an incoming signal encodes. You won't find a map of stimulus representation in the brain, such that chemical groups like ketones would sit next to aldehydes, or perceptual categories like rose next to lavender. Instead, axons from the mitral cells in the olfactory bulb (the first neural station of olfactory processing, at the frontal lobe of the brain) project to all kinds of areas in the piriform cortex (the largest domain of the olfactory cortex, previously assumed to be involved in odor object formation). In place of a map, you find a mosaic (Figure 1).
What does this tell us about smell perception, and about the brain in general? Theories of perception, in effect, have always been theories of vision. Concepts originally derived from vision were made to fit whatever is usually sidelined as "the other senses." This tendency permeates neuroscience as well as philosophy (Matthen 2005). It is, however, a deeply problematic strategy for two reasons. First, other sensory modalities (smell, taste, and touch, but also the hidden senses of proprioception and interoception) do not resonate entirely with the structure of the visual system (Barwich 2014; Keller 2017; Smith 2017b). Second, we may have narrowed our investigative lens and overlooked important aspects of the visual system itself that could be "rediscovered" if we took a closer look at smell and other modalities. Insight into the complexity of cross-modal interactions, especially in food studies, already suggests as much (Smith 2012; Spence and Piqueras-Fiszman 2014). So the real question we should ask is:
How would theories of perception differ if we extended our perspective on the senses; in particular, to include features of olfaction?
Two things stand out already. The first concerns theories of the brain, the second the permeable border between processes of perception and cognition.
First, when it comes to the principles of neural organization, not everything in vision that appears crystal clear really is. The cornerstone of visual topography has recently been called into question by the prominent neuroscientist Margaret Livingstone (who, not coincidentally, trained with David Hubel, one half of the famous duo of Hubel and Wiesel (2004), whose findings established the paradigm of neural topography in vision research in the first place). Livingstone et al. (2017) found that the spatially discrete activation patterns in the fusiform face area of macaques were contingent upon experience — both in their development and, interestingly, partly also in their maintenance. In other words, learning is more fundamental to the arrangement of neural signals in visual information processing and integration than previously thought. The spatially discrete patterns of the visual system may constitute more of a developmental byproduct than a genetically predetermined Bauplan. From this perspective, figuring out the connectivity that underpins non-topographic and associative neural signaling, as in olfaction, offers a complementary model for determining the general principles of brain organization.
Second, the emphasis on experience and associative processing in perceptual object formation (e.g., top-down effects in learning) also mirrors current trends in cognitive neuroscience. Smell has long been excluded from mainstream theories of perception precisely because of the characteristic properties that make it subject to strong contextual and cognitive biases. Consider a wine taster, who, in comparison with a layperson, experiences wine quality differently by focusing on distinct criteria of observational likeness. She can point to subtle flavor notes that the layperson may have missed but, after paying attention, is also able to perceive (e.g., a light oak note). Such influence of attention and learning on perception, ranging from normal perception to the acquisition of perceptual expertise, is constitutive of odor and its phenomenology (Wilson and Stevenson 2006; Barwich 2017; Smith 2017a). Notably, the underlying biases (influenced by semantic knowledge and familiarity) are increasingly studied as constitutive determinants of brain processes in recent cognitive neuroscience, especially in forward models or models of predictive coding, in which the brain is said to cope with the plethora of sensory data by anticipating stimulus regularities on the basis of prior experience (e.g., Friston 2010; Graziano and Webb 2015). While advocates of these theories have centered their work on vision, olfaction now serves as an excellent model to further the premise of the brain as operating on the basis of forecasting mechanisms (Barwich 2018), blurring the boundary between perceptual and cognitive processes with the implicit hypothesis that perception is ultimately shaped by experience.
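The predictive-coding picture can be caricatured in a few lines of code. The sketch below is my own toy illustration, not Friston's formal free-energy machinery: the system maintains a prediction about incoming signal, registers the prediction error, and nudges its expectation toward the input. All numbers are invented for illustration.

```python
# Toy predictive-coding loop (an illustrative caricature, not a model from
# the cited literature): the system predicts the next sensory sample,
# computes the prediction error, and updates its expectation by a fraction
# of that error.

def predictive_coding(samples, prediction=0.0, learning_rate=0.3):
    """Return the final prediction and the absolute error at each step."""
    errors = []
    for sample in samples:
        error = sample - prediction          # prediction error
        prediction += learning_rate * error  # update the expectation
        errors.append(abs(error))
    return prediction, errors

# A stable stimulus source delivering a constant signal of 1.0:
prediction, errors = predictive_coding([1.0] * 10)

print(round(prediction, 3))  # prediction has converged near 1.0
print(errors[0] > errors[-1])  # error shrinks as the regularity is learned
```

The point of the caricature is only structural: once the regularity is anticipated, most of the incoming signal is "explained away," and processing effort concentrates on what deviates from expectation.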
These are ongoing developments. How the brain makes sense of scents is as yet unknown. What is becoming increasingly clear is that theorizing about the senses requires a modernized perspective, one that admits other modalities and their dimensions. We cannot explain the multitude of perceptual phenomena with vision alone; to think otherwise is not only hubris but sheer ignorance. Smell is less settled in its conceptual borders and classification, and in its mechanisms of perceptual constancy and variation. It thus calls for new philosophical thinking that reexamines traditional assumptions about stimulus representation and the conceptual separation of perception and judgment. However, a proper understanding of smell — especially in its contextual sensitivity to cognitive influences — cannot succeed without also taking an in-depth look at its neural underpinnings. Differences in coding, at both the receptor and neural levels of the sensory systems, matter to how incoming information is realized as perceptual impressions in the mind, along with the question of what these perceptions are and communicate in the first place.
Olfaction is just one prominent example of how misleading historic intellectual predilections about human cognition can be. Neuroscience has fundamentally opened up new possibilities in its methods and outlook, in particular over the past two decades. It is about time that we adjust our somewhat older philosophical conjectures about mind and brain accordingly.
Barwich, AS. 2014. "A Sense So Rare: Measuring Olfactory Experiences and Making a Case for a Process Perspective on Sensory Perception." Biological Theory 9(3): 258–268.
Barwich, AS. 2015. "What is so special about smell? Olfaction as a model system in neurobiology." Postgraduate Medical Journal 92: 27–33.
Barwich, AS. 2016. "Making Sense of Smell." The Philosophers' Magazine 73: 41–47.
Barwich, AS. 2017. "Up the Nose of the Beholder? Aesthetic Perception in Olfaction as a Decision-Making Process." New Ideas in Psychology 47: 157–165.
Barwich, AS. 2018. "Measuring the World: Towards a Process Model of Perception." In: Everything Flows: Towards a Processual Philosophy of Biology (D Nicholson and J Dupré, eds). Oxford University Press, pp. 337–356.
Buck, L, and R Axel. 1991. "A novel multigene family may encode odorant receptors: a molecular basis for odor recognition." Cell 65(1): 175–187.
Cajal, R y. 1988 [1901/02]. "Studies on the Human Cerebral Cortex IV: Structure of the Olfactory Cerebral Cortex of Man and Mammals." In: Cajal on the Cerebral Cortex. An Annotated Translation of the Complete Writings (J DeFelipe and EG Jones, eds). Oxford University Press.
Chen, CFF, Zou, DJ, Altomare, CG, Xu, L, Greer, CA, and S Firestein. 2014. "Nonsensory target-dependent organization of piriform cortex." Proceedings of the National Academy of Sciences 111(47): 16931–16936.
Classen, C, Howes, D, and A Synnott. 1994. Aroma: The Cultural History of Smell. Routledge.
Condillac, EB de. 1930 [1754]. Condillac's Treatise on the Sensations (MGS Carr, trans). The Favil Press.
Darwin, C. 1874. The Descent of Man and Selection in Relation to Sex (Vol. 1). Murray.
Firestein, S. 2001. "How the olfactory system makes sense of scents." Nature 413(6852): 211.
Friston, K. 2010. "The free-energy principle: a unified brain theory?" Nature Reviews Neuroscience 11(2): 127.
Graziano, MS, and TW Webb. 2015. "The attention schema theory: a mechanistic account of subjective awareness." Frontiers in Psychology 6: 500.
Hubel, DH, and TN Wiesel. 2004. Brain and Visual Perception: The Story of a 25-Year Collaboration. Oxford University Press.
Kant, I. 2006 [1798]. Anthropology from a Pragmatic Point of View (RB Louden, ed). Cambridge University Press.
Keller, A. 2017. Philosophy of Olfactory Perception. Springer.
Livingstone, MS, Vincent, JL, Arcaro, MJ, Srihasam, K, Schade, PF, and T Savage. 2017. "Development of the macaque face-patch system." Nature Communications 8: 14897.
Matthen, M. 2005. Seeing, Doing, and Knowing: A Philosophical Theory of Sense Perception. Clarendon Press.
McGann, JP. 2017. "Poor human olfaction is a 19th-century myth." Science 356(6338): eaam7263.
Mombaerts, P, Wang, F, Dulac, C, Chao, SK, Nemes, A, Mendelsohn, M, Edmondson, J, and R Axel. 1996. "Visualizing an olfactory sensory map." Cell 87(4): 675–686.
Shepherd, GM. 2004. "The human sense of smell: are we better than we think?" PLoS Biology 2(5): e146.
Shepherd, GM. 2012. Neurogastronomy: How the Brain Creates Flavor and Why It Matters. Columbia University Press.
Smith, BC. 2012. "Perspective: complexities of flavour." Nature 486(7403): S6.
Smith, BC. 2017a. "Beyond Liking: The True Taste of a Wine?" The World of Wine 58: 138–147.
Smith, BC. 2017b. "Human Olfaction, Crossmodal Perception, and Consciousness." Chemical Senses 42(9): 793–795.
Spence, C, and B Piqueras-Fiszman. 2014. The Perfect Meal: The Multisensory Science of Food and Dining. John Wiley & Sons.
Sosulski, DL, Bloom, ML, Cutforth, T, Axel, R, and SR Datta. 2011. "Distinct representations of olfactory information in different cortical centres." Nature 472(7342): 213.
Stettler, DD, and R Axel. 2009. "Representations of odor in the piriform cortex." Neuron 63(6): 854–864.
Wilson, DA, and RJ Stevenson. 2006. Learning to Smell: Olfactory Perception from Neurobiology to Behavior. JHU Press.
Beliefs and judgements can be biased: my expectations of someone with a London accent might be biased by my previous exposure to Londoners or stereotypes about them; my confidence that my friend will get the job she is interviewing for may be biased by my loyalty; and my suspicion that it will rain tomorrow may be biased by my exposure to weather in Cambridge over the past few days. What about visual experiences? Can visual experiences be biased?
That’s the question I explore in this blog post. In particular, I’ll ask whether a visual experience could be biased, in the sense of exemplifying forms of racial prejudice. I’ll suggest that the answer to this question is a tentative “yes”, and that that presents some novel challenges to how we think of both bias and visual perception.
According to a very simplistic way of thinking about visual perception, it presents the world to us just as it is: it puts us directly in touch with our environment, in a manner that allows it to play a unique, possibly foundational epistemic role. Perception in general, and visual experience with it, is sometimes treated as a kind of given: a source of evidence that is immune to the sorts of rational flaws that beset our cognitive responses to evidence. This approach encourages us to think of visual experience as a neutral corrective to the kinds of flaws that can arise in belief, such as bias or prejudice: there is no room in the processes that generate visual experience for the kinds of influence that cause belief to be biased or prejudiced.
But there is a tension between that view and certain facts about the subpersonal processes that support visual perception in creatures like ourselves. In particular, our visual system faces an underdetermination challenge: the light signals received by the retina fail, on their own, to determine a unique external stimulus (Scholl 2005). To resolve the resulting ambiguity, the visual system must rely on prior information about the environment, and likely stimuli within it. But those priors are not fixed and immutable: the visual system updates them in light of previous experience (Chalk et al. 2010; Chun & Turk-Browne 2008). In this way, the visual system learns from the idiosyncratic course that the individual takes through the world.
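The underdetermination point can be made concrete with a toy Bayesian sketch. This is my own illustration, not a model from the cited papers, and the numbers are invented: when a retinal signal is equally compatible with two stimulus hypotheses, the posterior, and hence the percept, is settled entirely by the prior; and the prior itself shifts with experience.

```python
# Toy Bayesian illustration: an ambiguous signal is disambiguated by a
# prior, and the prior is itself shaped by past experience.
# All numbers are made up for the sake of illustration.

def posterior(prior_a, like_a, like_b):
    """Posterior for hypothesis A in a two-hypothesis setting."""
    prior_b = 1.0 - prior_a
    evidence = prior_a * like_a + prior_b * like_b
    return prior_a * like_a / evidence

# The signal is equally consistent with both stimuli (equal likelihoods),
# so the data alone cannot decide between them.
like_a = like_b = 0.5

prior_a = 0.5  # naive observer with no relevant experience
print(posterior(prior_a, like_a, like_b))  # prints 0.5: genuinely ambiguous

# After repeated encounters with stimulus A, the learned prior favours A,
# and the very same ambiguous signal is now resolved in A's favour.
prior_a = 0.9
print(posterior(prior_a, like_a, like_b))  # close to 0.9: the prior decides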
Equally, the visual system is overwhelmed with possible input: the information available from the environment at any one moment far surpasses what the brain can process (Summerfield & Egner 2009). It must selectively attend to certain objects or areas within the visual field, in order to prioritise the highest value information. Preexisting expectations and priorities determine the salience of information within a given scene. The nature and content of the visual experience you are having at any moment in part depends on the relative value you place on the information in your environment.
We perceive the world, then, in light of our prior expectations, and past exposure to it. Those processes of learning and adaptation, of developing skills that fit a particular environmental context, leave visual perception vulnerable to a kind of visual counterpart to bias: we do not come to the world each time with fresh eyes. If we did, we would see less accurately and efficiently than we do.
Cognitive biases often emerge as a response to particular environmental pressures: they persist because they lend some advantage in certain circumstances, but come at the expense of sensitivity to certain other information (Kahneman & Tversky 1973). Similarly, the capacity of the visual system to develop an expertise within a particular context can restrict its sensitivity to certain sorts of information. We can see this kind of structure in the specialist abilities we develop to see faces.
You might naturally think that we perceive high-level features of faces, such as the emotion they display or the racial category they belong to, not directly but only in virtue of, or perhaps via some kind of subpersonal inference from, their lower-level features: the arrangement of facial features, for instance, or the color and shading that let us pick out those features. In fact, there's good evidence that we perceive the social category of a face, or the emotion it displays, directly. For instance, we demonstrate "visual adaptation" to facial emotion: after seeing a series of angry faces, a neutral face appears happy. And those adaptation effects are specific to the gender and race of the face, suggesting that these categories of faces may be coded for by different neural populations (Jaquet, Rhodes, & Hayward 2007, 2008; Jaquet & Rhodes 2008; Little, DeBruine, & Jones 2005).
Moreover, our skills at face perception seem to be systematically arranged along racial lines: most people are better at recognizing own-race and dominant-race faces (Meissner & Brigham 2001), the result of a process of specialisation that emerges over the first nine months of life, as infants gradually lose the capacity to recognize faces of different or non-dominant races (Kelly et al. 2007). A White adult in a majority White society will generally be better at recognizing other White faces than Black or Asian faces, for instance, whereas a Black person living in a majority Black society will conversely be less good at recognizing White than Black faces. This extends to the identification of emotion from faces, as well as to their recognition: subjects are more accurate at identifying the emotion displayed on dominant-race or same-race faces than on other-race faces (Elfenbein & Ambady 2003).
One way of understanding this profile of skills is to think of faces as arranged within a multidimensional "face space" according to their similarity to one another (Valentine 1991; Valentine, Lewis, and Hills 2016). We hone our perceptual capacities within the area of face space to which we have most exposure. That area of face space becomes, in effect, stretched, allowing for finer-grained distinctions between faces. The greater "distance" between faces in the area of face space in which we are most specialized renders those faces more memorable and easier to distinguish from one another. Another way of thinking of this is in terms of "norm-based coding" (Rhodes and Leopold 2011): faces are encoded relative to the average face encountered, and faces further from the norm suffer in terms of our visual sensitivity to the information they carry.
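The face-space idea can be pictured with a small sketch. This is a toy of my own devising, not anything from Valentine's or Rhodes's papers: faces are points, each is encoded relative to a stored norm, and "stretching" the familiar region amounts to allocating finer encoding resolution there, so equally spaced faces are easier to tell apart inside that region than outside it.

```python
# Toy sketch of norm-based coding in a one-dimensional "face space"
# (illustrative only; the norm, gain, and range values are invented).
# Faces are points; each is encoded relative to the average (norm) face,
# and the well-sampled, familiar region of the space is "stretched" by a
# gain factor, yielding finer-grained distinctions there.

NORM = 0.0            # the average face the observer has encountered
FAMILIAR_GAIN = 3.0   # extra resolution in the familiar (e.g., own-race) region
FAMILIAR_RANGE = (-1.0, 1.0)

def encode(face):
    """Code a face relative to the norm, stretching the familiar region."""
    offset = face - NORM
    lo, hi = FAMILIAR_RANGE
    gain = FAMILIAR_GAIN if lo <= face <= hi else 1.0
    return offset * gain

def discriminability(face1, face2):
    """Distance between encodings: larger means easier to tell apart."""
    return abs(encode(face1) - encode(face2))

# Two pairs of faces with the same physical spacing, one pair inside the
# familiar region and one pair outside it:
print(discriminability(0.2, 0.4))  # familiar pair: about 0.6 apart
print(discriminability(2.2, 2.4))  # unfamiliar pair: only about 0.2 apart
```

The asymmetry in the two printed distances is the sketch's whole point: identical physical differences yield unequal perceptual sensitivity, depending on where in face space the faces fall.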
On the one hand, it isn't hard to see how this kind of facial expertise could help us extract maximal information from the faces we most frequently encounter. But the broader impact of this "same-race face effect" is potentially highly problematic: a White person in a majority White society will be less likely to accurately recognise a Black individual, and less able to accurately perceive their emotions from their face. That diminution of sensitivity to faces of different races paves the way for a range of downstream impacts. Since the visual system fails to advertise this differential sensitivity, the individual is liable to reason as though they have read the person's emotions with equal perspicuity, and to draw conclusions on that basis (concluding, perhaps, that the individual feels less, when the emotion in question is simply visually obscure to them). Relatedly, the shortfall of information extracted perceptually from the face makes it more likely that the individual will fill that gap by drawing on stereotypes about the relevant group: that Black people are aggressive, for instance (Shapiro et al. 2009; Brooks and Freeman 2017). And restrictions on the ability to accurately recall certain faces will bring with them social costs for those individuals.
Compare this visual bias to someone writing a report about two individuals, one White and one Black. The report about the White person is detailed and accurate, whilst the report on the Black person is much sparser, lacking information relevant to downstream tasks. In such a case, we would reasonably regard the report writer as biased, particularly if their report writing reflected this kind of discrepancy between White and Black targets more generally. If the visual system displays a structurally similar bias in the information it provides us with, should we regard it, too, as biased?
To answer that question, we need to have an account of what it is for anything to be biased, be it a visual experience, a belief, or a disposition to behave or reason in some way or other. We use 'bias' in many different ways. In particular, we need to distinguish here what I call formal bias from prejudicial bias. In certain contexts, a bias may be relatively neutral. A ship might be deliberately given a bias to list towards the port side, for instance, by uneven distribution of ballast. Similarly, any system that resolves ambiguity in incoming signal on the basis of information it has encountered in the past is biased by that prior information. But that's a bias that, for the most part, enhances rather than detracts from the accuracy of the resulting judgements or representations. We could call biases of this kind formal biases.
Bias also has another, more colloquial usage, according to which it picks out something distinctively negative, because it indicates an unfair or disproportionate judgement, a judgement subject to an influence that is distinctively illegitimate in some way. Bias in this sense often involves undue influence by demographic categories, for instance. We might describe an admissions process as biased in this way if it disproportionately excludes working-class candidates, or women, or people with red hair. We can call bias of this kind prejudicial bias.
The visual system is clearly capable of exhibiting the first kind of bias. As a system that systematically learns from past experiences in order to effectively prioritise and process new information, it is a formally biased system. Similarly, the same-race face effect in face perception involves the systematic neglect of certain information as the result of task-specific expertise. That renders it an instance of formal bias.
To decide whether this also constitutes an instance of prejudicial bias, we need to ask: is that neglect of information illegitimate? And if so, on what grounds? Two difficulties present themselves at this juncture. The first is that we are, for the most part, not used to assessing the processes involved in visual perception as legitimate or illegitimate (though that assumption has come under increasing pressure recently, in particular in Siegel (2017)). We need to develop a new set of tools for this kind of critique. The second difficulty is the way in which formal bias, including the development of perceptual expertise of the kind demonstrated in the same-race face effect, is a virtue of visual perception. It makes visual perception not just efficient, but possible. Acknowledging that can seem to restrict our ability to condemn the bias in question as not just formal, but prejudicial.
This throws us up against the question: what is the relationship between formal and prejudicial bias? Formal bias is often a virtue: it allows for the more efficient extraction of information by drawing on relevant past information. Prejudicial bias, on the other hand, is a vice: it limits the subject's sensitivity to relevant information in a way that seems intuitively problematic. Under what circumstances does the virtue of formal bias become the vice of prejudicial bias?
In part, this seems to depend on the context in which the process in question is deployed, and the task at hand. The virtues of formal biases rely on stability in both the individual’s environment and goals: that’s when reliance on past information and expertise developed via consistent exposure to certain stimuli is helpful. The same-race face effect develops as the visual system learns to extract information from those faces it most frequently encounters. The resulting expertise cannot adapt at the same pace as our changing, complex social goals across a range of contexts. As a result, this kind of formal perceptual expertise results in a loss of important information in certain contexts: an instance of prejudicial bias. If that’s right, then the distinction between formal and prejudicial bias isn’t one that can be identified just by looking at a particular cognitive process in isolation, but only by looking at that process across a dynamic set of contexts and tasks.
Brooks, J. A., & Freeman, J. B. (2017). Neuroimaging of person perception: A social-visual interface. Neuroscience Letters. https://doi.org/10.1016/j.neulet.2017.12.046
Chalk, M., Seitz, A. R., & Seriès, P. (2010). Rapidly learned stimulus expectations alter perception of motion. Journal of Vision, 10(8), 1–18.
Chun, M. M., & Turk-Browne, N. B. (2008). Associative Learning Mechanisms in Vision. Oxford University Press. Retrieved from http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195305487.001.0001/acprof-9780195305487-chapter‑7
Elfenbein, H. A., & Ambady, N. (2003). When familiarity breeds accuracy: cultural exposure and facial emotion recognition. Journal of Personality and Social Psychology,85(2), 276.
Jaquet, E., & Rhodes, G. (2008). Face aftereffects indicate dissociable, but not distinct, coding of male and female faces. Journal of Experimental Psychology: Human Perception and Performance, 34(1), 101–112.
Jaquet, E., Rhodes, G., & Hayward, W. G. (2007). Opposite aftereffects for Chinese and Caucasian faces are selective for social category information and not just physical face differences. The Quarterly Journal of Experimental Psychology, 60(11), 1457–1467. https://doi.org/10.1080/17470210701467870
Jaquet, E., Rhodes, G., & Hayward, W. G. (2008). Race-contingent aftereffects suggest distinct perceptual norms for different race faces. Visual Cognition, 16(6), 734–753. https://doi.org/10.1080/13506280701350647
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review,80, 237–251.
Kelly, D. J., Quinn, P. C., Slater, A. M., Lee, K., Ge, L., & Pascalis, O. (2007). The other-race effect develops during infancy: Evidence of perceptual narrowing. Psychological Science, 18(12), 1084–1089.
Little, A. C., DeBruine, L. M., & Jones, B. C. (2005). Sex-contingent face after-effects suggest distinct neural populations code male and female faces. Proceedings of the Royal Society B: Biological Sciences, 272(1578), 2283–2287. https://doi.org/10.1098/rspb.2005.3220
Meissner, C. A., & Brigham, J. C. (2001). Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review. Psychology, Public Policy, and Law, 7(1), 3–35.
Rhodes, G., & Leopold, D. A. (2011). Adaptive Norm-Based Coding of Face Identity. https://doi.org/10.1093/oxfordhb/9780199559053.013.0014
Scholl, B. J. (2005). Innateness and (Bayesian) visual perception. In P. Carruthers (Ed.), The Innate Mind: Structure and Contents(p. 34). New York: Oxford University Press.
Shapiro, J. R., Ackerman, J. M., Neuberg, S. L., Maner, J. K., Becker, D. V., & Kenrick, D. T. (2009). Following in the Wake of Anger: When Not Discriminating Is Discriminating. Personality & Social Psychology Bulletin, 35(10), 1356–1367. https://doi.org/10.1177/0146167209339627
Siegel, S. (2017). The Rationality of Perception. Oxford University Press.
Summerfield, C., & Egner, T. (2009). Expectation (and attention) in visual cognition. Trends in Cognitive Sciences, 13(9), 403–409.
Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. The Quarterly Journal of Experimental Psychology, 43(2), 161–204.
Valentine, T., Lewis, M. B., & Hills, P. J. (2016). Face-space: A unifying concept in face recognition research. The Quarterly Journal of Experimental Psychology, 69(10), 1996–2019.
The notion of memory, as used in ordinary language, may seem to have little to do with perception or conscious experience. While perception informs us about the world as it is now, memory almost by definition tells us about the past. Similarly, whereas conscious experience seems like an ongoing, occurrent phenomenon, it’s natural to think of memory as being more like an inert store of information, accessible when we need it but capable of lying dormant for years at a time.
However, in contemporary cognitive science, memory is taken to include almost any psychological process that functions to store or maintain information, even if only for very brief durations (see also James, 1890). In this broader sense of the term, connections between memory, perception, and consciousness are apparent. After all, some mechanism for the short-term retention of information will be required for almost any perceptual or cognitive process, such as recognition or inference, to take place: as one group of psychologists put it, “storage, in the sense of internal representation, is a prerequisite for processing” (Halford, Phillips, & Wilson, 2001). Assuming, then, as many theorists do, that perception consists at least partly in the processing of sensory information, short-term memory is likely to have an important role to play in a scientific theory of perception and perceptual experience.
In this latter sense of memory, two major forms of short-term store have been widely discussed in relation to perception and consciousness. The first comprises the various forms of sensory memory, and in particular iconic memory. Iconic memory was first described by George Sperling, who in 1960 demonstrated that large amounts of visually presented information were retained for brief intervals, far more than subjects were able to actually utilize for behaviour during the short window in which they were available (Figure 1). This phenomenon, dubbed partial report superiority, was brought to the attention of philosophers of mind via the work of Fred Dretske (1981) and Ned Block (1995, 2007). Dretske suggested that the rich but incompletely accessible nature of information presented in Sperling's paradigm was a marker of perceptual rather than cognitive processes. Block similarly argued that sensory memory might be closely tied to perception, and further, suggested that such sensory forms of memory could serve as the basis for rich phenomenal consciousness that 'overflowed' the capacity for cognitive access.
A second form of short-term store that has been widely discussed by both psychologists and philosophers is working memory. Very roughly, working memory is a short-term informational store that is more robust than sensory memory but also more limited in capacity. Unlike information in sensory memory, which must be cognitively accessed in order to be deployed for voluntary action, information in working memory is immediately poised for use in such behaviour, and is closely linked to notions such as cognition and cognitive access. For reasons such as these, Dretske seemed inclined to treat this kind of capacity-limited process as closely tied or even identical to thought, a suggestion followed by Block. Psychologists such as Nelson Cowan (2001: 91) and Alan Baddeley (2003: 836) take encoding in working memory to be a criterion of consciousness, while global workspace theorists such as Stanislas Dehaene (2014: 63) have regarded working memory as intimately connected – if not identical – with global broadcast.
The foregoing summary is over-simplistic, but hopefully serves to motivate the claim that scientific work on short-term memory mechanisms may have important roles to play in understanding both the relation between perception and cognition and conscious experience. With this idea in mind, I’ll now discuss some recent evidence for a third important short-term memory mechanism, namely Molly Potter’s proposed Conceptual Short-Term Memory. This is a form of short-term memory that serves to encode not merely the sensory properties of objects (like sensory memory), but also higher-level semantic information such as categorical identity. Unlike sensory memory, it seems somewhat resistant to interference by the presentation of new sensory information; whereas iconic memory can be effaced by the presentation of new visual information, CSTM seems somewhat robust. In these respects, it is similar to working memory. Unlike working memory, however, it seems to have both a high capacity and a brief duration; information in CSTM that is not rapidly accessed by working memory is lost after a second or two (for a more detailed discussion, see Potter 2012).
Evidence for CSTM comes from a range of paradigms, only two of which I discuss here (interested readers may wish to consult Potter, Staub, & O’Connor, 2004; Grill-Spector and Kanwisher, 2005; and Luck, Vogel, & Shapiro, 1996). The first particularly impressive demonstration is a 2014 experiment examining subjects’ ability to identify the presence of a given semantic target (such as “wedding” or “picnic”) in a series of rapidly presented images (see Figure 2).
A number of features of this experiment are worth emphasizing. First, subjects in some trials were cued to identify the presence of a target only after presentation of the images, suggesting that their performance did indeed rely on memory rather than merely, for example, effective search strategies. Second, a relatively large number of images were displayed in quick succession, either 6 or 12, in both cases larger than the normal capacity of working memory. Subjects’ performance in the 12-item trials was not drastically worse than in the 6‑item trials, suggesting that they were not relying on normal capacity-limited working memory alone. Third, because the images were displayed one after another in the same location in quick succession, it seems unlikely that they were relying on sensory memory alone; as noted earlier, sensory memory is vulnerable to overwriting effects. Finally, the fact that subjects were able to identify not merely the presence of certain visual features but the presence or absence of specific semantic targets suggests that they were not merely encoding low-level sensory information about the images, but also their specific categorical identities, again telling against the idea that subjects’ performance relied on sensory memory alone.
Another relevant experiment for the CSTM hypothesis is that of Belke et al. (2008). In this experiment, subjects were presented with a single array of either 4 or 8 items, and asked whether a given category of picture (such as a motorbike) was present. In some trials in which the target was absent, a semantically related distractor (such as a motorbike helmet) was present instead. The surprising result of this experiment, which involved an eye-tracking camera, was that subjects reliably fixated upon either targets or semantically related distractors with their initial eye movements, and were just as likely to do so whether the arrays contained 4 or 8 items, and even when assigned a cognitive load task beforehand (see Figure 3).
Again, these results arguably point to the existence of some further memory mechanism beyond sensory memory and working memory. If subjects were relying on working memory to direct their eye movements, one would expect those movements to be subject to interference from the cognitive load. The hypothesis that subjects were relying exclusively on sensory mechanisms, meanwhile, runs into the problem that such mechanisms do not seem to be sensitive to high-level semantic properties of stimuli, such as their specific category identity; yet in these trials, subjects’ eye movements were sensitive to just such semantic properties of the items in the array.
Interpretation of experiments such as these is a tricky business, of course (for a more thorough discussion, see Shevlin 2017). However, let us proceed on the assumption that the CSTM hypothesis is at least worth taking seriously, and that there may be some high-capacity semantic buffer in addition to more widely accepted mechanisms such as iconic memory and working memory. What relevance might this have for debates in philosophy and cognitive science? I will now briefly mention three such topics. Again, I will be oversimplifying somewhat, but my goal will be to outline some areas where the CSTM hypothesis might be of interest.
The first such debate concerns the nature of the contents of perception. Do we merely see colours, shapes, and so on, or do we perceive high-level kinds such as tables, cats, and Donald Trump (Siegel, 2010)? Taking our cue from the data on CSTM, we might suggest that this question can be reframed in terms of which forms of short-term memory are genuinely perceptual. If we take there to be good grounds for confining perceptual representation to the kinds of representations in sensory memory, then we might be inclined to take an austere view of the contents of experience. By contrast, if the kind of processing involved in encoding in CSTM is taken to be a form of late-stage perception, then we might have evidence for the presence of high-level perceptual content. It might reasonably be objected that this move is merely ‘kicking the can down the road’ to questions about the perception-cognition boundary, and does not by itself resolve the debate about the contents of perception. However, more positively, this might provide a way of grounding largely phenomenological debates in the more concrete frameworks of memory research.
A second key debate where CSTM may play a role concerns the presence of top-down effects on perception. A copious amount of experimental data (dating back to early work by psychologists such as Perky, 1910, but proliferating especially in the last two decades) has been produced in support of the idea that there are indeed ‘top-down’ effects on perception, which in turn has been taken to suggest that our thoughts, beliefs, and desires can significantly affect how the world appears to us. Such claims have been forcefully challenged by the likes of Firestone and Scholl (2015), who have suggested that the relevant effects can often be explained in terms of, for example, postperceptual judgment rather than perception proper.
However, the CSTM hypothesis may again offer a third, compromise position. By distinguishing core perceptual processes (namely those that rely on sensory buffers such as iconic memory) from the kind of later categorical processing performed by CSTM, other positions become available in the interpretation of alleged cases of top-down effects on perception. For example, Firestone and Scholl claim that many such results fail to properly distinguish perception from judgment, suggesting that, in many cases, experimentalists’ results can be interpreted purely in terms of strictly cognitive effects rather than as involving effects on perceptual experience. However, if CSTM is a distinct psychological process operative between core perceptual processes and later central cognitive processes, then appeals to things such as ‘perceptual judgments’ may be better founded than Firestone and Scholl seem to think. This would allow us to claim that at least some putative cases of top-down effects go beyond mere postperceptual judgments while also respecting the hypothesis that early vision is encapsulated (see Pylyshyn, 1999).
A final debate in which CSTM may be of interest is the question of whether perceptual experience is richer than (or ‘overflows’) what is cognitively accessed. As noted earlier, Ned Block has argued that information in sensory forms of memory may be conscious even if it is not accessed – or even accessible – to working memory (Block, 2007). This would explain phenomena such as the apparent ‘richness’ of experience; if we imagine standing in Times Square, surrounded by chaos and noise, it is phenomenologically tempting to think we can only focus on and access a tiny fraction of our ongoing experiences at any one moment. A common challenge to this kind of claim is that it threatens to divorce consciousness from personal-level cognitive processing, leaving us open to extreme possibilities such as the ‘panpsychic disaster’ of perpetually inaccessible conscious experience in very early processing areas such as the LGN (Prinz, 2007). Again, CSTM may offer a compromise position. As noted earlier, the capacity of CSTM does indeed seem to overflow the sparse resources of working memory. However, it also seems to rely on personal-level processing, such as an individual’s store of learned categories. Thus one new position, for example, might claim that information must at least reach the stage of CSTM to be conscious, allowing that perceptual experience may indeed overflow working memory while also ruling it out in early sensory areas.
These are all bold suggestions in need of extensive clarification and argument, but it is my hope that I have at least demonstrated to the reader how CSTM may be a hypothesis of interest not merely to psychologists of memory, but also those interested in broader issues of mental architecture and consciousness. And while I should also stress that CSTM remains a working hypothesis in the psychology of memory, it is one that I think is worth exploring on grounds of both scientific and philosophical interest.
Baddeley, A. D. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829–839.
Belke, E., Humphreys, G., Watson, D., Meyer, A., & Telling, A. (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Perception & Psychophysics, 70(8), 1444–1458.
Bergström, F., & Eriksson, J. (2014). Maintenance of non-consciously presented information engages the prefrontal cortex. Frontiers in Human Neuroscience 8:938.
Block, N. (2007). Consciousness, Accessibility, and the Mesh Between Psychology and Neuroscience, Behavioral and Brain Sciences 30, pp. 481–499.
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114.
Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking Press.
Dretske, F. (1981). Knowledge and the Flow of Information. MIT Press.
Firestone, C., & Scholl, B. J. (2015). Cognition does not affect perception: Evaluating the evidence for ‘top-down’ effects. Behavioral and Brain Sciences, 1–77.
Grill-Spector, K., & Kanwisher, N. (2005). Visual recognition. Psychological Science, 16(2), 152–160.
Halford, G. S., Phillips, S., & Wilson, W. H. (2001). Processing capacity limits are not explained by storage limits. Behavioral and Brain Sciences 24 (1), 123–124.
James, W. (1890). The Principles of Psychology. Dover Publications.
Luck, S. J., Vogel, E. K., & Shapiro, K. L. (1996). Word meanings can be accessed but not reported during the attentional blink. Nature, 383(6601), 616–618.
Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing concepts of working memory. Nature Neuroscience, 17(3), 347–356.
Potter, M. C. (2012). Conceptual Short Term Memory in Perception and Thought. Frontiers in Psychology, 3:113.
Potter, M. C., Staub, A., & O’Connor, D. H. (2004). Pictorial and conceptual representation of glimpsed pictures. Journal of Experimental Psychology: Human Perception and Performance, 30, 478–489.
Prinz, J. (2007). Accessed, accessible, and inaccessible: Where to draw the phenomenal line. Behavioral and Brain Sciences, 30(5–6).
Pylyshyn, Z. (1999). Is vision continuous with cognition?: The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3).
Shevlin, H. (2017). Conceptual Short-Term Memory: A Missing Part of the Mind? Journal of Consciousness Studies, 24, No. 7–8.
Siegel, S. (2010). The Contents of Visual Experience. Oxford University Press.
Sperling, G. (1960). The Information Available in Brief Visual Presentations, Psychological Monographs: General and Applied 74, pp. 1–29.
 Note that Dretske does not use the term working memory in this context, but clearly has some such process in mind, as made clear by his reference to capacity-limited mechanisms for extracting information.
 A complicating factor in discussion of working memory comes from the recent emergence of variable resource models of working memory (Ma et al., 2014) and the discovery that some forms of working memory may be able to operate unconsciously (see, e.g., Bergström & Eriksson, 2014).
 Given that the arrays remained visible to subjects throughout the experiment, one might wonder why this experiment has relevance for our understanding of memory. However, as noted earlier, I take it that any short-term processing of information presumes some kind of underlying temporary encoding mechanism.
The question of whether functions are localizable to distinct parts of the brain, aside from its obvious importance to neuroscience, bears on a wide range of philosophical issues—reductionism and mechanistic explanation in philosophy of science; cognitive ontology and mental representation in philosophy of mind, among many others. But philosophical interest in the question has only recently begun to pick up (Bergeron, 2007; Klein, 2012; McCaffrey, 2015; Rathkopf, 2013).
I am a “contextualist” about localization: I think that functions are localizable to distinct parts of the brain, and that different parts of the brain can be differentiated from each other on the basis of their functions (Burnston, 2016a, 2016b). However, I also think that what a particular part of the brain does depends on behavioral and environmental context. That is, a given part of the brain might perform different functions depending on what else is happening in the organism’s internal or external environment.
Embracing contextualism, as it turns out, involves questioning some deeply held assumptions within neuroscience, and connects the question of localization with other debates in philosophy. In neuroscience, localization is generally construed in what I call absolutist terms. Absolutism is a form of atomism—it suggests that localization can be successful only if 1–1 mappings between brain areas and functions can be found. Since genuine multifunctionality is antithetical to atomist assumptions, it has historically not been a closely analyzed concept in systems or cognitive neuroscience.
In philosophy, contextualism takes us into questions about what constitutes good explanation—in this case, functional explanation. Debates about contextualism in other areas of philosophy, such as semantics and epistemology (Preyer & Peter, 2005), usually shape up as follows. Contextualists are impressed by data suggesting contextual variation in the phenomenon of interest (usually the truth values of statements or of knowledge attributions). In response, anti-contextualists worry that there are negative epistemic consequences to embracing this variation. The resulting explanations will not, on their view, be sufficiently powerful or systematic (Cappelen & Lepore, 2005). We end up with explanations that do not generalize beyond individual cases. Hence, according to anti-contextualists, we should be motivated to come up with theories that deny or explain away the data that seemingly support contextual variation.
In order to argue for contextualism in the neural case, then, one must first establish the data that suggests contextual variation, then articulate a variety of contextualism that (i) succeeds at distinguishing brain areas in terms of their distinct functions, and (ii) describes genuine generalizations.
Usually, in systems neuroscience, the goal is to correlate physiological responses in particular brain areas with particular types of information in the world, supporting the claim that the responses represent that information. I have pursued a detailed case study of perceptual area MT (also known as “V5” or the “middle temporal” area). The textbook description of MT is that it represents motion—it has specific responses to specific patterns of motion, and variations amongst its cellular responses represent different directions and velocities. Hence, MT has the univocal function of representing motion: an absolutist description.
However, MT research in the last 20 years has uncovered data which strongly suggests that MT is not just a motion detector. I will only list some of the relevant data here, which I discuss exhaustively in other places. Let’s consider a perceptual “context” as a combination of perceptual features—including shape/orientation, depth, color, luminance/brightness, and motion. On the traditional hierarchy, each of these features has its own area dedicated to representing it. Contextualism, alternatively, starts from the assumption that different combinations of these features might result in a given area representing different information.
Despite the traditional view that MT is “color blind” (Livingstone & Hubel, 1988), MT in fact responds to the identity of colors when color is useful in disambiguating a motion stimulus. Now in this case, MT still arguably represents motion, but it does use color as a contextual cue for doing so.
Over 93% of MT cells represent coarse depth (the rough distance of an object away from the perceiver). Their tuning for depth is independent of their tuning for motion, and many cells represent depth even in stationary stimuli. These depth signals are predictive of psychophysical results.
A majority of MT cells also have specific response properties for fine depth (depth signals resulting from the 3-D shape and orientation of objects), in particular the features of tilt and slant, and these can be cued by a variety of distinct features, including binocular disparity and relative velocity.
How do these results support contextualism? Consider a particular physiological response to a stimulus in MT. If the data is correct, then this signal might represent motion, or it might represent depth—and indeed, either coarse or fine depth—depending on the context. Or, it might represent a combination of those influences.
The contextualism I advocate focuses on the type of descriptions we should invoke in theorizing about the functions of brain areas. First, our descriptions should be conjunctive: the function of an area should be described as a conjunction of the different representational functions it serves and the contexts in which it serves those functions. So, MT represents motion in a particular range of contexts, but also represents other types of information in different contexts—including absolute depth in both stationary and moving stimuli, and fine depth in contexts involving tilt and slant, as defined by either relative disparity or relative velocity. Second, the conjunction should be “open.”
When I say that a conjunction is “open,” what I mean is that we shouldn’t take the functional description as complete. We should see it as open to amendment as we study new contexts. This openness is vital—it is an induction on the fact that the functional description of MT has changed as new contexts have been explored—but it also leads us precisely into what bothers anti-contextualists (Rathkopf, 2013). The worry is that open descriptions do not have the theoretical strength that supports good explanations. I argue that this worry is mistaken.
First, note that contextualist descriptions can still functionally decompose brain areas. The key to this is the indexing of functions to contexts. Compare MT to V4. While V4 also represents “motion” construed broadly (in “kinetic” or moving edges), color, and fine depth, the contexts in which V4 does so differ from MT. For instance, V4 represents color constancies which are not present in MT responses. V4’s specific combination of sensitivities to fine depth and curvature allows it to represent protuberances—curves in objects that extend towards the perceiver—which MT cannot represent. So, the types of information that these areas represent, along with the contexts in which they represent them, tease apart their functions.
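The comparison between MT and V4 can be rendered as a toy data structure: each area is a mapping from contexts to the information it represents there. The context labels and function assignments below are drastic simplifications of the findings discussed above, offered purely for illustration:

```python
# Toy contextualist functional descriptions. Context labels are my own
# illustrative glosses on the text, not an established taxonomy.
MT = {
    "moving dots / bars / color-segmented patterns": "motion",
    "stationary or moving stimuli with disparity": "coarse depth",
    "tilt and slant (disparity- or velocity-cued)": "fine depth",
}
V4 = {
    "kinetic (moving) edges": "motion",
    "illumination-varying scenes": "color constancy",
    "curvature plus fine depth": "protuberances",
}

def distinct_functions(a: dict, b: dict) -> bool:
    """Areas are teased apart when their context-to-function profiles differ,
    even if some of the represented information (e.g. motion) overlaps."""
    return a != b

# Both areas represent motion in some context, yet their overall
# context-indexed profiles still distinguish them:
print("motion" in MT.values() and "motion" in V4.values())  # True
print(distinct_functions(MT, V4))                           # True
```

The point of the sketch is that decomposition survives multifunctionality: identity of one represented type across areas does not collapse their functional profiles, because functions are indexed to contexts.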
Indexing to contexts also points the way to solving the problem of generalization, so long as we appropriately modulate our expectations. For instance, on contextualism it is still a powerful generalization that MT represents motion. This is substantiated by the wide range of contexts in which it represents motion—including moving dots, moving bars, and color-segmented patterns. It’s just that representing motion is not a universal generalization about its function. It is a generalization with more limited scope. Similarly, MT represents fine depth in some contexts (tilt and slant, defined by disparity or velocity), but not in all of them (protuberances). Of course, if the function of MT is genuinely context sensitive, then universal generalizations about its function will not be possible. Hence, insisting on universal generalizations is not an open strategy for an absolutist—at least not without question begging.
The real crux of the debate, I believe, is about the notion of projectability. We want our theories not just to describe what has occurred, but to inform future hypothesizing about novel situations. Absolutists hope for a powerful form of law-like projectability, on which a successful functional description tells us for certain what that area will do in new contexts. The “open” structure of contextualism precludes this, but this doesn’t bother the contextualist. This situation might seem reminiscent of similar stalemates regarding contextualism in other areas of philosophy.
There are two ways I have sought to break the stalemate. First is to define a notion of projectability that informs scientific practice, but is compatible with contextualism. Second is to show that even very general absolutist descriptions fail to deliver on the supposed explanatory advantages of absolutism. The key to a contextualist notion of projectability, in my view, is to look for a form of projectability that structures investigation, rather than giving lawful predictions. The basic idea is this: given a new context, the null hypothesis for an area’s function in that context should be that it performs its previously known function (or one of its known functions). I call this role a minimal hypothesis, and the idea is that currently known functional properties structure hypothesizing and investigation in novel contexts, by providing three options: (i) either the area does not function at all in the novel context (perhaps MT does not make any functional contribution to, say, processing emotional valence); (ii) it functions in one of its already known ways, in which case another context gets indexed to, and generalizes, an already known conjunct, or (iii) it performs a new function in that context, forcing a new conjunct to be added to the overall description of the area (indexed to the novel context, of course). While I won’t go into details here, I argue in (Burnston, 2016a) that this kind of reasoning has shaped the progress of understanding MT function.
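The three options in minimal-hypothesis reasoning can be captured in a short sketch. The function name, context labels, and dictionary representation below are my own illustrative reconstruction of the reasoning, not code or terminology from the literature:

```python
from typing import Optional

def minimal_hypothesis_update(area: dict, context: str, observed: Optional[str]) -> str:
    """Sketch of the 'minimal hypothesis' reasoning described in the text.

    `area` maps already-studied contexts to known functions; `observed` is
    what probing the area in a novel context reveals (None = no functional
    contribution there).
    """
    if observed is None:
        return "(i) no functional contribution in this context"
    if observed in area.values():
        area[context] = observed  # a known conjunct generalizes to a new context
        return "(ii) known function extended to a new context"
    area[context] = observed      # a new conjunct, indexed to the novel context
    return "(iii) new function added to the open description"

MT = {"moving dots": "motion"}
print(minimal_hypothesis_update(MT, "tilt and slant", "fine depth"))  # (iii) new function added to the open description
print(minimal_hypothesis_update(MT, "moving bars", "motion"))         # (ii) known function extended to a new context
print(minimal_hypothesis_update(MT, "emotional valence", None))       # (i) no functional contribution in this context
```

Note how the description grows monotonically: known functions structure the hypothesis space for each novel context, while outcomes (ii) and (iii) amend the open conjunction rather than overturning it.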
One option open to a defender of absolutism is to attempt to explain away the data suggesting contextual variation by changing the type of functional description that is supposed to generalize over all contexts (Anderson, 2010; Bergeron, 2007; Rathkopf, 2013). For instance, rather than saying that a part of the brain represents a specific type of information, maybe we should say that it performs the same type of computation, whatever information it is processing. I have called this kind of approach “computational absolutism” (Burnston, 2016b).
While computational neuroscience is an important theoretical approach, it can’t save absolutism. My argument against the view starts from an empirical premise—in modeling MT, there is not one computational description that describes everything MT does. Instead, there are a range of theoretical models that each provide good descriptions of aspects of MT function. Given this lack of universal generalization, the computational absolutist has some options. They might move towards more general levels of computational description, hoping to subsume more specific models. The problem with this is that the most general computational descriptions in neuroscience are what are called canonical computations (Chirimuuta, 2014)—descriptions that can apply to virtually all brain areas. But if this is the case, then these descriptions won’t successfully differentiate brain areas based on their function. Hence, they don’t contribute to functional localization.
On the other hand, suggesting that it is something about the way these computations are applied in particular contexts runs right into the problem of contextual variation. Producing a model that predicts what, say, MT will do in cases of pattern motion or reverse-phi phenomena simply does not predict what functional responses MT will have to depth—not, at least, without investigating and building in knowledge about its physiological responses to those stimuli. So, even if general models are helpful in generating predictions in particular instances, they don’t explain what goes on in them. If this description is right, then the supposed explanatory gain of CA is an empty promise, and contextual analysis of function is necessary. My view of the role of highly general models mirrors those offered by Cartwright (1999) and Morrison (2007) in the physical sciences.
Some caveats are in order here. I’ve only talked about one brain area, and as McCaffrey (2015) points out, different areas might be amenable to different kinds of functional analysis. Perceptual areas are important, however, because they are paradigm success cases for functional localization. If contextualism works here, it can work elsewhere, as well as for other units of analysis, such as cell populations and networks (Rentzeperis, Nikolaev, Kiper, & van Leeuwen, 2014). I share McCaffrey’s pluralist leanings, but I think that a place for contextualist functional analysis must be made if functional decomposition is to succeed. The contextualist approach is also compatible with other frameworks, such as Klein’s (2017) focus on “difference-making” in understanding the function of brain areas.
I’ll end with a teaser about my current project on these topics (Burnston, in prep). Note that, if the function of brain areas can genuinely shift with context, this is not just a theoretical problem, but a problem for the brain. Other parts of the brain must interact with MT differently depending on whether it is currently representing motion, coarse depth, fine depth, or some combination. If this is the case, we can expect there to be mechanisms in the brain that mediate these shifting functions. Unsurprisingly, I am not the first to note this problem. Neuroscientists have begun to employ concepts from communication and information technology to show how physiological activity from the same brain area might be interpreted differently in different contexts, for instance by encoding distinct information in distinct dynamic properties of the signal (Akam & Kullmann, 2014). Contextualism informs the need for this kind of approach. I am currently working on explicating these frameworks and showing how they allow for functional decomposition even in dynamic and context-sensitive neural networks.
 The high proportion and regular organization of depth-representing cells in MT resists the temptation to try to save informational specificity by subdividing MT into smaller units, as is normally done for V1. V1 is standardly separated into distinct populations of orientation, wavelength, and displacement-selective cells, but this kind of move is not available for MT.
Akam, T., & Kullmann, D. M. (2014). Oscillatory multiplexing of population codes for selective communication in the mammalian brain. Nature Reviews Neuroscience, 15(2), 111–122.
Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. The Behavioral and brain sciences, 33(4), 245–266; discussion 266–313. doi: 10.1017/S0140525X10000853
Bergeron, V. (2007). Anatomical and functional modularity in cognitive science: Shifting the focus. Philosophical Psychology, 20(2), 175–195.
Burnston, D. C. (2016a). Computational neuroscience and localized neural function. Synthese, 1–22. doi: 10.1007/s11229-016-1099-8
Burnston, D. C. (2016b). A contextualist approach to functional localization in the brain. Biology & Philosophy, 1–24. doi: 10.1007/s10539-016-9526-2
Burnston, D. C. (In preparation). Getting over atomism: Functional decomposition in complex neural systems.
Cappelen, H., & Lepore, E. (2005). Insensitive semantics: A defense of semantic minimalism and speech act pluralism. John Wiley & Sons.
Cartwright, N. (1999). The dappled world: A study of the boundaries of science. Cambridge University Press.
Chirimuuta, M. (2014). Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience. Synthese, 191(2), 127–153. doi: 10.1007/s11229-013-0369-y
Klein, C. (2012). Cognitive Ontology and Region- versus Network-Oriented Analyses. Philosophy of Science, 79(5), 952–960.
Klein, C. (2017). Brain regions as difference-makers. Philosophical Psychology, 30(1–2), 1–20.
Livingstone, M., & Hubel, D. (1988). Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240(4853), 740–749.
McCaffrey, J. B. (2015). The brain’s heterogeneous functional landscape. Philosophy of Science, 82(5), 1010–1022.
Morrison, M. (2007). Unifying scientific theories: Physical concepts and mathematical structures. Cambridge University Press.
Preyer, G., & Peter, G. (2005). Contextualism in philosophy: Knowledge, meaning, and truth. Oxford University Press.
Rathkopf, C. A. (2013). Localization and Intrinsic Function. Philosophy of Science, 80(1), 1–21.
Rentzeperis, I., Nikolaev, A. R., Kiper, D. C., & van Leeuwen, C. (2014). Distributed processing of color and form in the visual cortex. Frontiers in Psychology, 5.
I construe the debate about cognitive penetration (CP) in the following way: are there causal relations between cognition and perception, such that the processing of the latter is systematically sensitive to the content of the former? Framing the debate in this way imparts some pragmatic commitments. We need to make clear what distinguishes perception from cognition, and what resources each brings to the table. And we need to clarify what kind of causal relationship exists, and whether it is strong enough to be considered “systematic.”
I think that current debates about cognitive penetration have failed to be clear enough on these vital pragmatic considerations, and have become muddled as a result. My view is that once we understand perception and cognition aright, we should recognize as an empirical fact that there are causal relationships between them—however, these relations are general, diffuse, and probabilistic, rather than specific, targeted, and determinate. Many supporters of CP certainly seem to have the latter kind of relationship in mind, and it is not clear that the former kind supports the consequences for epistemology and cognitive architecture that these supporters suppose. My primary goal, then, rather than denying cognitive penetration per se, is to de-fuse it (Burnston, 2016, 2017a, in prep).
The view of perception, I believe, that informs most debates about CP, is that perception consists in a set of strictly bottom-up, mutually encapsulated feature detectors, perhaps along with some basic mechanisms for binding these features into distinct “proto” objects (Clark, 2004). Anything categorical, anything that involves inter-featural (to say nothing of intermodal) association, anything that involves top-down influence, or assumptions about the nature of the world, and anything that is learned or involves memory, must strictly be due to cognition.
To those of this theoretical persuasion, evidence for effects of some subset of these types in perception is prima facie evidence for CP. Arguments in favor of CP move from the supposed presence of these effects, along with arguments that they are not due to either pre-perceptual attentional shifts or post-perceptual judgments, to the conclusion that CP occurs.
On reflection, however, this is a somewhat odd, or at least non-obvious, move. We start out from a presupposition that perception cannot involve X. Then we observe evidence that perception does in fact involve X. In response, instead of modifying our view of perception, we insist that some other faculty, like cognition, must intervene and do for perception what it cannot do on its own. My arguments in this debate are meant to undermine this kind of intuition by showing that, given a better understanding of perception, not only is positing CP not required, it is also (in its stronger forms, anyway) simply unlikely.
Consider the following example, the Cornsweet illusion (also called the Craik‑O’Brien-Cornsweet illusion).
In this kind of stimulus, subjects almost universally perceive the patch on the left as darker than the patch on the right, despite the fact that they have the exact same luminance, aside from the dark-to-light gradient on the left of the center line (the “Cornsweet edge”) and the light-to-dark gradient on the right. The standard view of the illusion in perceptual science is that perception assumes that the object is extended towards the perceiver in depth, with light coming from the left, such that the panel on the left would be more brightly illuminated, and the patch on the right more dimly illuminated. Thus, in order for the left panel to produce the same luminance value at the retina as the right panel, it must in fact be darker, and the visual system represents it so: such effects are the result of “an extraordinarily powerful strategy of vision” (Purves, Shimpi, & Lotto, 1999, p. 8549).
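The inferential logic of this account can be captured with toy arithmetic. Treating retinal luminance as (roughly) surface reflectance times illumination, and using made-up numbers for illustration:

```python
# Toy arithmetic behind the inferred-illumination account of the Cornsweet
# illusion: retinal luminance ~= surface reflectance * illumination.
# All numeric values are invented for illustration.
luminance_left = luminance_right = 60.0   # the two panels match at the retina

illum_left, illum_right = 100.0, 60.0     # assumed: left panel more brightly lit

reflectance_left = luminance_left / illum_left     # 0.6
reflectance_right = luminance_right / illum_right  # 1.0

# Under the depth-and-lighting assumption, the only way the brightly lit
# left panel can match the dim right panel at the retina is by being a
# darker surface -- which is what the visual system represents.
assert reflectance_left < reflectance_right
```

The percept, on this account, reports the inferred reflectances (0.6 vs. 1.0), not the matched luminances, which is why the left patch looks darker.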
Why construe the strategy as visual? There are a number of related considerations. First, the phenomenon involves fine-grained associations between particular features (luminance, discontinuity, and contrast, in particular configurations) that vary systematically and continuously with the amount of evidence for the interpretation. If one increases the depth-interpretation by foreshortening or “bowing” the figure, the effect is enhanced, and with further modulation one can get quite pronounced effects. It is at best unclear how we would have come by such fine-grained beliefs about these stimuli. Moreover, the effects are mandatory, and are insensitive to changes in our occurrent beliefs. Fodor is (still) right, in my view, that this kind of mandatoriness supports a perceptual reading.
According to Jonathan Cohen and me (Burnston & Cohen, 2012, 2015), current perceptual science reveals effects like this to be the norm, at all levels of perception. If this “integrative” view of perception is true, then embodying assumptions in complex associations is no evidence for CP—in fact it is part-and-parcel of what perception does.
What about categorical perception? Consider the following example from Gureckis and Goldstone (2008), of what is commonly referred to as a morphspace.
According to current views (Gauthier & Tarr, 2016; Goldstone & Hendrickson, 2010), categorical perception involves higher-order associations between correlated low-level features. So, recognizing a particular category of faces (for instance, an individual’s face, a gender, or a race) involves being able to notice correlations between a number of low-level facial features such as lightness, nose or eye shape, etc., as well as their spatial configurations (e.g., the distance between the eyes or between the nose and the upper lip). A wide range of perceptual categories have been shown to operate similarly.
Interestingly, forming a category can morph these spaces, grouping exemplars together along the relevant dimensions. In Gureckis and Goldstone’s example, once subjects learn to discriminate A from B faces (defined by the arbitrary center line), novel examples of A faces will be judged to be more similar to one another along diagnostic dimension A than they were prior to learning. Despite these effects being categorical, I suggest that they are strongly analogous to the cases above: they involve featural associations that are fine-grained (a dimension is “morphed” a particular amount during the course of learning) and mandatory (it is hard not to see, e.g., your brother’s face as your brother). Moreover, subjects are often simply bad at describing their perceptual categories. In studies such as Gureckis and Goldstone’s, subjects have trouble saying much about the dimensional associations that inform their percepts. As such, and given the resources of the integrative view, a way is opened to seeing these categorical effects as occurring within perception.
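The “morphing” of a similarity space admits of a toy illustration. The warp function and all numbers below are invented for exposition, not taken from Gureckis and Goldstone; the point is just that compressing within-category variation along the diagnostic dimension makes novel A exemplars come out as more similar to one another, while leaving the non-diagnostic dimension untouched.

```python
import numpy as np

# Toy morphspace: each face is a point (diagnostic_dim, irrelevant_dim).
# Category A lies below an arbitrary boundary on the diagnostic dimension,
# as in Gureckis and Goldstone's design.
rng = np.random.default_rng(0)
faces_A = rng.uniform([0.0, 0.0], [0.5, 1.0], size=(50, 2))

def learned_warp(face, compression=0.5, centre_A=0.25):
    """Hypothetical post-learning warp: within-category variation along the
    diagnostic dimension is squeezed toward the category centre."""
    d, irr = face
    return np.array([centre_A + compression * (d - centre_A), irr])

def mean_pairwise_dist(points):
    diffs = points[:, None, :] - points[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).mean()

warped_A = np.array([learned_warp(f) for f in faces_A])
# After "learning", A exemplars are closer together on average.
print(mean_pairwise_dist(faces_A), mean_pairwise_dist(warped_A))
```
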
If being associative, assumption-involving, or categorical doesn’t distinguish a perceptual from a cognitive representation, what does? While there are issues cashing out the distinction in detail, I suggest that the best way to mark the perception/cognition distinction is in terms of representational form. Cognitive representations are discrete and language-like, while perceptual representations represent structural dimensions of their referents—these might include shape dimensions (tilt, slant, orientation, curvature, etc.), the dimensions that define the phenomenal color space, or higher-order dimensions such as the ones in the face case above. The form distinction captures the kinds of considerations I’ve advanced here, as well as being compatible with a wide range of related ways of drawing the distinction in philosophy and cognitive science.
With these distinctions in place, we can talk about the kinds of cases that proponents of CP take as evidence. Take Macpherson’s example: Delk and Fillenbaum’s studies purporting to show that “heart” shapes are perceived as a more saturated red than non-heart shapes. Let’s put aside for a moment the prevalent methodological critiques of these kinds of studies (Firestone & Scholl, 2016). Even so, there is no reason to read the effect as one of cognitive penetration. The belief “hearts are red,” according to the form distinction, simply does not represent the structural properties of the color space, and thus has no resources to tell perception to modify itself in any particular way. Of course, one might posit a more specific belief—say, that this particular heart is a particular shade of red—but this belief would have to be based on perceptual evidence about the stimulus. If perception couldn’t represent this stimulus as this shade on its own, we wouldn’t come by the belief. Moreover, on the integrative view this is the kind of thing perception does anyway. Hence, there is no reason to see the percept as the result of cognitive intervention.
In categorical contexts, one strong motivation for cognitive penetration is the idea that perceptual categories are learned, and that this learning is often informed by prior beliefs and instructions (Churchland, 1988; Siegel, 2013; Stokes, 2014). There are problems with these views, however, both empirical and conceptual. The empirical problem is that learning can occur without any cognitive influence whatsoever. Perceivers can become attuned to diagnostic dimensions for entirely novel categories simply by viewing exemplars (Folstein, Gauthier, & Palmeri, 2010). Here, subjects have no prior beliefs or instructions for how to perceive the stimulus, but perceptual learning occurs anyway. Moreover, in many cases, even when beliefs are employed in learning a category, the belief clearly does not encode any content that is useful for informing the specific percept. In Gureckis and Goldstone’s case above, subjects were shown exemplar faces and told “this is an A” or “this is a B”. But this indexical belief does not describe anything about the category they actually learn.
One might expect that more detailed instructions or prior beliefs can inform more detailed categories—for instance Siegel’s suggestion that novitiate arborists be told to look at the shape of leaves in order to distinguish (say) pines from birches. However, this runs directly into the conceptual problem. Suppose that pine leaves are pointy while birch leaves are broad. Learners already know what pointy and broad things look like. If these beliefs are all that’s required, then subjects don’t need to learn anything perceptually in order to make the discrimination. However, if the beliefs are not sufficient to make the discrimination—either because it is a very fine-grained discrimination of shape, or because pine versus birch perceptions in fact require the kind of higher-order dimensional structure discussed above—then their content does not describe what perception learns when subjects do learn to make the distinction perceptually. In either case, there is a gap between the content of the belief and the content of the learned perception—a gap that is supported by studies of perceptual learning and expertise (for further discussion, see Burnston, 2017a, in prep). So, while beliefs might be important causal precursors to perceptual learning, they do not penetrate the learning process.
So, the situation is this: we have seen that, on the integrative view and the form distinction, cognition does not have the resources to determine the kind of perceptual effects that are of interest in debates about CP. In both synchronic and diachronic cases, perception can do much of the heavy lifting itself, thus rendering CP unnecessary to explain the effects. A final advantage of this viewpoint, especially the form distinction, is that it brings particular forms of evidence to bear on the debate—particularly evidence about what happens when processing of lexical/amodal symbols does in fact interact with processing of modal ones. The details are too much to go through here, but I argue that the key to understanding the relationship between perception and cognition is to give up the notion that there are ever direct relationships between the tokening of a particular cognitive content and a specific perceptual outcome (Burnston, 2016, 2017b). Instead, I suggest that tokening a cognitive concept biases perception towards a wide range of possible outcomes. Here, rather than determinate causal relationships, we should expect highly probabilistic, highly general, and highly flexible interactions, where cognition does not force perception to act a certain way, but can shift the baseline probability that we’ll perceive something consistent with the cognitive content. This brings priming, attentional, and modulatory effects under a single rubric, but not one on which cognition tinkers with the internal workings of specific perceptual processes to determine how they work in a given instance. I thus call it the “external effect” view of the cognition-perception interface.
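One way to picture the contrast between determination and external biasing is as follows. This is a toy sketch with invented outcomes and numbers, not a model from the cited papers: tokening a concept reweights the probabilities over a whole range of perceptual outcomes, rather than forcing any one of them.

```python
# Baseline probabilities over candidate percepts for an ambiguous stimulus
# (outcomes and values are invented for illustration).
baseline = {"duck": 0.45, "rabbit": 0.45, "neither": 0.10}

def bias(probs, concept, strength=2.0):
    """Reweight outcomes consistent with the tokened concept, then renormalise.
    No outcome is forced; every percept remains possible."""
    weighted = {o: p * (strength if o == concept else 1.0)
                for o, p in probs.items()}
    total = sum(weighted.values())
    return {o: w / total for o, w in weighted.items()}

after = bias(baseline, "rabbit")
assert after["rabbit"] > baseline["rabbit"]  # the concept shifts the baseline...
assert all(p > 0 for p in after.values())    # ...but determines nothing
```
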
Now it is open to the defender of cognitive penetration to define this diffuse interaction as an instance of penetration—penetration is a theoretical term one may define as one likes. I think, however, that this notion is not what most cognitive penetration theorists have in mind, and it does not obviously carry any of the supposed consequences for modularity, theoretical neutrality, or the epistemic role of perception that proponents of CP assume (Burnston, 2017a; cf. Lyons, 2011). The kind of view I’ve offered captures, in the best available empirical and pragmatic way, the range of phenomena at issue, and does so very differently than standard discussions of penetration.
Burnston, D. C. (2016). Cognitive penetration and the cognition–perception interface. Synthese, 1–24. DOI: 10.1007/s11229-016-1116-y
Burnston, D. C. (2017a). Is aesthetic experience evidence for cognitive penetration? New Ideas in Psychology. DOI: 10.1016/j.newideapsych.2017.03.012
Burnston, D. C. (2017b). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242–258. DOI: 10.1080/13869795.2017.1312504
Burnston, D. C. (In preparation). There is no diachronic cognitive penetration.
Burnston, D., & Cohen, J. (2012). Perception of features and perception of objects. Croatian Journal of Philosophy (36), 283–314.
Burnston, D. C., & Cohen, J. (2015). Perceptual integration, modularity, and cognitive penetration. In Cognitive Influences on Perception: Implications for Philosophy of Mind, Epistemology, and Philosophy of Action. Oxford: Oxford University Press.
Churchland, P. M. (1988). Perceptual plasticity and theoretical neutrality: A reply to Jerry Fodor. Philosophy of Science, 55(2), 167–187.
Clark, A. (2004). Feature-placing and proto-objects. Philosophical Psychology, 17(4), 443–469. DOI: 10.1080/0951508042000304171
Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39, 1–77.
Fodor, J. (1984). Observation reconsidered. Philosophy of Science, 51(1), 23–43.
Folstein, J. R., Gauthier, I., & Palmeri, T. J. (2010). Mere exposure alters category learning of novel objects. Frontiers in Psychology, 1, 40.
Gauthier, I., & Tarr, M. J. (2016). Object Perception. Annual Review of Vision Science, 2(1).
Goldstone, R. L., & Hendrickson, A. T. (2010). Categorical perception. Wiley Interdisciplinary Reviews: Cognitive Science, 1(1), 69–78. DOI: 10.1002/wcs.26
Gureckis, T. M., & Goldstone, R. L. (2008). The effect of the internal structure of categories on perception. Paper presented at the Proceedings of the 30th Annual Conference of the Cognitive Science Society.
Lyons, J. (2011). Circularity, reliability, and the cognitive penetrability of perception. Philosophical Issues, 21(1), 289–311.
Macpherson, F. (2012). Cognitive penetration of colour experience: rethinking the issue in light of an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24–62.
Nanay, B. (2014). Cognitive penetration and the gallery of indiscernibles. Frontiers in Psychology, 5.
Purves, D., Shimpi, A., & Lotto, R. B. (1999). An empirical explanation of the Cornsweet effect. The Journal of Neuroscience, 19(19), 8542–8551.
Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. The Behavioral and Brain Sciences, 22(3), 341–365; discussion 366–423.
Raftopoulos, A. (2009). Cognition and perception: How do psychology and neural science inform philosophy? Cambridge: MIT Press.
Rock, I. (1983). The logic of perception. Cambridge: MIT Press.
Siegel, S. (2013). The epistemic impact of the etiology of experience. Philosophical Studies, 162(3), 697–722.
Stokes, D. (2014). Cognitive penetration and the perception of art. Dialectica, 68(1), 1–34.
Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences, 10(7), 301–308.
 Different theorists stress different properties. Macpherson (2012) stresses effects being categorical and associational, Nanay (2014) and Churchland (1988) their being top-down. Raftopoulos (2009) cites the role of memory in categorical effects and Stokes (2014) and Siegel (2013) the importance of learning in such contexts.
 This kind of reading of intra-perceptual processing is extremely common across a range of theorists and perspectives in perceptual psychology (e.g., Pylyshyn, 1999; Rock, 1983; Yuille & Kersten, 2006).
 This view also rejects the attempt to make these effects cognitive by defining them as tacit beliefs. The problem with tacit beliefs is that they simply dictate that anything corresponding to a category or inference must be cognitive, which is exactly what’s under discussion here. The move thus doesn’t add anything to the debate.
 This requires assuming a “specificity” condition on the content of a purported penetrating belief—namely that a candidate penetrator must have the content that perception learns to represent. I argue in more detail elsewhere that giving this condition up trivializes the penetration thesis (Burnston, in prep).
Enactivism, at least in its more traditional versions, has historically rejected computational characterisations of cognition. This has led to the perception that enactivist approaches to cognition must be opposed to more mainstream computationalist approaches, which offer just such a characterisation. However, the conception of computation which enactivism rejects is in some senses quite old-fashioned, and it is not so clear that enactivism need be opposed to computation understood in a more modern sense. Demonstrating that there could be compatibility, or at least no necessary opposition, between enactivism and computationalism (in some sense) would open the door to a possible reconciliation or cooperation between the two approaches.
In a recently published paper (Villalobos & Dewhurst 2017), my collaborator Mario and I have focused on elucidating some of the reasons why enactivism has rejected computation, and have argued that these do not necessarily apply to more modern accounts of computation. In particular, we have demonstrated that a physically instantiated Turing machine, which we take to be a paradigmatic example of a computational system, can meet the autonomy requirements that enactivism uses to characterise cognitive systems. This demonstration goes some way towards establishing that enactivism need not be opposed to computational characterisations of cognition, although there may be other reasons for this opposition, distinct from the autonomy requirements.
The enactive concept of autonomy first appears in its modern guise in Varela, Thompson, & Rosch’s 1991 book The Embodied Mind, although it has important historical precursors in Maturana’s autopoietic theory (see his 1970, 1975, 1981; see also Maturana & Varela 1980) and cybernetic work on homeostasis (see e.g. Ashby 1956, 1960). There are three dimensions of autonomy that we consider in our analysis of computation. Self-determination requires that the behaviour of an autonomous system must be determined by that system’s own structure, and not by external instruction. Operational closure requires that the functional organisation of an autonomous system must loop back on itself, such that the system possesses no (non-arbitrary) inputs or outputs. Finally, an autonomous system must be precarious, such that the continued existence of the system depends on its own functional organisation, rather than on external factors outside of its control. In this post I will focus on demonstrating that these criteria can be applied to a physical computing system, rather than addressing why or how enactivism argues for them in the first place.
All three criteria have traditionally been used to disqualify computational systems from being autonomous systems, and hence to deny that cognition (which for enactivists requires autonomy) can be computational (see e.g. Thompson 2007: chapter 3). Here it is important to recognise that the enactivists have a particular account of computation in mind, one that they have inherited from traditional computationalists. According to this ‘semantic’ account, a physical computer is defined as a system that performs systematic transformations over content-bearing (i.e. representational) states or symbols (see e.g. Sprevak 2010). With such an account in mind, it is easy to see why the autonomy criteria might rule out computational systems. We typically think of such a system as consuming symbolic inputs, which it transforms according to programmed instructions, before producing further symbolic outputs. Already this system has failed to meet the self-determination and operational closure criteria. Furthermore, as artefactual computers are typically reliant on their creators for maintenance, etc., they also fail to meet the precariousness criterion. So, given this quite traditional understanding of computation, it is easy to see why enactivists have typically denied that computational systems can be autonomous.
Nonetheless, understood according to more recent, ‘mechanistic’ accounts of computation, there is no reason to think that the autonomy criteria must necessarily exclude computational systems. Whilst they differ in some details, all of these accounts deny that computation is inherently semantic, and instead define physical computation in terms of mechanistic structures. We will not rehearse these accounts in any detail here, but the basic idea is that physical computation can be understood in terms of mechanisms that perform systematic transformations over states that do not possess any intrinsic semantic content (see e.g. Miłkowski 2013; Fresco 2014; Piccinini 2015). With this rough framework in mind, we can return to the autonomy criteria.
Even under the mechanistic account, computation is usually understood in terms of mappings between inputs and outputs, where there is a clear sense of the beginning and end of the computational operation. A system organised in this way can be described as ‘functionally open’, meaning that its functional organisation is open to the world. A functionally closed system, on the other hand, is one whose functional organisation loops back through the world, such that the environmental impact of the system’s ‘outputs’ contributes to the ‘inputs’ that it receives.
A simple example of this distinction can be found by considering two different ways that a thermostat could be used. In the first case the sensor, which detects ambient temperature, is placed in one house, and the effector, which controls a radiator, is placed in another (see figure 1). This system is functionally open, because there is only a one-way connection between the sensor and the effector, allowing us to straightforwardly identify inputs and outputs to the system.
A more conventional way of setting up a thermostat is with both the sensor and the effector in the same house (see figure 2). In this case the apparent ‘output’ (i.e. control of the radiator) loops back round to the apparent ‘input’ (i.e. ambient temperature), forming a functionally closed system. The ambient air temperature in the house is effectively part of the system, meaning that we could just as well treat the effector as providing input and the sensor as producing output – there is no non-arbitrary beginning or end to this system.
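The conventional, functionally closed configuration can be sketched in a few lines of simulation. The dynamics and numbers below are invented for illustration: the point is that the effector’s heat output feeds back into the very air temperature the sensor reads, so labelling one side “input” and the other “output” is arbitrary.

```python
# Toy closed-loop thermostat: sensor -> effector -> room air -> sensor.
def step(temp, setpoint=20.0, outside=5.0, leak=0.1, heat=2.0):
    heater_on = temp < setpoint           # sensor reads the room air...
    temp += heat if heater_on else 0.0    # ...effector heats that same air...
    temp -= leak * (temp - outside)       # ...which leaks heat, closing the loop
    return temp

temp = 10.0
for _ in range(100):
    temp = step(temp)
# temp now hovers around the setpoint; the room's state is part of the system,
# so there is no non-arbitrary point at which "input" ends and "output" begins.
```
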
Whilst it is typical to treat a computing mechanism more like the first thermostat, with a clear input and output, we do not think that this perspective is essential to the mechanistic understanding of computation. There are two possible ways that we could arrange a computing mechanism. The functionally open mechanism (figure 3) reads from one tape and writes onto another, whilst the functionally closed mechanism (figure 4) reads and writes onto the same tape, creating a closed system analogous to the thermostat with its sensor and effector in the same house. As Wells (1998) suggests, a conventional Turing machine is actually arranged in the second way, providing an illustration of a functionally closed computing mechanism. Whether or not this is true of other computational systems is a distinct question, but it is clear that at least some physically implemented computers can exhibit operational closure.
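Wells’s observation is easy to make concrete. The miniature machine below is an invented example, not taken from Wells: it reads and writes the same tape, so every symbol it writes is a potential future input, which is just the sense of functional closure at issue.

```python
# A tiny single-tape Turing machine: it reads from and writes onto the *same*
# tape, so its "output" (symbols written) is also its later "input" (symbols
# read). Example machine (for illustration): flip every bit, then halt.
def run(tape, rules, state="flip", halt="done"):
    tape, head = dict(enumerate(tape)), 0
    while state != halt:
        symbol = tape.get(head, "_")          # read from the tape...
        write, move, state = rules[(state, symbol)]
        tape[head] = write                    # ...and write back onto it
        head += move
    return [tape[i] for i in sorted(tape) if tape[i] != "_"]

rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "done"),
}
print(run(list("0110"), rules))  # ['1', '0', '0', '1']
```
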
The self-determination criterion requires that a system’s operations are determined by its own structure, rather than by external instructions. This criterion applies straightforwardly to at least some computing mechanisms. Whilst many computers are programmable, their basic operations are nonetheless determined by their own physical structure, such that the ‘instructions’ provided by the programmer only make sense in the context of the system itself. To another system, with a distinct physical structure, those ‘instructions’ would be meaningless. Just as the enactive automaton ‘Bittorio’ brings meaning to a meaningless sea of 1s and 0s (see Varela 1988; Varela, Thompson, & Rosch 1991: 151–5), so the structure of a computing mechanism brings meaning to the world that it encounters.
Finally, we can turn to the precariousness criterion. Whilst the computational systems that we construct are typically reliant upon us for continued maintenance and a supply of energy, and play no direct role in their own upkeep, this is more a pragmatic feature of our design of those systems, rather than anything essential to computation. We could easily imagine a computing mechanism designed so that it seeks out its own source of energy and is able to maintain its own physical structure. Such a system would be precarious in just the same sense that enactivism conceives of living systems as being precarious. So there is no in-principle reason why a computing system should not be able to meet the precariousness criterion.
In this post I have very briefly argued that the enactivist autonomy criteria can be applied to (some) physically implemented computing mechanisms. Of course, enactivists may have other reasons for thinking that cognitive systems cannot be computational. Nonetheless, we think this analysis could be interesting for a couple of reasons. Firstly, insofar as computational neuroscience and computational psychology have been successful research programs, enactivists might be interested in adopting some aspects of computational explanation for their own analyses of cognitive systems. Secondly, we think that the enactivist characterisation of autonomous systems might help to elucidate the senses in which a computational system might be cognitive. Now that we have established the basic possibility of autonomous computational systems, we hope to develop future work along both of these lines, and invite others to do so too.
I will leave you with this short and amusing video of the autonomous robotic creations of the British cyberneticist W. Grey Walter, which I hope might serve as a source of inspiration for future cooperation between enactivism and computationalism.
Ashby, W. R. (1956). An introduction to cybernetics. London: Chapman and Hall.
Ashby, W. R. (1960). Design for a Brain. London: Chapman and Hall.
Fresco, N. (2014). Physical computation and cognitive science. Berlin, Heidelberg: Springer-Verlag.
Maturana, H. (1970). Biology of cognition. Biological Computer Laboratory, BCL Report 9, University of Illinois, Urbana.
Maturana, H. (1975). The organization of the living: A theory of the living organization. International Journal of Man-Machine studies, 7, 313–332.
Maturana, H. (1981). Autopoiesis. In M. Zeleny (Ed.), Autopoiesis: a theory of living organization (pp. 21–33). New York; Oxford: North Holland.
Maturana, H. and Varela, F. (1980). Autopoiesis and cognition: The realization of the living. Dordrecht, Holland: Kluwer Academic Publisher.
Miłkowski, M. (2013). Explaining the computational mind. Cambridge, MA: MIT Press.
Piccinini, G. (2015). Physical Computation. Oxford: OUP.
Sprevak, M. (2010). Computation, individuation, and the received view on representation. Studies in History and Philosophy of Science, 41: 260–70.
Thompson, E. (2007). Mind in Life: Biology, phenomenology, and the sciences of mind. Cambridge, MA: Harvard University Press.
Varela F. 1988. Structural Coupling and the Origin of Meaning in a Simple Cellular Automation. In Sercarz E. E., Celada F., Mitchison N.A., Tada T. (eds.), The Semiotics of Cellular Communication in the Immune System, pp. 151–61. New York: Springer-Verlag.
Varela, F., Thompson, E., and Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.
Villalobos, M. & Dewhurst, J. (2017). Enactive autonomy in computational systems. Synthese. DOI: 10.1007/s11229-017-1386-z
Wells, A. J. (1998). Turing’s Analysis of Computation and Theories of Cognitive Architecture. Cognitive Science, 22(3), 269–294.
On several recent accounts of orthonasal olfaction, olfactory experience does (in some sense) have a spatial aspect. These views open up novel ways of thinking about the spatiality of what we perceive. For while olfactory experience may not qualify as spatial in the way visual experience does, it may nevertheless be spatial in a different way. What way? And how does it differ from visual spatiality?
It is often noted that, by contrast to what we see, what we smell is neither at a distance nor at a direction from us. Unlike animals such as rats and the hammerhead shark, which have their nostrils placed far enough apart that they can smell in stereo (much like we can see and hear in stereo), we humans are not able to tell which direction a smell is coming from (except perhaps under special conditions (Radil and Wysocki 1998; Porter et al. 2005), or if we individuate olfaction so as to include the trigeminal nerve (Young et al. 2014)). Nor are we able to tell how a smell is distributed around where we are sitting (Batty 2010a p. 525; 2011, p. 166). Nevertheless, it can be argued that what we smell can be spatial in some sense. Several suggestions to this effect are on offer.
Batty (2010a; 2010b; 2011; 2014) holds that what we smell (olfactory properties, according to her) is presented as ‘here’. This is not a location like any other. It is the only location at which olfactory properties are ever presented, for olfactory experience, on Batty’s view, lacks spatial differentiation. Moreover, she emphasises that, if we are to make room for a certain kind of non-veridical olfactory experience, ‘here’ cannot be a location in our environment; it is not to be understood as ‘out there’ (Batty 2010b, pp. 20–21). This latter point contrasts with Richardson’s (2013) view. She observes that, because olfactory experience involves sniffing, it is part of the phenomenology of olfactory experience that something (odours, according to Richardson) seems to be brought into the nostrils from outside the body. Thus, the object of olfactory experience seems spatial in the sense that what we smell is coming from without, although it is not coming from any particular location. Interestingly, although Batty’s and Richardson’s claims contrast, both seem to think that they are pointing out a spatial aspect of olfactory experience when claiming that what we smell is, respectively, ‘here’ or coming from without.
Another view, compatible with the claim that what we smell is neither at a distance nor direction from us, is presented by Young (2016). He emphasises the fact that the molecular structure of chemical compounds determines which olfactory quality subjects experience. It is precisely this structure within an odour plume, he argues, that is the object of olfactory experience. Would an olfactory experience of the molecular structure have a spatial aspect? Young does not specify this. But since the structure of the molecule is spatial, one can at least envisage that experiencing molecular structure is, in part, to experience the spatial relations between molecules. If so, we can envisage spatiality without perspective. For, presumably, the spatial orientation the molecules have relative to each other and to the perceiver would not matter to the experience. Presumably, it would be their internal spatial structure that is experienced, regardless of their orientation relative to other things.
The claim that what we smell is neither at a direction nor distance from us can, however, be disputed. As Young (2016) notes, this claim neglects the possibility of tracking smells over time. Although the boundaries of the cloud of odours are less clear than for visual objects, the extension of the cloud in space and the changes in its intensity seem to be spatial aspects of our olfactory experiences when we move around over time. Perhaps one would object that the more fundamental type of olfactory experience is synchronic and not diachronic. The synchronic variety has certainly received the most attention in the literature. But if one is interested in investigating our ordinary olfactory experiences, it is not clear why diachronic experiences should be less worthy of consideration.
Perhaps one would think that an obvious spatial aspect of olfactory experience is the spatial properties of the source, i.e. the physical object from which the chemical compounds in the air originate. But there is a surprisingly widespread consensus in the literature that the source is not part of what we perceive in olfaction. Lycan’s (1996; 2014) layering view may be an exception. He claims that we smell sources by smelling odours. But, as Lycan himself notes, there is a question as to whether the ‘by’-relation is an inference relation. If it is, his claim is not necessarily substantially different from Batty’s (2014, pp. 241–243) claim that olfactory properties are locked onto source objects at the level of belief, but that sources are not perceived.
Something that makes evaluation of the abovementioned ideas about olfactory spatiality complicated is that there is a variety of facts about olfaction that can be taken to inform an account of olfactory experience. As Wilson and Stevenson (2006) note, chemical structure has been much studied. But even though the nose has about 300 receptors ‘which allow the detection of a nearly endless combination of different odorants’ (ibid., p. 246), how relevant is chemical structure to the question of what we can perceive, when the discriminations we as perceivers report are much less detailed? What is the relevance of facts about the workings and individuation of the olfactory system? Is it a serious flaw if our conclusions about olfactory experience contradict the phenomenology? Different contributors to the debate seem to provide or presuppose different answers to questions like these. This makes comparison complicated. Comparison aside, however, some interesting ideas about olfactory spatiality can, as briefly shown, be appreciated on their own terms.
Batty, C. 2014. ‘Olfactory Objects’. In D. Stokes, M. Matthen and S. Biggs (eds.), Perception and Its Modalities. Oxford: Oxford University Press.
Batty, C. 2011. ‘Smelling Lessons’. Philosophical Studies 153: 161–174.
Batty, C. 2010a. ‘A Representationalist Account of Olfactory Experience’. Canadian Journal of Philosophy 40(4): 511–538.
Batty, C. 2010b. ‘What the Nose Doesn’t Know: Non-veridicality and Olfactory Experience’. Journal of Consciousness Studies 17: 10–27.
Lycan, W. G. 2014. ‘The Intentionality of Smell’. Frontiers in Psychology 5: 68–75.
Lycan, W. G. 1996. Consciousness and Experience. Cambridge, MA: Bradford Books/MIT Press.
Radil, T. and C. J. Wysocki. 1998. ‘Spatiotemporal masking in pure olfaction’. Olfaction and Taste 12(855): 641–644.
Richardson, L. 2013. ‘Sniffing and Smelling’. Philosophical Studies 162: 401–419.
Porter, J., Anand, T., Johnson, B. N., Kahn, R. M., and N. Sobel. 2005. ‘Brain mechanisms for extracting spatial information from smell’. Neuron 47: 581–592.
Young, B. D. 2016. ‘Smelling Matter’. Philosophical Psychology 29(4): 520–534.
Young, B. D., A. Keller and D. Rosenthal. 2014. ‘Quality-space Theory in Olfaction’. Frontiers in Psychology 5: 116–130.
Wilson, D. A. and R. J. Stevenson. 2006. Learning to Smell: Olfactory Perception from Neurobiology to Behaviour. Baltimore, MD: The Johns Hopkins University Press.