The Modularity of the Motor System

Myrto Mylopoulos – Department of Philosophy and Institute of Cognitive Science, Carleton University

The extent to which the mind is modular is a foundational concern in cognitive science. Much of this debate has centered on the question of the degree to which input systems, i.e., sensory systems such as vision, are modular (see, e.g., Fodor 1983; Pylyshyn 1999; MacPherson 2012; Firestone & Scholl 2016; Burnston 2017; Mandelbaum 2017). By contrast, researchers have paid far less attention to the question of the extent to which our main output system, i.e., the motor system, qualifies as such.

This is not to say that the latter question has gone without acknowledgement. Indeed, in his classic essay Modularity of Mind, Fodor (1983)—a pioneer in thinking about this topic—writes: “It would please me if the kinds of arguments that I shall give for the modularity of input systems proved to have application to motor systems as well. But I don’t propose to investigate that possibility here” (Fodor 1983, p.42).

I’d like to take some steps towards doing so in this post.

To start, we need to say a bit more about what modularity amounts to. A central feature of modular systems—and the one on which I will focus here—is their informational encapsulation. Informational encapsulation concerns the range of information that is accessible to a module in computing the function that maps the inputs it receives to the outputs it yields. A system is informationally encapsulated to the degree that it lacks access to information stored outside the system in the course of processing its inputs (cf. Robbins 2009; Fodor 1983).

Importantly, informational encapsulation is a relative notion. A system may be informationally encapsulated with respect to some information, but not with respect to other information. When a system is informationally encapsulated with respect to the states of what Fodor called “the central system”—those states familiar to us as propositional attitude states like beliefs and intentions—this is referred to as cognitive impenetrability or, as I will refer to it here, cognitive impermeability. In characterizing the notion of cognitive permeability more precisely, one must be careful not to presuppose that it is perceptual systems only that are at issue. For a neutral characterization, I prefer the following: A system is cognitively permeable if and only if the function it computes is sensitive to the content of a subject S’s mental states, including S’s intentions, beliefs, and desires. In the famous Müller-Lyer illusion, the visual system lacks access to the subject’s belief that the two lines are identical in length in computing the relative size of the stimuli, so it is cognitively impermeable relative to that belief.

On this characterization of cognitive permeability, the motor system is clearly cognitively permeable in virtue of its computations, and corresponding outputs, being systematically sensitive to the content of an agent’s intentions. The evidence for this is every intentional action you’ve ever performed. Perhaps the uncontroversial nature of this fact has precluded further investigation of cognitive permeability in the motor system. But there are at least two interesting questions to explore here. First, since cognitive permeability, just like informational encapsulation, comes in degrees, we should ask to what extent the motor system is cognitively permeable. Are there interesting limitations that can be drawn out? (Spoiler: yes.) Second, insofar as there are such limitations, we should ask the extent to which they are fixed. Can they be modulated in interesting ways by the agent? (Spoiler: yes.)

Experimental results suggest that there are indeed interesting limitations to the cognitive permeability of the motor system. This is perhaps most clearly shown by appeal to experimental work employing visuomotor rotation tasks (see also Shepherd 2017 for an important discussion of such work with which I am broadly sympathetic). In such tasks, the participant is instructed to reach for a target on a computer screen. They do not see their hand, but they receive visual feedback from a cursor that represents the trajectory of their reaching movement. On some trials, the experimenters introduce a bias to the visual feedback from the cursor by rotating it relative to the actual trajectory of their unseen movement during the reaching task. For example, a bias might be introduced such that the visual feedback from the cursor represents the trajectory of their reach as being rotated 45° clockwise from the actual trajectory of their arm movement. This manipulation allows experimenters to determine how the motor system will compensate for the conflict between the visual feedback that is predicted on the basis of the motor commands it is executing, and the visual feedback the agent actually receives from the cursor. The main finding is that the motor system gradually adapts to the bias in a way that results in the recalibration of the movements it outputs such that they show “drift” in the direction opposite that of the rotation, thus reducing the mismatch between the visual feedback and the predicted feedback.
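The implicit adaptation just described can be captured in a minimal single-state learning model. This is my own illustrative sketch of the standard textbook dynamic, not the model used in the cited studies; the learning rate and trial count are assumed values chosen for clarity.

```python
# Toy model of implicit visuomotor adaptation. The cursor is rotated
# relative to the hand; on each trial the implicit learner shifts its
# aim by a fraction of the observed visual error. Angles in degrees,
# with the target at 0.
rotation = 45.0          # imposed cursor rotation (counterclockwise)
learning_rate = 0.2      # fraction of error corrected per trial (assumed)
aim = 0.0                # hand direction relative to the target

for trial in range(60):
    cursor = aim + rotation          # visual feedback the agent sees
    error = 0.0 - cursor             # discrepancy from the target at 0
    aim += learning_rate * error     # implicit update to reduce error

print(round(aim, 1))                 # ≈ -45.0
```

The aim settles at roughly -45°, i.e., the hand drifts in the direction opposite the rotation until the rotated cursor once again lands on the target, matching the qualitative finding reported above.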

Figure 1. A: A typical set-up for a visuomotor rotation task. B: Typical error feedback when a counterclockwise directional bias is introduced. (Source: Krakauer 2009)

In the paradigm just described, participants do not form an intention to adopt a compensatory strategy; the adaptation the motor system exhibits is purely the result of implicit learning mechanisms that govern its output. But in a variant of this paradigm (Mazzoni & Krakauer 2006), participants are instructed to adopt an explicit “cheating” strategy—that is, to form intentions—to counter the angular bias introduced by the experimenters. This is achieved by placing a neighbouring target (Tn) at a 45° angle from the proper target (Tp) in the direction opposite the bias (e.g., if the bias is 45° counterclockwise from the Tp, the Tn is placed 45° clockwise from the Tp), such that if participants aim for the Tn, the bias will be compensated for, and the cursor will hit the Tp, thus satisfying the primary task goal.

In this set-up, reaching errors relative to the Tp are almost completely eliminated at first. The agent hits the Tp (according to the feedback from the cursor) as a result of forming the intention to aim for the strategically placed Tn. But as participants continue to perform the task on further trials, something interesting happens: their movements once again gradually start to show drift, but this time towards the Tn and away from the Tp. What this result is thought to reflect is yet another implicit process of adaptation by the motor system, which aims to correct for the difference between the aimed-for location (Tn) and the visual feedback (in the direction of the Tp).

Two further details are important for our purposes: First, when participants are instructed to stop using the strategy of aiming for the Tn (in order to hit the Tp) and return their aim to the Tp, “[s]ubstantial and long-lasting” (Mazzoni & Krakauer 2006, p.3643) aftereffects are observed, meaning the motor system persists in aiming to reduce the difference between the visual feedback and the earlier aimed-for location. Second, in a separate study by Taylor & Ivry (2011) using a very similar paradigm with significantly more trials per block (320), participants did eventually correct for the secondary adaptation by the motor system and reverse the direction of their movement, but only gradually, and by means of adopting explicit aiming strategies to counteract the drift.

On the basis of these results, we can draw at least three interesting conclusions about cognitive permeability and the motor system: First, although the motor system is clearly sensitive to the content of the proximal intentions that it takes as input (in this case the intention to aim for the Tn), it is not always sensitive, or is only weakly sensitive, to the distal intentions that those very proximal intentions serve, in this case the intention to hit the Tp. If this is correct, it may be that the motor system lacks sensitivity to the structure of practical reasoning that often guides an agent’s present action in the background. In this case, the motor system seems not to register that the agent intends to hit the Tp by way of aiming and reaching for the Tn.

Second, given that aftereffects persist for some time even once the explicit aiming strategy (and therefore the intention to aim for the Tn) has been abandoned, we may conclude that the motor system is only sensitive to the content of proximal intentions to a limited degree in that it takes time for it to properly update its performance relative to the agent’s current proximal intention. The implicit adaptation, indexed to the earlier intention, cannot be overridden immediately.

Third, this degree of sensitivity is not fixed, but rather can vary over time as the result of an agent’s interventions, as demonstrated in Taylor & Ivry’s study, where the drift was eventually reversed after a sufficiently large number of trials in which the agent continuously adjusted their aiming strategy.

To close, I’d like to outline what I take to be a couple of important upshots of the preceding discussion for neighbouring philosophical debates:

  1. Recent discussions of skilled action have sought to determine “how far down” action control is intelligent (see, e.g., Fridland 2014, 2017; Levy 2017; Shepherd 2017). And, on at least some views, this is a function of the degree to which the motor system is sensitive to the content of an agent’s intentions. Here we see that this sensitivity is sometimes limited, but can also improve over time. In my view, this reveals another important dimension of the motor system’s intelligence that goes beyond mere sensitivity, and that pertains to its ability to adapt to an agent’s present goals through learning processes that exhibit a reasonable degree of both stability and flexibility.
  2. Recently, action theorists have turned their attention to solving the so-called “interface problem”, that is, the problem of how intentions and motor representations successfully coordinate given their (arguably) different representational formats (see, e.g., Butterfill & Sinigaglia 2014; Burnston 2017; Fridland 2017; Mylopoulos & Pacherie 2017, 2018; Shepherd 2017; Ferretti & Caiani 2018). The preceding discussion may suggest a more limited degree of interfacing than one might have thought—obtaining only between an agent’s most proximal intentions and the motor system. It may also suggest that successful interfacing depends on both the learning mechanism(s) of the motor system (for maximal smoothness and stability) as well as a continuous interplay between its outputs and the agent’s own practical reasoning for how best to achieve their goals (for maximal flexibility).


Burnston, D. (2017). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242-258.

Butterfill, S. A. & Sinigaglia, C. (2014). Intention and motor representation in purposive action. Philosophy and Phenomenological Research, 88, 119–145.

Ferretti, G. & Caiani, S. Z. (2018). Solving the interface problem without translation: the same format thesis. Pacific Philosophical Quarterly, doi: 10.1111/papq.12243

Fodor, J. (1983). The modularity of mind: An essay on faculty psychology. Cambridge: The MIT Press.

Fridland, E. (2014). They’ve lost control: Reflections on skill. Synthese, 191(12), 2729–2750.

Fridland, E. (2017). Skill and motor control: intelligence all the way down. Philosophical Studies, 174(6), 1539-1560.

Krakauer, J. W. (2009). Motor learning and consolidation: the case of visuomotor rotation. Advances in Experimental Medicine and Biology, 629, 405-421.

Levy, N. (2017). Embodied savoir-faire: knowledge-how requires motor representations. Synthese, 194(2), 511-530.

MacPherson, F. (2012). Cognitive penetration of colour experience: Rethinking the debate in light of an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24-62.

Mazzoni, P. & Krakauer, J. W. (2006). An implicit plan overrides an explicit strategy during visuomotor adaptation. The Journal of Neuroscience, 26(14), 3642-3645.

Mylopoulos, M. & Pacherie, E. (2017). Intentions and motor representations: The interface challenge. Review of Philosophy and Psychology, 8(2), 317–336.

Mylopoulos, M. & Pacherie, E. (2018). Intentions: The dynamic hierarchical model revisited. WIREs Cognitive Science, doi: 10.1002/wcs.1481

Shepherd, J. (2017). Skilled action and the double life of intention. Philosophy and Phenomenological Research, doi: 10.1111/phpr.12433

Taylor, J. A. & Ivry, R. B. (2011). Flexible cognitive strategies during motor learning. PLoS Computational Biology, 7(3), e1001096.

The Cinderella of the Senses: Smell as a Window into Mind and Brain?

Ann-Sophie Barwich – Visiting Assistant Professor in the Cognitive Science Program at Indiana University Bloomington

Smell is the Cinderella of our Senses. Traditionally dismissed as communicating merely subjective feelings and brutish sensations, the sense of smell never attracted critical attention in philosophy or science. The characteristics of odor perception and its neural basis are key to understanding the mind through the brain, however.

This claim might sound surprising. Human olfaction acquired a rather poor reputation throughout most of Western intellectual history. “Of all the senses it is the one which appears to contribute least to the cognitions of the human mind,” commented the French philosopher of the Enlightenment, Étienne Bonnot de Condillac, in 1754. Immanuel Kant (1798) even called smell “the most ungrateful” and “dispensable” of the senses. Scientists were no more positive in their judgment. Olfaction, Charles Darwin concluded (1874), was “of extremely slight service” to mankind. Further, statements about people who paid attention to smell were frequently mixed with prejudice about sex and race: Women, children, and non-white races – essentially all groups long excluded from the rationality of white men – were found to show increased olfactory sensitivity (Classen et al. 1994). Olfaction, therefore, did not appear to be a topic of reputable academic investment – until recently.

Scientific research on smell was catapulted into mainstream neuroscience almost overnight with the discovery of the olfactory receptor genes by Linda Buck and Richard Axel in 1991. It turned out that the olfactory receptors constitute the largest protein gene family in most mammalian genomes (except for dolphins), exhibiting a plethora of properties significant for structure-function analysis of protein behavior (Firestein 2001; Barwich 2015). Finally, the receptor gene discovery provided targeted access to probe odor signaling in the brain (Mombaerts et al. 1996; Shepherd 2012). Excitement soon kicked in, and hopes rose high to crack the coding principles of the olfactory system in no time. Because the olfactory pathway has a notable characteristic, one that Ramon y Cajal highlighted as early as 1901/02: Olfactory signals require only two synapses to go straight into the core cortex (forming almost immediate connections with the amygdala and hypothalamus)! To put this into perspective, in vision two synapses won’t get you even out of the retina. You can follow the rough trajectory of an olfactory signal in Figure 1 below.

Three decades later and the big revelation still is on hold. A lot of prejudice and negative opinion about the human sense of smell have been debunked (Shepherd 2004; Barwich 2016; McGann 2017). But the olfactory brain remains a mystery to date. It appears to differ markedly in its neural principles of signal integration from vision, audition, and somatosensation (Barwich 2018; Chen et al. 2014). The background to this insight is a remarkable piece of contemporary history of science. (Almost all actors key to the modern molecular development of research on olfaction are still alive and actively conducting research.)

Olfaction – unlike other sensory systems – does not maintain a topographic organization of stimulus representation in its primary cortex (Stettler and Axel 2009; Sosulski et al. 2011). That’s neuralese for: We actually do not know how the brain organizes olfactory information so that it can tell what kind of perceptual object or odor image an incoming signal encodes. You won’t find a map of stimulus representation in the brain, such that chemical groups like ketones would sit next to aldehydes or perceptual categories like rose were right next to lavender. Instead, axons from the mitral cells in the olfactory bulb (the first neural station of olfactory processing at the frontal lobe of the brain) project to all kinds of areas in the piriform cortex (the largest domain of the olfactory cortex, previously assumed to be involved in odor object formation). In place of a map, you find a mosaic (Figure 1).

What does this tell us about smell perception and the brain in general? Theories of perception, in effect, always have been theories of vision. Concepts originally derived from vision were made fit to apply to what’s usually sidelined as “the other senses.” This tendency permeates neuroscience as well as philosophy (Matthen 2005). However, it is a deeply problematic strategy for two reasons. First, other sensory modalities (smell, taste, and touch but also the hidden senses of proprioception and interoception) do not resonate entirely with the structure of the visual system (Barwich 2014; Keller 2017; Smith 2017b). Second, we may have narrowed our investigative lens and overlooked important aspects of the visual system as well, aspects that can be “rediscovered” if we take a closer look at smell and other modalities. Insight into the complexity of cross-modal interactions, especially in food studies, suggests as much already (Smith 2012; Spence and Piqueras-Fiszman 2014). So the real question we should ask is:

How would theories of perception differ if we extended our perspective on the senses; in particular, to include features of olfaction?

Two things stand out already. The first concerns theories of the brain, the other the permeable border between processes of perception and cognition.

First, when it comes to the principles of neural organization, not everything in vision that appears crystal clear really is. The cornerstone of visual topography has been called into question more recently by the prominent neuroscientist Margaret Livingstone (who, not coincidentally, trained with David Hubel: one half of the famous duo of Hubel and Wiesel (2004) whose findings led to the paradigm of neural topography in vision research in the first place). Livingstone et al. (2017) found that the spatially discrete activation patterns in the fusiform face area of macaques were contingent upon experience – both in their development and, interestingly, partly also their maintenance. In other words, learning is more fundamental to the arrangement of neural signals in visual information processing and integration than previously thought. The spatially discrete patterns of the visual system may constitute more of a developmental byproduct than simply a genetically predetermined Bauplan. From this perspective, figuring out the connectivity that underpins non-topographic and associative neural signaling, such as in olfaction, offers a complementary model to determine the general principles of brain organization.

Second, emphasis on experience and associative processing in perceptual object formation (e.g., top-down effects in learning) also mirrors current trends in cognitive neuroscience. Smell has long been neglected by mainstream theories of perception precisely because of the characteristic properties that make it subject to strong contextual and cognitive biases. Consider a wine taster, who experiences wine quality differently by focusing on distinct criteria of observational likeness in comparison with a layperson. She can point to subtle flavor notes that the layperson may have missed but, after paying attention, is also able to perceive (e.g., a light oak note). Such influence of attention and learning on perception, ranging from normal perception to the acquisition of perceptual expertise, is constitutive of odor and its phenomenology (Wilson and Stevenson 2006; Barwich 2017; Smith 2017a). Notably, the underlying biases (influenced by semantic knowledge and familiarity) are increasingly studied as constitutive determinants of brain processes in recent cognitive neuroscience; especially in forward models or models of predictive coding where the brain is said to cope with the plethora of sensory data by anticipating stimulus regularities on the basis of prior experience (e.g., Friston 2010; Graziano 2015). While advocates of these theories have centered their work on vision, olfaction now serves as an excellent model to further the premise of the brain as operating on the basis of forecasting mechanisms (Barwich 2018); blurring the boundary between perceptual and cognitive processes with the implicit hypothesis that perception is ultimately shaped by experience.

These are ongoing developments. Unknown as yet is how the brain makes sense of scents. What is becoming increasingly clear is that theorizing about the senses necessitates a modernized perspective that admits other modalities and their dimensions. We cannot explain the multitude of perceptual phenomena with vision alone. To think otherwise is not only hubris but sheer ignorance. Smell is less evident in its conceptual borders and classification, its mechanisms of perceptual constancy and variation. It thus requires new philosophical thinking, one that reexamines traditional assumptions about stimulus representation and the conceptual separation of perception and judgment. However, a proper understanding of smell – especially in its contextual sensitivity to cognitive influences – cannot succeed without also taking an in-depth look at its neural underpinnings. Differences in coding, concerning both receptor and neural levels of the sensory systems, matter to how incoming information is realized as perceptual impressions in the mind, along with the question of what these perceptions are and communicate in the first place.

Olfaction is just one prominent example of how misleading historic intellectual predilections about human cognition can be. Neuroscience fundamentally opened up possibilities regarding its methods and outlook, in particular over the past two decades. It is about time that we adjust our somewhat older philosophical conjectures of mind and brain accordingly.


Barwich, AS. 2014. “A Sense So Rare: Measuring Olfactory Experiences and Making a Case for a Process Perspective on Sensory Perception.” Biological Theory 9(3): 258-268.

Barwich, AS. 2015. “What is so special about smell? Olfaction as a model system in neurobiology.” Postgraduate Medical Journal 92: 27-33.

Barwich, AS. 2016. “Making Sense of Smell.” The Philosophers’ Magazine 73: 41-47.

Barwich, AS. 2017. “Up the Nose of the Beholder? Aesthetic Perception in Olfaction as a Decision-Making Process.” New Ideas in Psychology 47: 157-165.

Barwich, AS. 2018. “Measuring the World: Towards a Process Model of Perception.” In: Everything Flows: Towards a Processual Philosophy of Biology (D Nicholson and J Dupré, eds). Oxford University Press, pp. 337-356.

Buck, L, and R Axel. 1991. “A novel multigene family may encode odorant receptors: a molecular basis for odor recognition.” Cell 65(1): 175-187.

Cajal, R y. 1988 [1901/02]. “Studies on the Human Cerebral Cortex IV: Structure of the Olfactory Cerebral Cortex of Man and Mammals.” In: Cajal on the Cerebral Cortex. An Annotated Translation of the Complete Writings (J DeFelipe and EG Jones, eds). Oxford University Press.

Chen, CFF, Zou, DJ, Altomare, CG, Xu, L, Greer, CA, and S Firestein. 2014. “Nonsensory target-dependent organization of piriform cortex.” Proceedings of the National Academy of Sciences 111(47): 16931-16936.

Classen, C, Howes, D, and A Synnott. 1994. Aroma: The cultural history of smell. Routledge.

Condillac, EB d. 1930 [1754]. Condillac’s treatise on the sensations (MGS Carr, trans). The Favil Press.

Darwin, C. 1874. The descent of man and selection in relation to sex (Vol. 1). Murray.

Firestein, S. 2001. “How the olfactory system makes sense of scents.” Nature 413(6852): 211.

Friston, K. 2010. “The free-energy principle: a unified brain theory?” Nature Reviews Neuroscience 11(2): 127.

Graziano, MS, and TW Webb. 2015. “The attention schema theory: a mechanistic account of subjective awareness.” Frontiers in Psychology 6: 500.

Hubel, DH, and TN Wiesel. 2004. Brain and visual perception: the story of a 25-year collaboration. Oxford University Press.

Kant, I. 2006 [1798]. Anthropology from a pragmatic point of view (RB Louden, ed). Cambridge University Press.

Keller, A. 2017. Philosophy of Olfactory Perception. Springer.

Livingstone, MS, Vincent, JL, Arcaro, MJ, Srihasam, K, Schade, PF, and T Savage. 2017. “Development of the macaque face-patch system.” Nature Communications 8: 14897.

Matthen, M. 2005. Seeing, doing, and knowing: A philosophical theory of sense perception. Clarendon Press.

McGann, JP. 2017. “Poor human olfaction is a 19th-century myth.” Science 356(6338): eaam7263.

Mombaerts, P, Wang, F, Dulac, C, Chao, SK, Nemes, A, Mendelsohn, M, Edmondson, J, and R Axel. 1996. “Visualizing an olfactory sensory map.” Cell 87(4): 675-686.

Shepherd, GM. 2004. “The human sense of smell: are we better than we think?” PLoS Biology 2(5): e146.

Shepherd, GM. 2012. Neurogastronomy: how the brain creates flavor and why it matters. Columbia University Press.

Smith, BC. 2012. “Perspective: complexities of flavour.” Nature 486(7403): S6.

Smith, BC. 2017a. “Beyond Liking: The True Taste of a Wine?” The World of Wine 58: 138-147.

Smith, BC. 2017b. “Human Olfaction, Crossmodal Perception, and Consciousness.” Chemical Senses 42(9): 793-795.

Spence, C, and B Piqueras-Fiszman. 2014. The perfect meal: the multisensory science of food and dining. John Wiley & Sons.

Sosulski, DL, Bloom, ML, Cutforth, T, Axel, R, and SR Datta. 2011. “Distinct representations of olfactory information in different cortical centres.” Nature 472(7342): 213.

Stettler, DD, and R Axel. 2009. “Representations of odor in the piriform cortex.” Neuron 63(6): 854-864.

Wilson, DA, and RJ Stevenson. 2006. Learning to smell: olfactory perception from neurobiology to behavior. JHU Press.

Representing the Self in Predictive Processing

Elmarie Venter – PhD candidate, Department of Philosophy, Ruhr-Universität Bochum

Who do you think you are? Or, less confrontationally, what ingredients (e.g. memories, beliefs, desires) go into the model of your self? In this post, I explore different conceptions of how the self is represented in the predictive processing (PP) framework. At the core of PP is the notion that the brain is in the business of making predictions about the world, and that the brain is primarily an organ that functions to minimize prediction error (i.e. the difference between predictions about the state of the world and the observed state of the world) (Clark, 2017, p.727). Predictive processing necessitates modeling the causes of our sensory perturbations, and since agents themselves are also such causes, a self-model is required under PP. The internal models of the self will include “…representations of the agent’s own body and its trajectories and interactions with other causes in the world” (Hohwy & Michael, 2017, p.367).
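The core error-minimization loop that PP attributes to the brain can be illustrated with a deliberately simple sketch. This is my own toy example, not a model from the PP literature: an internal estimate of a hidden cause is repeatedly nudged by the discrepancy between predicted and observed sensory input, with the update rate an assumed parameter.

```python
# Toy prediction-error minimization: the "brain" refines an internal
# estimate of a hidden cause by reducing the difference between the
# sensory input it predicts and the input it actually receives.
hidden_cause = 3.0        # true state of the world (unknown to the model)
estimate = 0.0            # the model's current hypothesis
step = 0.1                # update rate (assumed)

for _ in range(200):
    predicted = estimate              # prediction under the current model
    observed = hidden_cause           # incoming sensory signal (noise-free)
    error = observed - predicted      # prediction error
    estimate += step * error          # update the model to reduce error

print(round(estimate, 2))             # ≈ 3.0
```

After enough iterations the estimate converges on the hidden cause, which is the sense in which the model has "explained away" its prediction error; the debate between Conservative and Radical PP discussed below concerns how this loop relates to the body and world, not the loop itself.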

In this post I will discuss accounts of how the self is modelled under two PP camps: Conservative PP and Radical PP. Broadly speaking, Conservative PP holds that the mind is inferentially secluded from the environment – the body also forms part of the external environment. All prediction error minimization occurs behind an ‘evidentiary boundary’ which implies that the brain reconstructs the state of the world (Hohwy, 2016, p.259). In contrast, Radical PP holds that representations of the world are a matter of embodied and embedded cognition (Dolega, 2017, p.6). Perceiving my self, other agents, and the world, is not a process of reconstruction but rather a coupled process between perception and action. How does the view of a self-model align with these versions of predictive processing? I will argue that Radical PP’s account of self-modelling is preferable because it avoids two key concerns that arise from Conservative PP’s modeling of the self.

On the side of Conservative PP, Hohwy & Michael (2017) conceive of the self-model as one that captures “…representations of the agent’s own body…” as well as hidden, endogenous causes, such as “…character traits, biases, reaction patterns, affections, standing beliefs, desires, intentions, base-level internal states, and so on” (Hohwy & Michael, 2017, p.369). On this view, the self is just another set of causes that is modeled in order to minimize prediction error. This view likens the model of the self to models of the environment and other people (and their mental states), and is in line with the Conservative PP account advocated by Hohwy (2016) under which there is an ‘evidentiary boundary’ between mind and world, behind which prediction error minimization takes place. Any parts of our body “…that are not functionally sensory organs are beyond the boundary… [and are] just the kinds of states that should be modeled in internal, hierarchical models of a (prediction error minimization) system.” (Hohwy, 2016, p.269).

As I see it, Conservative PP’s self-modeling (as described by Hohwy & Michael (2017)) is problematic in two ways:

1) Our access to information about our own body is neglected by Conservative PP. Agents typically have access to certain information about their body that is immune to error through misidentification; this immunity does not extend to information about the world and other agents.

2) Conservative PP ignores the marked difference in how we represent ourselves and other agents. Other agents can only enter our intentional states as part of the content, whereas we ourselves can also enter our intentional states in another way.

In dealing with these concerns I propose that the self is represented along two dimensions: as-subject and as-object (a distinction that can be traced back to Wittgenstein’s Blue Book, and which can be fleshed out by appeal to debates on reference and intentionality). The fundamental idea here is that there is a certain kind of error – in identifying the person that something is true of (e.g. a bodily position or a mental state) – that can occur when identifying the self as-object which cannot occur in identifying the self as-subject (Longuenesse, 2017, p.20; Evans 1982). Imagine that I perceive a coffee mug in front of me, and once I have seen it I reach out my hand to grasp the mug in order to drink from it. Now envision a similar situation, in which I am acting like this while at the same time looking at myself in a mirror. In the latter situation I have two sources of information for obtaining knowledge about myself grasping the cup of coffee. One source of information is proprioceptive and kinesthetic, and therefore provides me with information about myself from the inside. The other source of information is visual, and provides me with information from the outside. The latter source could provide me with information about the actions of other agents as well, whereas the former can only be a source of information about my own self.

Since I am represented in the content of my visual experience in the mirror scenario, I can misrepresent myself as the intentional object of that very visual experience. I could be mistaken with respect to whom I am seeing in the mirror grasping the coffee mug; I may mistakenly believe that I am in fact observing someone else grasping the cup. No such mistake is possible in the contrast case, in which I gain information about grasping the mug from a proprioceptive and kinesthetic source. A more radical example of this distinction between self as-object and as-subject comes from individuals with somatoparaphrenia. Such individuals do not identify some parts of their body as their own, e.g. they may believe that their arm belongs to someone else, but they are not mistaken about who is identifying their arm as belonging to someone else (Kang, 2016; Vallar & Ronchi, 2009). Recanati (2007, pp.147-148) spells out this difference by distinguishing between the content and mode of an intentional state: “The content is a relativized proposition, true at a person, and the internal mode determines the person relative to which that relativized content is evaluated: myself”. With this distinction in mind, the problems with Conservative PP become clear: the agent and their body are not represented in the same way as any other distal state in the world. Instead of the agent and their body only forming part of the content of an intentional state (as Hohwy & Michael’s account would imply), they enter the state through the mode of perception as well.

Clark (2017, p.729) provides an analogy that illustrates the first problem with self-modeling under Conservative PP: “The predicting brain seems to be in somewhat the same predicament as the imprisoned agents in Plato’s “allegory of the cave”.” That is, under Conservative PP, distal states can only be inferred by the secluded brain, just as the prisoners in the cave can only infer what the shadows on the walls are shadows of. The consequence of this is that we have no direct (and, therefore, error-immune) access to our own bodies. However, as has been illustrated above, the self enters intentional states through mode (perceiving, imagining, remembering, etc.) as well as content, and this provides us with certain information that is immune from error. In contrast, Radical PP does not conceive of the body as a distal object. Instead, the agent’s body plays an active role in determining the sensory information that we have access to; it plays a fundamental role in how we sample, and act in, the world. This active role is such that certain information is available to us error-free: even if I am mistaken about another agent grasping the cup, I cannot be mistaken that it is me that is seeing someone grasp the cup. In this sense, Radical PP provides us with a preferable story about how whole embodied agents are models of the environment and minimize prediction error through a variety of adaptive strategies (Clark, 2017, p.742).

The two dimensions of self can also shed light on the second concern with Conservative PP because this distinction illustrates how we perceive and interact with other agents. As discussed above, the self as-object enters intentional states as part of the content, and the self as-subject enters such states through mode. The world, including other agents and their mental states, only ever forms part of the content of our intentional states. Referring back to the example spelled out above: another agent can only ever play the same role in perception as I do in the mirror case, i.e. as content of the intentional structure. I do not have access to other agents “from the inside,” however. For instance, I do not have the same access to the reasons behind others’ actions (are they grasping the cup to drink from it, to clear it from the table, to see if there is still coffee in it?), nor do I have access to whether the other agent will successfully grasp the mug (is their grip wide enough, do they have enough strength in their wrist?). There is thus a dimension of the self to which one has privileged access. We only have access to other agents through perceptual inference (i.e. by observing their behavior and inferring its causes), whereas we have both perceptual and active inferential access to our own behaviours. Though Conservative PP proponents maintain that the secluded brain only has perceptual inferential access to our own body (Hohwy, 2016, p.276), there is something markedly different about what enables us to model the causes of our own behavior and mental states compared to what enables us to model those of other agents. I have proprioceptive, kinesthetic, and interoceptive access to information about myself; I only have exteroceptive information about other agents.

For Conservative PP, the body (and by extension, the self) is just another object in the world that receives commands to act in service of prediction error minimization. I have highlighted two concerns about this view: the body is treated as a distal object, and the body (and self) is placed on the same side of the evidentiary boundary as other agents. This means that the dimension of self which is immune to error through misidentification is not accommodated, and the marked difference in our access to information about our own states and those of other agents is ignored. Radical PP, however, avoids both concerns by taking into account the two representational dimensions of the self and employing an embodied approach to cognition. The Radical PP account therefore provides a more refined version of self-modeling. My beliefs, desires, and bodily shape can all be inferred in the model of self-as-object, but self-as-subject captures the part of the self that is not inferred: it contains information about me and my body from the inside, which is an essential part of who we think we are.


Clark, A., 2017. Busting Out: Predictive Brains, Embodied Minds, and The Puzzle of The Evidentiary Veil. Noûs, 51(4): 727-753.

Dolega, K., 2017. Moderate Predictive Processing. In T. Metzinger & W. Wiese (Eds.) Philosophy and Predictive Processing. Frankfurt Am Main: MIND Group.

Evans, G., 1982. The Varieties of Reference. Oxford: Clarendon Press.

Friston, K. J. and Stephan, K. E., 2007. Free-energy and the Brain. Synthese, 159(3): 417-458.

Hohwy, J., 2016. The Self-Evidencing Brain. Noûs, 50(2): 259-285.

Hohwy, J. and Michael, J., 2017. Why Should Any Body Have A Self? In F. de Vignemont & A. Alsmith (Eds.) The Subject’s Matter: Self-Consciousness And The Body. Cambridge, Massachusetts: MIT Press.

Kang, S. P., 2016. Somatoparaphrenia, the Body Swap Illusion, and Immunity to Error through Misidentification. The Journal of Philosophy, 113(9): 463-471.

Longuenesse, B., 2017. I, Me, Mine: Back To Kant, And Back Again. New York: Oxford University Press.

Michael, J. and De Bruin, L., 2015. How Direct is Social Perception? Consciousness and Cognition, 36: 373-375.

Recanati, F., 2007. Perspectival Thought: A Plea For (Moderate) Relativism. Clarendon Press.

Thompson, E. and Varela, F. J., 2001. Radical Embodiment: Neural Dynamics and Consciousness. Trends in Cognitive Sciences, 5(10): 418-425.

Vallar, G. and Ronchi, R., 2009. Somatoparaphrenia: A Body Delusion. A Review of the Neuropsychological Literature. Experimental Brain Research, 192(3): 533-551.

Wittgenstein, L., 1960. The Blue Book. Oxford: Blackwell.


The frustrating family of pain

Sabrina Coninx – PhD candidate, Department of Philosophy, Ruhr-Universität Bochum

What is pain? At first glance this question seems straightforward – almost everyone knows what it feels like to be in pain. We have all felt that shooting sensation when hitting the funny bone, or the dull throb of a headache after a stressful day. There is also much common ground within the scientific community with respect to this question. Typically, pain is taken to be best defined as a certain kind of mental phenomenon experienced by subjects as pain. This corresponds, for instance, to the (still widely accepted) definition of pain given by the International Association for the Study of Pain (1986). Most cognitive scientists are not merely interested in knowing that various phenomenal experiences qualify as pain from a first-person perspective, however. Instead, pain researchers primarily focus on searching for necessary and sufficient conditions for pain, such that a theory can be developed which allows for informative discriminations and ideally far-reaching generalizations. Pain has proven to be a surprisingly frustrating object of research in this regard. In the following, I will outline one of the main reasons for this frustration, namely the lack of a sufficient and necessary neural correlate for pain. Subsequently, I will briefly review three solutions to this challenge, arguing that the third is the most promising option.

Neuroscientifically speaking, pain is typically understood as an integrated phenomenon which emerges with the interaction of simultaneously active neural structures that are widely distributed across cortical and subcortical areas (e.g. Apkarian et al., 2005; Peyron et al., 1999). Interestingly, and perhaps surprisingly, the activation of these neural structures is neither sufficient nor necessary for the experience of pain (Wartolowska, 2011). Those neural structures that are highly correlated with the experience of pain are not pain-specific (e.g. Apkarian, Bushnell, & Schweinhardt, 2013), and even the activation of the entire neural network is not sufficient for pain. For instance, itch and pain are processed in the same anatomically defined network (Mochizuki & Kakigi, 2015). There also does not seem to be any neural structure whose activation is necessary for pain (Tracey, 2011). Even patients with substantial lesions in those neural structures that are often regarded as most central for pain processing are still able to experience pain (e.g. Starr et al., 2009).

Figure 1 Human brain processing pain, retrieved from Apkarian et al. (2005). Original picture caption: Cortical and subcortical regions involved in pain perception, their inter-connectivity and ascending pathways. Location of brain regions involved in pain perception are color-coded in a schematic drawing and in an example MRI. (a) Schematic shows the regions, their inter-connectivity and afferent pathways. The schematic is modified from Price (2000) to include additional brain areas and connections. (b) The areas corresponding to those shown in the schematic are shown in an anatomical MRI, on a coronal slice and three sagittal slices as indicated at the coronal slice. The six areas used in meta-analysis are primary and secondary somatosensory cortices (SI, SII, red and orange), anterior cingulate (ACC, green), insula (blue), thalamus (yellow), and prefrontal cortex (PC, purple). Other regions indicated include: primary and supplementary motor cortices (M1 and SMA), posterior parietal cortex (PPC), posterior cingulate (PCC), basal ganglia (BG, pink), hypothalamus (HT), amygdala (AMYG), parabrachial nuclei (PB), and periaqueductal grey (PAG).

Given the difficulties of characterizing pain by appeal to unique neural structures or a specialized network, some researchers have attempted to characterize pain by appeal to neurosignatures. ‘Neurosignature’ refers to the spatio-temporal activity pattern generated by a network of interacting neural structures (Melzack, 2001). Thus, neurosignatures concern not the mere involvement of an anatomically defined neural network, but rather how the involved structures are activated and how their activity is coordinated (Reddan & Wager, 2017). Most interestingly, it has been shown that the neurosignature of pain differs from the neurosignature of other somatosensory stimulations, such as itch and warmth (Forster & Handwerker, 2014; Wager et al., 2013).
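The contrast between mere network involvement and a coordinated activity pattern can be illustrated with a toy sketch (all region counts, labels, and numbers here are invented; real neurosignature work relies on multivariate analysis of fMRI data, not anything this simple):

```python
import math

# Toy "neurosignatures": activity levels across the same five regions.
# The anatomical network (which regions are active) overlaps completely;
# only the *pattern* of activity distinguishes the conditions.
SIGNATURES = {
    "pain":   [0.9, 0.2, 0.7, 0.4, 0.8],
    "itch":   [0.3, 0.8, 0.6, 0.9, 0.2],
    "warmth": [0.5, 0.5, 0.1, 0.6, 0.4],
}

def distance(a, b):
    """Euclidean distance between two activity patterns."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(pattern):
    """Assign a new pattern to the nearest reference signature."""
    return min(SIGNATURES, key=lambda label: distance(pattern, SIGNATURES[label]))

# A noisy observation: same regions involved, but pain-like coordination.
observation = [0.85, 0.25, 0.65, 0.45, 0.75]
print(classify(observation))  # pain
```

The point of the sketch is that classification succeeds even though every condition recruits the very same anatomical network; what does the work is how activity is distributed across it.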

Unfortunately, different kinds of pain substantially differ with respect to their underlying neurosignatures. For instance, neurosignatures found in patients with chronic pain substantially differ from those of healthy subjects experiencing acute pain (Apkarian, Baliki, & Geha, 2009), because the central nervous system of subjects who live in persisting pain is continuously reorganized as the brain’s morphology, plasticity and chemistry change over time (Kuner & Flor, 2016; Schmidt-Wilcke, 2015). At most, therefore, we can state that a particular coordination of neural activity is sufficient to distinguish a particular kind of pain from certain non-pain phenomena. However, there seems to be no single neurosignature that is necessary for pain to emerge.

We have arrived at the dilemma that makes pain such a frustrating object of research. On one hand, researchers mostly agree that all and only pains are best defined by means of them being subjectively experienced as pains. On the other hand, cognitive scientists are unable to identify a single set of neural processes that capture the circumstances under which all and only pains are experienced as such. Thus, the scientific community has been unable to provide an informative and generalizable account of pain. Two solutions to this dilemma have been offered in the literature.

The first solution involves relinquishing the notion of pain as a certain kind of phenomenal experience, which most cognitive scientists take as a starting point. Instead, neuroscientific data alone are supposed to be the primary criterion for the identification of pain (e.g. Hardcastle, 2015). This solution therefore eliminates the first part of the dilemma. There are two main problems faced by this solution. Firstly, neural data do not reveal the function of neural structures, networks or signatures by themselves. The function of these neural characteristics is only revealed by their being correlated with some sort of additional data (which, in the case of pain, is typically the subject’s qualification of their own experience as pain). Thus, removing the subjective aspect from pain is analogous to biting the hand that feeds you. Secondly, serious ethical problems arise when subjective experience is no longer treated as the decisive criterion for the identification of pain. Because neural data may differ from the subjective qualification, this approach may lead to a rejection of medical support for patients who undergo a phenomenal experience of pain. This is a consequence that the majority of contemporary researchers are – for good reasons – unwilling to accept (Davis et al., 2018).

As a second solution, one might relinquish the idea that it is possible to develop a single theory of pain. Instead, researchers should focus on the development of separate theories for separate kinds of pain (see, for instance, Corns, 2016, 2017). An analogy might illustrate this approach. The gem class ‘jade’ is unified due to the apparent properties of the respective stones, such as color and texture. However, in scientific terms the class of jade is composed of jadeite and nephrite, which have different chemical compositions. Thus, it is possible to develop a theory that enables a distinct characterization with far-reaching generalizations for either jadeite or nephrite, but not for jade itself (which lacks a sufficient and necessary chemical composition). Similarly, this solution to the pain dilemma holds that all and only pains are unified due to their phenomenal experience as pain, but they cannot be captured in terms of a single scientific theory. Instead, we need a multiplicity of theories for pain which refer to those subclasses that reveal a necessary and sufficient neural profile.

This solution avoids the methodological and ethical problems faced by the first solution because it is compatible with pains being defined as a certain subjective mental phenomenon. However, because this solution denies that it is possible to develop a single theory of pain, it cannot completely account for the phenomenon that the scientific community is interested in studying. If we did develop multiple theories of pain (one for acute pain and one for chronic pain, say), it is far from clear that these theories could explain why all and only pains are subjectively experienced as pain. At best, this might explain why certain cases are acute or chronic pains, but not why they are both pains. What is missing is a theoretical link that connects the different kinds of pain that, according to this solution, emerge only as independent neural phenomena in separated theories. In terms of the previous analogy, we need something which plays the role of the resemblances in chemical composition between jadeite and nephrite that explains why both of them appear as ‘jade’.

I would like to offer a third solution to the dilemma which avoids the concerns faced by the first solution, and which provides the missing theoretical link required by the more promising second solution. This is to hold a family resemblance theory of pain. The idea of family resemblance comes from Ludwig Wittgenstein (1953) (although he develops this idea with respect to the meaning of concepts rather than the properties of natural phenomena). A family resemblance theory of pain takes the phenomenal character of pain to unify all and only pains; one’s own subjective experience of pain as such is the criterion of identification that picks out members of the ‘family’ of pain. Moreover, the family resemblance theory of pain denies the presence of an underlying sufficient and necessary neural condition for pain; there is no neural process that distinctively and essentially characterizes pain. Thus, the subjective qualification identifies all and only cases of pain, although they do not share any further necessary or sufficient neural feature. Nonetheless, a family resemblance theory further claims that it is still possible to develop a scientifically useful neurally-based theory of pain that accounts for the phenomenon that the scientific community is interested in.

For this third solution, all and only those phenomena that are experienced as pain are connected through a structure of systematic resemblances that hold between their divergent neural profiles. For instance, consider, again, acute and chronic pain. Both are experienced as pain, and they are substantially different from each other from a neural perspective when directly compared. However, the transformation from acute to chronic pain is a gradual process, whereby the respective duration of pain correlates with the extent of differences in their neural profile (Apkarian, Baliki, & Geha, 2009). Thus, the neural process of a pain’s first occurrence is relatively similar to its second occurrence, which itself only slightly differs from its third occurrence, and so forth, until it has transformed into some completely different neural phenomenon. This connection of resemblances over time enables us, however, to explain why subjects experience all of these kinds of pain as pain: acute and chronic pain are bound together under the family resemblance theory through the resemblance relations that hold between the variety of pains that connect them.
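This chain structure can be made concrete with a toy sketch (the stage "profiles" are invented sets of active structures, chosen only to display the logic): each stage overlaps heavily with its neighbour, yet the endpoints share almost nothing, so no common neural core underwrites the family.

```python
# Toy neural "profiles" of a pain as it chronifies: each stage differs
# only slightly from its neighbour, but the endpoints barely overlap.
stages = [
    {"S1", "S2", "insula", "ACC"},          # first (acute) occurrence
    {"S2", "insula", "ACC", "PFC"},
    {"insula", "ACC", "PFC", "amygdala"},
    {"ACC", "PFC", "amygdala", "BG"},       # chronic profile
]

def overlap(a, b):
    """Jaccard similarity of two sets of active structures."""
    return len(a & b) / len(a | b)

# Adjacent stages resemble each other...
adjacent = [overlap(stages[i], stages[i + 1]) for i in range(len(stages) - 1)]
# ...but the acute and chronic endpoints share only one structure.
endpoints = overlap(stages[0], stages[-1])

print(adjacent)   # [0.6, 0.6, 0.6]
print(endpoints)  # about 0.14: no shared "core" beyond the chain itself
```

On the family resemblance view, it is this chain of pairwise resemblances, not any property common to all stages, that binds acute and chronic pain into one family.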

Moreover, the family resemblance theory motivates the investigation of pain’s resemblance relations which might prove theoretically as well as practically useful. In further developing research projects of this kind, it appears plausible that, for instance, pains that are more similar to each other are more responsive to the same kind of treatment, even though they do not share a necessary and sufficient neural core property. Understanding the gradual transition within the resemblance relations that lead from acute to chronic pain might also offer new possibilities of intervention. Thus, instead of developing a separate theory for different kinds of pain, this third approach motivates the investigation of the diversity of neural profiles that occur within the family of pain and of the exact structure of their resemblance relations, and indeed first steps in this direction are already being taken (e.g. Roy & Wager, 2017).

In sum, when it comes to mental phenomena, such as pain, the underlying neural substrate reaches a complexity and diversity which prevents the identification of necessary and sufficient neural conditions. The family of pain therefore constitutes a frustrating research object. However, we do not need to throw out the baby with the bathwater and relinquish the definition of pain as a certain kind of mental phenomenon, or the idea of a scientifically useful theory of pain. Of course, a family resemblance theory will be limited with respect to its discriminative and predictive value, since it acknowledges that there is no necessary or sufficient neural substrate for pain. However, it is the most reductive theory of pain that can be developed in accordance with recent empirical data, and which can account for the fact that all and only pains are experienced as pain.



Apkarian, A. V, Bushnell, M. C., Treede, R.-D., & Zubieta, J.-K. (2005). Human brain mechanisms of pain perception and regulation in health and disease. European Journal of Pain, 9(4), 463–484.

Apkarian, A. V., Baliki, M. N., & Geha, P. Y. (2009). Towards a theory of chronic pain. Progress in Neurobiology, 87(2), 81-97.

Apkarian, A. V., Bushnell, M. C., & Schweinhardt, P. (2013). Representation of pain in the brain. In S. B. McMahon, M. Koltzenburg, I. Tracey, & D. C. Turk (Eds.), Wall and Melzack’s Textbook of Pain (6th ed., pp. 111–128). Philadelphia: Elsevier Ltd.

Corns, J. (2016). Pain eliminativism: scientific and traditional. Synthese, 193(9), 2949–2971.

Corns, J. (2017). Introduction: pain research: where we are and why it matters. In J. Corns (Ed.), The Routledge Handbook of Philosophy of Pain (pp. 1–13). London; New York: Routledge.

Davis, K. D., Flor, H., Greely, H. T., Iannetti, G. D., Mackey, S., Ploner, M., Pustilnik, A., Tracey, I., Treede, R.-D., & Wager, T. D. (2018). Brain imaging tests for chronic pain: medical, legal and ethical issues and recommendations. Nature Reviews Neurology, in press.

Forster, C., & Handwerker, H. O. (2014). Central nervous processing of itch and pain. In E. E. Carstens & T. Akiyama (Eds.), Itch: Mechanisms and Treatment (pp. 409–420). Boca Raton (FL): CRC Press/Taylor & Francis.

Hardcastle, V. G. (2015). Perception of pain. In M. Matthen (Ed.), The Oxford Handbook of Philosophy of Perception (pp. 530–542). Oxford: Oxford University Press.

IASP Subcommittee on Classification. (1986). Pain terms: a current list with definitions and notes on usage. Pain, 24(suppl. 1), 215–221.

Kuner, R., & Flor, H. (2016). Structural plasticity and reorganization in chronic pain. Nature Reviews Neuroscience, 18(1), 20–30.

Melzack, R. (2001). Pain and the neuromatrix in the brain. Journal of Dental Education, 65(12), 1378–1382.

Mochizuki, H., & Kakigi, R. (2015). Central mechanisms of itch. Clinical Neurophysiology, 126(9), 1650-1660.

Peyron, R., García-Larrea, L., Grégoire, M. C., Costes, N., Convers, P., Lavenne, F., Maugière, F., Michel, D., & Laurent, B. (1999). Haemodynamic brain responses to acute pain in humans. Sensory and attentional networks. Brain, 122(9), 1765-1779.

Reddan, M. C., & Wager, T. D. (2017). Modeling pain using fMRI: from regions to biomarkers. Neuroscience Bulletin, 34(1), 208–215.

Roy, M., & Wager, T. D. (2017). Neuromatrix theory of pain. In J. Corns (Ed.), Routledge Handbook of Philosophy of Pain (pp. 87–97). London; New York: Routledge.

Schmidt-Wilcke, T. (2015). Neuroimaging of chronic pain. Best Practice and Research: Clinical Rheumatology, 29(1), 29–41.

Starr, C. J., Sawaki, L., Wittenberg, G. F., Burdette, J. H., Oshiro, Y., Quevedo, A. S., & Coghill, R. C. (2009). Roles of the insular cortex in the modulation of pain: insights from brain lesions. The Journal of Neuroscience : The Official Journal of the Society for Neuroscience, 29(9), 2684–2694.

Tracey, I. (2011). Can neuroimaging studies identify pain endophenotypes in humans? Nature Reviews. Neurology, 7(3), 173–181.

Wager, T. D., Atlas, L. Y., Lindquist, M. A., Roy, M., Woo, C.-W., & Kross, E. (2013). An fMRI-based neurologic signature of physical pain. The New England Journal of Medicine, 368(15), 1388–1397.

Wartolowska, K. (2011). How neuroimaging can help us to visualize and quantify pain? European Journal of Pain Supplements, 5(2), 323–327.

Wittgenstein, L. (1953). Philosophical investigations. G. E. M. Anscombe & R. Rhees (Eds.). Oxford: Blackwell Publishing.

Can a visual experience be biased?

by Jessie Munton – Junior Research Fellow at St John’s College, Cambridge

Beliefs and judgements can be biased: my expectations of someone with a London accent might be biased by my previous exposure to Londoners or stereotypes about them; my confidence that my friend will get the job she is interviewing for may be biased by my loyalty; and my suspicion that it will rain tomorrow may be biased by my exposure to weather in Cambridge over the past few days. What about visual experiences? Can visual experiences be biased?

That’s the question I explore in this blog post. In particular, I’ll ask whether a visual experience could be biased, in the sense of exemplifying forms of racial prejudice. I’ll suggest that the answer to this question is a tentative “yes”, and that that presents some novel challenges to how we think of both bias and visual perception.

According to a very simplistic way of thinking about visual perception, it presents the world to us just as it is: it puts us directly in touch with our environment, in a manner that allows it to play a unique, possibly foundational epistemic role. Perception in general, and visual experience with it, is sometimes treated as a kind of given: a source of evidence that is immune to the sorts of rational flaws that beset our cognitive responses to evidence. This approach encourages us to think of visual experience as a neutral corrective to the kinds of flaws that can arise in belief, such as bias or prejudice: there is no room in the processes that generate visual experience for the kinds of influence that cause belief to be biased or prejudiced.

But there is a tension between that view and certain facts about the subpersonal processes that support visual perception in creatures like ourselves. In particular, our visual system faces an underdetermination challenge: the light signals received by the retina fail, on their own, to determine a unique external stimulus (Scholl 2005). To resolve the resulting ambiguity, the visual system must rely on prior information about the environment, and likely stimuli within it. But those priors are not fixed and immutable: the visual system updates them in light of previous experience (Chalk et al 2010, Chun & Turk-Browne 2008). In this way, the visual system learns from the idiosyncratic course that the individual takes through the world.
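The way a prior can resolve underdetermined input can be pictured with a minimal Bayesian sketch (the "convex vs. concave" shape example and all numbers here are illustrative assumptions, not a model of any actual visual computation):

```python
# A single ambiguous retinal signal is equally likely under two world-states
# (say, a convex bump lit from above or a concave dent lit from below).
# The likelihoods alone cannot decide; the learned prior settles the percept.
def posterior_convex(prior_convex, like_convex=0.5, like_concave=0.5):
    """P(convex | signal) via Bayes' rule for an equivocal signal."""
    num = prior_convex * like_convex
    return num / (num + (1 - prior_convex) * like_concave)

print(posterior_convex(0.5))  # 0.5: with a flat prior, the signal cannot decide
print(posterior_convex(0.9))  # ~0.9: a 'light comes from above' prior breaks the tie
```

Because the likelihoods are identical, the posterior simply inherits the prior: two perceivers receiving the same signal, but with different learning histories, settle on different percepts.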

Equally, the visual system is overwhelmed with possible input: the information available from the environment at any one moment far surpasses what the brain can process (Summerfield & Egner 2009). It must selectively attend to certain objects or areas within the visual field, in order to prioritise the highest value information. Preexisting expectations and priorities determine the salience of information within a given scene. The nature and content of the visual experience you are having at any moment in part depends on the relative value you place on the information in your environment.

We perceive the world, then, in light of our prior expectations, and past exposure to it. Those processes of learning and adaptation, of developing skills that fit a particular environmental context, leave visual perception vulnerable to a kind of visual counterpart to bias: we do not come to the world each time with fresh eyes. If we did, we would see less accurately and efficiently than we do.

Cognitive biases often emerge as a response to particular environmental pressures: they persist because they lend some advantage in certain circumstances, but come at the expense of sensitivity to certain other information (Kahneman & Tversky 1973). Similarly, the capacity of the visual system to develop an expertise within a particular context can restrict its sensitivity to certain sorts of information. We can see this kind of structure in the specialist abilities we develop to see faces.

You might naturally think that we perceive high-level features of faces, such as the emotion they display or the racial category they belong to, not directly, but only in virtue of, or perhaps via some kind of subpersonal inference from, their lower-level features: the arrangement of facial features, for instance, or the color and shading that let us pick out those features. In fact, there’s good evidence that we perceive the social category of a face, or the emotion it displays, directly. For instance, we demonstrate “visual adaptation” to facial emotion: after seeing a series of angry faces, a neutral face appears happy. And those adaptation effects are specific to the gender and race of the face, suggesting that these categories of faces may be coded for by different neural populations (Jaquet, Rhodes, & Hayward 2007, 2008; Jaquet & Rhodes 2005; Little, DeBruine, & Jones 2005).

Moreover, our skills at face perception seem to be systematically arranged along racial lines: most people are better at recognizing own-race and dominant-race faces (Meissner & Brigham 2001), the result of a process of specialisation that emerges over the first 9 months of life as infants gradually lose the capacity to recognize faces of different or non-dominant races (Kelly et al. 2007). A White adult in a majority White society will generally be better at recognizing other White faces than Black or Asian faces, for instance, whereas a Black person living in a majority Black society will conversely be less good at recognizing White than Black faces. This extends to the identification of emotion from faces, as well as their recognition: subjects are more accurate at identifying the emotion displayed on dominant or same-race faces than other-race faces (Elfenbein & Ambady 2002).

One way of understanding this profile of skills is to think of faces as arranged within a multidimensional “face space” depending on their similarity to one another. We hone our perceptual capacities within that area of face space to which we have most exposure. That area of face space becomes, in effect, stretched, allowing for finer-grained distinctions between faces (Valentine 1991; Valentine, Lewis and Hills 2016). The greater “distance” between faces in the area of face space in which we are most specialized renders those faces more memorable and easier to distinguish from one another. Another way of thinking of this is in terms of “norm-based coding” (Rhodes and Leopold 2011): faces are encoded relative to the average face encountered. Faces further from the norm suffer in terms of our visual sensitivity to the information they carry.

On the one hand, it isn’t hard to see how this kind of facial expertise could help us extract maximal information from the faces we most frequently encounter. But the impact of this “same-race face effect” more generally is potentially highly problematic: a White person in a majority White society will be less likely to accurately recognise a Black individual, and less able to accurately perceive their emotions from their face. That diminution of sensitivity to faces of different races paves the way for a range of downstream impacts. Since the visual system fails to advertise this differential sensitivity, the individual is liable to reason as though they have read their emotions with equal perspicuity, and to draw conclusions on that basis (that the individual feels less, perhaps, when the emotion in question is simply visually obscure to them). Relatedly, the lack of information extracted perceptually from the face makes it more likely that the individual will fill that shortfall of information by drawing on stereotypes about the relevant group: that Black people are aggressive, for instance (Shapiro et al. 2009; Brooks and Freeman 2017). And restrictions on the ability to accurately recall certain faces will bring with them social costs for those individuals.

Compare this visual bias to someone writing a report about two individuals, one White and one Black. The report about the White person is detailed and accurate, whilst the report on the Black person is much sparser, lacking information relevant to downstream tasks. In such a case, we would reasonably regard the report writer as biased, particularly if their report writing reflected this kind of discrepancy between White and Black targets more generally. If the visual system displays a structurally similar bias in the information it provides us with, should we regard it, too, as biased?

To answer that question, we need to have an account of what it is for anything to be biased, be it a visual experience, a belief, or a disposition to behave or reason in some way or other. We use ‘bias’ in many different ways. In particular, we need to distinguish here what I call formal bias from prejudicial bias. In certain contexts, a bias may be relatively neutral. A ship might be deliberately given a bias to list towards the port side, for instance, by uneven distribution of ballast. Similarly, any system that resolves ambiguity in incoming signal on the basis of information it has encountered in the past is biased by that prior information. But that’s a bias that, for the most part, enhances rather than detracts from the accuracy of the resulting judgements or representations. We could call biases of this kind formal biases.

Bias also has another, more colloquial usage, according to which it picks out something distinctively negative, because it indicates an unfair or disproportionate judgement, a judgement subject to an influence that is distinctively illegitimate in some way. Bias in this sense often involves undue influence by demographic categories, for instance. We might describe an admissions process as biased in this way if it disproportionately excludes working-class candidates, or women, or people with red hair. We can call bias of this kind prejudicial bias.

The visual system is clearly capable of exhibiting the first kind of bias. As a system that systematically learns from past experiences in order to effectively prioritise and process new information, it is a formally biased system. Similarly, the same-race face effect in face perception involves the systematic neglect of certain information as the result of task-specific expertise. That renders it an instance of formal bias.

To decide whether this also constitutes an instance of prejudicial bias, we need to ask: is that neglect of information illegitimate? And if so, on what grounds? Two difficulties present themselves at this juncture. The first is that we are, for the most part, not used to assessing the processes involved in visual perception as legitimate or illegitimate (though that assumption has come under increasing pressure recently, in particular in Siegel (2017)). We need to develop a new set of tools for this kind of critique. The second difficulty is the way in which formal bias, including the development of perceptual expertise of the kind demonstrated in the same-race face effect, is a virtue of visual perception. It makes visual perception not just efficient, but possible. Acknowledging that can seem to restrict our ability to condemn the bias in question as not just formal, but prejudicial.

This throws us up against the question: what is the relationship between formal and prejudicial bias? Formal bias is often a virtue: it allows for the more efficient extraction of information, by drawing on relevant past information. Prejudicial bias, on the other hand, is a vice: it limits the subject’s sensitivity to relevant information in a way that seems intuitively problematic. Under what circumstances does the virtue of formal bias become the vice of prejudicial bias?

In part, this seems to depend on the context in which the process in question is deployed, and the task at hand. The virtues of formal biases rely on stability in both the individual’s environment and goals: that’s when reliance on past information and expertise developed via consistent exposure to certain stimuli is helpful. The same-race face effect develops as the visual system learns to extract information from those faces it most frequently encounters. The resulting expertise cannot adapt at the same pace as our changing, complex social goals across a range of contexts. As a result, this kind of formal perceptual expertise results in a loss of important information in certain contexts: an instance of prejudicial bias. If that’s right, then the distinction between formal and prejudicial bias isn’t one that can be identified just by looking at a particular cognitive process in isolation, but only by looking at that process across a dynamic set of contexts and tasks.



Brooks, J. A., & Freeman, J. B. (2017). Neuroimaging of person perception: A social-visual interface. Neuroscience Letters.

Chalk, M., Seitz, A. R., & Seriès, P. (2010). Rapidly learned stimulus expectations alter perception of motion. Journal of Vision, 10(8), 1–18.

Chun, M. M., & Turk-Browne, N. B. (2008). Associative Learning Mechanisms in Vision. Oxford University Press.

Elfenbein, H. A., & Ambady, N. (2003). When familiarity breeds accuracy: Cultural exposure and facial emotion recognition. Journal of Personality and Social Psychology, 85(2), 276.

Jaquet, E., & Rhodes, G. (2008). Face aftereffects indicate dissociable, but not distinct, coding of male and female faces. Journal of Experimental Psychology. Human Perception and Performance, 34(1), 101–112.

Jaquet, E., Rhodes, G., & Hayward, W. G. (2007). Opposite aftereffects for Chinese and Caucasian faces are selective for social category information and not just physical face differences. The Quarterly Journal of Experimental Psychology, 60(11), 1457–1467.

Jaquet, E., Rhodes, G., & Hayward, W. G. (2008). Race-contingent aftereffects suggest distinct perceptual norms for different race faces. Visual Cognition, 16(6), 734–753.

Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review,80, 237–251.

Kelly, D. J., Quinn, P. C., Slater, A. M., Lee, K., Ge, L., & Pascalis, O. (2007). The other-race effect develops during infancy: Evidence of perceptual narrowing. Psychological Science, 18(12), 1084–1089.

Little, A. C., DeBruine, L. M., & Jones, B. C. (2005). Sex-contingent face after-effects suggest distinct neural populations code male and female faces. Proceedings of the Royal Society B: Biological Sciences, 272(1578), 2283–2287.

Meissner, C. A., & Brigham, J. C. (2001). Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review. Psychology, Public Policy, and Law, 7(1), 3–35.

Rhodes, G., & Leopold, D. A. (2011). Adaptive Norm-Based Coding of Face Identity.

Scholl, B. J. (2005). Innateness and (Bayesian) visual perception. In P. Carruthers (Ed.), The Innate Mind: Structure and Contents (p. 34). New York: Oxford University Press.

Shapiro, J. R., Ackerman, J. M., Neuberg, S. L., Maner, J. K., Becker, D. V., & Kenrick, D. T. (2009). Following in the Wake of Anger: When Not Discriminating Is Discriminating. Personality & Social Psychology Bulletin, 35(10), 1356–1367.

Siegel, S. (2017). The Rationality of Perception. Oxford: Oxford University Press.

Summerfield, C., & Egner, T. (2009). Expectation (and attention) in visual cognition. Trends in Cognitive Science, 13(9), 403–409.

Valentine, T. (1991). A unified account of the effects of distinctiveness, inversion, and race in face recognition. The Quarterly Journal of Experimental Psychology, 43(2), 161–204.

Valentine, T., Lewis, M. B., & Hills, P. J. (2016). Face-space: A unifying concept in face recognition research. The Quarterly Journal of Experimental Psychology, 69(10), 1996–2019.


How can I credibly commit to others?

Francesca Bonalumi – PhD candidate, Department of Cognitive Science, Central European University



Imagine that you plan to go to the gym with your friend Kate. You decide together to meet in the locker room at 6pm. Why would you expect Kate to honour this agreement to meet you at the gym? Now, imagine that at 5.30pm you discover that some other friends are gathering at 6pm, and you would love to join them. What restrains you from doing so, even if this is now your preferred option? Your answers to these kinds of everyday dilemmas will probably involve some reference to the fact that a commitment was in place between you and Kate.

The notion of a commitment is worth investigating, in part, because it applies to such a wide variety of cases: we are committed to our partners, our faith, our work, our promises, our goals, and even ourselves. Although there is an obvious similarity between all these situations, I will restrict this post to instances of interpersonal commitment, namely those commitments that are made by one individual to another individual (cf. Clark 2006). According to a standard philosophical definition of interpersonal commitment, a commitment is a relation among one committed agent, one agent to whom the commitment has been made, and an action which the committed agent is obligated to perform (Searle 1969; Scanlon 1998).

The ability to make and assess interpersonal commitments is crucial in supporting our prosocial behaviour: being motivated to comply with those courses of action that we have committed to, and being able to assess whether we can rely on others’ commitments, enables us to perform a wide range of jointly coordinated and interpersonal activities that wouldn’t otherwise be feasible (Michael & Pacherie, 2015). This ability requires psychological mechanisms that induce individuals to follow rules or plans even when it is not in their short-term interests: this can sustain phenomena from the inhibition of short-term self-interested actions to the motivation for moral behaviour. I will focus on one key, yet underappreciated, aspect of this relation which sustains the whole act of committing: how the committed agent gives assurance to the other agent that she will perform the relevant action. That is, how she makes her commitment credible.

Making a commitment can be defined as an act that aims to influence another agent’s behaviour by changing her expectations (e.g. my committing to help a friend influences my friend’s behaviour, insofar as she can now rely on my help), and by this act the committer gains additional motivation for performing the action that she committed to (Nesse 2001; Schelling 1980). The key element in all of this is credibility: how do I credibly persuade someone that I will do something that I wouldn’t do otherwise? And why would I remain motivated to do something that is no longer in my interest to do? Indeed, a dilemma faced by recipients in any communicative interaction is determining whether they can rely on the signal of the sender (i.e. how to rule out the possibility that the sender is sending a fake signal) (Sperber et al., 2010). Likewise, in a cooperative context the problem for any agent is how to distinguish between a credible commitment and a fake commitment, and how to signal a credible commitment without being mistaken for a defector (Schelling, 1980).

The most persuasive way to make my commitment credible is to discard alternative options in order to change my future incentives, such that compliance with my commitments will remain in my best interests (or be my only possible choice). Odysseus instructing his crew to tie him to the mast of the vessel and to ignore his future orders is one strong example of committing to resist the Sirens’ call in this manner; avoiding coffee while trying to quit smoking (when having a cigarette after a coffee was a well-established habit) is another example.

How can we persuade others that our commitments are credible when incentives are less tangible, and alternative options cannot be completely removed? Consider a marriage, in which both partners rely on the fact that the other will remain faithful even if future incentives change. Emotions might be one way of signalling my willingness to guarantee the execution of the commitment (Frank 1988; Hirshleifer 2001). If two individuals decide to commit to a relationship, the emotional ties that they form ensure that neither will reconsider the costs and benefits of the relationship[1]. Likewise, if, during a fight, one individual displays uncontrollable rage, she is giving her audience reason to believe that she won’t give up the fight even if continuing to fight is to her disadvantage. One reason that emotions are taken to be credible is that they are allegedly hard to fake convincingly: some studies suggest that humans are intuitively able to recognize the appropriate emotions when observing a face (Elfenbein & Ambady, 2002), and to some extent humans can effectively discriminate between genuine and fake emotional expressions (Ekman, Davidson, & Friesen, 1990; Song, Over, & Carpenter, 2016).

Formalising a commitment by making promises, oaths or vows is another way of increasing the credibility of your commitment. Interestingly, with such formalised declarations people not only manifest an emotional attachment to the object of the commitment; they also signal a willingness to put their reputation at risk. This is because the more public the commitment is (and the more people are aware of the commitment), the higher the reputational stakes will be for the committed individual.

Securing a commitment by altering your incentives, by risking your reputation, and by expressing it via emotional displays are importantly similar strategies: in each case the original set of material payoffs for the available actions changes, because now the costs of smoking or of untying yourself from the mast of a vessel are too high (if it is even still possible to pay them). And we can treat the emotional costs paid in case of a failure (e.g. the disappointment of slipping back into our undesirable habit of smoking), as well as the social costs (e.g. damage to our reputation as a reliable individual), as incentives to comply with the action that was committed to (Fessler & Quintelier 2014).



                         Cheating            Non-cheating
Before the commitment    p                   -p
After the commitment     p - (m + r + e)     -p

Fig. 1: Payoff matrix for the decision to cheat on your partner: p is the pleasure you get out of cheating, m is the material cost paid in such cases (e.g. a costly divorce), r is the reputational cost, and e is the emotional burden. When p is not higher than the sum of m, r and e, and the individual accurately predicts the likelihood of these outcomes, breaking the commitment is not worthwhile.
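Read as simple arithmetic, the matrix in Fig. 1 can be sketched in a few lines of Python. The numbers below are my own illustrative choices (nothing from the studies cited), picked so that p is smaller than m + r + e:

```python
def cheating_payoff(p, m, r, e, committed):
    """Net payoff of cheating: the pleasure p, minus the material (m),
    reputational (r) and emotional (e) costs, which apply only once a
    commitment is in place (Fig. 1, bottom row vs. top row)."""
    return p - (m + r + e) if committed else p

# Illustrative numbers where p < m + r + e:
p, m, r, e = 5.0, 3.0, 2.0, 1.0

assert cheating_payoff(p, m, r, e, committed=False) == 5.0   # worthwhile before
assert cheating_payoff(p, m, r, e, committed=True) == -1.0   # not worthwhile after
```

On these toy numbers the commitment flips the sign of the payoff, which is exactly the condition the caption describes: cheating stops being worthwhile once p no longer exceeds the sum of m, r and e.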


Consistent with the idea that commitments change your payoff matrix (see Fig. 1), several studies have shown that commitments facilitate coordination and cooperation in multiple economic games. Promises were found to increase an agent’s trustworthy behaviour as well as her partner’s predictions about her behaviour in a trust game (Charness and Dufwenberg 2006), and they were found to increase rates of donation in a dictator game (Sally 1995; Vanberg 2008). Spontaneous promises have also been found to be predictive of cooperative choices in a Prisoner’s Dilemma game (Belot, Bhaskar & Van de Ven 2010). The willingness to be bound to a specific course of action (e.g. as Odysseus was) has also been found to be highly beneficial in Hawk-Dove and Battle of the Sexes games, as committed players are more likely to obtain their preferred outcomes (Barclay 2017).

Interestingly, the payoff structure that an agent faces when making a commitment is similar to the payoff structure of a threat: if you are involved in a drivers’ game of chicken, the outcome you want is the one in which you don’t swerve. But your partner prefers the outcome in which she does not swerve, and the worst outcome is the one in which the two cars crash because neither of you swerved. The key factor is, again, whether you can credibly signal to the other driver that you won’t turn the wheel, no matter what.

Some of the same means by which credibility can be conveyed in cases of commitment apply to threats as well. For instance, one efficacious way to credibly persuade the other driver is to remove the steering wheel and throw it out of the window, thereby physically preventing yourself from changing the direction of your car (Kahn 1965); another is to play a war of nerves, conveying the idea that you are so emotionally attached to your goal that you would be willing to pay the highest cost if necessary.
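The incentive structure of the chicken game just described can be made concrete with a small sketch. The numerical payoffs are my own illustrative choices; only their ordering matters (mutual crash worst, getting your way best):

```python
# Payoffs as (row player, column player); higher is better.
PAYOFFS = {
    ("swerve", "swerve"):     (0, 0),      # mutual compromise
    ("swerve", "straight"):   (-1, 1),     # you back down
    ("straight", "swerve"):   (1, -1),     # your preferred outcome
    ("straight", "straight"): (-10, -10),  # crash: worst for both
}

def best_reply(opponent_move, player):
    """Best response for the given player (0 = row, 1 = column)
    when the opponent's move is fixed."""
    moves = ("swerve", "straight")
    if player == 0:
        return max(moves, key=lambda m: PAYOFFS[(m, opponent_move)][0])
    return max(moves, key=lambda m: PAYOFFS[(opponent_move, m)][1])

# A credible commitment to going straight (e.g. throwing away the wheel)
# makes swerving the other driver's best reply:
assert best_reply("straight", player=1) == "swerve"
```

The point of discarding the wheel is visible in the code: once "straight" is your only available move, the other driver’s best reply is to swerve, handing you your preferred outcome.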

Threat is an interesting phenomenon to consider when investigating the role of credibility in commitment because it might help us to understand how commitment works, and how threat and commitment might have evolved in similar fashion. What leads a non-human animal to credibly signal an intention to behave in a certain way to its audience, and what leads its audience to rely on this signal, is highly relevant for investigating commitment. It is still uncertain just how threat signals have stabilized evolutionarily, given that a selective pressure for faking the threat would also be evolutionarily advantageous (Adams & Mesterton-Gibbons 1995). The same selective pressures apply to human threats and commitments: if the goal is to signal future compliance with an action in order to change the audience’s behaviour (by changing her expectations), what motivates us then to comply with that signal instead of, say, simply taking advantage of the change in our audience’s behaviour?

In other words, the phenomenon of commitment is intrinsically tied to the problem of recognising (and maybe even producing) fake signals, and deceiving others, just as in the case of making a threat. That being said, it is worth keeping in mind that the phenomenon of threat differs importantly from the phenomenon of commitment, insofar as the former does not entail any motivation for prosocial behaviour. In this respect, the phenomena of quiet calls and natal attraction, in which animals signal potential cooperation or a disposition not to engage in a fight, are also worth investigating further for the sake of better understanding how credibility can be established in the case of commitment (Silk 2001).

Most of our social life is built upon commitments that are either implicit or explicitly expressed. We expect people to do things even in the absence of a verbal agreement to do so, and we act in accordance with these expectations. Investigating the factors that carry this motivational force, such as credibility, is the next big challenge in better grasping the complexities of this important notion, and doing so would help us to better understand its ontogenetic and phylogenetic development.



Adams, E. S., & Mesterton-Gibbons, M. (1995). The cost of threat displays and the stability of deceptive communication. Journal of Theoretical Biology, 175(4), 405–421.

Barclay, P. (2017). Bidding to Commit. Evolutionary Psychology, 15(1), 1474704917690740.

Belot, M., Bhaskar, V., & van de Ven, J. (2010). Promises and cooperation: Evidence from a TV game show. Journal of Economic Behavior & Organization, 73(3), 396-405.

Charness, G., & Dufwenberg, M. (2006). Promises and Partnership. Econometrica, 74, 1579–1601.

Clark, H. H. (2006). Social actions, social commitments. In S.C. Levinson, N.J. Enfield (Eds.), Roots of human sociality: Culture, cognition and interaction, (pp. 126-150). New York: Bloomsbury.

Ekman, P., Davidson, R. J., & Friesen, W. V. (1990). The Duchenne smile: Emotional expression and brain physiology: II. Journal of Personality and Social Psychology, 58(2), 342–353.

Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128(2), 203–235.

Fessler, D. M. T., & Quintelier, K. (2014). Suicide Bombers, Weddings, and Prison Tattoos. An Evolutionary Perspective on Subjective Commitment and Objective Commitment. In R. Joyce, K. Sterelny, & B. Calcott (Eds.), Cooperation and its evolution (pp. 459–484). Cambridge, MA: The MIT Press.

Frank, R. H. (1988). Passions within reason: The strategy of the emotions. New York, NY: W.W. Norton & Company.

Hirshleifer, J. (2001). On the Emotions as Guarantors of Threats and Promises. In The Dark Side of the Force: Economic Foundations of Conflict Theory (pp. 198–219). Cambridge: Cambridge University Press.

Kahn, H. (1965). On Escalation: Metaphors and Scenarios. New York, NY: Praeger Publ. Co.

Michael, J., & Pacherie, E. (2015). On Commitments and Other Uncertainty Reduction Tools in Joint Action. Journal of Social Ontology, 1(1).

Nesse, R. M. (2001). Natural Selection and the Capacity for Subjective Commitment. In R. M. Nesse (Ed.), Evolution and the Capacity for Commitment (pp. 1–43). New York, NY: Russell Sage Foundation.

Sally, D. (1995). Conversation and cooperation in social dilemmas a meta-analysis of experiments from 1958 to 1992. Rationality and society, 7(1), 58-92.

Scanlon, T. M. (1998). What We Owe to Each Other. Cambridge, MA: Harvard University Press.

Schelling, T. C. (1980). The Strategy of Conflict. Cambridge, MA: Harvard University Press.

Searle, J. R. (1969). Speech Acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.

Silk, J. B. (2001). Grunts, Girneys, and Good Intentions: The Origins of Strategic Commitment in Nonhuman Primates. In R. M. Nesse (Ed.), Evolution and the Capacity for Commitment (pp. 138–157). New York, NY: Russell Sage Foundation.

Song, R., Over, H., & Carpenter, M. (2016). Young children discriminate genuine from fake smiles and expect people displaying genuine smiles to be more prosocial. Evolution and Human Behavior, 37(6), 490–501.

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind and Language, 25(4), 359–93.

Vanberg, C. (2008). Why Do People Keep Their Promises? An Experimental Test of Two Explanations. Econometrica, 76(6), 1467–1480.


[1] Indeed, marriage itself may be a way of increasing the likelihood that a commitment will be respected in the future. This is because formalising the relationship in this manner increases the exit costs of a relationship.

Is the Future More Valuable than the Past?

Alison Fernandes – Post-Doctoral Fellow on the AHRC project ‘Time: Between Metaphysics and Psychology’, Department of Philosophy, University of Warwick


We differ markedly in our attitudes towards the future and past. We look forward in anticipation to tonight’s tasty meal or next month’s sunny holiday. While we might fondly remember these pleasant experiences, we don’t happily anticipate them once they’re over. Conversely, while we might dread the meeting tomorrow, or doing this year’s taxes, we feel a distinct sort of relief when they’re done. We seem to also prefer pleasant experiences to be in the future, and unpleasant experiences to be in the past. While we can’t swap tomorrow’s meeting and make it have happened yesterday, we might prefer that it had happened yesterday and was over and done with.

Asymmetries like these in how we care about the past and future can seem to make a lot of sense. After all, what’s done is done, and can’t be changed. Surely we’re right to focus our care, effort and attention on what’s to come. But do we sometimes go too far in valuing past and future events differently? In this post I’ll consider one particular temporal asymmetry of value that doesn’t look so rational, and how its apparent irrationality speaks against certain metaphysical ways of explaining the asymmetry.

Eugene Caruso, Daniel Gilbert, and Timothy Wilson (2008) investigated a temporal asymmetry in how we value past and future events. Suppose that I ask you how much compensation would be fair to receive for undertaking 5 hours of data entry work. The answer that you give seems to depend crucially on when the work is described as taking place: subjects judged that they should receive 101% more money if the work is described as taking place one month in the future ($125.04 USD on average) rather than one month in the past ($62.20 USD on average). Even for purely hypothetical scenarios, where no one actually expects the work to take place, we judge future work to be worth much more than past work.
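The “101% more” figure is just the relative difference between the two group averages reported above; a two-line check makes this explicit:

```python
# Average fair compensation for 5 hours of data entry work,
# one month in the future vs. one month in the past (Caruso et al., 2008):
future_pay, past_pay = 125.04, 62.20

pct_more = (future_pay - past_pay) / past_pay * 100
assert round(pct_more) == 101  # future work valued ~101% more than past work
```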

This value asymmetry appears in other scenarios as well (Caruso et al., 2008). Say your friend is letting you borrow their vacation home for a week. How expensive a bottle of wine do you buy as a thank you gift? If the holiday is described as taking place in the future, subjects select wine that is 37% more expensive. Suppose that you help your neighbour move. What would be an appropriate thank you gift for you to receive? Subjects judge they should receive 71% more expensive bottles of wine for helping in the future, compared to the past. Say you’re awarding damages for the suffering of an accident victim. Subjects judge that victims should be awarded 42% more compensation when they imagine their suffering as taking place in the future, compared to the past.

Philosophers like Craig Callender (2017) have become increasingly interested in the value asymmetry studied by Caruso and his colleagues. This is partly because there has been a long history of using asymmetries in how we care about past and future events to argue for particular metaphysical views about time (Prior, 1959). For example, say you hold a ‘growing block’ view of time, according to which the present and past exist (and are therefore ‘fixed’) while future events are not yet real (so the future is unsettled and ‘open’). One might argue that a metaphysical picture with an open future like this is needed to make sense of why we care about future events more than past events. If past events are fixed, they’re not worth spending our time over—so we value them less. But because future events are up for grabs, we reasonably place greater value in them in the present.

Can one argue from the value asymmetry Caruso and his team studied to a metaphysical view about time? Much depends on what features the asymmetry has, and how these might be explained. When it comes to explaining the temporal value asymmetry, Caruso and his team discovered that it is closely aligned with another asymmetry: a temporal emotional asymmetry. More specifically, we tend to feel stronger emotions when contemplating future events than when contemplating past events.

These asymmetries are correlated in such a way as to suggest the emotional asymmetry is a cause of the value asymmetry. Part of the evidence comes from the fact that the emotional and value asymmetry share other features in common. For example, we tend to feel stronger emotions when contemplating our own misfortunes, or those of others close to us, than we do contemplating the misfortune of strangers. The value asymmetry shares this feature. It is also much more strongly pronounced for events that concern oneself, compared to others. Subjects judge their own 5 hours of data entry work to be worth nearly twice as much money when it takes place in the future, compared to the past. But they judge the equivalent work of a stranger to be worth similar amounts of money, independently of whether the work is described as taking place in the future or in the past.

The same features that point towards an emotional explanation of the value asymmetry also point away from a metaphysical explanation. The value asymmetry is, in a certain sense, ‘perspectival’—it is strongest concerning oneself. But if metaphysical facts were to explain why future events were more valuable than past ones, it would make little sense for the asymmetry to be perspectival. After all, on metaphysical views of time like the growing block view, events are either future or not. If future events being ‘open’ is to explain why we value them more, the asymmetry in value shouldn’t depend on whether they concern oneself or others. Future events are not only open when they concern me – they are also open when they concern you. So the metaphysical explanation of the value asymmetry does not look promising.

If we instead explain the value asymmetry by appeal to an emotional asymmetry, we can also trace the value asymmetry back to further asymmetries. Philosophers and psychologists have given evolutionary explanations of why we feel stronger emotions towards future events than past events (Maclaurin & Dyke, 2002; van Boven & Ashworth, 2007). Emotions help focus our energies and attention. If we generally need to align our efforts and attention towards the future (which we can control) rather than being overly concerned with the past (which we can’t do anything about), then it makes sense that we’re geared to feel stronger emotions when contemplating future events than past ones. Note that this evolutionary explanation requires that our emotional responses to future and past events ‘overgeneralise’. Even when we’re asked about future events we can’t control, or purely hypothetical future events, we still feel more strongly about them than comparative past events, because feeling more strongly about the future in general is so useful when the future events are ones that we can control.

A final nail in the coffin for a metaphysical explanation of the value asymmetry comes from thinking about whether subjects take the value asymmetry to be rational. I began with some examples of asymmetries that do seem rational. It seems rational to prefer past pains to future ones, and to feel relief when unpleasant experiences are over. Whether asymmetries like these are in fact rational is a topic of controversy in philosophy (Sullivan, forth.; Dougherty, 2015). Regardless, there is strong evidence that the value asymmetry that Caruso studied is taken to be irrational, even by subjects whose judgements display the asymmetry.

The methodology Caruso used involved ‘counterbalancing’: some subjects were asked about the future event first, some were asked about the past event first. When the results within any single group were considered, no value asymmetry was found. That is, when you ask a single person how they value an event (say, using a friend’s vacation home for a week), they think its value now shouldn’t depend on whether the event is in the past or future. It is only when you compare results across the two groups that the asymmetry emerges (see Table 1). It’s as if we apply a consistency judgement and think that future and past events should be worth the same, but when we can’t make the comparison, we value them differently. This strongly suggests that the asymmetry is not driven by a conscious judgement that the future really is worth more than the past, or by a metaphysical picture according to which it is. If it were, we would expect the asymmetry to be more pronounced when subjects were asked about both the past and the future. Instead, the asymmetry disappears.


                                          Order of evaluation
Use of a friend’s vacation home    Past event first    Future event first
Past event                         $89.17              $129.06
Future event                       $91.73              $121.98

Table 1: Average amount of money (USD) that subjects judge they would spend on a thank you gift for using a friend’s vacation home in the past or future (Caruso et al., 2008).
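A few lines of Python, using the dollar figures straight from Table 1, make the counterbalancing point concrete: within either group the past/future difference is small, while the first-elicited valuations differ sharply across the two groups.

```python
# Mean gift value (USD) by group, from Table 1 (Caruso et al., 2008):
past_first   = {"past": 89.17, "future": 91.73}    # asked about the past event first
future_first = {"past": 129.06, "future": 121.98}  # asked about the future event first

# Within each group, past and future events are valued about the same:
within_1 = past_first["future"] - past_first["past"]      # ~ +2.56
within_2 = future_first["future"] - future_first["past"]  # ~ -7.08

# Across groups, the first-elicited valuations differ sharply:
between = future_first["future"] - past_first["past"]     # ~ +32.81

assert abs(within_1) < 10 and abs(within_2) < 10
assert between > 30
```

The small within-group differences are what you would expect if subjects apply a consistency judgement when they can compare past and future; the large between-group difference is the asymmetry that emerges when they cannot.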


Investigations into how temporal asymmetries in value arise are allowing philosophers and psychologists to build up a much more detailed picture of how we think about time. It can seem intuitive to think of the past as fixed, and the future as open. Such intuitions have long been used to support certain metaphysical views about time. But, while metaphysical views might seem to rationalise asymmetries in our attitudes, their actual explanation seems to lie elsewhere, in much deeper evolution-driven responses. We may even be adopting metaphysical views as rationalisers of our much more basic emotional responses. If this is right, the value asymmetry not only provides a case study of how we can get by explaining asymmetric features of our experience without appeal to metaphysics. It suggests that psychology can help explain why we’re so tempted towards certain metaphysical views in the first place.



Callender, Craig. 2017. What Makes Time Special. Oxford: Oxford University Press.

Caruso, Eugene M., Gilbert, Daniel T., and Wilson, Timothy D. 2008. A wrinkle in time: Asymmetric valuation of past and future events. Psychological Science 19(8): 796–801.

Dougherty, Tom. 2015. Future-Bias and Practical Reason. Philosophers’ Imprint. 15(30): 1−16.

Maclaurin, James & Dyke, Heather. 2002. ‘Thank Goodness That’s Over’: The Evolutionary Story. Ratio 15 (3): 276–292.

Prior, Arthur. N. 1959. Thank Goodness That’s Over. Philosophy. 34(128): 12−17.

Sullivan, Meghan. forth. Time Biases: A Theory of Rational Planning and Personal Persistence. New York: Oxford University Press.

Van Boven, Leaf & Ashworth, Laurence. 2007. Looking Forward, Looking Back: Anticipation Is More Evocative Than Retrospection. Journal of Experimental Psychology. 136(2): 289–300.

What hand gestures tell us about the evolution of language

Suzanne Aussems – Post-Doctoral Fellow/Early Career Fellow, Language & Learning Group, Department of Psychology, University of Warwick

Imagine that you are visiting a food market abroad and you want to buy a slice of cake. You know how to say “hello” in the native language, but otherwise your knowledge of the language is limited. When it is your turn to order, you greet the vendor and point at the cake of your choice. The vendor then places his knife on the cake and looks at you to see if you approve of the size of the slice. You quickly shake both of your hands and indicate that you would like a smaller width for the slice using your thumb and index finger. The vendor then cuts a smaller piece for you and you happily pay for your cake. In this example, you achieved successful communication with the help of three gestures: a pointing gesture, a conventional gesture, and an iconic gesture.

As humans, we are the only species that engages in the communication of complex and abstract ideas. This abstractness is present even in a seemingly simple example such as indicating the size of a slice of cake you desire. After all, size concepts such as 'small' and 'large' are learnt during development. What makes this sort of communication possible are the language and gestures that we have at our disposal. How is it that we came to develop language when other animals did not, and what is the role of gesture in this? In this blogpost, I introduce one historically dominant theory about the origins of human language – the gesture-primacy hypothesis (see Hewes, 1999 for a historical overview).

According to the gesture-primacy hypothesis, humans first communicated in a symbolic way using gesture (i.e., movement of the hands and body to express meaning). Symbolic gestures are, for example, pantomimes that signify actions (e.g., shooting an arrow) or emblems (e.g., raising an index finger to your lips to indicate "be quiet") that facilitate social interactions (McNeill, 1992; 2000). The gesture-primacy hypothesis suggests that spoken language emerged through adaptation of gestural communication (Corballis, 2002; Hewes, 1999). Central to this view is the idea that gesture and speech emerged sequentially.

Much of the evidence in favour of the gesture-primacy hypothesis comes from studies on nonhuman primates and great apes. Within each monkey or ape species, individuals seem to have the same basic vocal repertoire. For instance, individuals raised in isolation and individuals raised by another species still produce calls that are typical for their own species, but not calls that are typical for the foster species (Tomasello, 2008, p. 16). This suggests that these vocal calls are not learned, but are innate in nonhuman primates and great apes. Researchers believe that controlled, complex verbal communication (such as that found in humans) could not have evolved from these limited innate communicative repertoires (Kendon, 2017). This line of thinking is partly confirmed by failed attempts to teach apes how to speak, and failed attempts to teach them to produce their own calls on command (Tomasello, 2008, p. 17).

However, the repertoire of ape gestures seems to vary much more per individual than the vocal repertoire (Pollick & de Waal, 2007), and researchers have succeeded in teaching chimpanzees symbolic gestures derived from American Sign Language (Gardner & Gardner, 1969). Moreover, bonobos have been observed to use gestures to communicate more flexibly than they can use calls (Pollick & de Waal, 2007). The degree of flexibility in the production and understanding of gestures, especially in great apes, makes this communicative tool seem a more plausible medium than vocalisation through which language could have first emerged.

In this regard, it is notable that great apes that have been raised by humans point at food, objects, or toys they desire. For example, some human-raised apes point to a locked door when they want access to what's behind it, so that the human will open it for them (Tomasello, 2008). It is thus clear that human-raised apes understand that humans can be led to act in beneficial ways via attention-directing communicative gestures. Admittedly, there does seem to be an important type of pointing of which apes are incapable, namely declarative pointing (i.e., pointing for the sake of sharing attention, rather than merely directing attention) (Kendon, 2017). Nonetheless, gesture seems to be a flexible and effective communicative medium that is available to non-human primates. This fact, together with the relative inflexibility of vocalisation in these species, plays a significant role in making the gesture-primacy hypothesis a compelling theory of the origins of human language.

What about human evidence that might support the gesture-primacy hypothesis? Studies on the emergence of speech and gesture in human infants show that babies produce pointing gestures before they produce their first words (Butterworth, 2003). Shortly after their first birthday, when most children have already started to produce some words, they produce combinations of pointing gestures (point at bird) and one-word utterances ("eat"). These gesture-and-speech combinations occur roughly three months before children produce two-word utterances ("bird eats"). From an ontogenetic standpoint, then, referential behaviour appears in pointing gestures before it appears in speech. Many researchers therefore consider gesture to pave the way for early language development in babies (Butterworth, 2003; Iverson & Goldin-Meadow, 2005).

Further evidence concerns the spontaneous emergence of sign language in deaf communities (Senghas, Kita, & Özyürek, 2004). When sign language is passed on to new generations, children use richer and more complex structures than adults from the previous generation, and so they build upon the existing sign language. This phenomenon has led some researchers to believe that the development of sign language over generations could be used as a model for the evolution of human language more generally (Senghas, Kita, & Özyürek, 2004). The fact that deaf communities spontaneously develop fully functional languages using their hands, face, and body, further supports the gesture-primacy hypothesis.

Converging evidence also comes from the field of neuroscience. Xu and colleagues (2009) used functional MRI to investigate whether symbolic gesture and spoken language are processed by the same system in the human brain. They showed participants meaningful gestures, and the spoken-language equivalents of these gestures. The same specific areas in the left side of the brain lit up when participants mapped symbolic gestures and spoken words onto common, corresponding conceptual representations. Their findings suggest that the core of the brain's language system is not exclusively used for language processing, but functions as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols, whether these are spoken words or symbolic gestures.

In this post, I have discussed compelling evidence in support of the gesture-primacy hypothesis. An intriguing question that remains unanswered is why our closest evolutionary relatives, chimpanzees and bonobos, can flexibly use gesture, but not speech, for communication. Further comparative studies could shed light on the evolutionary history of the relation between gesture and speech. One thing is certain: gesture plays an important communicative role in our everyday lives, and further studying the phylogeny and ontogeny of gesture is important for understanding how human language emerged. And it may also come in handy when ordering some cake on your next holiday!



Butterworth, G. (2003). Pointing is the royal road to language for babies. In S. Kita (Ed.) Pointing: Where Language, Culture, and Cognition Meet (pp. 9-34). Mahwah, NJ: Lawrence Erlbaum Associates.

Corballis, M. C. (2002). From hand to mouth: The origins of language. Princeton, NJ: Princeton University Press.

Gardner, R. A., & Gardner, B. T. (1969). Teaching sign language to a chimpanzee. Science, 165, 664-672.

Hewes, G. (1999). A history of the study of language origins and the gestural primacy hypothesis. In: A. Lock, & C.R. Peters (Eds.), Handbook of human symbolic evolution (pp. 571-595). Oxford, UK: Oxford University Press, Clarendon Press.

Iverson, J. M., & Goldin-Meadow, S. (2005). Gesture paves the way for language development. Psychological Science, 16(5), 367-371. doi:10.1111/j.0956-7976.2005.01542.x

Kendon, A. (2017). Reflections on the "gesture-first" hypothesis of language origins. Psychonomic Bulletin & Review, 24(1), 163-170. doi:10.3758/s13423-016-1117-3

McNeill, D. (1992). Hand and mind. Chicago, IL: Chicago University Press.

McNeill, D. (Ed.). (2000). Language and gesture. Cambridge, UK: Cambridge University Press.

Pollick, A., & de Waal, F. (2007). Ape gestures and language evolution. PNAS, 104(19), 8184-8189. doi:10.1073/pnas.0702624104

Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305(5691), 1779-1782. doi:10.1126/science.1100199

Tomasello, M. (2008). The origins of human communication. Cambridge, MA: MIT Press.

Xu, J., Gannon, P. J., Emmorey, K., Smith, J. F., & Braun, A. R. (2009). Symbolic gestures and spoken language are processed by a common neural system. PNAS, 106(49), 20664-20669. doi:10.1073/pnas.0909197106

Conceptual short-term memory: a new tool for understanding perception, cognition, and consciousness

Henry Shevlin – Research Associate, Leverhulme Centre for the Future of Intelligence, University of Cambridge

The notion of memory, as used in ordinary language, may seem to have little to do with perception or conscious experience. While perception informs us about the world as it is now, memory almost by definition tells us about the past. Similarly, whereas conscious experience seems like an ongoing, occurrent phenomenon, it's natural to think of memory as being more like an inert store of information, accessible when we need it but capable of lying dormant for years at a time.

However, in contemporary cognitive science, memory is taken to include almost any psychological process that functions to store or maintain information, even if only for very brief durations (see also James, 1890). In this broader sense of the term, connections between memory, perception, and consciousness are apparent. After all, some mechanism for the short-term retention of information will be required for almost any perceptual or cognitive process, such as recognition or inference, to take place: as one group of psychologists put it, “storage, in the sense of internal representation, is a prerequisite for processing” (Halford, Phillips, & Wilson, 2001). Assuming, then, as many theorists do, that perception consists at least partly in the processing of sensory information, short-term memory is likely to have an important role to play in a scientific theory of perception and perceptual experience.

In this latter sense of memory, two major forms of short-term store have been widely discussed in relation to perception and consciousness. The first of these is the various forms of sensory memory, and in particular iconic memory. Iconic memory was first described by George Sperling, who in 1960 demonstrated that large amounts of visually presented information were retained for brief intervals, far more than subjects were able to actually utilize for behaviour during the short window in which they were available (Figure 1). This phenomenon, dubbed partial report superiority, was brought to the attention of philosophers of mind via the work of Fred Dretske (1981) and Ned Block (1995, 2007). Dretske suggested that the rich but incompletely accessible nature of information presented in Sperling’s paradigm was a marker of perceptual rather than cognitive processes. Block similarly argued that sensory memory might be closely tied to perception, and further, suggested that such sensory forms of memory could serve as the basis for rich phenomenal consciousness that ‘overflowed’ the capacity for cognitive access.

A second form of short-term memory that has been widely discussed by both psychologists and philosophers is working memory. Very roughly, working memory is a short-term informational store that is more robust than sensory memory but also more limited in capacity. Unlike information in sensory memory, which must be cognitively accessed in order to be deployed for voluntary action, information in working memory is immediately poised for use in such behaviour, and is closely linked to notions such as cognition and cognitive access. For reasons such as these, Dretske seemed inclined to treat this kind of capacity-limited process as closely tied or even identical to thought, a suggestion followed by Block.[1] Psychologists such as Nelson Cowan (2001: 91) and Alan Baddeley (2003: 836) take encoding in working memory to be a criterion of consciousness, while global workspace theorists such as Stanislas Dehaene (2014: 63) have regarded working memory as intimately connected – if not identical – with global broadcast.[2]

The foregoing summary is oversimplified, but it hopefully serves to motivate the claim that scientific work on short-term memory mechanisms may have an important role to play in understanding both the relation between perception and cognition and the nature of conscious experience. With this idea in mind, I'll now discuss some recent evidence for a third important short-term memory mechanism, namely Molly Potter's proposed Conceptual Short-Term Memory (CSTM). This is a form of short-term memory that serves to encode not merely the sensory properties of objects (like sensory memory), but also higher-level semantic information such as categorical identity. Unlike sensory memory, it seems somewhat resistant to interference from the presentation of new sensory information: whereas iconic memory can be effaced by the presentation of new visual information, CSTM seems somewhat robust. In these respects, it is similar to working memory. Unlike working memory, however, it seems to have both a high capacity and a brief duration; information in CSTM that is not rapidly accessed by working memory is lost after a second or two (for a more detailed discussion, see Potter 2012).

Evidence for CSTM comes from a range of paradigms, only two of which I discuss here (interested readers may wish to consult Potter, Staub, & O’Connor, 2004; Grill-Spector and Kanwisher, 2005; and Luck, Vogel, & Shapiro, 1996). The first particularly impressive demonstration is a 2014 experiment examining subjects’ ability to identify the presence of a given semantic target (such as “wedding” or “picnic”) in a series of rapidly presented images (see Figure 2).

A number of features of this experiment are worth emphasizing. First, subjects in some trials were cued to identify the presence of a target only after presentation of the images, suggesting that their performance did indeed rely on memory rather than merely, for example, effective search strategies. Second, a relatively large number of images were displayed in quick succession, either 6 or 12, in both cases larger than the normal capacity of working memory. Subjects’ performance in the 12-item trials was not drastically worse than in the 6-item trials, suggesting that they were not relying on normal capacity-limited working memory alone. Third, because the images were displayed one after another in the same location in quick succession, it seems unlikely that they were relying on sensory memory alone; as noted earlier, sensory memory is vulnerable to overwriting effects. Finally, the fact that subjects were able to identify not merely the presence of certain visual features but the presence or absence of specific semantic targets suggests that they were not merely encoding low-level sensory information about the images, but also their specific categorical identities, again telling against the idea that subjects’ performance relied on sensory memory alone.

Another experiment relevant to the CSTM hypothesis is that of Belke et al. (2008). In this experiment, subjects were presented with a single array of either 4 or 8 items, and asked whether a given category of picture (such as a motorbike) was present. In some trials in which the target was absent, a semantically related distractor (such as a motorbike helmet) was present instead. The surprising result of this experiment, which involved an eye-tracking camera, was that subjects reliably fixated upon either targets or semantically related distractors with their initial eye movements, and were just as likely to do so whether the arrays contained 4 or 8 items, and even when assigned a cognitive load task beforehand (see Figure 3).

Again, these results arguably point to the existence of some further memory mechanism beyond sensory memory and working memory. If subjects were relying on working memory to direct their eye movements, one would expect those movements to be subject to interference from the cognitive load. The alternative hypothesis that subjects were relying on exclusively sensory mechanisms runs into the problem that such mechanisms do not seem to be sensitive to high-level semantic properties of stimuli, such as their specific category identity; yet in this experiment, subjects' eye movements were sensitive to just such semantic properties of the items in the array.[3]

Interpretation of experiments such as these is a tricky business, of course (for a more thorough discussion, see Shevlin 2017). However, let us proceed on the assumption that the CSTM hypothesis is at least worth taking seriously, and that there may be some high-capacity semantic buffer in addition to more widely accepted mechanisms such as iconic memory and working memory. What relevance might this have for debates in philosophy and cognitive science? I will now briefly mention three such topics. Again, I will be oversimplifying somewhat, but my goal will be to outline some areas where the CSTM hypothesis might be of interest.

The first such debate concerns the nature of the contents of perception. Do we merely see colours, shapes, and so on, or do we perceive high-level kinds such as tables, cats, and Donald Trump (Siegel, 2010)? Taking our cue from the data on CSTM, we might suggest that this question can be reframed in terms of which forms of short-term memory are genuinely perceptual. If we take there to be good grounds for confining perceptual representation to the kinds of representations in sensory memory, then we might be inclined to take an austere view of the contents of experience. By contrast, if the kind of processing involved in encoding in CSTM is taken to be a form of late-stage perception, then we might have evidence for the presence of high-level perceptual content. It might reasonably be objected that this move is merely ‘kicking the can down the road’ to questions about the perception-cognition boundary, and does not by itself resolve the debate about the contents of perception. However, more positively, this might provide a way of grounding largely phenomenological debates in the more concrete frameworks of memory research.

A second key debate where CSTM may play a role concerns the presence of top-down effects on perception. A copious amount of experimental data (dating back to early work by psychologists such as Perky, 1910, but proliferating especially in the last two decades) has been produced in support of the idea that there are indeed ‘top-down’ effects on perception, which in turn has been taken to suggest that our thoughts, beliefs, and desires can significantly affect how the world appears to us. Such claims have been forcefully challenged by the likes of Firestone and Scholl (2015), who have suggested that the relevant effects can often be explained in terms of, for example, postperceptual judgment rather than perception proper.

However, the CSTM hypothesis may again offer a compromise position. By distinguishing core perceptual processes (namely, those that rely on sensory buffers such as iconic memory) from the kind of later categorical processing performed by CSTM, there may be other positions available in the interpretation of alleged cases of top-down effects on perception. For example, Firestone and Scholl claim that many such results fail to properly distinguish perception from judgment, suggesting that, in many cases, experimentalists' results can be interpreted purely in terms of strictly cognitive effects rather than as involving effects on perceptual experience. However, if CSTM is a distinct psychological process operative between core perceptual processes and later central cognitive processes, then appeals to things such as 'perceptual judgments' may be better founded than Firestone and Scholl seem to think. This would allow us to claim that at least some putative cases of top-down effects go beyond mere postperceptual judgments, while also respecting the hypothesis that early vision is encapsulated (see Pylyshyn, 1999).

A final debate in which CSTM may be of interest is the question of whether perceptual experience is richer than (or 'overflows') what is cognitively accessed. As noted earlier, Ned Block has argued that information in sensory forms of memory may be conscious even if it is not accessed – or even accessible – to working memory (Block, 2007). This would explain phenomena such as the apparent 'richness' of experience; thus if we imagine standing in Times Square, surrounded by chaos and noise, it is phenomenologically tempting to think we can only focus on and access a tiny fraction of our ongoing experiences at any one moment. A common challenge to this kind of claim is that it threatens to divorce consciousness from personal-level cognitive processing, leaving us open to extreme possibilities such as the 'panpsychic disaster' of perpetually inaccessible conscious experience in very early processing areas such as the LGN (Prinz, 2007). Again, CSTM may offer a compromise position. As noted earlier, the capacity of CSTM does indeed seem to overflow the sparse resources of working memory. However, it also seems to rely on personal-level processing, such as an individual's store of learned categories. Thus one new position, for example, might claim that information must at least reach the stage of CSTM to be conscious, thus allowing that perceptual experience may indeed overflow working memory while also ruling it out in early sensory areas.

These are all bold suggestions in need of extensive clarification and argument, but it is my hope that I have at least demonstrated to the reader how CSTM may be a hypothesis of interest not merely to psychologists of memory, but also those interested in broader issues of mental architecture and consciousness. And while I should also stress that CSTM remains a working hypothesis in the psychology of memory, it is one that I think is worth exploring on grounds of both scientific and philosophical interest.



Baddeley, A. D. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829-839.

Belke, E., Humphreys, G., Watson, D., Meyer, A., & Telling, A. (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Perception & Psychophysics, 70(8), 1444-1458.

Bergström, F., & Eriksson, J. (2014). Maintenance of non-consciously presented information engages the prefrontal cortex. Frontiers in Human Neuroscience 8:938.

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227-247.

Block, N. (2007). Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30, 481-499.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87-114.

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Viking Press.

Dretske, F. (1981). Knowledge and the Flow of Information. MIT Press.

Firestone, C., & Scholl, B. J. (2015). Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, 1-77.

Grill-Spector, K., & Kanwisher, N. (2005). Visual Recognition. Psychological Science, 16(2), 152-160.

Halford, G. S., Phillips, S., & Wilson, W. H. (2001). Processing capacity limits are not explained by storage limits. Behavioral and Brain Sciences 24 (1), 123-124.

James, W. (1890). The Principles of Psychology. Dover Publications.

Luck, S. J., Vogel, E. K., & Shapiro, K. L. (1996). Word meanings can be accessed but not reported during the attentional blink. Nature, 383(6601), 616-618.

Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing concepts of working memory. Nature Neuroscience, 17(3), 347-356.

Potter, M. C. (2012). Conceptual Short Term Memory in Perception and Thought. Frontiers in Psychology, 3:113.

Potter, M. C., Staub, A., & O’Connor, D. H. (2004). Pictorial and conceptual representation of glimpsed pictures. Journal of Experimental Psychology: Human Perception and Performance, 30, 478-489.

Prinz, J. (2007). Accessed, accessible, and inaccessible: Where to draw the phenomenal line. Behavioral and Brain Sciences, 30(5-6).

Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3).

Shevlin, H. (2017). Conceptual short-term memory: A missing part of the mind? Journal of Consciousness Studies, 24(7-8).

Siegel, S. (2010). The Contents of Visual Experience. Oxford: Oxford University Press.

Sperling, G. (1960). The Information Available in Brief Visual Presentations, Psychological Monographs: General and Applied 74, pp. 1–29.


[1] Note that Dretske does not use the term working memory in this context, but clearly has some such process in mind, as made clear by his reference to capacity-limited mechanisms for extracting information.

[2] A complicating factor in discussion of working memory comes from the recent emergence of variable resource models of working memory (Ma et al., 2014) and the discovery that some forms of working memory may be able to operate unconsciously (see, e.g., Bergström & Eriksson, 2014).

[3] Given that the arrays remained visible to subjects throughout the experiment, one might wonder why this experiment has relevance for our understanding of memory. However, as noted earlier, I take it that any short-term processing of information presumes some kind of underlying temporary encoding mechanism.

Functional Localization—Complicated and Context-Sensitive, but Still Possible

Dan Burnston—Assistant Professor, Philosophy Department, Tulane University, Member Faculty in the Tulane Brain Institute

The question of whether functions are localizable to distinct parts of the brain, aside from its obvious importance to neuroscience, bears on a wide range of philosophical issues—reductionism and mechanistic explanation in philosophy of science; cognitive ontology and mental representation in philosophy of mind, among many others. But philosophical interest in the question has only recently begun to pick up (Bergeron, 2007; Klein, 2012; McCaffrey, 2015; Rathkopf, 2013).

I am a “contextualist” about localization: I think that functions are localizable to distinct parts of the brain, and that different parts of the brain can be differentiated from each other on the basis of their functions (Burnston, 2016a, 2016b). However, I also think that what a particular part of the brain does depends on behavioral and environmental context. That is, a given part of the brain might perform different functions depending on what else is happening in the organism’s internal or external environment.

Embracing contextualism, as it turns out, involves questioning some deeply held assumptions within neuroscience, and connects the question of localization with other debates in philosophy. In neuroscience, localization is generally construed in what I call absolutist terms. Absolutism is a form of atomism—it suggests that localization can be successful only if 1-1 mappings between brain areas and functions can be found. Since genuine multifunctionality is antithetical to atomist assumptions, it has historically not been a closely analyzed concept in systems or cognitive neuroscience.

In philosophy, contextualism takes us into questions about what constitutes good explanation—in this case, functional explanation. Debates about contextualism in other areas of philosophy, such as semantics and epistemology (Preyer & Peter, 2005), usually shape up as follows. Contextualists are impressed by data suggesting contextual variation in the phenomenon of interest (usually the truth values of statements or of knowledge attributions). In response, anti-contextualists worry that there are negative epistemic consequences to embracing this variation. The resulting explanations will not, on their view, be sufficiently powerful or systematic (Cappelen & Lepore, 2005). We end up with explanations that do not generalize beyond individual cases. Hence, according to anti-contextualists, we should be motivated to come up with theories that deny or explain away the data that seemingly support contextual variation.

In order to argue for contextualism in the neural case, then, one must first establish the data that suggests contextual variation, then articulate a variety of contextualism that (i) succeeds at distinguishing brain areas in terms of their distinct functions, and (ii) describes genuine generalizations.

Usually, in systems neuroscience, the goal is to correlate physiological responses in particular brain areas with particular types of information in the world, supporting the claim that the responses represent that information. I have pursued a detailed case study of perceptual area MT (also known as “V5” or the “middle temporal” area). The textbook description of MT is that it represents motion—it has specific responses to specific patterns of motion, and variations amongst its cellular responses represent different directions and velocities. Hence, MT has the univocal function of representing motion: an absolutist description.

However, MT research in the last 20 years has uncovered data which strongly suggests that MT is not just a motion detector. I will only list some of the relevant data here, which I discuss exhaustively in other places. Let’s consider a perceptual “context” as a combination of perceptual features—including shape/orientation, depth, color, luminance/brightness, and motion. On the traditional hierarchy, each of these features has its own area dedicated to representing it. Contextualism, alternatively, starts from the assumption that different combinations of these features might result in a given area representing different information.

  • Despite the traditional view that MT is "color blind" (Livingstone & Hubel, 1988), MT in fact responds to the identity of colors when color is useful in disambiguating a motion stimulus. In this case MT still arguably represents motion, but it uses color as a contextual cue for doing so.
  • Over 93% of MT cells represent coarse depth (the rough distance of an object away from the perceiver). Their tuning for depth is independent of their tuning for motion, and many cells represent depth even in stationary stimuli. These depth signals are predictive of psychophysical results.
  • A majority of MT cells also have specific response properties for the fine-depth features of tilt and slant (depth signals resulting from the 3-D shape and orientation of objects), and these can be cued by a variety of distinct features, including binocular disparity and relative velocity.

How do these results support contextualism? Consider a particular physiological response to a stimulus in MT. If the data is correct, then this signal might represent motion, or it might represent depth—and indeed, either coarse or fine depth—depending on the context. Or, it might represent a combination of those influences.[1]

The contextualism I advocate focuses on the type of descriptions we should invoke in theorizing about the functions of brain areas. First, our descriptions should be open conjunctions: the function of an area should be described as a conjunction of the different representational functions it serves and the contexts in which it serves those functions. So, MT represents motion in a particular range of contexts, but also represents other types of information in different contexts—including absolute depth in both stationary and moving stimuli, and fine depth in contexts involving tilt and slant, as defined by either relative disparity or relative velocity.

When I say that a conjunction is “open,” what I mean is that we shouldn’t take the functional description as complete. We should see it as open to amendment as we study new contexts. This openness is vital; it is an induction on the fact that the functional description of MT has changed as new contexts have been explored. But it also leads us precisely into what bothers anti-contextualists (Rathkopf, 2013). The worry is that open descriptions lack the theoretical strength that supports good explanations. I argue that this worry is mistaken.

First, note that contextualist descriptions can still functionally decompose brain areas. The key to this is the indexing of functions to contexts. Compare MT to V4. While V4 also represents “motion” construed broadly (in “kinetic” or moving edges), color, and fine depth, the contexts in which V4 does so differ from MT. For instance, V4 represents color constancies which are not present in MT responses. V4’s specific combination of sensitivities to fine depth and curvature allows it to represent protuberances—curves in objects that extend towards the perceiver—which MT cannot represent. So, the types of information that these areas represent, along with the contexts in which they represent them, tease apart their functions.

Indexing to contexts also points the way to solving the problem of generalization, so long as we appropriately modulate our expectations. For instance, on contextualism it is still a powerful generalization that MT represents motion. This is substantiated by the wide range of contexts in which it represents motion—including moving dots, moving bars, and color-segmented patterns. It’s just that representing motion is not a universal generalization about MT’s function; it is a generalization with more limited scope. Similarly, MT represents fine depth in some contexts (tilt and slant, defined by disparity or velocity), but not in all of them (protuberances). Of course, if the function of MT is genuinely context sensitive, then universal generalizations about its function will not be possible. Hence, insisting on universal generalizations is not a strategy open to the absolutist—at least not without begging the question.

The real crux of the debate, I believe, is the notion of projectability. We want our theories not just to describe what has occurred, but to inform future hypothesizing about novel situations. Absolutists hope for a powerful form of law-like projectability, on which a successful functional description tells us for certain what an area will do in new contexts. The “open” structure of contextualism precludes this, but this doesn’t bother the contextualist. The situation might seem reminiscent of similar stalemates regarding contextualism in other areas of philosophy (see, e.g., Cappelen & Lepore, 2005; Preyer & Peter, 2005).

There are two ways I have sought to break the stalemate. First is to define a notion of projectability that informs scientific practice, but is compatible with contextualism. Second is to show that even very general absolutist descriptions fail to deliver on the supposed explanatory advantages of absolutism. The key to a contextualist notion of projectability, in my view, is to look for a form of projectability that structures investigation, rather than giving lawful predictions. The basic idea is this: given a new context, the null hypothesis for an area’s function in that context should be that it performs its previously known function (or one of its known functions). I call this role a minimal hypothesis, and the idea is that currently known functional properties structure hypothesizing and investigation in novel contexts, by providing three options: (i) either the area does not function at all in the novel context (perhaps MT does not make any functional contribution to, say, processing emotional valence); (ii) it functions in one of its already known ways, in which case another context gets indexed to, and generalizes, an already known conjunct, or (iii) it performs a new function in that context, forcing a new conjunct to be added to the overall description of the area (indexed to the novel context, of course). While I won’t go into details here, I argue in (Burnston, 2016a) that this kind of reasoning has shaped the progress of understanding MT function.
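The three options above can be put as a small decision procedure. This is my own toy formalization of the minimal-hypothesis idea, with placeholder labels rather than real experimental categories:

```python
# Toy sketch of "minimal hypothesis" reasoning: given a novel context,
# the area's currently known functions structure investigation, and the
# outcome falls into one of three cases. Labels are illustrative only.

def update_description(known, context, observed):
    """known: dict mapping contexts to functions (the open conjunction).
    observed: the function found in the novel context, or None if the
    area makes no functional contribution there.
    Returns the (possibly extended) description and the case that applied."""
    if observed is None:
        # (i) the area does not function at all in this context
        return known, "(i) no function in this context"
    if observed in known.values():
        # (ii) a known function generalizes to a new context
        return {**known, context: observed}, "(ii) generalizes a known conjunct"
    # (iii) a genuinely new function: add a new conjunct, indexed to the context
    return {**known, context: observed}, "(iii) new conjunct added"
```

Running this on a sequence of novel contexts grows the conjunction exactly as the post describes: each result either leaves the description alone, widens the scope of a known conjunct, or adds a new one.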

One option open to a defender of absolutism is to attempt to explain away the data suggesting contextual variation by changing the type of functional description that is supposed to generalize over all contexts (Anderson, 2010; Bergeron, 2007; Rathkopf, 2013). For instance, rather than saying that a part of the brain represents a specific type of information, maybe we should say that it performs the same type of computation, whatever information it is processing. I have called this kind of approach “computational absolutism” (Burnston, 2016b).

While computational neuroscience is an important theoretical approach, it can’t save absolutism. My argument against the view starts from an empirical premise—in modeling MT, there is not one computational description that describes everything MT does. Instead, there are a range of theoretical models that each provide good descriptions of aspects of MT function. Given this lack of universal generalization, the computational absolutist has some options. They might move towards more general levels of computational description, hoping to subsume more specific models. The problem with this is that the most general computational descriptions in neuroscience are what are called canonical computations (Chirimuuta, 2014)—descriptions that can apply to virtually all brain areas. But if this is the case, then these descriptions won’t successfully differentiate brain areas based on their function. Hence, they don’t contribute to functional localization.

On the other hand, suggesting that it is something about the way these computations are applied in particular contexts runs right into the problem of contextual variation. Producing a model that predicts what, say, MT will do in cases of pattern motion or reverse-phi phenomena simply does not predict what functional responses MT will have to depth—not, at least, without investigating and building in knowledge about its physiological responses to those stimuli. So, even if general models are helpful in generating predictions in particular instances, they don’t explain what goes on in them. If this description is right, then the supposed explanatory gain of computational absolutism is an empty promise, and contextual analysis of function is necessary. My view of the role of highly general models mirrors those offered by Cartwright (1999) and Morrison (2007) in the physical sciences.

Some caveats are in order here. I’ve only talked about one brain area, and as McCaffrey (2015) points out, different areas might be amenable to different kinds of functional analysis. Perceptual areas are important, however, because they are paradigm success cases for functional localization. If contextualism works here, it can work elsewhere, as well as for other units of analysis, such as cell populations and networks (Rentzeperis, Nikolaev, Kiper, & van Leeuwen, 2014). I share McCaffrey’s pluralist leanings, but I think that a place for contextualist functional analysis must be made if functional decomposition is to succeed. The contextualist approach is also compatible with other frameworks, such as Klein’s (2017) focus on “difference-making” in understanding the function of brain areas.

I’ll end with a teaser about my current project on these topics (Burnston, in prep). Note that, if the function of brain areas can genuinely shift with context, this is not just a theoretical problem, but a problem for the brain. Other parts of the brain must interact with MT differently depending on whether it is currently representing motion, coarse depth, fine depth, or some combination. If this is the case, we can expect there to be mechanisms in the brain that mediate these shifting functions. Unsurprisingly, I am not the first to note this problem. Neuroscientists have begun to employ concepts from communication and information technology to show how physiological activity from the same brain area might be interpreted differently in different contexts, for instance by encoding distinct information in distinct dynamic properties of the signal (Akam & Kullmann, 2014). Contextualism informs the need for this kind of approach. I am currently working on explicating these frameworks and showing how they allow for functional decomposition even in dynamic and context-sensitive neural networks.


[1] The high proportion and regular organization of depth-representing cells in MT resists the temptation to try to save informational specificity by subdividing MT into smaller units, as is normally done for V1. V1 is standardly separated into distinct populations of orientation, wavelength, and displacement-selective cells, but this kind of move is not available for MT.



Akam, T., & Kullmann, D. M. (2014). Oscillatory multiplexing of population codes for selective communication in the mammalian brain. Nature Reviews Neuroscience, 15(2), 111-122.

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. The Behavioral and brain sciences, 33(4), 245-266; discussion 266-313. doi: 10.1017/S0140525X10000853

Bergeron, V. (2007). Anatomical and functional modularity in cognitive science: Shifting the focus. Philosophical Psychology, 20(2), 175-195.

Burnston, D. C. (2016a). Computational neuroscience and localized neural function. Synthese, 1-22. doi: 10.1007/s11229-016-1099-8

Burnston, D. C. (2016b). A contextualist approach to functional localization in the brain. Biology & Philosophy, 1-24. doi: 10.1007/s10539-016-9526-2

Burnston, D. C. (In preparation). Getting over atomism: Functional decomposition in complex neural systems.

Cappelen, H., & Lepore, E. (2005). Insensitive semantics: A defense of semantic minimalism and speech act pluralism: John Wiley & Sons.

Cartwright, N. (1999). The dappled world: A study of the boundaries of science: Cambridge University Press.

Chirimuuta, M. (2014). Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience. Synthese, 191(2), 127-153. doi: 10.1007/s11229-013-0369-y

Klein, C. (2012). Cognitive Ontology and Region- versus Network-Oriented Analyses. Philosophy of Science, 79(5), 952-960.

Klein, C. (2017). Brain regions as difference-makers. Philosophical Psychology, 30(1-2), 1-20.

Livingstone, M., & Hubel, D. (1988). Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240(4853), 740-749.

McCaffrey, J. B. (2015). The brain’s heterogeneous functional landscape. Philosophy of Science, 82(5), 1010-1022.

Morrison, M. (2007). Unifying scientific theories: Physical concepts and mathematical structures: Cambridge University Press.

Preyer, G., & Peter, G. (2005). Contextualism in philosophy: Knowledge, meaning, and truth: Oxford University Press.

Rathkopf, C. A. (2013). Localization and Intrinsic Function. Philosophy of Science, 80(1), 1-21.

Rentzeperis, I., Nikolaev, A. R., Kiper, D. C., & van Leeuwen, C. (2014). Distributed processing of color and form in the visual cortex. Frontiers in Psychology, 5.