iCog Blog

Conceptual short-term memory: a new tool for understanding perception, cognition, and consciousness

Henry Shevlin, Research Associate, Leverhulme Centre for the Future of Intelligence at The University of Cambridge

The notion of memory, as used in ordinary language, may seem to have little to do with perception or conscious experience. While perception informs us about the world as it is now, memory almost by definition tells us about the past. Similarly, whereas conscious experience seems like an ongoing, occurrent phenomenon, it’s natural to think of memory as being more like an inert store of information, accessible when we need it but capable of lying dormant for years at a time.

However, in contemporary cognitive science, memory is taken to include almost any psychological process that functions to store or maintain information, even if only for very brief durations (see also James, 1890). In this broader sense of the term, connections between memory, perception, and consciousness are apparent. After all, some mechanism for the short-term retention of information will be required for almost any perceptual or cognitive process, such as recognition or inference, to take place: as one group of psychologists put it, “storage, in the sense of internal representation, is a prerequisite for processing” (Halford, Phillips, & Wilson, 2001). Assuming, then, as many theorists do, that perception consists at least partly in the processing of sensory information, short-term memory is likely to have an important role to play in a scientific theory of perception and perceptual experience.

In this latter sense of memory, two major forms of short-term store have been widely discussed in relation to perception and consciousness. The first of these is the various forms of sensory memory, and in particular iconic memory. Iconic memory was first described by George Sperling, who in 1960 demonstrated that large amounts of visually presented information were retained for brief intervals, far more than subjects were able to actually utilize for behaviour during the short window in which they were available (Figure 1). This phenomenon, dubbed partial report superiority, was brought to the attention of philosophers of mind via the work of Fred Dretske (1981) and Ned Block (1995, 2007). Dretske suggested that the rich but incompletely accessible nature of information presented in Sperling’s paradigm was a marker of perceptual rather than cognitive processes. Block similarly argued that sensory memory might be closely tied to perception, and further, suggested that such sensory forms of memory could serve as the basis for rich phenomenal consciousness that ‘overflowed’ the capacity for cognitive access.

A second form of short-term memory that has been widely discussed by both psychologists and philosophers is working memory. Very roughly, working memory is a short-term informational store that is more robust than sensory memory but also more limited in capacity. Unlike information in sensory memory, which must be cognitively accessed in order to be deployed for voluntary action, information in working memory is immediately poised for use in such behaviour, and is closely linked to notions such as cognition and cognitive access. For reasons such as these, Dretske seemed inclined to treat this kind of capacity-limited process as closely tied or even identical to thought, a suggestion followed by Block.[1] Psychologists such as Nelson Cowan (2001: 91) and Alan Baddeley (2003: 836) take encoding in working memory to be a criterion of consciousness, while global workspace theorists such as Stanislas Dehaene (2014: 63) have regarded working memory as intimately connected – if not identical – with global broadcast.[2]

The foregoing summary is oversimplified, but hopefully serves to motivate the claim that scientific work on short-term memory mechanisms may have important roles to play in understanding both the relation between perception and cognition and the nature of conscious experience. With this idea in mind, I’ll now discuss some recent evidence for a third important short-term memory mechanism, namely Molly Potter’s proposed Conceptual Short-Term Memory (CSTM). This is a form of short-term memory that encodes not merely the sensory properties of objects (like sensory memory), but also higher-level semantic information such as categorical identity. Unlike sensory memory, it seems somewhat resistant to interference from newly presented sensory information: whereas iconic memory can be effaced by the presentation of new visual information, CSTM seems relatively robust. In these respects, it is similar to working memory. Unlike working memory, however, it seems to have both a high capacity and a brief duration; information in CSTM that is not rapidly accessed by working memory is lost after a second or two (for a more detailed discussion, see Potter, 2012).

Evidence for CSTM comes from a range of paradigms, only two of which I discuss here (interested readers may wish to consult Potter, Staub, & O’Connor, 2004; Grill-Spector & Kanwisher, 2005; and Luck, Vogel, & Shapiro, 1996). The first is a particularly impressive 2014 experiment examining subjects’ ability to identify the presence of a given semantic target (such as “wedding” or “picnic”) in a series of rapidly presented images (see Figure 2).

A number of features of this experiment are worth emphasizing. First, subjects in some trials were cued to identify the presence of a target only after presentation of the images, suggesting that their performance did indeed rely on memory rather than merely, for example, effective search strategies. Second, a relatively large number of images were displayed in quick succession, either 6 or 12, in both cases more than the normal capacity of working memory. Subjects’ performance in the 12-item trials was not drastically worse than in the 6-item trials, suggesting that they were not relying on normal capacity-limited working memory alone. Third, because the images were displayed one after another in the same location, it seems unlikely that subjects were relying on sensory memory alone; as noted earlier, sensory memory is vulnerable to overwriting effects. Finally, the fact that subjects were able to identify not merely the presence of certain visual features but the presence or absence of specific semantic targets suggests that they were not merely encoding low-level sensory information about the images, but also their specific categorical identities, again telling against the idea that subjects’ performance relied on sensory memory alone.

Another relevant experiment for the CSTM hypothesis is that of Belke et al. (2008). In this experiment, subjects were presented with a single array of either 4 or 8 items, and asked whether a given category of picture (such as a motorbike) was present. In some trials in which the target was absent, a semantically related distractor (such as a motorbike helmet) was present instead. The surprising result of this experiment, which involved an eye-tracking camera, was that subjects reliably fixated upon either targets or semantically related distractors with their initial eye movements, and were just as likely to do so whether the arrays contained 4 or 8 items, and even when assigned a cognitive load task beforehand (see Figure 3).

Again, these results arguably point to the existence of some further memory mechanism beyond sensory memory and working memory. If subjects were relying on working memory to direct their eye movements, one would expect those movements to be subject to interference from the cognitive load. The hypothesis that subjects were relying on exclusively sensory mechanisms, meanwhile, runs into the problem that such mechanisms do not seem to be sensitive to high-level semantic properties of stimuli, such as their specific category identity; yet in this experiment, subjects’ eye movements were sensitive to just such semantic properties of the items in the array.[3]

Interpretation of experiments such as these is a tricky business, of course (for a more thorough discussion, see Shevlin 2017). However, let us proceed on the assumption that the CSTM hypothesis is at least worth taking seriously, and that there may be some high-capacity semantic buffer in addition to more widely accepted mechanisms such as iconic memory and working memory. What relevance might this have for debates in philosophy and cognitive science? I will now briefly mention three such topics. Again, I will be oversimplifying somewhat, but my goal will be to outline some areas where the CSTM hypothesis might be of interest.

The first such debate concerns the nature of the contents of perception. Do we merely see colours, shapes, and so on, or do we perceive high-level kinds such as tables, cats, and Donald Trump (Siegel, 2010)? Taking our cue from the data on CSTM, we might suggest that this question can be reframed in terms of which forms of short-term memory are genuinely perceptual. If we take there to be good grounds for confining perceptual representation to the kinds of representations in sensory memory, then we might be inclined to take an austere view of the contents of experience. By contrast, if the kind of processing involved in encoding in CSTM is taken to be a form of late-stage perception, then we might have evidence for the presence of high-level perceptual content. It might reasonably be objected that this move is merely ‘kicking the can down the road’ to questions about the perception-cognition boundary, and does not by itself resolve the debate about the contents of perception. However, more positively, this might provide a way of grounding largely phenomenological debates in the more concrete frameworks of memory research.

A second key debate where CSTM may play a role concerns the presence of top-down effects on perception. A copious amount of experimental data (dating back to early work by psychologists such as Perky, 1910, but proliferating especially in the last two decades) has been produced in support of the idea that there are indeed ‘top-down’ effects on perception, which in turn has been taken to suggest that our thoughts, beliefs, and desires can significantly affect how the world appears to us. Such claims have been forcefully challenged by the likes of Firestone and Scholl (2015), who have suggested that the relevant effects can often be explained in terms of, for example, postperceptual judgment rather than perception proper.

However, the CSTM hypothesis may again offer a compromise position. By distinguishing core perceptual processes (namely those that rely on sensory buffers such as iconic memory) from the kind of later categorical processing performed by CSTM, there may be other positions available in the interpretation of alleged cases of top-down effects on perception. For example, Firestone and Scholl claim that many such results fail to properly distinguish perception from judgment, suggesting that, in many cases, experimentalists’ results can be interpreted purely in terms of strictly cognitive effects rather than as involving effects on perceptual experience. However, if CSTM is a distinct psychological process operative between core perceptual processes and later central cognitive processes, then appeals to things such as ‘perceptual judgments’ may be better founded than Firestone and Scholl seem to think. This would allow us to claim that at least some putative cases of top-down effects go beyond mere postperceptual judgments while also respecting the hypothesis that early vision is encapsulated (see Pylyshyn, 1999).

A final debate in which CSTM may be of interest is the question of whether perceptual experience is richer than (or ‘overflows’) what is cognitively accessed. As noted earlier, Ned Block has argued that information in sensory forms of memory may be conscious even if it is not accessed – or even accessible – to working memory (Block, 2007). This would explain phenomena such as the apparent ‘richness’ of experience; if we imagine standing in Times Square, surrounded by chaos and noise, it is phenomenologically tempting to think we can only focus on and access a tiny fraction of our ongoing experiences at any one moment. A common challenge to this kind of claim is that it threatens to divorce consciousness from personal-level cognitive processing, leaving us open to extreme possibilities such as the ‘panpsychic disaster’ of perpetually inaccessible conscious experience in very early processing areas such as the LGN (Prinz, 2007). Again, CSTM may offer a compromise position. As noted earlier, the capacity of CSTM does indeed seem to overflow the sparse resources of working memory. However, it also seems to rely on personal-level processing, such as an individual’s store of learned categories. Thus one new position, for example, might claim that information must at least reach the stage of CSTM to be conscious, thus allowing that perceptual experience may indeed overflow working memory while also ruling it out in early sensory areas.

These are all bold suggestions in need of extensive clarification and argument, but it is my hope that I have at least demonstrated to the reader how CSTM may be a hypothesis of interest not merely to psychologists of memory, but also those interested in broader issues of mental architecture and consciousness. And while I should also stress that CSTM remains a working hypothesis in the psychology of memory, it is one that I think is worth exploring on grounds of both scientific and philosophical interest.

 

REFERENCES:

Baddeley, A. D. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829–839.

Belke, E., Humphreys, G., Watson, D., Meyer, A., & Telling, A. (2008). Top-down effects of semantic knowledge in visual search are modulated by cognitive but not perceptual load. Perception & Psychophysics, 70(8), 1444–1458.

Bergström, F., & Eriksson, J. (2014). Maintenance of non-consciously presented information engages the prefrontal cortex. Frontiers in Human Neuroscience, 8, 938.

Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–247.

Block, N. (2007). Consciousness, accessibility, and the mesh between psychology and neuroscience. Behavioral and Brain Sciences, 30(5–6), 481–499.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114.

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking Press.

Dretske, F. (1981). Knowledge and the Flow of Information. MIT Press.

Firestone, C., & Scholl, B. J. (2015). Cognition does not affect perception: Evaluating the evidence for ‘top-down’ effects. Behavioral and Brain Sciences, 1–77.

Grill-Spector, K., & Kanwisher, N. (2005). Visual Recognition. Psychological Science, 16(2), 152-160.

Halford, G. S., Phillips, S., & Wilson, W. H. (2001). Processing capacity limits are not explained by storage limits. Behavioral and Brain Sciences 24 (1), 123-124.

James, W. (1890). The Principles of Psychology. Dover Publications.

Luck, S. J., Vogel, E. K., & Shapiro, K. L. (1996). Word meanings can be accessed but not reported during the attentional blink. Nature, 383(6601), 616-618.

Ma, W. J., Husain, M., & Bays, P. M. (2014). Changing concepts of working memory. Nature Neuroscience, 17(3), 347-356.

Potter, M. C. (2012). Conceptual short-term memory in perception and thought. Frontiers in Psychology, 3, 113.

Potter, M. C., Staub, A., & O’Connor, D. H. (2004). Pictorial and conceptual representation of glimpsed pictures. Journal of Experimental Psychology: Human Perception and Performance, 30, 478-489.

Prinz, J. (2007). Accessed, accessible, and inaccessible: Where to draw the phenomenal line. Behavioral and Brain Sciences, 30(5–6).

Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3).

Shevlin, H. (2017). Conceptual short-term memory: A missing part of the mind? Journal of Consciousness Studies, 24(7–8).

Siegel, S. (2010). The Contents of Visual Experience. Oxford University Press.

Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs: General and Applied, 74, 1–29.

 

[1] Note that Dretske does not use the term working memory in this context, but clearly has some such process in mind, as made clear by his reference to capacity-limited mechanisms for extracting information.

[2] A complicating factor in discussion of working memory comes from the recent emergence of variable resource models of working memory (Ma et al., 2014) and the discovery that some forms of working memory may be able to operate unconsciously (see, e.g., Bergström & Eriksson, 2014).

[3] Given that the arrays remained visible to subjects throughout the experiment, one might wonder why this experiment has relevance for our understanding of memory. However, as noted earlier, I take it that any short-term processing of information presumes some kind of underlying temporary encoding mechanism.

Functional Localization—Complicated and Context-Sensitive, but Still Possible

Dan Burnston—Assistant Professor, Philosophy Department, Tulane University, Member Faculty in the Tulane Brain Institute

The question of whether functions are localizable to distinct parts of the brain, aside from its obvious importance to neuroscience, bears on a wide range of philosophical issues—reductionism and mechanistic explanation in philosophy of science; cognitive ontology and mental representation in philosophy of mind, among many others. But philosophical interest in the question has only recently begun to pick up (Bergeron, 2007; Klein, 2012; McCaffrey, 2015; Rathkopf, 2013).

I am a “contextualist” about localization: I think that functions are localizable to distinct parts of the brain, and that different parts of the brain can be differentiated from each other on the basis of their functions (Burnston, 2016a, 2016b). However, I also think that what a particular part of the brain does depends on behavioral and environmental context. That is, a given part of the brain might perform different functions depending on what else is happening in the organism’s internal or external environment.

Embracing contextualism, as it turns out, involves questioning some deeply held assumptions within neuroscience, and connects the question of localization with other debates in philosophy. In neuroscience, localization is generally construed in what I call absolutist terms. Absolutism is a form of atomism—it suggests that localization can be successful only if 1-1 mappings between brain areas and functions can be found. Since genuine multifunctionality is antithetical to atomist assumptions, it has historically not been a closely analyzed concept in systems or cognitive neuroscience.

In philosophy, contextualism takes us into questions about what constitutes good explanation—in this case, functional explanation. Debates about contextualism in other areas of philosophy, such as semantics and epistemology (Preyer & Peter, 2005), usually shape up as follows. Contextualists are impressed by data suggesting contextual variation in the phenomenon of interest (usually the truth values of statements or of knowledge attributions). In response, anti-contextualists worry that there are negative epistemic consequences to embracing this variation. The resulting explanations will not, on their view, be sufficiently powerful or systematic (Cappelen & Lepore, 2005). We end up with explanations that do not generalize beyond individual cases. Hence, according to anti-contextualists, we should be motivated to come up with theories that deny or explain away the data that seemingly support contextual variation.

In order to argue for contextualism in the neural case, then, one must first establish the data that suggests contextual variation, then articulate a variety of contextualism that (i) succeeds at distinguishing brain areas in terms of their distinct functions, and (ii) describes genuine generalizations.

Usually, in systems neuroscience, the goal is to correlate physiological responses in particular brain areas with particular types of information in the world, supporting the claim that the responses represent that information. I have pursued a detailed case study of perceptual area MT (also known as “V5” or the “middle temporal” area). The textbook description of MT is that it represents motion—it has specific responses to specific patterns of motion, and variations amongst its cellular responses represent different directions and velocities. Hence, MT has the univocal function of representing motion: an absolutist description.

However, MT research in the last 20 years has uncovered data which strongly suggests that MT is not just a motion detector. I will only list some of the relevant data here, which I discuss exhaustively in other places. Let’s consider a perceptual “context” as a combination of perceptual features—including shape/orientation, depth, color, luminance/brightness, and motion. On the traditional hierarchy, each of these features has its own area dedicated to representing it. Contextualism, alternatively, starts from the assumption that different combinations of these features might result in a given area representing different information.

  • Despite the traditional view that MT is “color blind” (Livingstone & Hubel, 1988), MT in fact responds to the identity of colors when color is useful in disambiguating a motion stimulus. Now in this case, MT still arguably represents motion, but it does use color as a contextual cue for doing so.
  • Over 93% of MT cells represent coarse depth (the rough distance of an object away from the perceiver). Their tuning for depth is independent of their tuning for motion, and many cells represent depth even in stationary stimuli. These depth signals are predictive of psychophysical results.
  • A majority of MT cells also have specific response properties for the fine-depth features of tilt and slant (depth signals resulting from the 3-D shape and orientation of objects), and these can be cued by a variety of distinct features, including binocular disparity and relative velocity.

How do these results support contextualism? Consider a particular physiological response to a stimulus in MT. If the data is correct, then this signal might represent motion, or it might represent depth—and indeed, either coarse or fine depth—depending on the context. Or, it might represent a combination of those influences.[1]

The contextualism I advocate focuses on the type of descriptions we should invoke in theorizing about the functions of brain areas. First, our descriptions should be conjunctive: the function of an area should be described as a conjunction of the different representational functions it serves and the contexts in which it serves those functions. So, MT represents motion in a particular range of contexts, but also represents other types of information in different contexts—including absolute depth in both stationary and moving stimuli, and fine depth in contexts involving tilt and slant, as defined by either relative disparity or relative velocity. Second, this conjunction should be ‘open.’

When I say that a conjunction is ‘open,’ what I mean is that we shouldn’t take the functional description as complete. We should see it as open to amendment as we study new contexts. This openness is vital—it is an induction on the fact that the functional description of MT has changed as new contexts have been explored—but it also leads us precisely into what bothers anti-contextualists (Rathkopf, 2013). The worry is that open descriptions do not have the theoretical strength that supports good explanations. I argue that this worry is mistaken.

First, note that contextualist descriptions can still functionally decompose brain areas. The key to this is the indexing of functions to contexts. Compare MT to V4. While V4 also represents “motion” construed broadly (in “kinetic” or moving edges), color, and fine depth, the contexts in which V4 does so differ from MT. For instance, V4 represents color constancies which are not present in MT responses. V4’s specific combination of sensitivities to fine depth and curvature allows it to represent protuberances—curves in objects that extend towards the perceiver—which MT cannot represent. So, the types of information that these areas represent, along with the contexts in which they represent them, tease apart their functions.

Indexing to contexts also points the way to solving the problem of generalization, so long as we appropriately modulate our expectations. For instance, on contextualism it is still a powerful generalization that MT represents motion. This is substantiated by the wide range of contexts in which it represents motion—including moving dots, moving bars, and color-segmented patterns. It’s just that representing motion is not a universal generalization about its function. It is a generalization with more limited scope. Similarly, MT represents fine depth in some contexts (tilt and slant, defined by disparity or velocity), but not in all of them (protuberances). Of course, if the function of MT is genuinely context sensitive, then universal generalizations about its function will not be possible. Hence, insisting on universal generalizations is not an open strategy for an absolutist—at least not without question begging.

The real crux of the debate, I believe, is about the notion of projectability. We want our theories not just to describe what has occurred, but to inform future hypothesizing about novel situations. Absolutists hope for a powerful form of law-like projectability, on which a successful functional description tells us for certain what that area will do in new contexts. The “open” structure of contextualism precludes this, but this doesn’t bother the contextualist. This situation might seem reminiscent of similar stalemates regarding contextualism in other areas of philosophy.

There are two ways I have sought to break the stalemate. First is to define a notion of projectability that informs scientific practice but is compatible with contextualism. Second is to show that even very general absolutist descriptions fail to deliver on the supposed explanatory advantages of absolutism. The key to a contextualist notion of projectability, in my view, is to look for a form of projectability that structures investigation, rather than giving lawful predictions. The basic idea is this: given a new context, the null hypothesis for an area’s function in that context should be that it performs its previously known function (or one of its known functions). I call this a minimal hypothesis, and the idea is that currently known functional properties structure hypothesizing and investigation in novel contexts by providing three options: (i) the area does not function at all in the novel context (perhaps MT makes no functional contribution to, say, processing emotional valence); (ii) it functions in one of its already known ways, in which case another context gets indexed to, and generalizes, an already known conjunct; or (iii) it performs a new function in that context, forcing a new conjunct to be added to the overall description of the area (indexed to the novel context, of course). While I won’t go into details here, I argue in Burnston (2016a) that this kind of reasoning has shaped the progress of understanding MT function.

One option open to a defender of absolutism is to attempt to explain away the data suggesting contextual variation by changing the type of functional description that is supposed to generalize over all contexts (Anderson, 2010; Bergeron, 2007; Rathkopf, 2013). For instance, rather than saying that a part of the brain represents a specific type of information, maybe we should say that it performs the same type of computation, whatever information it is processing. I have called this kind of approach “computational absolutism” (CA) (Burnston, 2016b).

While computational neuroscience is an important theoretical approach, it can’t save absolutism. My argument against the view starts from an empirical premise—in modeling MT, there is not one computational description that describes everything MT does. Instead, there are a range of theoretical models that each provide good descriptions of aspects of MT function. Given this lack of universal generalization, the computational absolutist has some options. They might move towards more general levels of computational description, hoping to subsume more specific models. The problem with this is that the most general computational descriptions in neuroscience are what are called canonical computations (Chirimuuta, 2014)—descriptions that can apply to virtually all brain areas. But if this is the case, then these descriptions won’t successfully differentiate brain areas based on their function. Hence, they don’t contribute to functional localization.

On the other hand, suggesting that it is something about the way these computations are applied in particular contexts runs right into the problem of contextual variation. Producing a model that predicts what, say, MT will do in cases of pattern motion or reverse-phi phenomena simply does not predict what functional responses MT will have to depth—not, at least, without investigating and building in knowledge about its physiological responses to those stimuli. So, even if general models are helpful in generating predictions in particular instances, they don’t explain what goes on in them. If this description is right, then the supposed explanatory gain of CA is an empty promise, and contextual analysis of function is necessary. My view of the role of highly general models mirrors those offered by Cartwright (1999) and Morrison (2007) in the physical sciences.

Some caveats are in order here. I’ve only talked about one brain area, and as McCaffrey (2015) points out, different areas might be amenable to different kinds of functional analysis. Perceptual areas are important, however, because they are paradigm success cases for functional localization. If contextualism works here, it can work elsewhere, as well as for other units of analysis, such as cell populations and networks (Rentzeperis, Nikolaev, Kiper, & van Leeuwen, 2014). I share McCaffrey’s pluralist leanings, but I think that a place for contextualist functional analysis must be made if functional decomposition is to succeed. The contextualist approach is also compatible with other frameworks, such as Klein’s (2017) focus on “difference-making” in understanding the function of brain areas.

I’ll end with a teaser about my current project on these topics (Burnston, in prep). Note that, if the function of brain areas can genuinely shift with context, this is not just a theoretical problem, but a problem for the brain. Other parts of the brain must interact with MT differently depending on whether it is currently representing motion, coarse depth, fine depth, or some combination. If this is the case, we can expect there to be mechanisms in the brain that mediate these shifting functions. Unsurprisingly, I am not the first to note this problem. Neuroscientists have begun to employ concepts from communication and information technology to show how physiological activity from the same brain area might be interpreted differently in different contexts, for instance by encoding distinct information in distinct dynamic properties of the signal (Akam & Kullmann, 2014). Contextualism informs the need for this kind of approach. I am currently working on explicating these frameworks and showing how they allow for functional decomposition even in dynamic and context-sensitive neural networks.

 

[1] The high proportion and regular organization of depth-representing cells in MT tells against trying to save informational specificity by subdividing MT into smaller units, as is normally done for V1. V1 is standardly separated into distinct populations of orientation-, wavelength-, and displacement-selective cells, but this kind of move is not available for MT.

 

REFERENCES

Akam, T., & Kullmann, D. M. (2014). Oscillatory multiplexing of population codes for selective communication in the mammalian brain. Nature Reviews Neuroscience, 15(2), 111-122.

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. The Behavioral and brain sciences, 33(4), 245-266; discussion 266-313. doi: 10.1017/S0140525X10000853

Bergeron, V. (2007). Anatomical and functional modularity in cognitive science: Shifting the focus. Philosophical Psychology, 20(2), 175-195.

Burnston, D. C. (2016a). Computational neuroscience and localized neural function. Synthese, 1-22. doi: 10.1007/s11229-016-1099-8

Burnston, D. C. (2016b). A contextualist approach to functional localization in the brain. Biology & Philosophy, 1-24. doi: 10.1007/s10539-016-9526-2

Burnston, D. C. (In preparation). Getting over atomism: Functional decomposition in complex neural systems.

Cappelen, H., & Lepore, E. (2005). Insensitive semantics: A defense of semantic minimalism and speech act pluralism: John Wiley & Sons.

Cartwright, N. (1999). The dappled world: A study of the boundaries of science: Cambridge University Press.

Chirimuuta, M. (2014). Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience. Synthese, 191(2), 127-153. doi: 10.1007/s11229-013-0369-y

Klein, C. (2012). Cognitive Ontology and Region- versus Network-Oriented Analyses. Philosophy of Science, 79(5), 952-960.

Klein, C. (2017). Brain regions as difference-makers. Philosophical Psychology, 30(1-2), 1-20.

Livingstone, M., & Hubel, D. (1988). Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240(4853), 740-749.

McCaffrey, J. B. (2015). The brain’s heterogeneous functional landscape. Philosophy of Science, 82(5), 1010-1022.

Morrison, M. (2007). Unifying scientific theories: Physical concepts and mathematical structures: Cambridge University Press.

Preyer, G., & Peter, G. (2005). Contextualism in philosophy: Knowledge, meaning, and truth: Oxford University Press.

Rathkopf, C. A. (2013). Localization and Intrinsic Function. Philosophy of Science, 80(1), 1-21.

Rentzeperis, I., Nikolaev, A. R., Kiper, D. C., & van Leeuwen, C. (2014). Distributed processing of color and form in the visual cortex. Frontiers in Psychology, 5.

A Deflationary Approach to the Cognitive Penetration Debate

Dan Burnston—Assistant Professor, Philosophy Department, Tulane University, Member Faculty in the Tulane Brain Institute

I construe the debate about cognitive penetration (CP) in the following way: are there causal relations between cognition and perception, such that the processing of the latter is systematically sensitive to the content of the former? Framing the debate in this way imparts some pragmatic commitments. We need to make clear what distinguishes perception from cognition, and what resources each brings to the table. And we need to clarify what kind of causal relationship exists, and whether it is strong enough to be considered “systematic.”

I think that current debates about cognitive penetration have failed to be clear enough on these vital pragmatic considerations, and have become muddled as a result. My view is that once we understand perception and cognition aright, we should recognize as an empirical fact that there are causal relationships between them—however, these relations are general, diffuse, and probabilistic, rather than specific, targeted, and determinate. Many supporters of CP certainly seem to have the latter kind of relationship in mind, and it is not clear that the former kind supports the consequences for epistemology and cognitive architecture that these supporters suppose. My primary goal, then, rather than denying cognitive penetration per se, is to de-fuse it (Burnston, 2016, 2017a, in prep).

The view of perception that, I believe, informs most debates about CP is that perception consists in a set of strictly bottom-up, mutually encapsulated feature detectors, perhaps along with some basic mechanisms for binding these features into distinct “proto” objects (Clark, 2004). Anything categorical, anything that involves inter-featural (to say nothing of intermodal) association, anything that involves top-down influence or assumptions about the nature of the world, and anything that is learned or involves memory, must strictly be due to cognition.

To those of this theoretical persuasion, evidence for effects of some subset of these types in perception is prima facie evidence for CP.[1] Arguments in favor of CP move from the supposed presence of these effects, along with arguments that they are not due to either pre-perceptual attentional shifts or post-perceptual judgments, to the conclusion that CP occurs.

On reflection, however, this is a somewhat odd, or at least non-obvious, move. We start from the presupposition that perception cannot involve X. Then we observe evidence that perception does in fact involve X. In response, instead of modifying our view of perception, we insist that some other faculty, such as cognition, must intervene and do for perception what it cannot do on its own. My arguments in this debate are meant to undermine this kind of intuition by showing that, given a better understanding of perception, positing CP is not only unnecessary but also (in its stronger forms, anyway) simply unlikely.

Consider the following example, the Cornsweet illusion (also called the Craik-O’Brien-Cornsweet illusion).

Figure 1. The Cornsweet illusion.

In this kind of stimulus, subjects almost universally perceive the patch on the left as darker than the patch on the right, despite the fact that they have the exact same luminance, aside from the dark-to-light gradient on the left of the center line (the “Cornsweet edge”) and the light-to-dark gradient on the right. The standard view of the illusion in perceptual science is that perception assumes that the object is extended towards the perceiver in depth, with light coming from the right. If this were true, then one would expect the patch on the left to be darker: such effects are the result of “an extraordinarily powerful strategy of vision” (Purves, Shimpi, & Lotto, 1999, p. 8549).[2]

Why construe the strategy as visual? There are a number of related considerations. First, the phenomenon involves fine-grained associations between particular features (luminance, discontinuity, and contrast, in particular configurations) that vary systematically and continuously with the amount of evidence for the interpretation. If one increases the depth interpretation by foreshortening or “bowing” the figure, the effect is enhanced, and with further modulation one can get quite pronounced effects. It is unclear, at best, when we would have come by such fine-grained beliefs about these stimuli. Moreover, the effects are mandatory, and operate insensitively to changes in our occurrent beliefs. Fodor is (still) right, in my view, that this kind of mandatoriness supports a perceptual reading.

According to Jonathan Cohen and me (Burnston & Cohen, 2012, 2015), current perceptual science reveals effects like this to be the norm, at all levels of perception. If this “integrative” view of perception is true, then embodying assumptions in complex associations is no evidence for CP—in fact it is part-and-parcel of what perception does.

What about categorical perception? Consider the following example from Gureckis and Goldstone (2008), of what is commonly referred to as a morphspace.

Figure 2. Categories for facial perception.

According to current views (Gauthier & Tarr, 2016; Goldstone & Hendrickson, 2010), categorical perception involves higher-order associations between correlated low-level features. So, recognizing a particular category of faces (for instance, an individual’s face, a gender, or a race) involves being able to notice correlations between a number of low-level facial features such as lightness, nose or eye shape, etc., as well as their spatial configurations (e.g., the distance between the eyes or between the nose and the upper lip). A wide range of perceptual categories have been shown to operate similarly.

Interestingly, forming a category can morph these spaces, grouping exemplars together along the relevant dimensions. In Gureckis and Goldstone’s example, once subjects learn to discriminate A from B faces (defined by the arbitrary center line), novel examples of A faces will be judged to be more similar to each other along diagnostic dimension A than they were prior to learning. Despite these effects being categorical, I suggest that they are strongly analogous to the cases above—they involve featural associations that are fine-grained (a dimension is “morphed” a particular amount during the course of learning) and mandatory (it is hard not to see, e.g., your brother’s face as your brother). Moreover, subjects are often simply bad at describing their perceptual categories. In studies such as Gureckis and Goldstone’s, subjects have trouble saying much about the dimensional associations that inform their percepts. As such, and given the resources of the integrative view, a way is opened to seeing these categorical effects as occurring within perception.[3]
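The “morphing” of a similarity space can be given a toy illustration. The sketch below is my own, not a model from Gureckis and Goldstone’s paper: the faces, numbers, and weighting scheme are invented, and down-weighting a non-diagnostic dimension is just one standard way of modelling how learning stretches the diagnostic dimension.

```python
# Toy model of category learning as "morphing" a similarity space.
# All numbers and names are invented for illustration.

def distance(face_a, face_b, weights):
    """Weighted Euclidean distance between two faces in a 2-D morphspace.
    Each face is a (dim_A, dim_B) pair; the weights stretch or shrink
    each dimension's contribution to perceived dissimilarity."""
    return sum(w * (x - y) ** 2
               for w, x, y in zip(weights, face_a, face_b)) ** 0.5

# Two novel category-A faces that differ mainly on the non-diagnostic dimension.
face1, face2 = (0.2, 0.1), (0.3, 0.8)

# Before learning: both dimensions weighted equally.
before = distance(face1, face2, weights=(1.0, 1.0))

# After learning the A/B discrimination: the non-diagnostic dimension is
# down-weighted (equivalently, the diagnostic dimension is stretched).
after = distance(face1, face2, weights=(1.0, 0.2))

print(f"before: {before:.3f}, after: {after:.3f}")
assert after < before  # within-category exemplars now judged more similar
```

The point of the sketch is only that a continuous, quantitative change to the space, rather than any discrete belief, is what does the work.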

If being associative, assumption-involving, or categorical doesn’t distinguish a perceptual from a cognitive representation, what does? While there are issues cashing out the distinction in detail, I suggest that the best way to mark the perception/cognition distinction is in terms of representational form. Cognitive representations are discrete and language-like, while perceptual representations represent structural dimensions of their referents—these might include shape dimensions (tilt, slant, orientation, curvature, etc.), the dimensions that define the phenomenal color space, or higher-order dimensions such as the ones in the face case above. The form distinction captures the kinds of considerations I’ve advanced here, as well as being compatible with a wide range of related ways of drawing the distinction in philosophy and cognitive science.

With these distinctions in place, we can consider the kinds of cases that proponents of CP take as evidence. Take Macpherson’s (2012) example: Delk and Fillenbaum’s studies purporting to show that “heart” shapes are perceived as a more saturated red than non-heart shapes. Let’s put aside for a moment the prevalent methodological critiques of these kinds of studies (Firestone & Scholl, 2016). Even so, there is no reason to read the effect as one of cognitive penetration. The belief “hearts are red,” according to the form distinction, simply does not represent the structural properties of the color space, and thus has no resources with which to inform perception to modify itself in any particular way. Of course, one might posit a more specific belief—say, that this particular heart is a particular shade of red—but this belief would have to be based on perceptual evidence about the stimulus. If perception couldn’t represent this stimulus as this shade on its own, we wouldn’t come by the belief. Moreover, on the integrative view this is the kind of thing perception does anyway. Hence, there is no reason to see the percept as the result of cognitive intervention.

In categorical contexts, one strong motivation for cognitive penetration is the idea that perceptual categories are learned, and that this learning is often informed by prior beliefs and instructions (Churchland, 1988; Siegel, 2013; Stokes, 2014). There are problems with these views, however, both empirical and conceptual. The empirical problem is that learning can occur without any cognitive influence whatsoever. Perceivers can become attuned to diagnostic dimensions for entirely novel categories simply by viewing exemplars (Folstein, Gauthier, & Palmeri, 2010). Here, subjects have no prior beliefs or instructions about how to perceive the stimulus, but perceptual learning occurs anyway. Moreover, even when beliefs are employed in learning a category, it is often obvious that the belief does not encode any content useful for informing the specific percept. In Gureckis and Goldstone’s case above, subjects were shown exemplar faces and told “this is an A” or “this is a B”. But this indexical belief does not describe anything about the category they actually learn.

One might expect that more detailed instructions or prior beliefs can inform more detailed categories—for instance Siegel’s suggestion that novitiate arborists be told to look at the shape of leaves in order to distinguish (say) pines from birches. However, this runs directly into the conceptual problem. Suppose that pine leaves are pointy while birch leaves are broad. Learners already know what pointy and broad things look like. If these beliefs are all that’s required, then subjects don’t need to learn anything perceptually in order to make the discrimination. However, if the beliefs are not sufficient to make the discrimination—either because it is a very fine-grained discrimination of shape, or because pine versus birch perceptions in fact require the kind of higher-order dimensional structure discussed above—then their content does not describe what perception learns when subjects do learn to make the distinction perceptually.[4] In either case, there is a gap between the content of the belief and the content of the learned perception—a gap that is supported by studies of perceptual learning and expertise (for further discussion, see Burnston, 2017a, in prep). So, while beliefs might be important causal precursors to perceptual learning, they do not penetrate the learning process.

So, the situation is this: we have seen that, on the integrative view and the form distinction, cognition does not have the resources to determine the kinds of perceptual effects that are of interest in debates about CP. In both synchronic and diachronic cases, perception can do much of the heavy lifting itself, thus rendering CP unnecessary to explain the effects. A final advantage of this viewpoint, especially the form distinction, is that it brings particular forms of evidence to bear on the debate—particularly evidence about what happens when processing of lexical/amodal symbols does in fact interact with processing of modal ones. The details are too much to go through here, but I argue that the key to understanding the relationship between perception and cognition is to give up the notion that there are ever direct relationships between the tokening of a particular cognitive content and a specific perceptual outcome (Burnston, 2016, 2017b). Instead, I suggest that tokening a cognitive concept biases perception towards a wide range of possible outcomes. Here, rather than determinate causal relationships, we should expect highly probabilistic, highly general, and highly flexible interactions, in which cognition does not force perception to act a certain way, but can shift the baseline probability that we’ll perceive something consistent with the cognitive content. This brings priming, attentional, and modulatory effects under a single rubric, but not one on which cognition tinkers with the internal workings of specific perceptual processes to determine how they work in a given instance. I thus call it the “external effect” view of the cognition-perception interface.
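The contrast between determinate and biasing relationships can be made concrete with a toy sketch. This is my own illustration, not a formal model from the papers cited: the outcome labels, probabilities, and boost factor are all invented.

```python
# Toy sketch of the 'external effect' view: tokening a concept does not
# determine a percept; it re-weights baseline probabilities over a range
# of possible perceptual outcomes. Labels and numbers are invented.

def bias(baseline, consistent, factor=2.0):
    """Boost the probability of concept-consistent outcomes by `factor`,
    then renormalise. No outcome is forced; all remain possible."""
    weighted = {outcome: p * (factor if outcome in consistent else 1.0)
                for outcome, p in baseline.items()}
    total = sum(weighted.values())
    return {outcome: w / total for outcome, w in weighted.items()}

baseline = {"red-ish": 0.3, "orange-ish": 0.3, "pink-ish": 0.4}

# Tokening 'heart' shifts probability toward red-consistent percepts...
biased = bias(baseline, consistent={"red-ish", "pink-ish"})
print(biased)

# ...raising their likelihood without ruling anything else out.
assert biased["red-ish"] > baseline["red-ish"]
assert biased["orange-ish"] > 0
```

The design choice matters: a penetration-style model would instead map the concept directly to one determinate percept, whereas here every outcome stays live and only the distribution shifts.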

Now it is open to the defender of cognitive penetration to define this diffuse interaction as an instance of penetration—penetration is a theoretical term one may define as one likes. I think, however, that this notion is not what most cognitive penetration theorists have in mind, and it does not obviously carry any of the supposed consequences for modularity, theoretical neutrality, or the epistemic role of perception that proponents of CP assume (Burnston, 2017a; cf. Lyons, 2011). The kind of view I’ve offered captures, in the best available empirical and pragmatic way, the range of phenomena at issue, and does so very differently than standard discussions of penetration.

 

REFERENCES

Burnston, D. C. (2016). Cognitive penetration and the cognition–perception interface. Synthese, 1-24. doi:10.1007/s11229-016-1116-y

Burnston, D. C. (2017a). Is aesthetic experience evidence for cognitive penetration? New Ideas in Psychology. doi:10.1016/j.newideapsych.2017.03.012

Burnston, D. C. (2017b). Interface problems in the explanation of action. Philosophical Explorations, 20(2), 242-258. doi:10.1080/13869795.2017.1312504

Burnston, D. C. (In preparation). There is no diachronic cognitive penetration.

Burnston, D., & Cohen, J. (2012). Perception of features and perception of objects. Croatian Journal of Philosophy (36), 283-314.

Burnston, D. C., & Cohen, J. (2015). Perceptual integration, modularity, and cognitive penetration. In Cognitive Influences on Perception: Implications for Philosophy of Mind, Epistemology, and Philosophy of Action. Oxford: Oxford University Press.

Churchland, P. M. (1988). Perceptual plasticity and theoretical neutrality: A reply to Jerry Fodor. Philosophy of Science, 55(2), 167-187.

Clark, A. (2004). Feature-placing and proto-objects. Philosophical Psychology, 17(4), 443-469. doi: 10.1080/0951508042000304171

Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39, 1-77.

Fodor, J. (1984). Observation reconsidered. Philosophy of Science, 51(1), 23-43.

Folstein, J. R., Gauthier, I., & Palmeri, T. J. (2010). Mere exposure alters category learning of novel objects. Frontiers in Psychology, 1, 40.

Gauthier, I., & Tarr, M. J. (2016). Object Perception. Annual Review of Vision Science, 2(1).

Goldstone, R. L., & Hendrickson, A. T. (2010). Categorical perception. Wiley Interdisciplinary Reviews: Cognitive Science, 1(1), 69-78. doi: 10.1002/wcs.26

Gureckis, T. M., & Goldstone, R. L. (2008). The effect of the internal structure of categories on perception. Paper presented at the Proceedings of the 30th Annual Conference of the Cognitive Science Society.

Lyons, J. (2011). Circularity, reliability, and the cognitive penetrability of perception. Philosophical Issues, 21(1), 289-311.

Macpherson, F. (2012). Cognitive penetration of colour experience: rethinking the issue in light of an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24-62.

Nanay, B. (2014). Cognitive penetration and the gallery of indiscernibles. Frontiers in Psychology, 5.

Purves, D., Shimpi, A., & Lotto, R. B. (1999). An empirical explanation of the Cornsweet effect. The Journal of Neuroscience, 19(19), 8542-8551.

Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. The Behavioral and Brain Sciences, 22(3), 341-365; discussion 366-423.

Raftopoulos, A. (2009). Cognition and perception: How do psychology and neural science inform philosophy? Cambridge: MIT Press.

Rock, I. (1983). The logic of perception. Cambridge: MIT Press.

Siegel, S. (2013). The epistemic impact of the etiology of experience. Philosophical Studies, 162(3), 697-722.

Stokes, D. (2014). Cognitive penetration and the perception of art. Dialectica, 68(1), 1-34.

Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences, 10(7), 301-308.

 

[1] Different theorists stress different properties. Macpherson (2012) stresses effects being categorical and associational, Nanay (2014) and Churchland (1988) their being top-down. Raftopoulos (2009) cites the role of memory in categorical effects and Stokes (2014) and Siegel (2013) the importance of learning in such contexts.

[2] This kind of reading of intra-perceptual processing is extremely common across a range of theorists and perspectives in perceptual psychology (e.g., Pylyshyn, 1999; Rock, 1983; Yuille & Kersten, 2006).

[3] This view also rejects the attempt to make these effects cognitive by defining them as tacit beliefs. The problem with tacit beliefs is that they simply dictate that anything corresponding to a category or inference must be cognitive, which is exactly what’s under discussion here. The move thus doesn’t add anything to the debate.

[4] This requires assuming a “specificity” condition on the content of a purported penetrating belief—namely that a candidate penetrator must have the content that perception learns to represent. I argue in more detail elsewhere that giving this condition up trivializes the penetration thesis (Burnston, in prep).

Enactivism, Computation, and Autonomy

by Joe Dewhurst – Teaching Assistant at The University of Edinburgh

Enactivism has historically rejected computational characterisations of cognition, at least in its more traditional versions. This has led to the perception that enactivist approaches to cognition must be opposed to more mainstream computationalist approaches, which offer a computational characterisation of cognition. However, the conception of computation that enactivism rejects is in some senses quite old-fashioned, and it is not so clear that enactivism need be opposed to computation understood in a more modern sense. Demonstrating that there could be compatibility, or at least no necessary opposition, between enactivism and computationalism (in some sense) would open the door to a possible reconciliation or cooperation between the two approaches.

In a recently published paper (Villalobos & Dewhurst 2017), my collaborator Mario and I have focused on elucidating some of the reasons why enactivism has rejected computation, and have argued that these do not necessarily apply to more modern accounts of computation. In particular, we have demonstrated that a physically instantiated Turing machine, which we take to be a paradigmatic example of a computational system, can meet the autonomy requirements that enactivism uses to characterise cognitive systems. This demonstration goes some way towards establishing that enactivism need not be opposed to computational characterisations of cognition, although there may be other reasons for this opposition, distinct from the autonomy requirements.

The enactive concept of autonomy first appears in its modern guise in Varela, Thompson, & Rosch’s 1991 book The Embodied Mind, although it has important historical precursors in Maturana’s autopoietic theory (see his 1970, 1975, 1981; see also Maturana & Varela 1980) and cybernetic work on homeostasis (see e.g. Ashby 1956, 1960). There are three dimensions of autonomy that we consider in our analysis of computation. Self-determination requires that the behaviour of an autonomous system must be determined by that system’s own structure, and not by external instruction. Operational closure requires that the functional organisation of an autonomous system must loop back on itself, such that the system possesses no (non-arbitrary) inputs or outputs. Finally, an autonomous system must be precarious, such that the continued existence of the system depends on its own functional organisation, rather than on external factors outside of its control. In this post I will focus on demonstrating that these criteria can be applied to a physical computing system, rather than addressing why or how enactivism argues for them in the first place.

All three criteria have traditionally been used to disqualify computational systems from being autonomous systems, and hence to deny that cognition (which for enactivists requires autonomy) can be computational (see e.g. Thompson 2007: chapter 3). Here it is important to recognise that the enactivists have a particular account of computation in mind, one that they have inherited from traditional computationalists. According to this ‘semantic’ account, a physical computer is defined as a system that performs systematic transformations over content-bearing (i.e. representational) states or symbols (see e.g. Sprevak 2010). With such an account in mind, it is easy to see why the autonomy criteria might rule out computational systems. We typically think of such a system as consuming symbolic inputs, which it transforms according to programmed instructions, before producing further symbolic outputs. Already this system has failed to meet the self-determination and operational closure criteria. Furthermore, as artefactual computers are typically reliant on their creators for maintenance, etc., they also fail to meet the precariousness criterion. So, given this quite traditional understanding of computation, it is easy to see why enactivists have typically denied that computational systems can be autonomous.

Nonetheless, understood according to more recent, ‘mechanistic’ accounts of computation, there is no reason to think that the autonomy criteria must necessarily exclude computational systems. Whilst they differ in some details, all of these accounts deny that computation is inherently semantic, and instead define physical computation in terms of mechanistic structures. We will not rehearse these accounts in any detail here, but the basic idea is that physical computation can be understood in terms of mechanisms that perform systematic transformations over states that do not possess any intrinsic semantic content (see e.g. Miłkowski 2013; Fresco 2014; Piccinini 2015). With this rough framework in mind, we can return to the autonomy criteria.

Even under the mechanistic account, computation is usually understood in terms of mappings between inputs and outputs, where there is a clear sense of the beginning and end of the computational operation. A system organised in this way can be described as ‘functionally open’, meaning that its functional organisation is open to the world. A functionally closed system, on the other hand, is one whose functional organisation loops back through the world, such that the environmental impact of the system’s ‘outputs’ contributes to the ‘inputs’ that it receives.

A simple example of this distinction can be found by considering two different ways that a thermostat could be used. In the first case the sensor, which detects ambient temperature, is placed in one house, and the effector, which controls a radiator, is placed in another (see figure 1). This system is functionally open, because there is only a one-way connection between the sensor and the effector, allowing us to straightforwardly identify inputs and outputs to the system.

A more conventional way of setting up a thermostat is with both the sensor and the effector in the same house (see figure 2). In this case the apparent ‘output’ (i.e. control of the radiator) loops back round to the apparent ‘input’ (i.e. ambient temperature), forming a functionally closed system. The ambient air temperature in the house is effectively part of the system, meaning that we could just as well treat the effector as providing input and the sensor as producing output – there is no non-arbitrary beginning or end to this system.
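The two set-ups can be illustrated with a minimal simulation. This sketch is my own (the function names, temperatures, and step sizes are invented for illustration):

```python
# Minimal simulation of the two thermostat set-ups. Function names,
# temperatures, and step sizes are invented for illustration.

def closed_loop(temp, setpoint=20.0, steps=10):
    """Sensor and effector in the same house: the radiator's effect feeds
    back into the very quantity the sensor reads (functional closure)."""
    for _ in range(steps):
        heating = temp < setpoint           # the sensor's 'input'...
        temp += 0.5 if heating else -0.5    # ...is changed by the 'output'
    return temp

def open_loop(temp_house1, temp_house2, setpoint=20.0, steps=10):
    """Sensor in house 1, effector in house 2: the radiator never affects
    what the sensor reads, so input and output are cleanly separable."""
    for _ in range(steps):
        heating = temp_house1 < setpoint         # sensor reads house 1...
        temp_house2 += 0.5 if heating else -0.5  # ...but only house 2 changes
    return temp_house1, temp_house2

print(closed_loop(15.0))       # settles at the setpoint: 20.0
print(open_loop(15.0, 15.0))   # house 1 unchanged; house 2 heats regardless
```

In the closed case there is no principled point at which to say the system’s operation ‘begins’ with the sensor rather than the radiator, which is exactly the non-arbitrariness point made above.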

Whilst it is typical to treat a computing mechanism more like the first thermostat, with a clear input and output, we do not think that this perspective is essential to the mechanistic understanding of computation. There are two possible ways that we could arrange a computing mechanism. A functionally open mechanism (figure 3) reads from one tape and writes onto another, whilst a functionally closed mechanism (figure 4) reads from and writes onto the same tape, creating a closed system analogous to the thermostat with its sensor and effector in the same house. As Wells (1998) suggests, a conventional Turing machine is actually arranged in the second way, providing an illustration of a functionally closed computing mechanism. Whether or not this is true of other computational systems is a distinct question, but it is clear that at least some physically implemented computers can exhibit operational closure.
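Wells’s observation can be made concrete with a toy single-tape machine. The sketch below is my own illustration (the machine, its transition rules, and all names are invented for the example; it is not code from the paper):

```python
# Toy single-tape Turing machine (machine and rules invented for
# illustration): because it reads from and writes onto the same tape,
# each 'output' becomes a potential later 'input'.

def run(tape, rules, state="start", head=0, max_steps=100):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")            # read the tape...
        state, write, move = rules[(state, symbol)]
        cells[head] = write                      # ...and write back onto it
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A machine that flips every bit it scans, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run("0110", flip))  # prints "1001"
```

A two-tape variant that only ever read from one tape and wrote to the other would correspond to the functionally open arrangement; here the single shared tape is what closes the loop.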

The self-determination criterion requires that a system’s operations are determined by its own structure, rather than by external instructions. This criterion applies straightforwardly to at least some computing mechanisms. Whilst many computers are programmable, their basic operations are nonetheless determined by their own physical structure, such that the ‘instructions’ provided by the programmer only make sense in the context of the system itself. To another system, with a distinct physical structure, those ‘instructions’ would be meaningless. Just as the enactive automaton ‘Bittorio’ brings meaning to a meaningless sea of 1s and 0s (see Varela 1988; Varela, Thompson, & Rosch 1991: 151-5), so the structure of a computing mechanism brings meaning to the world that it encounters.

Finally, we can turn to the precariousness criterion. Whilst the computational systems that we construct are typically reliant upon us for continued maintenance and a supply of energy, and play no direct role in their own upkeep, this is a pragmatic feature of our design of those systems rather than anything essential to computation. We could easily imagine a computing mechanism designed so that it seeks out its own source of energy and maintains its own physical structure. Such a system would be precarious in just the same sense that enactivism takes living systems to be precarious. So there is no in-principle reason why a computing system should not be able to meet the precariousness criterion.

In this post I have very briefly argued that the enactivist autonomy criteria can be applied to (some) physically implemented computing mechanisms. Of course, enactivists may have other reasons for thinking that cognitive systems cannot be computational. Nonetheless, we think this analysis could be interesting for a couple of reasons. Firstly, insofar as computational neuroscience and computational psychology have been successful research programs, enactivists might be interested in adopting some aspects of computational explanation for their own analyses of cognitive systems. Secondly, we think that the enactivist characterisation of autonomous systems might help to elucidate the senses in which a computational system might be cognitive. Now that we have established the basic possibility of autonomous computational systems, we hope to develop future work along both of these lines, and invite others to do so too.

I will leave you with this short and amusing video of the autonomous robotic creations of the British cyberneticist W. Grey Walter, which I hope might serve as a source of inspiration for future cooperation between enactivism and computationalism.

 

References

  • Ashby, R. (1956). An introduction to cybernetics. London: Chapman and Hall.
  • Ashby, R. (1960). Design for a Brain. London: Chapman and Hall.
  • Fresco, N. (2014). Physical computation and cognitive science. Berlin, Heidelberg: Springer-Verlag.
  • Maturana, H. (1970). Biology of cognition. Biological Computer Laboratory, BCL Report 9, University of Illinois, Urbana.
  • Maturana, H. (1975). The organization of the living: A theory of the living organization. International Journal of Man-Machine studies, 7, 313-332.
  • Maturana, H. (1981). Autopoiesis. In M. Zeleny (Ed.), Autopoiesis: a theory of living organization (pp. 21-33). New York; Oxford: North Holland.
  • Maturana, H. and Varela, F. (1980). Autopoiesis and cognition: The realization of the living. Dordrecht, Holland: Kluwer Academic Publisher.
  • Miłkowski, M. (2013). Explaining the computational mind. Cambridge, MA: MIT Press.
  • Piccinini, G. (2015). Physical Computation. Oxford: OUP.
  • Sprevak, M. (2010). Computation, individuation, and the received view on representation. Studies in History and Philosophy of Science, 41(3), 260-70.
  • Thompson, E. (2007). Mind in Life: Biology, phenomenology, and the sciences of mind. Cambridge, MA: Harvard University Press.
  • Varela F. 1988. Structural Coupling and the Origin of Meaning in a Simple Cellular Automation. In Sercarz E. E., Celada F., Mitchison N.A., Tada T. (eds.), The Semiotics of Cellular Communication in the Immune System, pp. 151-61. New York: Springer-Verlag.
  • Varela, F., Thompson, E., and Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.
  • Villalobos, M. & Dewhurst, J. (2017). Enactive autonomy in computational systems. Synthese, doi:10.1007/s11229-017-1386-z
  • Wells, A. J. (1998). Turing’s Analysis of Computation and Theories of Cognitive Architecture. Cognitive Science, 22(3), 269-94.

 

Are olfactory objects spatial?

by Solveig Aasen – Associate Professor of Philosophy at the University of Oslo

On several recent accounts of orthonasal olfaction, olfactory experience does (in some sense) have a spatial aspect. These views open up novel ways of thinking about the spatiality of what we perceive. For while olfactory experience may not qualify as spatial in the way visual experience does, it may nevertheless be spatial in a different way. What way? And how does it differ from visual spatiality?

It is often noted that, by contrast to what we see, what we smell is neither at a distance nor at a direction from us. Unlike animals such as rats and the hammerhead shark, which have their nostrils placed far enough apart that they can smell in stereo (much like we can see and hear in stereo), we humans are not able to tell which direction a smell is coming from (except perhaps under special conditions (Radil and Wysocki 1998; Porter et al. 2005), or if we individuate olfaction so as to include the trigeminal nerve (Young et al. 2014)). Nor are we able to tell how a smell is distributed around where we are sitting (Batty 2010a p. 525; 2011, p. 166). Nevertheless, it can be argued that what we smell can be spatial in some sense. Several suggestions to this effect are on offer.

Batty (2010a; 2010b; 2011; 2014) holds that what we smell (olfactory properties, according to her) is presented as ‘here’. This is not a location like any other. It is the only location at which olfactory properties are ever presented, for olfactory experience, on Batty’s view, lacks spatial differentiation. Moreover, she emphasises that, if we are to make room for a certain kind of non-veridical olfactory experience, ‘here’ cannot be a location in our environment; it is not to be understood as ‘out there’ (Batty 2010b, pp. 20-21). This latter point contrasts with Richardson’s (2013) view. She observes that, because olfactory experience involves sniffing, it is part of the phenomenology of olfactory experience that something (odours, according to Richardson) seems to be brought into the nostrils from outside the body. Thus, the object of olfactory experience seems spatial in the sense that what we smell is coming from without, although it is not coming from any particular location. It is interesting that although Batty’s and Richardson’s claims contrast, they both seem to think that they are pointing out a spatial aspect of olfactory experiences when claiming that what we smell is, respectively, ‘here’ or coming from without.

Another view, compatible with the claim that what we smell is neither at a distance nor direction from us, is presented by Young (2016). He emphasises the fact that the molecular structure of chemical compounds determines which olfactory quality subjects experience. It is precisely this structure within an odour plume, he argues, that is the object of olfactory experience. Would an olfactory experience of the molecular structure have a spatial aspect? Young does not specify this. But since the structure of the molecule is spatial, one can at least envisage that experiencing molecular structure is, in part, to experience the spatial relations between molecules. If so, we can envisage spatiality without perspective. For, presumably, the spatial orientation the molecules have relative to each other and to the perceiver would not matter to the experience. Presumably, it would be their internal spatial structure that is experienced, regardless of their orientation relative to other things.

The claim that what we smell is neither at a direction nor distance from us can, however, be disputed. As Young (2016) notes, this claim neglects the possibility of tracking smells over time. Although the boundaries of the cloud of odours are less clear than for visual objects, the extension of the cloud in space and the changes in its intensity seem to be spatial aspects of our olfactory experiences when we move around over time. Perhaps one would object that the more fundamental type of olfactory experience is synchronic and not diachronic. The synchronic variety has certainly received the most attention in the literature. But if one is interested in an investigation of our ordinary olfactory experiences, it is not clear why diachronic experiences should be less worthy of consideration.

Perhaps one would think that an obvious spatial aspect of olfactory experience is the spatial properties of the source, i.e. the physical object from which the chemical compounds in the air originate. But there is a surprisingly widespread consensus in the literature that the source is not part of what we perceive in olfaction. Lycan’s (1996; 2014) layering view may be an exception. He claims that we smell sources by smelling odours. But, as Lycan himself notes, there is a question as to whether the ‘by’-relation is an inference relation. If it is, his claim is not necessarily substantially different from Batty’s (2014, pp. 241-243) claim that olfactory properties are locked onto source objects at the level of belief, but that sources are not perceived.

Evaluating the abovementioned ideas about olfactory spatiality is complicated by the variety of facts about olfaction that can be taken to inform an account of olfactory experience. As Stevenson and Wilson (2006) note, chemical structure has been much studied. But even though the nose has about 300 receptors ‘which allow the detection of a nearly endless combination of different odorants’ (ibid., p. 246), how relevant is the chemical structure to the question of what we can perceive, when the discriminations we as perceivers report are much less detailed? What is the relevance of facts about the workings and individuation of the olfactory system? Is it a serious flaw if our conclusions about olfactory experience contradict the phenomenology? Different contributors to the debate seem to provide or presuppose different answers to questions like these. This makes comparison complicated. Comparison aside, however, some interesting ideas about olfactory spatiality can, as briefly shown, be appreciated on their own terms.

 

 

References:

Batty, C. 2014. ‘Olfactory Objects’. In D. Stokes, M. Matthen and S. Biggs (eds.), Perception and Its Modalities. Oxford: Oxford University Press.

Batty, C. 2011. ‘Smelling Lessons’. Philosophical Studies 153: 161-174.

Batty, C. 2010a. ‘A Representationalist Account of Olfactory Experience’. Canadian Journal of Philosophy 40(4): 511-538.

Batty, C. 2010b. ‘What the Nose Doesn’t Know: Non-veridicality and Olfactory Experience’. Journal of Consciousness Studies 17: 10-27.

Lycan, W. G. 2014. ‘The Intentionality of Smell’. Frontiers in Psychology 5: 68-75.

Lycan, W. G. 1996. Consciousness and Experience. Cambridge, MA: Bradford Books/MIT Press.

Radil, T. and C. J. Wysocki. 1998. ‘Spatiotemporal masking in pure olfaction’. Olfaction and Taste 12(855): 641-644.

Richardson, L. 2013. ‘Sniffing and Smelling’. Philosophical Studies 162: 401-419.

Porter, J., Anand, T., Johnson, B. N., Kahn, R. M., and N. Sobel. 2005. ‘Brain mechanisms for extracting spatial information from smell’. Neuron 47: 581-592.

Young, B. D. 2016. ‘Smelling Matter’. Philosophical Psychology 29(4): 520-534.

Young, B. D., A. Keller and D. Rosenthal. 2014. ‘Quality-space Theory in Olfaction’. Frontiers in Psychology 5: 116-130.

Wilson, D. A. and R. J. Stevenson. 2006. Learning to Smell: Olfactory Perception from Neurobiology to Behaviour. Baltimore, MD: The Johns Hopkins University Press.

How stereotypes shape our perceptions of other minds

by Evan Westra – Ph.D. Candidate, University of Maryland

[Figure: Ambiguous Pictures Task]

McGlothlin & Killen (2006) showed groups of (predominantly white) American elementary school children from ages 6 to 10 a series of vignettes depicting children in ambiguous situations. For instance, one picture (above) showed two children by a swing set, with one on the ground frowning, and one behind the swing with a neutral expression. Two things might be going on in this picture: i) the child on the ground may have fallen off by accident (neutral scenario), or ii) the child on the ground may have been intentionally pushed by the one standing behind the swing (harmful scenario). Crucially, McGlothlin and Killen varied the race of the children depicted in the image, such that some children saw a white child standing behind the swing (left), and some saw a black child (right). Children were asked to explain what had just happened in the scenario, to predict what would happen next, and to evaluate the action that had just happened. Overwhelmingly, children were more likely to give the harmful scenario interpretation – that the child behind the swing intentionally pushed the other child – when the child behind the swing was black than when she was white. The race of the child depicted, it seems, influenced whether or not participants made an inference to harmful intentions.

This is yet another depressing example of how racial bias can warp our perceptions of others. But this study (and others like it: Sagar & Schofield 1990; Burnham & Harris 1992; Condry et al. 1985) also hints at a relationship between two forms of social cognition that are not often studied together: mindreading and stereotyping. The stereotyping component is clear enough. The mindreading component comes from the fact that race didn’t just affect kids’ attitudes towards the target – it affected what they thought was going on in the target’s mind. Although these two ways of thinking about other people – mindreading and stereotyping – both seem to play an important role in how we navigate the social world, curiously little attention has been paid to understanding the way they relate to one another. In this post, I want to explore this relationship. I’ll first briefly explain what I mean by “mindreading” and “stereotyping.” Next, I’ll discuss one existing proposal about the relationship between mindreading and stereotyping, and raise some problems for it. Then I will lay out the beginnings of a different way of cashing out this relationship.

*          *          *

First, let’s get clear on what I mean by “mindreading” and “stereotyping.”

Mindreading:

In order to achieve our goals in highly social environments, we need to be able to accurately predict what other people will do, and how they will react to us. To do this, our brains generate complex models of other people’s beliefs, desires, and intentions, which we use to predict and interpret their behavior. This capacity to represent other minds is known variously as theory of mind, mindreading, mentalizing, and folk psychology. In human beings, this ability begins to emerge very early in development. As adults, we use it constantly, in a fast, flexible, and unconscious fashion. We use it in many important social activities, including communication, social coordination, and moral judgment.

Stereotyping:

Stereotypes are ways of storing generic information about social groups (including races, genders, sexual orientations, age-groups, nationalities, professions, political affiliations, physical or mental abilities, and so on) (Amodio 2014). A particularly important aspect of stereotypes is that they often contain information about stable personality traits. Unfortunately, it is all too easy for us to think of stereotypes about how certain social groups are lazy, or greedy, or aggressive, or submissive, and so on. According to Susan Fiske and colleagues’ Stereotype Content Model (SCM), there is an underlying pattern to the way we attribute personality traits to groups (Cuddy et al. 2009; Cuddy et al. 2007; Fiske et al. 2002; Fiske 2015). Personality trait attribution, on this view, varies along two primary dimensions: warmth and competence. The warmth dimension includes traits like (dis-)honesty, (un-)trustworthiness, and (un-)friendliness. These are traits that tell you whether or not someone is liable to help you or harm you. The competence dimension contains traits like (un-)intelligence, skillfulness, persistence, laziness, clumsiness, etc. These traits tell you how effective someone is at achieving their goals.

Together, these two dimensions combine to yield four distinct clusters of traits, each of which picks out a different kind of stereotype:

[Figure: the Stereotype Content Model]

*          *          *

So what do stereotyping and mindreading have to do with one another? There are some obvious differences, of course: stereotypes are mainly about groups, while mindreading is mainly about individuals. But intuitively, it seems like knowing about somebody’s social group membership could tell you a lot about what they think: if I tell you that I am a liberal, for instance, that should tell you a lot about my beliefs, values, and social preferences – valuable information, when it comes to predicting and interpreting my behavior.

Some philosophers and psychologists, such as Kristin Andrews, Anika Fiebich and Mark Coltheart, have suggested that stereotypes and mindreading may actually be alternative strategies for predicting and interpreting behavior (Andrews 2012; Fiebich & Coltheart 2015). That is, it may be that sometimes we use stereotypes instead of mindreading to figure out what a person is going to do. According to one such proposal (Fiebich & Coltheart 2015), stereotypes allow us to predict behavior because they encode associations between social categories, situations, and behaviors. Thus, one might form a three-way association between the social category police, the situation donut shops, and the behavior eating donuts, which would lead one to predict that, when one sees a police officer in a donut shop, he or she will likely be eating a donut. A more complex version of this associationist approach would be to associate social groups with particular trait labels (as per the SCM), and thus consist in four-way associations between social categories, trait labels, situations, and behaviors (Fiebich & Coltheart 2015; Andrews 2012). Thus, one might associate the trait of generosity with leaving large tips in restaurants, and the social category of uncles with generosity, and thereby come to expect uncles to leave large tips in restaurants. One might then explain this behavior by referring to the uncle’s generosity. The key thing to notice about these accounts is that their predictions do not rely at all upon mental-state attributions. This is by design: these proposals are meant to show that we often don’t need mindreading to predict or interpret behavior.
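To make the structure of the associationist proposal vivid, here is a toy sketch (not drawn from Fiebich & Coltheart’s own formalism; the stored associations are the invented examples from the text). The point it illustrates is simply that prediction and explanation proceed by lookup over stored category–trait–situation–behavior links, with no mental-state attribution anywhere in the pipeline:

```python
# Toy associationist behavior-predictor. All data are illustrative
# examples from the post, not empirical generalizations.

# Four-way associations: social category -> trait label,
# and (trait label, situation) -> expected behavior.
CATEGORY_TRAITS = {
    "uncle": "generosity",
    "police officer": "donut-fondness",
}
TRAIT_SITUATION_BEHAVIOR = {
    ("generosity", "restaurant"): "leaving a large tip",
    ("donut-fondness", "donut shop"): "eating a donut",
}

def predict_behavior(category, situation):
    """Predict behavior by pure association lookup -- no beliefs,
    desires, or intentions are represented at any point."""
    trait = CATEGORY_TRAITS.get(category)
    return TRAIT_SITUATION_BEHAVIOR.get((trait, situation))

def explain_behavior(category):
    """'Explain' an observed behavior by citing the associated trait."""
    return CATEGORY_TRAITS.get(category)

print(predict_behavior("uncle", "restaurant"))  # leaving a large tip
print(explain_behavior("uncle"))                # generosity
```

Note that the worry raised in the next paragraph lands exactly here: the sketch simply takes a `situation` string as given, and says nothing about how a predictively useful representation of the situation gets computed in the first place.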

One problem for this sort of view comes from its invocation of “situations.” What information, one might wonder, is contained within the scope of a particular “situation”? Surely, a situation does not include everything about the state of the world at a given moment. Situations are probably meant to pick out local states of affairs. But not all the facts about a local state of affairs will be relevant to behavior prediction. The presence of mice in the kitchen of a restaurant, for instance, will not affect our predictions about the size of your uncle’s tip. It might, however, affect our predictions about the behavior of the health inspector, should one suddenly arrive. Which local facts are predictively useful will ultimately depend upon their relevance to the agent whose behavior we are predicting. But whether or not a fact is relevant to an agent will depend upon that agent’s beliefs about the local state of affairs, as well as her goals and desires. If this is how representations of predictively useful situations are computed, then the purportedly non-mentalistic proposal given above really includes a tacit appeal to mindreading. If this is not how situations are computed, then we are owed an explanation of how the non-mentalistic behavior-predictor arrives at predictively useful representations of situations that do not depend upon considerations of relevance.

*          *          *

Instead of treating mindreading and stereotypes as separate forms of behavior-prediction and interpretation, we might instead explore the ways in which stereotypes might inform mindreading. The key to this approach, I suggest, lies in the fact that stereotypes encode information about personality traits. In many ways, personality traits are like mental states: they are unobservable mental properties of individuals, and they are causally related to behavior. But they also differ in one key respect: their temporal stability. Beliefs and desires are inherently unstable: a belief that P can be changed by the observation of not-P; a desire for Q can be extinguished by the attainment of Q. Personality traits, in contrast, cannot be extinguished or abandoned based on everyday events. Rather, they tend to last throughout a person’s lifetime, and manifest themselves in many different ways across many different situations. A unique feature of personality traits, in other words, is that they are highly stable mental entities (Doris 2002). So when stereotypes ascribe traits to groups, they are ascribing a property that one could reasonably expect to remain consistent across many different situations.

The temporal properties of mental states are extremely relevant for mindreading, especially in models that employ Bayesian Predictive Coding (Kilner & Frith 2007; Koster-Hale & Saxe 2013; Hohwy & Palmer 2014; Hohwy 2013; Clark 2015). To see why, let’s start with an example:

Suppose that we believe that Laura is thirsty, and have attributed to her the goal of getting a drink (G). As goals go, this one is relatively short-term (unlike, say, the goal of getting a PhD). To achieve (G), we predict that Laura must form a number of even shorter-term sub-goals: (G1) get the juice from the fridge, and (G2) pour herself a glass of juice. But each of these requires the formation of still shorter-term sub-sub-goals: (G1a) walk over to kitchen, (G1b) open fridge door, (G1c) remove juice container, (G2a) remove cup from cupboard, (G2b) pour juice into cup. Predicting Laura’s behavior in this context thus begins with the ascription of a longer-duration mental state (G), followed by the ascription of successively shorter-term mental-state attributions (G1, G2, G1a, G1b, G1c, G2a, G2b).

As mindreaders, we can use attributions of more abstract, temporally extended mental states to make top-down inferences about more transient mental states. At each level in this action-prediction hierarchy, we use higher-level goal-attributions to constrain the space of possible sub-goals that the agent might form. We then use our prior experience to select the most likely sub-goal from the hypothesis space, and the process repeats itself. Ultimately, this yields fairly fine-grained expectations about motor-intentions, which manifest themselves as mirror-neuron activity (Kilner & Frith 2007; Csibra 2008). Action-prediction thus plays out as a descent from more stable mental-state attributions to more transient ones, which ultimately bottom out in highly concrete expectations about behavior.
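The top-down descent described above can be sketched as a toy program. This is only an illustration of the hierarchical structure, not an implementation of any of the cited predictive-coding models; the goal hierarchy and the probabilities attached to each hypothesis are invented for the Laura example:

```python
# Toy action-prediction hierarchy: each attributed goal constrains a
# hypothesis space of candidate sub-goals, each paired with an invented
# prior probability. Prediction descends by selecting the most likely
# sub-goal at each level, bottoming out in concrete expectations.

SUBGOALS = {
    "get a drink (G)": [("get juice from fridge (G1)", 0.7),
                        ("get water from tap", 0.3)],
    "get juice from fridge (G1)": [("walk over to kitchen (G1a)", 0.9),
                                   ("ask someone to fetch it", 0.1)],
    "walk over to kitchen (G1a)": [],  # bottoms out in motor expectations
}

def predict_action(goal):
    """Descend from a stable, high-level goal attribution to
    successively more transient sub-goal attributions."""
    trajectory = [goal]
    while SUBGOALS.get(goal):
        # Select the most likely hypothesis from the constrained space.
        goal = max(SUBGOALS[goal], key=lambda hyp: hyp[1])[0]
        trajectory.append(goal)
    return trajectory

print(predict_action("get a drink (G)"))
```

The design choice worth noticing is that stability lives at the top: the entries highest in the hierarchy change slowly (and, on the proposal developed below, are exactly where trait information from stereotypes would enter), while the entries at the bottom are transient and revised moment to moment.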

Personality traits, which are distinguished by their high degree of temporal stability, fit naturally into the upper levels of this action-prediction hierarchy. Warmth traits, for instance, can tell us about the general preferences of an agent: a generous person probably has a general preference for helping others, while a greedy person probably has a general desire to enrich herself. These broad preference-attributions can in turn inform more immediate goal-attributions, which can then be used to predict behavior.

This role for representations of personality traits in mental-state inference fits well with what we know about how we reason about traits more generally. For instance, we often make extremely rapid judgments about the warmth and competence traits of individuals based on fairly superficial evidence, such as facial features (Todorov et al. 2008); we also tend to over-attribute the causes of behavior to personality traits, rather than situational factors – a phenomenon commonly known as the “fundamental attribution error” or the “correspondence bias” (Gawronski 2004; Ross 1977; Gilbert et al. 1995). Prioritizing personality traits makes a lot of sense if they form the inferential basis for more complex forms of behavior prediction. It also makes sense that this aspect of mindreading would need to rely on fast, rough-and-ready heuristics, since personality trait information would need to be inferred very quickly in order to be useful in action-planning.

From a computational perspective, then, using personality traits to make inferences about behavior makes a lot of sense, and might make mindreading more efficient. But in exchange for this efficiency, we make a very disturbing trade. Stereotypes, which can be activated rapidly on the basis of easily available perceptual cues, provide the mindreading system with a ready means of storing trait information (Mason et al. 2006; Macrae et al. 1994). With this speed comes one of the most morally pernicious forms of human social cognition, one that helps to perpetuate discrimination and social inequality.

*          *          *

 The picture I’ve painted in this post is, admittedly, rather pessimistic. But just because the roots of discrimination are cognitively deep, we should not conclude that it is inevitable. More recent work from McGlothlin and Killen (2010) should give us some hope: while children from racially homogeneous schools (who had little direct contact with members of other races) tended to show signs of biased intention-attribution, McGlothlin and Killen also found that children from racially heterogeneous schools (who had regular contact with members of other races) did not display such signs of bias. Evidently, intergroup contact is effective in curbing the development of stereotypes – and, by extension, biased mindreading.

 

References:

Amodio, D.M., 2014. The neuroscience of prejudice and stereotyping. Nature Reviews: Neuroscience, 15(10), pp.670–682.

Andrews, K., 2012. Do apes read minds?: Toward a new folk psychology, Cambridge, MA: MIT Press.

Burnham, D.K. & Harris, M.B., 1992. Effects of Real Gender and Labeled Gender on Adults’ Perceptions of Infants. Journal of Genetic Psychology, 153(2), pp.165–183.

Clark, A., 2015. Surfing uncertainty: Prediction, action, and the embodied mind, Oxford: Oxford University Press.

Condry, J.C. et al., 1985. Sex and Aggression: The Influence of Gender Label on the Perception of Aggression in Children. Child Development, 56(1), pp.225–233.

Csibra, G., 2008. Action mirroring and action understanding: an alternative account. In P. Haggard, Y. Rossetti, & M. Kawato, eds. Sensorimotor Foundations of Higher Cognition. Attention and Performance XXII. Oxford: Oxford University Press, pp. 435–459.

Cuddy, A.J.C. et al., 2009. Stereotype content model across cultures: Towards universal similarities and some differences. British Journal of Social Psychology, 48(1), pp.1–33.

Cuddy, A.J.C., Fiske, S.T. & Glick, P., 2007. The BIAS map: Behaviors from intergroup affect and stereotypes. Journal of Personality and Social Psychology, 92(4), pp.631–48.

Doris, J.M., 2002. Lack of character: Personality and moral behavior, Cambridge, UK: Cambridge University Press.

Fiebich, A. & Coltheart, M., 2015. Various Ways to Understand Other Minds: Towards a Pluralistic Approach to the Explanation of Social Understanding. Mind and Language, 30(3), pp.235–258.

Fiske, S.T., 2015. Intergroup biases: A focus on stereotype content. Current Opinion in Behavioral Sciences, 3(April), pp.45–50.

Fiske, S.T., Cuddy, A.J.C. & Glick, P., 2002. A Model of (Often Mixed) Stereotype Content: Competence and Warmth Respectively Follow From Perceived Status and Competition. Journal of Personality and Social Psychology, 82(6), pp.878–902.

Gawronski, B., 2004. Theory-based bias correction in dispositional inference: The fundamental attribution error is dead, long live the correspondence bias. European Review of Social Psychology, 15(1), pp.183–217.

Gilbert, D.T. et al., 1995. The Correspondence Bias. Psychological Bulletin, 117(1), pp.21–38.

Hohwy, J., 2013. The predictive mind, Oxford: Oxford University Press.

Hohwy, J. & Palmer, C., 2014. Social Cognition as Causal Inference: Implications for Common Knowledge and Autism. In M. Gallotti & J. Michael, eds. Perspectives on Social Ontology and Social Cognition. Dordrecht: Springer Netherlands, pp. 167–189.

Kilner, J.M. & Frith, C.D., 2007. Predictive coding: an account of the mirror neuron system. Cognitive Processing, 8(3), pp.159–166.

Koster-Hale, J. & Saxe, R., 2013. Theory of Mind: A Neural Prediction Problem. Neuron, 79(5), pp.836–848.

Macrae, C.N., Stangor, C. & Milne, A.B., 1994. Activating Social Stereotypes: A Functional Analysis. Journal of Experimental Social Psychology, 30(4), pp.370–389.

Mason, M.F., Cloutier, J. & Macrae, C.N., 2006. On construing others: Category and stereotype activation from facial cues. Social Cognition, 24(5), p.540.

McGlothlin, H. & Killen, M., 2010. How social experience is related to children’s intergroup attitudes. European Journal of Social Psychology, 40(4), pp.625–634.

McGlothlin, H. & Killen, M., 2006. Intergroup Attitudes of European American Children Attending Ethnically Homogeneous Schools. Child Development, 77(5), pp.1375–1386.

Ross, L., 1977. The Intuitive Psychologist And His Shortcomings: Distortions in the Attribution Process. Advances in Experimental Social Psychology, 10(C), pp.173–220.

Sagar, H.A. & Schofield, J.W., 1990. Racial and behavioral cues in Black and White children’s perceptions of ambiguously aggressive acts. Journal of Personality and Social Psychology, 39(October), pp.590–598.

Todorov, A. et al., 2008. Understanding evaluation of faces on social dimensions. Trends in Cognitive Sciences, 12(12), pp.455–460.

 

Thanks to Melanie Killen and Joan Tycko for permission to use images of experimental stimuli from McGlothlin & Killen (2006, 2010).

 

Delusions as Explanations


by Matthew Parrott – Lecturer in the Department of Philosophy at King’s College London

One idea that has been extremely influential within cognitive neuropsychology and neuropsychiatry is that delusions arise as intelligible responses to highly irregular experiences. For example, we might think that the reason a subject adopts the belief that a house has inserted a thought into her head is because she has in fact had an extremely bizarre experience representing a house pushing a thought into her head (the case comes from Saks 2007; see Sollberger 2014 for an account of thought insertion along these lines). If this were to happen, then delusions would arise for reasons that are familiar from cases of ordinary belief. A delusional subject would simply be endorsing or taking on board the content of her experience.

However, the notion that a delusion is an understandable response to an irregular experience need not be construed along the lines of a subject accepting the content of her experience. Over a number of years, Brendan Maher advocated an influential alternative proposal, according to which an individual adopts a delusional belief because it serves as an explanation of her ‘strange’ or ‘significant’ experience (see Maher 1974, 1988, 1999). Crucially, for Maher, the content of the subject’s experience is not identical to the content of her delusional belief. Rather, the latter is determined in part by contextual factors, such as cultural background or what Maher calls ‘general explanatory systems’ (cf. 1974). Maher’s approach is often referred to as the ‘explanationist’ approach to understanding delusions (Bayne and Pacherie 2004).

Explanationist accounts have been especially popular with respect to the Capgras delusion that one’s friend or relative is really an imposter (e.g., Stone and Young 1997) and delusions of alien control (e.g., Blakemore et al. 2002). Yet, despite their prevalence, the explanationist approach has been called into question by a number of philosophers on the grounds that delusions are quite obviously very bad explanations.

For instance, Davies and colleagues argue:

‘The suggestion that delusions arise from the normal construction and adoption of an explanation for unusual features of experience faces the problem that delusional patients construct explanations that are not plausible and adopt them even when better explanations are available. This is a striking departure from the more normal thinking of non-delusional subjects who have similar experiences.’ (Davies, et. al. 2001, pg. 147; but see also Bayne and Pacherie 2004, Campbell 2001, Pacherie, et. al. 2006)

Indeed, since delusions strike most of us as highly implausible, it is hard to see how they could explain any experience, no matter how unusual. So if we want to understand delusional cognition along Maher’s lines, we will need to clarify the cognitive transition from anomalous experience to delusional belief in a way that illuminates how it could be a genuinely explanatory transition.

In what follows, I would like to distinguish three distinct ways in which a delusional belief might be thought to be explanatorily inadequate, each of which I think poses a distinct challenge for the explanationist approach.

The first concerns the phenomenal character of a delusional subject’s anomalous experience. Maher claims that the strange experiences we find in cases of delusion ‘demand’ explanations. But why is that? If the experiences that give rise to delusions do not themselves represent highly unusual states of affairs (as Maher seems to think), what is it about them that calls for or ‘demands’ an explanation? And does the particular phenomenal character of a ‘strange’ experience ‘demand’ a specific form of explanation, or are all ‘strange’ experiences relatively equal when it comes to their demands? The challenge for the explanationist is to clarify the phenomenal character of a delusional subject’s anomalous experience in such a manner that makes clear how it could be the explanandum of a delusion. Let’s call this the Phenomenal Challenge.

I actually think some very influential neuropsychological accounts have difficulty with the Phenomenal Challenge. To briefly take one example, Ellis and Young (1990) proposed that the Capgras delusion arises because of a lack of responsiveness to familiar faces in the autonomic nervous system. In non-delusional subjects, an experience of a familiar face is associated with an affective response in the autonomic nervous system, but Capgras subjects fail to have this response. Ellis and Young’s theory predicted that there would be no significant difference in the skin conductance responses of Capgras subjects when they are shown familiar versus unfamiliar faces, which has subsequently been confirmed by a number of studies. Thus it seems there is good evidence that a typical Capgras subject’s autonomic nervous system is not sensitive to familiar faces.

This seems promising, but I don’t think it answers the Phenomenal Challenge, because it doesn’t tell us anything about what a Capgras subject’s experience of a face is like. As John Campbell notes, ‘the mere lack of affect does not itself constitute the perception’s having a particular content.’ (2001, pg. 96) Moreover, individuals are not normally conscious of their autonomic nervous system (see Coltheart 2005). So it isn’t clear how diminished sensitivity within that system constitutes an experience that ‘demands’ an explanation involving imposters. To really understand why an anomalous experience of a familiar face calls for a delusional explanation, we need to get a better sense of what that experience is like.

A second worry raised in the previous passage is that delusional subjects adopt delusional explanations ‘even when better explanations are available’. Why does this happen? Why does a delusional subject select an inferior hypothesis from the set of those available to her? Let’s call this the Abductive Challenge.

To illustrate, let’s stick with Capgras. The explanationist proposal is that a subject adopts the belief that her friend has been replaced by an imposter in order to explain some odd experience. But even if we suppose the imposter hypothesis is empirically adequate, it is highly unlikely to be the best explanation available. As Davies and Egan remark, ‘one might ask whether there is an alternative to the imposter hypothesis that provides a better explanation of the patient’s anomalous experience. There is, of course, an obvious candidate for such a proposition.’ (2013, pg. 719) In fact, there seem to be a number of better available hypotheses: for example, that one has suffered a brain injury, or any hypothesis that appeals to more familiar changes affecting facial appearance, such as hair-style or illness.

Put simply, the Abductive Challenge is that even if we assume the cognitive transition from unusual experience to delusion involves something like abductive reasoning or inference to the best explanation, delusional subjects select poor explanations instead of better available alternatives. The explanationist needs to tell us why this happens (for some attempts see Coltheart et al. 2010, Davies and Egan 2013, McKay 2012, Parrott and Koralus 2015).

The final challenge for explanationism is, in my view, the most problematic. In the above passage, Davies and colleagues remark that delusions are extremely implausible. Along these lines, we might naturally wonder why a subject would even consider one to be a candidate explanation of her unusual experience. Why would she not instead immediately rule out a delusional hypothesis on the grounds that it is far too implausible to be given serious consideration? This concern is echoed by Fine and colleagues:

‘They explain the anomalous thought in a way that is so far-fetched as to strain the notion of explanation. The explanations produced by patients with delusions to account for their anomalous thoughts are not just incorrect; they are nonstarters. Appealing to the notion of explanation, therefore, does not clarify how the delusional belief comes about in the first place because the explanations of the delusional patients are nothing like explanations as we understand them.’ (Fine et al. 2005, pg. 160)

Explaining a target phenomenon demands cognitive resources, and if delusions are explanatory ‘nonstarters’, they would normally be rejected immediately. We know that a person engaged in an explanatory task considers only a restricted set of hypotheses, and it seems quite natural to exclude ones that are inconsistent with one’s background knowledge. Since delusions seem to conflict with our background knowledge, this is perhaps why we find it difficult to understand how someone could think a delusion is even potentially explanatory (for further discussion, see Parrott 2016).

So why do subjects consider delusional explanations as candidate hypotheses? This is the final challenge for the explanationist. Let’s call it the Implausibility Challenge. Notice that whereas the Abductive Challenge asks why a subject eventually adopts one hypothesis instead of another from among a fixed set of available alternatives, the Implausibility Challenge is more general. It asks where these hypotheses, the ones subject to further investigation, come from in the first place.

Can these three challenges be overcome? I am optimistic and have tried to address them for the case of thought insertion (see Parrott forthcoming). However, I also think much more work needs to be done.

First, as I mentioned above, it is not clear that we have a good understanding of what it is like for an individual to have the sorts of experiences we think are implicated in many cases of delusion. Without such understanding, I think it is hard to see why some experiences make demands on a person’s cognitive explanatory resources. I also suspect that understanding what various anomalous experiences are like might shed more light on why delusional individuals tend to adopt very similar explanations.

Second, I think that addressing the Implausibility Challenge requires us to obtain a far better understanding of how hypotheses are generated than we currently have. In both delusional and non-delusional cognition, an explanatory task presents a computational problem. Which candidate hypotheses should be selected for further empirical testing? Although I have suggested that epistemically impossible hypotheses are normally ruled out, that doesn’t tell us how candidates are ruled in. Plausibly, there is some selection function (or set of functions) that chooses candidate explanations of a target phenomenon, but, as Thomas and colleagues note, we have very little sense of how this might work:

‘Hypothesis generation is a fundamental component of human judgment. However, despite hypothesis generation’s importance in understanding judgment, little empirical and even less theoretical work has been devoted to understanding the processes underlying hypothesis generation.’ (Thomas et al. 2008, pg. 174)

The Implausibility Challenge strikes me as especially puzzling because I think we can easily see that certain strategies for hypothesis generation would be bad. For instance, it wouldn’t generally be good to consider hypotheses only if they have a prior probability above a certain threshold, because a hypothesis with a low prior probability might best explain a new piece of evidence.
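To make that last point concrete, here is a toy Bayesian sketch. The hypothesis labels and numbers are invented for illustration and come from nowhere in the literature discussed above; the sketch simply shows that a hypothesis with a low prior probability can nonetheless have the highest posterior probability once a piece of evidence arrives, so a generation strategy that filtered candidates by prior alone would discard the best explanation.

```python
# Three candidate hypotheses with invented prior probabilities;
# H3 starts out as the least plausible.
priors = {"H1": 0.85, "H2": 0.10, "H3": 0.05}

# Invented likelihoods P(evidence | H): the new evidence is very
# probable under H3 but very improbable under H1.
likelihoods = {"H1": 0.01, "H2": 0.20, "H3": 0.90}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: unnormalised[h] / total for h in priors}

# H3 wins despite its low prior, so a prior-only cut-off
# would have excluded the best explanation from consideration.
best = max(posteriors, key=posteriors.get)
print(best, round(posteriors[best], 3))
```

The numbers are arbitrary, but the structural point is general: a threshold on priors is not a threshold on posteriors, which is exactly why the question of how candidates are ruled in cannot be settled by prior plausibility alone.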

Delusional cognition raises quite a few deep and interesting questions, many of which bear on how we think about belief formation and reasoning. And I have only scratched the surface when it comes to the kinds of puzzles that arise when we start thinking about the origins of delusion. But I hope that distinguishing these explanatory challenges will help us in thinking about the questions which need to be pursued if we are to assess the plausibility of the explanationist strategy.

 

References:

Bayne, T. and E. Pacherie. 2004. “Bottom-up or Top-down?: Campbell’s Rationalist Account of Monothematic Delusions.” Philosophy, Psychiatry, and Psychology, 11: 1-11.

Blakemore, S., D. Wolpert, and C. Frith. 2002. “Abnormalities in the Awareness of Action.” Trends in Cognitive Science, 6: 237-242.

Campbell, J. 2001. “Rationality, Meaning and the Analysis of Delusion.” Philosophy, Psychiatry and Psychology, 8: 89-100.

Coltheart, M., P. Menzies, and J. Sutton. 2010. “Abductive Inference and Delusional Belief.” Cognitive Neuropsychiatry, 15: 261-87.

Coltheart, M. 2005. “Conscious Experience and Delusional Belief.” Philosophy, Psychiatry and Psychology, 12: 153-57.

Davies, M., M. Coltheart, R. Langdon, and N. Breen. 2001. “Monothematic Delusions: Towards a Two-Factor Account.” Philosophy, Psychiatry and Psychology, 8: 133-158.

Davies, M. and Egan, A. 2013. “Delusion: Cognitive Approaches, Bayesian Inference and Compartmentalization.” In K.W.M. Fulford, M. Davies, R.G.T. Gipps, G. Graham, J. Sadler, G. Stanghellini and T. Thornton (eds.), The Oxford Handbook of Philosophy of Psychiatry. Oxford: Oxford University Press.

Ellis, H. and A. Young. 1990. “Accounting for Delusional Misidentifications.” British Journal of Psychiatry, 157: 239-48.

Fine, C., J. Craigie, and I. Gold. 2005. “The Explanation Approach to Delusion.” Philosophy, Psychiatry, and Psychology, 12 (2): 159-163.

Maher, B. 1974. “Delusional Thinking and Perceptual Disorder.” Journal of Individual Psychology, 30: 98-113.

Maher, B. 1988. “Anomalous Experience and Delusional Thinking: The Logic of Explanations.” In T. Oltmanns and B. Maher (eds.), Delusional Beliefs, Chichester: John Wiley and Sons, pp. 15-33.

Maher, B. 1999. “Anomalous Experience in Everyday Life: Its Significance for Psychopathology.” The Monist, 82: 547-570.

McKay, R. 2012. “Delusional Inference.” Mind and Language, 27: 330-55.

Pacherie, E., M. Green, and T. Bayne. 2006. “Phenomenology and Delusions: Who Put the ‘Alien’ in Alien Control?” Consciousness and Cognition, 15: 566-577.

Parrott, M. 2016. “Bayesian Models, Delusional Beliefs, and Epistemic Possibilities.” The British Journal for the Philosophy of Science, 67: 271-296.

Parrott, M. forthcoming. “Subjective Misidentification and Thought Insertion.” Mind and Language.

Parrott, M. and P. Koralus. 2015. “The Erotetic Theory of Delusional Thinking.” Cognitive Neuropsychiatry, 20 (5): 398-415.

Saks, E. 2007. The Center Cannot Hold. New York: Hyperion.

Sollberger, M. 2014. “Making Sense of an Endorsement Model of Thought Insertion.” Mind and Language, 29: 590-612.

Stone, T. and A. Young. 1997. “Delusions and Brain Injury: the Philosophy and Psychology of Belief.” Mind and Language, 12: 327-364.

Thomas, R., M. Dougherty, A. Sprenger, and J. Harbison. 2008. “Diagnostic Hypothesis Generation and Human Judgment.” Psychological Review, 115(1): 155-185.

How much of an animal are you?


by Léa Salje, Lecturer in Philosophy of Mind and Language at The University of Leeds

I’m an animal, and so are you. We might be rather special animals, but we are animals all the same: biological organisms operating in a particular ecological niche. For most of us, this is something we’ve known for a long time, probably since primary school. It’s perhaps surprising, then, how little it seems to permeate our everyday thinking about ourselves, for many of us at least. I’m hardly minded to earnestly contemplate the fact of my animality in my dealings with myself as I go about my daily business of coffee-ordering and Facebook-posting.

There’s also a question about how deeply the fact of our animality genuinely penetrates the conception of ourselves that guides our philosophy of mind, even among those of us happy to accept it on its surface. This was the question at the heart of the Persons as Animals project – an AHRC-funded project at the University of Leeds, led by Helen Steward, that I’ve been working on for the last year, which aims to explore the ways in which certain areas in philosophy of mind might be illuminated by a perspective that forefronts the fact that we are animals. A couple of things (at least) follow from taking such a perspective seriously. The first is that if we are animals, we are thereby not Cartesian egos, or brains, or systems of information, or functional systems, or bundles of mental states. We are entire embodied wholes, such that an understanding of ourselves requires a much more holistic perspective than that which is often taken in philosophy of mind. And second, if we are animals then our powers and capacities must be related in an evolutionary way to those of other creatures. This means that a decent understanding of those powers and capacities – even relatively hifalutin powers like language and the capacity to make choices – should benefit from a perspective that takes account of what is known of animal perception, cognition and agency.

Clearly, mere knowledge of the biological fact of our animality is not enough to mobilise these sorts of changes. One of the central planks of the project was that we need new and better ways to articulate our place in the animal kingdom if we are to make philosophical progress in these areas. And before we can do that, we need to understand what sorts of obstacles might have so far prevented such an animalistic self-conception from really taking hold.

To this end, the Persons as Animals project came together earlier this year with conservation social scientist Andy Moss from the education department at Chester Zoo to run a series of semi-structured focus groups, designed to explore how we think of ourselves and our relation to the animal world. What sorts of things get in the way of animalistic thinking about ourselves? How might it be encouraged? We ran 12 groups in all, 6 made up of zoo visitors, and another 6 of students from Leeds University.

What we found was a striking absence of any univocal narrative about our sense of our own animality. Instead, we found a deeply fractured and uneasy picture: we do see ourselves as animals, and we don’t. And many of us struggle to reconcile these two viewpoints.

Interestingly, this sense of unease came out in different ways for different participants. Some began with a firm sense of their own animality, often accompanied by expressions of indignation at the very suggestion that we might think otherwise. (Of course we’re animals; how dare we count ourselves as special?) The discussion of these participants tended to highlight the intelligent behaviours of other animals, and to downplay our own behaviours and capacities as largely instinct-driven under a flimsy veneer of civility.

This is, of course, to forefront the fact of our animality in a way. But by so magnifying our continuity with the rest of the animal world, these participants seemed to face a special challenge: they seemed to struggle to absorb into that animalistic self-image our alienation from and – even more troublingly – domination over the natural world around us. How can we reconcile this self-conceived status as one species of animal among others on the one hand, with the eye-watering extent of our damaging impositions on the world around us on the other? It’s one thing to think of ourselves as a special category of being, perhaps one that has the right (or even the duty) to organise things for the whole of the natural world. But that option is ruled out by a robust insistence on our lack of specialness, of continuity with other animals. The only option remaining, however, seems to be infinitely more disturbing – that we are mere animals who have simply spiralled out of control. In the end, we often found these participants adopting the rather ingenious solution of moving from first-personal locutions to speaking in generalisations when discussing power asymmetries with the rest of the natural world: ‘I don’t think we’re special, but the problem is that people do’.

Others, by contrast, began from a heightened sense of fundamental distinctness from other animals. Even if we’re animals (sotto voce), we’re obviously special. No danger among these groups of failing to celebrate the special complexity of human beings. But these participants faced another challenge, of reconciling this self-conception as fundamentally different from other animals with knowledge of the biological fact of our animality.

Typically participants expressing this sort of view reported that their knowledge of their animality is highly muted or recessive as they go about their daily lives. Indeed, some reported not only that it normally faded into the background, but more strongly that it took considerable cognitive effort to bring it to mind and make it fit with how they really see themselves. In one particularly memorable articulation of this feeling, one participant recalled finding out that she was an animal, and thinking of it ‘as more of a classification like fitting everything into bubbles, like when I realised the sun was a star. It has all the same properties as the other stars and that’s weird to you because you regard them very differently in your everyday life.’ Our animality, the idea seems to be, is a matter of indisputable scientific fact which is nevertheless somehow completely at odds with our everyday conceptualisations and categorisations.

Through discussion, these groups too found creative ways of dissolving the tension. An extreme minority reaction was to give up on the claim that we are animals as simply ‘not ringing true’. Another strategy, observed in an extended discussion by a group of physics students, was to redraw the conceptual boundaries of what it is to be an animal. If we abandon the idea that animals must be biological organisms, then we create more space to comfortably hold together both the fact that we are animals and the conviction that we are importantly different from other members of the animal kingdom. To say that we are animals, after all, might now be to position ourselves just as close to computers as to caterpillars. A third sort of resolution was to associate animality with a very basic form of existence; one that we have, by now, transcended. We might once have been animals, the idea is, but we’ve now moved beyond it. With this response, participants were able to bracket out uncomfortable facts about our animal natures as part of our evolutionary history, rather than as facts calling for incorporation into our live self-conceptions. For the most part, however, all of these responses were given with observable unease and frank statements of felt difficulty in incorporating the fact of our animality into their everyday self-conceptions.

Among yet other participants there emerged a quite different viewpoint, this time one that seemed much better able to accommodate our claims to both animality and to distinctness. For this group, the traits, behaviours and capacities that might at first glance seem to separate us from the rest of the animal kingdom are really just the results of evolutionary processes, like any other. Cinemas, religion, prog rock, iPads, sarcasm, nuclear weapons, cryptic crosswords and Shoreditch apartments don’t cut us off from the natural world; they are part of it. We are, on this view, placed unflinchingly alongside other animals in the natural world, but not at the cost of a denial or deprecation of human complexity.

One of the central aims of the Persons as Animals project was to better understand our relationship to our own animality, so that we might in turn better understand how to instill more deep-rooted ways of thinking of ourselves as animals into our philosophy of mind. Our results seem to suggest that for many of us the answer is that the relationship is a profoundly awkward one; we seem to be far from finding a stable resting place for our sense of position in the animal world. This finding ought to put us on our guard in our philosophical practices. We are not insulated, as philosophers, from the uneasy and conflicted animalistic self-conceptions that seemingly underlie our everyday thinking about ourselves.

Is implicit cognition bad cognition?


by Sophie Stammers, incoming postdoctoral fellow on project PERFECT

A significant body of research in cognitive science holds that human cognition comprises two kinds of processes: explicit and implicit. According to this research, explicit processes operate slowly, requiring attentional guidance, whilst implicit processes operate quickly, automatically and without attentional guidance (Kahneman, 2012; Gawronski and Bodenhausen, 2014). A prominent example of implicit cognition that has seen much recent discussion in philosophy is that of implicit social bias, where associations between (often) stigmatized social groups and (often) negative traits manifest in behaviour, resulting in discrimination (see Brownstein and Saul, 2016a; 2016b). This is the case even though the individual in question isn’t directing their behaviour to be discriminatory with the use of attentional guidance, and is apparently unaware that they’re exhibiting any kind of disfavouring treatment at the time (although see Holroyd 2015 for the suggestion that individuals may be able to observe bias in their behaviour).

Examples of implicit social bias manifesting in behaviour include exhibiting greater signs of social unease, less smiling and more speech errors when conversing with a black experimenter compared to when the experimenter is white (McConnell and Leibold, 2001); less eye contact and increased blinking in conversations with a black experimenter versus their white counterpart (Dovidio et al., 1997), and reduced willingness for skin contact with a black experimenter versus a white one (Wilson et al., 2000). Implicit social biases also arise in more deliberative scenarios: Swedish recruiters who harbor implicit racial associations are less likely to interview applicants perceived to be Muslim, as compared to applicants with a Swedish name (Rooth, 2007), and doctors who harbor implicit racial associations are less likely to offer treatment to black patients with the clinical presentation of heart disease than to white patients with the same clinical presentation of the disease (Green, et al., 2007). In these studies, participants’ discriminatory behaviour comes apart from the beliefs and values that they profess to have when questioned.

Both the mechanisms of implicit bias, and implicit processes more generally, are often characterised in the language of the sub-optimal. Variously, they deliver “a more inflexible form of thinking” than explicit cognition (Pérez, 2016: 28), they are “arational” compared to the rational processes that govern belief update (Gendler, 2008a: 641; 2008b: 557), and their content is “disunified” with our set of explicit attitudes (Levy, 2014: 101-103). As such, one might be tempted to think of implicit cognition as regularly, or even necessarily bad cognition. A strong interpretation of that value-laden assessment might mean that the processes in question deliver objectively bad outputs, however these are to be defined, but we could also mean something a bit weaker, such as that outputs are not aligned with the agent’s goals, or similar. It’s easy to see why one might apply this value-laden assessment to the mechanisms which result in implicitly biased behaviour: individuals simply have no reason to discriminate against already marginalized people in the ways outlined above, and yet they do anyway – that seems like a good candidate for bad cognition. That implicitly biased behaviours are the product of what appears to be a suboptimal processing system might motivate the argument that we’re not the agents of our implicitly biased behaviours, as well as arguments that might follow from this, such as that it is not appropriate to hold people morally responsible for their implicit biases (Levy, 2014).

But I think it would be wrong to conclude that implicit cognition necessarily delivers suboptimal outputs, and that implicit bias is an example of bad cognition simply for the reason that it is implicit. Moreover, as I’ll argue below, maintaining the former claim may well do a disservice to the project of reducing implicit social biases.

Whilst explicit processes may be ‘better’ at some cognitive tasks, research suggests that implicit processes can actually deliver a more favourable performance than explicit processes in a variety of domains. For instance, non-attentional, automatic processes govern the fast motor reactions employed by skilled athletes (Kibele, 2006). Trying to bring these processes under attentional control can actually disrupt sporting performance: Flegal and Anderson (2008) show that directing attention to their action performance significantly impairs the ability of high-skill golfers on a putting task, whilst high-skill footballers perform less proficiently when directing attention to their execution of dribbling (Beilock et al., 2002). Engaging attentional processes when learning new motor skills can also disrupt performance (McKay et al., 2015).

Meanwhile, functional MRI studies suggest that improvisation implicates non-attentional processes. One study shows that when professional jazz pianists improvise, they do so in the absence of central processes implicated in attentional guidance (Limb and Braun, 2008). Another study demonstrates that trained musicians inhibit networks associated with attentional processing during improvisation (Berkowitz and Ansari, 2010).

Further, deliberately disengaging attentional resources can facilitate creativity, a process known as ‘incubation’. Subjects who return to work on a creative task after a period directing attentional resources to something unrelated to the task at hand often deliver enhanced outputs compared with those who continually engage their attentional resources (Dodds et al., 2003). It has been proposed that task-relevant implicit processes remain active during the incubation period and contribute to enhanced creative output (Ritter and Dijksterhuis, 2014).

So it would be wrong to suggest that implicit processes necessarily, or even typically, deliver sub-optimal outputs compared with their explicit cousins. And pertinent to our discussion of implicit social bias, implicit processes themselves can actually be recruited to inhibit the manifestation of bias. Research demonstrates that participants with genuine long-term egalitarian commitments (Moskowitz et al. 1999) as well as those in whom egalitarian commitments are activated during an experimental task (Moskowitz and Li, 2011) actually manifest less implicit bias than those without such commitments. Crucially, the processes which bring implicit responses in line with an agent’s long-term commitments are not driven by attentional guidance, instead operating automatically to prevent the facilitation of stereotypic categories in the presence of the relevant social concepts (Moskowitz et al. 1999: 168). The suggestion here is that developing genuine commitments to egalitarian values and treatment can actually recalibrate implicit processes to deliver value-consistent behavior (see Holroyd and Kelly, 2016), without needing to effortfully override implicit responses each time one encounters social concepts that might otherwise trigger biased reactions. It would seem that the profile of implicit processes as inflexible, arational and disunified with explicit values and commitments is ill-fitted to account for this example.

So, in a number of cases it seems that implicit processes can serve our goals and values. If this is right, then we should perhaps be more willing to locate ourselves as agents not just in the behavior that arises from our explicit processes, but in that which arises from our implicit ones as well.

I think this has an important implication for practices related to implicit bias training. We should be wary of the rhetoric that distances us as agents from our implicit processes: for instance, characterizing implicit bias as “racism without racists”1 might be comforting for those of us with implicit racial biases, but disowning the implicit processes that lead to racial discrimination, while not disowning those that lead to skilled musical improvisation or creativity as above, seems somewhat inconsistent. I wonder whether greater willingness to accept one’s implicit processes as aspects of one’s agency (not necessarily as central, defining aspects of one’s agency – but somewhere in there nonetheless) might help to motivate the project of realigning one’s implicitly biased responses.

 

Footnotes:

  1. In U.S. Department of Justice. 2016. “Implicit Bias.” Community Oriented Policing Services report, page 1. Accessed 27/07/16, URL: https://uploads.trustandjustice.org/misc/ImplicitBiasBrief.pdf

 

References:

Berkowitz, A. L. and D. Ansari. 2010. “Expertise-Related Deactivation of the Right Temporoparietal Junction during Musical Improvisation.” NeuroImage 49 (1): 712–19.

Brownstein, M. and J. Saul. 2016a. Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, New York: Oxford University Press.

Brownstein, M. and J. Saul. 2016b. Implicit Bias and Philosophy, Volume 2: Moral Responsibility, Structural Injustice, and Ethics, New York: Oxford University Press.

Dodds, R. D., T. B. Ward and S. M. Smith. 2003. “Incubation in problem solving and creativity.” In The Creativity Research Handbook, edited by M. A. Runco, Cresskill, NJ: Hampton Press.

Dovidio, J. F., K. Kawakami, C. Johnson, B. Johnson and A. Howard. 1997. “On the Nature of Prejudice: Automatic and Controlled Processes.” Journal of Experimental Social Psychology 33 (5): 510–40.

Gawronski, B. and G. V. Bodenhausen. 2014. “Implicit and Explicit Evaluation: A Brief Review of the Associative-Propositional Evaluation Model: APE Model.” Social and Personality Psychology Compass 8 (8): 448–62.

Gendler, T. S. 2008a. “Alief and Belief.” The Journal of Philosophy 105 (10): 634–63.

———. 2008b. “Alief in Action (and Reaction).” Mind & Language 23 (5): 552– 85.

Green, A. R., D. R. Carney, D. J. Pallin, L. H. Ngo, K. L. Raymond, L. I. Iezzoni and M. R. Banaji. 2007. “Implicit Bias among Physicians and Its Prediction of Thrombolysis Decisions for Black and White Patients.” Journal of General Internal Medicine 22 (9): 1231–38.

Holroyd, J. 2015. “Implicit Bias, Awareness and Imperfect Cognitions.” Consciousness and Cognition 33 (May): 511–23.

Holroyd, J. and D. Kelly. 2016. “Implicit Bias, Character, and Control.” In From Personality to Virtue, edited by A. Masala and J. Webber, Oxford: Oxford University Press.

Kahneman, D. 2012. Thinking, Fast and Slow, London: Penguin Books.

Kibele, A. 2006. “Non-Consciously Controlled Decision Making for Fast Motor Reactions in sports—A Priming Approach for Motor Responses to Non-Consciously Perceived Movement Features.” Psychology of Sport and Exercise 7 (6): 591–610.

Levy, N. 2014. Consciousness and Moral Responsibility, Oxford; New York: Oxford University Press.

Limb, C. J. and A. R. Braun. 2008. “Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation.” Edited by E. Greene. PLoS ONE 3 (2): e1679.

McConnell, A. R. and J. M. Leibold. 2001. “Relations among the Implicit Association Test, Discriminatory Behavior, and Explicit Measures of Racial Attitudes.” Journal of Experimental Social Psychology 37 (5): 435–42.

McKay, B., G. Wulf, R. Lewthwaite and A. Nordin. 2015. “The Self: Your Own Worst Enemy? A Test of the Self-Invoking Trigger Hypothesis.” The Quarterly Journal of Experimental Psychology 68 (9): 1910–19.

Moskowitz, G. B., P. M. Gollwitzer, W. Wasel and B. Schaal. 1999. “Preconscious Control of Stereotype Activation Through Chronic Egalitarian Goals.” Journal of Personality and Social Psychology 77 (1): 167–184

Moskowitz, G. B., and P. Li. 2011. “Egalitarian Goals Trigger Stereotype Inhibition: A Proactive Form of Stereotype Control.” Journal of Experimental Social Psychology 47 (1): 103–16.

Pérez, E. O. 2016. Unspoken Politics: Implicit Attitudes and Political Thinking, New York, NY: Cambridge University Press.

Ritter, S. M. and A. Dijksterhuis. 2014. “Creativity–the Unconscious Foundations of the Incubation Period.” Frontiers in Human Neuroscience 8: 22–31.

Rooth, D-O. 2007. “Implicit Discrimination in Hiring: Real World Evidence.” (IZA Discussion Paper No. 2764). Bonn, Germany: Forschungsinstitut Zur Zukunft Der Arbeit (Institute for the Study of Labor).

Wilson, T. D., S. Lindsey and T. Y. Schooler. 2000. “A Model of Dual Attitudes.” Psychological Review 107 (1): 101–26.

 

 

Trusting the Uncanny Valley: Exploring the Relationship Between AI, Mental State Ascriptions, and Trust.


Henry Powell, PhD Candidate in Philosophy at the University of Warwick

Interactive artificial agents such as social and palliative robots have become increasingly prevalent in the educational and medical fields (Coradeschi et al. 2006). Different kinds of robots, however, seem to engender different kinds of interactive experiences in their users. Social robots, for example, tend to afford positive interactions that look analogous to the ones we might have with one another. Industrial robots, on the other hand, rarely, if ever, are treated in the same way. Some very lifelike humanoid robots seem to fit somewhere outside of these two spheres, inspiring feelings of discomfort or disgust in people who come into contact with them. One way of understanding why this phenomenon obtains is via a conjecture developed by the Japanese roboticist Masahiro Mori in 1970 (Mori, 1970, pp. 33-35). This so-called “uncanny valley” conjecture has a number of potentially interesting theoretical ramifications. Most importantly, it may help us to understand a set of conditions under which humans could potentially ascribe mental states to beings without minds – in this case, by suggesting that trusting an artificial agent can lead one to do just that. With this in mind, the aims of this post are two-fold. Firstly, I wish to provide an introduction to the uncanny valley conjecture, and secondly, I want to raise doubts concerning its ability to shed light on the conditions under which mental state ascriptions occur, specifically in experimental paradigms that see subjects as trusting their AI coactors.

Mori’s uncanny valley conjecture proposes that as robots increase in their likeness to human beings, their familiarity likewise increases. This trend continues up to a point at which their lifelike qualities are such that we become uncomfortable interacting with them. At around 75% human likeness, robots strike us as uncannily like human beings and are viewed with discomfort or, in more extreme cases, disgust, significantly hindering their potential to foster positive social interactions.

[Figure: Mori’s uncanny valley curve, plotting familiarity against human likeness]

This effect has been explained in a number of ways. Saygin et al. (2011, 2012), for instance, have suggested that the uncanny valley effect is produced when there is a perceived incongruence between an artificial agent’s form and its motion. If an agent is seen to be clearly robotic but to move in a very human-like way, or vice versa, there is an incompatibility effect in the predictive, action-simulating cognitive mechanisms that pick out and forecast the actions of humanlike and non-humanlike objects. This predictive coding mechanism is fed contradictory information by the visual system ([human agent] with [nonhuman movement]), which prevents it from carrying out its predictive operations with the normal degree of accuracy (Urgen & Miller, 2015). I take it that the output of this cognitive system is presented in our experience as uncertain, and that this uncertainty accounts for the feelings of unease we experience when interacting with these uncanny artificial agents.

Of particular philosophical interest in this regard is a strand of research suggesting that, in certain situations, humans can be seen to ascribe mental states to artificial agents that fall outside the uncanny valley. This story was posited in studies by Kurt Gray and Daniel Wegner (2012) and by Maya Mathur and David Reichling. As I believe the latter experiment contains the most interesting evidential basis for thinking along these lines, I will limit my discussion to it.

Mathur and Reichling’s study saw subjects partake in an “investment game” (Berg et al. 1995), a generally accepted experimental standard for measuring trust, with a number of artificial agents whose facial features varied in their human likeness. This was to test whether subjects were willing to trust different kinds of artificial agents depending on where they fell on the uncanny valley scale. What they found was that subjects played the game in a way that indicated that they trusted robots with certain kinds of facial features to act so as to reach an outcome that was beneficial to both players, rather than one that favoured either player alone. The authors surmised that because the subjects seemed to trust these artificial agents in a way that suggested they had thought about what the artificial agents’ intentions might be, the subjects had ascribed mental states to their robotic partners.
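The payoff structure of Berg et al.’s investment game can be sketched in a few lines of Python. The endowment of 10 units and the tripling of the transfer follow the original 1995 design; the particular strategies shown at the end are illustrative assumptions of mine, not moves taken from Mathur and Reichling’s study.

```python
def investment_game(sent, returned, endowment=10, multiplier=3):
    """Final payoffs for the investor and the trustee.

    The investor sends `sent` units of their endowment; the experimenter
    multiplies it before it reaches the trustee, who sends `returned`
    units back. Sending more than necessary only pays off for the
    investor if the trustee reciprocates, which is why the amount sent
    is read as a measure of trust.
    """
    assert 0 <= sent <= endowment
    pot = sent * multiplier
    assert 0 <= returned <= pot
    investor_payoff = endowment - sent + returned
    trustee_payoff = pot - returned
    return investor_payoff, trustee_payoff

# A fully trusting investor paired with an even-splitting trustee: both
# end up with 15, better than the (10, 0) outcome of sending nothing.
print(investment_game(sent=10, returned=15))
```

The asymmetry is the point of the design: the investor is strictly better off sending everything only if the trustee can be trusted to return at least what was sent, so the transfer quantifies trust behaviourally.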

It was proposed that subjects believed the artificial agents had mental states encompassing intentional propositional attitudes (beliefs, desires, intentions, etc.). This was because subjects seemed to assess the artificial agents’ decision-making processes in terms of what the robots’ “interests” in the various outcomes might be. This result is potentially very exciting, but I think it reaches its conclusion rather too quickly. I’d now like to briefly give my reasons for thinking so.

Mathur and Reichling seem to be making two claims in the discussion of their study’s results.

  i) That subjects trusted the artificial agents.
  ii) That this trust implies the ascription of mental states.

My objections are the following: I think that i) is more complicated than the authors make it out to be, and that ii) is not at all obvious and does not follow from i) when i) is analysed properly. Let us address i) first, as it leads into the problem with ii).

When elaborated, i) amounts to the claim that the subjects believed that the artificial agents would act in a certain way, and that this action would be satisfactorily reliable. This is plausible, but the form of trust involved is not the one Mathur and Reichling intend, and it is thus uninteresting in relation to ii). There are, as far as I can tell, at least two ways in which we can trust things. The first, and perhaps most interesting, form of trust is the one expressible in sentences like “I trust my brother to return the money that I lent him”. This implies that I think of my brother as the sort of person who would not, given the opportunity and upon rational reflection, do something contrary to what he had told me he would do. The second form of trust is the kind we might have towards a ladder or something similar. We might say of such objects, “I trust that if I walk up this ladder it will not collapse, because I know that it is sturdy”. The difference should be obvious. I trust the ladder because I can infer from its physical state that it will perform its designated function: it has no loose fixtures, rotting parts, or anything else that might make it collapse when I walk up it. To trust the ladder in this way, I need not think that it has made any commitment to the action expected of it based on a given set of ethical standards. In the case of my brother, by contrast, my trust is expressible as the belief that, given the opportunity to choose not to do what I have asked of him, he will nevertheless choose to do it. The trust that I have in my brother requires that I believe he has mental states that inform and help him to choose to act in accordance with my request. One form of trust implies the existence of mental states; the other does not.

As for ii), then: as has just been argued, trust only implies mental states if it is of the form that I would extend to my brother in the example just given, not if it is of the sort that we would normally extend to reliably functional objects like ladders. So ii) only follows from i) if the former kind of trust is evinced, and not otherwise.

This analysis suggests that if we are to believe that the subjects in this experiment ascribed mental states to the artificial agents (or that subjects in any other experiment reaching the same conclusion did so), we need sufficient reasons for thinking that the subjects were treating the artificial agents as I would treat my brother, and not as I would treat the ladder. Mathur and Reichling are silent on this point, and so we have no good reason to think that mental state ascriptions were taking place in the minds of their subjects. While I do not think it impossible that such ascriptions might occur in some circumstances, it is just not clear from this experiment that they occurred in this instance.

What I have hopefully shown in this post is that it is important to proceed with caution when making claims about our willingness to ascribe minds to certain kinds of objects and agents (artificial or otherwise), particularly when those claims rest on our apparent ability to stand in seemingly special kinds of relations, such as trust, to such things.

References:

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10, 122-142.

Coradeschi, S., Ishiguro, H., Asada, M., Shapiro, S. C., Thielscher, M., Breazeal, C., … Ishida, H. (2006). Human-inspired robots. IEEE Intelligent Systems, 21(4), 74–85.

Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130.

MacDorman, K. F. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it. In CogSci-2005 workshop: toward social mechanisms of android science (pp. 106–118).

Mathur, M. B., & Reichling, D. B. (2009). An uncanny game of trust: Social trustworthiness of robots inferred from subtle anthropomorphic facial cues. In Human-Robot Interaction (HRI), 2009 4th ACM/IEEE International Conference (pp. 313-314). La Jolla, CA.

Saygin, A. P. (2012). What can the Brain Tell us about Interactions with Artificial Agents and Vice Versa? In Workshop on Teleoperated Androids, 34th Annual Conference of the Cognitive Science Society.

Saygin, A. P., & Stadler, W. (2012). The role of appearance and motion in action prediction. Psychological Research, 76(4), 388–394. http://doi.org/10.1007/s00426-012-0426-z

Urgen, B. A., & Miller, L. E. (2015). Towards an Empirically Grounded Predictive Coding Account of Action Understanding. Journal of Neuroscience, 35(12), 4789–4791.