Enactivism, Computation, and Autonomy

by Joe Dewhurst – Teaching Assistant at the University of Edinburgh

Enactivism has historically rejected computational characterisations of cognition, at least in its more traditional versions. This has led to the perception that enactivist approaches to cognition must be opposed to more mainstream computationalist approaches, which offer a computational characterisation of cognition. However, the conception of computation that enactivism rejects is in some senses quite old-fashioned, and it is not so clear that enactivism need be opposed to computation understood in a more modern sense. Demonstrating that there could be compatibility, or at least no necessary opposition, between enactivism and computationalism (in some sense) would open the door to a possible reconciliation or cooperation between the two approaches.

In a recently published paper (Villalobos & Dewhurst 2017), my collaborator Mario and I have focused on elucidating some of the reasons why enactivism has rejected computation, and have argued that these do not necessarily apply to more modern accounts of computation. In particular, we have demonstrated that a physically instantiated Turing machine, which we take to be a paradigmatic example of a computational system, can meet the autonomy requirements that enactivism uses to characterise cognitive systems. This demonstration goes some way towards establishing that enactivism need not be opposed to computational characterisations of cognition, although there may be other reasons for this opposition, distinct from the autonomy requirements.

The enactive concept of autonomy first appears in its modern guise in Varela, Thompson, & Rosch’s 1991 book The Embodied Mind, although it has important historical precursors in Maturana’s autopoietic theory (see his 1970, 1975, 1981; see also Maturana & Varela 1980) and cybernetic work on homeostasis (see e.g. Ashby 1956, 1960). There are three dimensions of autonomy that we consider in our analysis of computation. Self-determination requires that the behaviour of an autonomous system must be determined by that system’s own structure, and not by external instruction. Operational closure requires that the functional organisation of an autonomous system must loop back on itself, such that the system possesses no (non-arbitrary) inputs or outputs. Finally, an autonomous system must be precarious, such that the continued existence of the system depends on its own functional organisation, rather than on external factors outside of its control. In this post I will focus on demonstrating that these criteria can be applied to a physical computing system, rather than addressing why or how enactivism argues for them in the first place.

All three criteria have traditionally been used to disqualify computational systems from being autonomous systems, and hence to deny that cognition (which for enactivists requires autonomy) can be computational (see e.g. Thompson 2007: chapter 3). Here it is important to recognise that the enactivists have a particular account of computation in mind, one that they have inherited from traditional computationalists. According to this ‘semantic’ account, a physical computer is defined as a system that performs systematic transformations over content-bearing (i.e. representational) states or symbols (see e.g. Sprevak 2010). With such an account in mind, it is easy to see why the autonomy criteria might rule out computational systems. We typically think of such a system as consuming symbolic inputs, which it transforms according to programmed instructions, before producing further symbolic outputs. Already this system has failed to meet the self-determination and operational closure criteria. Furthermore, as artefactual computers are typically reliant on their creators for maintenance, etc., they also fail to meet the precariousness criterion. So, given this quite traditional understanding of computation, it is easy to see why enactivists have typically denied that computational systems can be autonomous.

Nonetheless, understood according to more recent, ‘mechanistic’ accounts of computation, there is no reason to think that the autonomy criteria must necessarily exclude computational systems. Whilst they differ in some details, all of these accounts deny that computation is inherently semantic, and instead define physical computation in terms of mechanistic structures. We will not rehearse these accounts in any detail here, but the basic idea is that physical computation can be understood in terms of mechanisms that perform systematic transformations over states that do not possess any intrinsic semantic content (see e.g. Miłkowski 2013; Fresco 2014; Piccinini 2015). With this rough framework in mind, we can return to the autonomy criteria.

Even under the mechanistic account, computation is usually understood in terms of mappings between inputs and outputs, where there is a clear sense of the beginning and end of the computational operation. A system organised in this way can be described as ‘functionally open’, meaning that its functional organisation is open to the world. A functionally closed system, on the other hand, is one whose functional organisation loops back through the world, such that the environmental impact of the system’s ‘outputs’ contributes to the ‘inputs’ that it receives.

A simple example of this distinction can be found by considering two different ways that a thermostat could be used. In the first case the sensor, which detects ambient temperature, is placed in one house, and the effector, which controls a radiator, is placed in another (see figure 1). This system is functionally open, because there is only a one-way connection between the sensor and the effector, allowing us to straightforwardly identify inputs and outputs to the system.

A more conventional way of setting up a thermostat is with both the sensor and the effector in the same house (see figure 2). In this case the apparent ‘output’ (i.e. control of the radiator) loops back round to the apparent ‘input’ (i.e. ambient temperature), forming a functionally closed system. The ambient air temperature in the house is effectively part of the system, meaning that we could just as well treat the effector as providing input and the sensor as producing output – there is no non-arbitrary beginning or end to this system.
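To make the distinction concrete, here is a minimal toy simulation (in Python; purely illustrative, and not drawn from the original paper) of the second, functionally closed arrangement. The effector’s apparent ‘output’ changes the ambient temperature, and that very quantity is what the sensor reads as its next apparent ‘input’, so where the loop ‘begins’ is an arbitrary choice.

```python
# Toy illustration (not from the paper): a functionally closed thermostat.
# The radiator's effect on the room is exactly what the sensor reads next,
# so the 'input'/'output' labels mark no non-arbitrary start or end point.

def sensor(room_temp):
    """Read the ambient temperature (the apparent 'input')."""
    return room_temp

def effector(reading, setpoint=20.0):
    """Decide whether the radiator is on (the apparent 'output')."""
    return reading < setpoint

room_temp = 15.0
for step in range(10):
    radiator_on = effector(sensor(room_temp))
    # The environment closes the loop: the radiator changes the very
    # quantity that the sensor will read on the next pass.
    room_temp += 0.8 if radiator_on else -0.3
    print(f"step {step}: temp={room_temp:.1f}C, radiator={'on' if radiator_on else 'off'}")
```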

Whilst it is typical to treat a computing mechanism more like the first thermostat, with a clear input and output, we do not think that this perspective is essential to the mechanistic understanding of computation. There are two possible ways that we could arrange a computing mechanism. The functionally open mechanism (figure 3) reads from one tape and writes onto another, whilst the functionally closed mechanism (figure 4) reads and writes onto the same tape, creating a closed system analogous to the thermostat with its sensor and effector in the same house. As Wells (1998) suggests, a conventional Turing machine is actually arranged in the second way, providing an illustration of a functionally closed computing mechanism. Whether or not this is true of other computational systems is a distinct question, but it is clear that at least some physically implemented computers can exhibit operational closure.
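As a rough sketch of Wells’ point (illustrative Python only; the machine and its transition table are invented for this post, not taken from Wells or from our paper), a single-tape Turing machine writes its ‘outputs’ back onto the same tape from which its later ‘inputs’ are read, so its functional organisation loops back on itself in just the way the second thermostat’s does.

```python
# Toy single-tape Turing machine (invented example). Because it writes onto the
# same tape it reads, every 'output' becomes part of the environment supplying
# its future 'inputs' -- a functionally closed arrangement.

# transitions: (state, symbol_read) -> (symbol_to_write, head_move, next_state)
transitions = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank symbol: stop
}

tape = list("0110_")
head, state = 0, "flip"
while state != "halt":
    write, move, state = transitions[(state, tape[head])]
    tape[head] = write   # the 'output' is written back onto the shared tape
    head += move         # the next 'input' is read from that same tape
print("".join(tape))     # prints: 1001_
```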

The self-determination criterion requires that a system’s operations are determined by its own structure, rather than by external instructions. This criterion applies straightforwardly to at least some computing mechanisms. Whilst many computers are programmable, their basic operations are nonetheless determined by their own physical structure, such that the ‘instructions’ provided by the programmer only make sense in the context of the system itself. To another system, with a distinct physical structure, those ‘instructions’ would be meaningless. Just as the enactive automaton ‘Bittorio’ brings meaning to a meaningless sea of 1s and 0s (see Varela 1988; Varela, Thompson, & Rosch 1991: 151-5), so the structure of a computing mechanism brings meaning to the world that it encounters.

Finally, we can turn to the precariousness criterion. Whilst the computational systems that we construct are typically reliant upon us for continued maintenance and a supply of energy, and play no direct role in their own upkeep, this is a pragmatic feature of how we happen to design those systems, rather than anything essential to computation. We could easily imagine a computing mechanism designed so that it seeks out its own source of energy and is able to maintain its own physical structure. Such a system would be precarious in just the same sense that enactivism conceives of living systems as being precarious. So there is no in-principle reason why a computing system should not be able to meet the precariousness criterion.

In this post I have very briefly argued that the enactivist autonomy criteria can be applied to (some) physically implemented computing mechanisms. Of course, enactivists may have other reasons for thinking that cognitive systems cannot be computational. Nonetheless, we think this analysis could be interesting for a couple of reasons. Firstly, insofar as computational neuroscience and computational psychology have been successful research programs, enactivists might be interested in adopting some aspects of computational explanation for their own analyses of cognitive systems. Secondly, we think that the enactivist characterisation of autonomous systems might help to elucidate the senses in which a computational system might be cognitive. Now that we have established the basic possibility of autonomous computational systems, we hope to develop future work along both of these lines, and invite others to do so too.

I will leave you with this short and amusing video of the autonomous robotic creations of the British cyberneticist W. Grey Walter, which I hope might serve as a source of inspiration for future cooperation between enactivism and computationalism.

 

References

  • Ashby, R. (1956). An introduction to cybernetics. London: Chapman and Hall.
  • Ashby, R. (1960). Design for a Brain. London: Chapman and Hall.
  • Fresco, N. (2014). Physical computation and cognitive science. Berlin, Heidelberg: Springer-Verlag.
  • Maturana, H. (1970). Biology of cognition. Biological Computer Laboratory, BCL Report 9, University of Illinois, Urbana.
  • Maturana, H. (1975). The organization of the living: A theory of the living organization. International Journal of Man-Machine Studies, 7, 313-332.
  • Maturana, H. (1981). Autopoiesis. In M. Zeleny (Ed.), Autopoiesis: a theory of living organization (pp. 21-33). New York; Oxford: North Holland.
  • Maturana, H. and Varela, F. (1980). Autopoiesis and cognition: The realization of the living. Dordrecht, Holland: Kluwer Academic Publisher.
  • Miłkowski, M. (2013). Explaining the computational mind. Cambridge, MA: MIT Press.
  • Piccinini, G. (2015). Physical Computation. Oxford: OUP.
  • Sprevak, M. (2010). Computation, Individuation, and the Received View on Representation. Studies in History and Philosophy of Science, 41: 260-70.
  • Thompson, E. (2007). Mind in Life: Biology, phenomenology, and the sciences of mind. Cambridge, MA: Harvard University Press.
  • Varela, F. (1988). Structural Coupling and the Origin of Meaning in a Simple Cellular Automaton. In E. E. Sercarz, F. Celada, N. A. Mitchison, & T. Tada (eds.), The Semiotics of Cellular Communication in the Immune System (pp. 151-61). New York: Springer-Verlag.
  • Varela, F., Thompson, E., and Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.
  • Villalobos, M. & Dewhurst, J. (2017). Enactive autonomy in computational systems. Synthese, doi:10.1007/s11229-017-1386-z
  • Wells, A. J. (1998). Turing’s Analysis of Computation and Theories of Cognitive Architecture. Cognitive Science, 22(3), 269-94.

 

Are olfactory objects spatial?

by Solveig Aasen – Associate Professor of Philosophy at the University of Oslo

On several recent accounts of orthonasal olfaction, olfactory experience does (in some sense) have a spatial aspect. These views open up novel ways of thinking about the spatiality of what we perceive. For while olfactory experience may not qualify as spatial in the way visual experience does, it may nevertheless be spatial in a different way. What way? And how does it differ from visual spatiality?

It is often noted that, by contrast to what we see, what we smell is neither at a distance nor at a direction from us. Unlike animals such as rats and the hammerhead shark, which have their nostrils placed far enough apart that they can smell in stereo (much like we can see and hear in stereo), we humans are not able to tell which direction a smell is coming from (except perhaps under special conditions (Radil and Wysocki 1998; Porter et al. 2005), or if we individuate olfaction so as to include the trigeminal nerve (Young et al. 2014)). Nor are we able to tell how a smell is distributed around where we are sitting (Batty 2010a p. 525; 2011, p. 166). Nevertheless, it can be argued that what we smell can be spatial in some sense. Several suggestions to this effect are on offer.

Batty (2010a; 2010b; 2011; 2014) holds that what we smell (olfactory properties, according to her) is presented as ‘here’. This is not a location like any other. It is the only location at which olfactory properties are ever presented, for olfactory experience, on Batty’s view, lacks spatial differentiation. Moreover, she emphasises that, if we are to make room for a certain kind of non-veridical olfactory experience, ‘here’ cannot be a location in our environment; it is not to be understood as ‘out there’ (Batty 2010b, pp. 20-21). This latter point contrasts with Richardson’s (2013) view. She observes that, because olfactory experience involves sniffing, it is part of the phenomenology of olfactory experience that something (odours, according to Richardson) seems to be brought into the nostrils from outside the body. Thus, the object of olfactory experience seems spatial in the sense that what we smell is coming from without, although it is not coming from any particular location. It is interesting that although Batty’s and Richardson’s claims contrast, they both seem to think that they are pointing out a spatial aspect of olfactory experiences when claiming that what we smell is, respectively, ‘here’ or coming from without.

Another view, compatible with the claim that what we smell is neither at a distance nor direction from us, is presented by Young (2016). He emphasises the fact that the molecular structure of chemical compounds determines which olfactory quality subjects experience. It is precisely this structure within an odour plume, he argues, that is the object of olfactory experience. Would an olfactory experience of the molecular structure have a spatial aspect? Young does not specify this. But since the structure of the molecule is spatial, one can at least envisage that experiencing molecular structure is, in part, to experience the spatial relations between molecules. If so, we can envisage spatiality without perspective. For, presumably, the spatial orientation the molecules have relative to each other and to the perceiver would not matter to the experience. Presumably, it would be their internal spatial structure that is experienced, regardless of their orientation relative to other things.

The claim that what we smell is neither at a direction nor distance from us can, however, be disputed. As Young (2016) notes, this claim neglects the possibility of tracking smells over time. Although the boundaries of the cloud of odours are less clear than for visual objects, the extension of the cloud in space and the changes in its intensity seem to be spatial aspects of our olfactory experiences when we move around over time. Perhaps one would object that the more fundamental type of olfactory experience is synchronic and not diachronic. The synchronic variety has certainly received the most attention in the literature. But if one is interested in an investigation of our ordinary olfactory experiences, it is not clear why diachronic experiences should be less worthy of consideration.

Perhaps one would think that an obvious spatial aspect of olfactory experience is the spatial properties of the source, i.e. the physical object from which the chemical compounds in the air originate. But there is a surprisingly widespread consensus in the literature that the source is not part of what we perceive in olfaction. Lycan’s (1996; 2014) layering view may be an exception. He claims that we smell sources by smelling odours. But, as Lycan himself notes, there is a question as to whether the ‘by’-relation is an inference relation. If it is, his claim is not necessarily substantially different from Batty’s (2014, pp. 241-243) claim that olfactory properties are locked onto source objects at the level of belief, but that sources are not perceived.

Something that makes evaluation of the abovementioned ideas about olfactory spatiality complicated is that there is a variety of facts about olfaction that can be taken to inform an account of olfactory experience. As Stevenson and Wilson (2006) note, chemical structure has been much studied. But even though the nose has about 300 receptors ‘which allow the detection of a nearly endless combination of different odorants’ (ibid., p. 246), how relevant is the chemical structure to the question of what we can perceive, when the discriminations we as perceivers report are much less detailed? What is the relevance of facts about the workings and individuation of the olfactory system? Is it a serious flaw if our conclusions about olfactory experience contradict the phenomenology? Different contributors to the debate seem to provide or presuppose different answers to questions like these. This makes comparison complicated. Comparison aside, however, some interesting ideas about olfactory spatiality can, as briefly shown, be appreciated on their own terms.

 

 

References:

Batty, C. 2014. ‘Olfactory Objects’. In D. Stokes, M. Matthen and S. Biggs (eds.), Perception and Its Modalities. Oxford: Oxford University Press.

Batty, C. 2011. ‘Smelling Lessons’. Philosophical Studies 153: 161-174.

Batty, C. 2010a. ‘A Representationalist Account of Olfactory Experience’. Canadian Journal of Philosophy 40(4): 511-538.

Batty, C. 2010b. ‘What the Nose Doesn’t Know: Non-veridicality and Olfactory Experience’. Journal of Consciousness Studies 17: 10-27.

Lycan, W. G. 2014. ‘The Intentionality of Smell’. Frontiers in Psychology 5: 68-75.

Lycan, W. G. 1996. Consciousness and Experience. Cambridge, MA: Bradford Books/MIT Press.

Radil, T. and C. J. Wysocki. 1998. ‘Spatiotemporal masking in pure olfaction’. Olfaction and Taste 12(855): 641-644.

Richardson, L. 2013. ‘Sniffing and Smelling’. Philosophical Studies 162: 401-419.

Porter, J., Anand, T., Johnson, B. N., Kahn, R. M., and N. Sobel. 2005. ‘Brain mechanisms for extracting spatial information from smell’. Neuron 47: 581-592.

Young, B. D. 2016. ‘Smelling Matter’. Philosophical Psychology 29(4): 520-534.

Young, B. D., A. Keller and D. Rosenthal. 2014. ‘Quality-space Theory in Olfaction’. Frontiers in Psychology 5: 116-130.

Wilson, D. A. and R. J. Stevenson. 2006. Learning to Smell: Olfactory Perception from Neurobiology to Behavior. Baltimore, MD: The Johns Hopkins University Press.

How stereotypes shape our perceptions of other minds

by Evan Westra – Ph.D. Candidate, University of Maryland

[Image: Ambiguous Pictures Task stimuli from McGlothlin & Killen (2006)]

McGlothlin & Killen (2006) showed groups of (predominantly white) American elementary school children from ages 6 to 10 a series of vignettes depicting children in ambiguous situations. For instance, one picture (above) showed two children by a swing set, with one on the ground frowning, and one behind the swing with a neutral expression. Two things might be going on in this picture: i) the child on the ground may have fallen off by accident (neutral scenario), or ii) the child on the ground may have been intentionally pushed by the one standing behind the swing (harmful scenario). Crucially, McGlothlin and Killen varied the race of the children depicted in the image, such that some children saw a white child standing behind the swing (left), and some saw a black child (right). Children were asked to explain what had just happened in the scenario, to predict what would happen next, and to evaluate the action depicted. Overwhelmingly, children were more likely to give the harmful scenario interpretation – that the child behind the swing intentionally pushed the other child – when the child behind the swing was black than when she was white. The race of the child depicted, it seems, influenced whether or not participants made an inference to harmful intentions.

This is yet another depressing example of how racial bias can warp our perceptions of others. But this study (and others like it: Sagar & Schofield 1990; Burnham & Harris 1992; Condry et al. 1985) also hints at a relationship between two forms of social cognition that are not often studied together: mindreading and stereotyping. The stereotyping component is clear enough. The mindreading component comes from the fact that race didn’t just affect kids’ attitudes towards the target – it affected what they thought was going on in the target’s mind. Although these two ways of thinking about other people – mindreading and stereotyping – both seem to play an important role in how we navigate the social world, curiously little attention has been paid to understanding the way they relate to one another. In this post, I want to explore this relationship. I’ll first briefly explain what I mean by “mindreading” and “stereotyping.” Next, I’ll discuss one existing proposal about the relationship between mindreading and stereotyping, and raise some problems for it. Then I will lay out the beginnings of a different way of cashing out this relationship.

*          *          *

First, let’s get clear on what I mean by “mindreading” and “stereotyping.”

Mindreading:

In order to achieve our goals in highly social environments, we need to be able to accurately predict what other people will do, and how they will react to us. To do this, our brains generate complex models of other people’s beliefs, desires, and intentions, which we use to predict and interpret their behavior. This capacity to represent other minds is known variously as theory of mind, mindreading, mentalizing, and folk psychology. In human beings, this ability begins to emerge very early in development. As adults, we use it constantly, in a fast, flexible, and unconscious fashion. We use it in many important social activities, including communication, social coordination, and moral judgment.

Stereotyping:

Stereotypes are ways of storing generic information about social groups (including races, genders, sexual orientations, age-groups, nationalities, professions, political affiliations, physical or mental abilities, and so on) (Amodio 2014). A particularly important aspect of stereotypes is that they often contain information about stable personality traits. Unfortunately, it is all too easy for us to think of stereotypes about how certain social groups are lazy, or greedy, or aggressive, or submissive, and so on. According to Susan Fiske and colleagues’ Stereotype Content Model (SCM), there is an underlying pattern to the way we attribute personality traits to groups (Cuddy et al. 2009; Cuddy et al. 2007; Fiske et al. 2002; Fiske 2015). Personality trait attribution, on this view, varies along two primary dimensions: warmth and competence. The warmth dimension includes traits like (dis-)honesty, (un-)trustworthiness, and (un-)friendliness. These are traits that tell you whether or not someone is liable to help you or harm you. The competence dimension contains traits like (un-)intelligence, skillfulness, persistence, laziness, clumsiness, etc. These traits tell you how effective someone is at achieving their goals.

Together, these two dimensions combine to yield four distinct clusters of traits, each of which picks out a different kind of stereotype:

[Figure: the Stereotype Content Model – four clusters of stereotypes defined by the warmth and competence dimensions]

*          *          *

So what do stereotyping and mindreading have to do with one another? There are some obvious differences, of course: stereotypes are mainly about groups, while mindreading is mainly about individuals. But intuitively, it seems like knowing about somebody’s social group membership could tell you a lot about what they think: if I tell you that I am a liberal, for instance, that should tell you a lot about my beliefs, values, and social preferences – valuable information, when it comes to predicting and interpreting my behavior.

Some philosophers and psychologists, such as Kristin Andrews, Anika Fiebich and Mark Coltheart, have suggested that stereotypes and mindreading may actually be alternative strategies for predicting and interpreting behavior (Andrews 2012; Fiebich & Coltheart 2015). That is, it may be that sometimes we use stereotypes instead of mindreading to figure out what a person is going to do. According to one such proposal (Fiebich & Coltheart 2015), stereotypes allow us to predict behavior because they encode associations between social categories, situations, and behaviors. Thus, one might form a three-way association between the social category police, the situation donut shops, and the behavior eating donuts, which would lead one to predict that, when one sees a police officer in a donut shop, he or she will likely be eating a donut. A more complex version of this associationist approach would be to associate social groups with particular trait labels (as per the SCM), and thus consist in four-way associations between social categories, trait labels, situations, and behaviors (Fiebich & Coltheart 2015; Andrews 2012). Thus, one might come to associate the trait of generosity with leaving large tips in restaurants, and associate the social category of uncles with generosity, and thereby come to expect uncles to leave large tips in restaurants. One might then explain this behavior by referring to the uncle’s generosity. The key thing to notice about these accounts is that their predictions do not rely at all upon mental-state attributions. This is by design: these proposals are meant to show that we often don’t need mindreading to predict or interpret behavior.
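To make the structure of such a view vivid (a toy sketch in Python, with entirely invented associations; nothing here is proposed by Andrews or by Fiebich and Coltheart), the three-way version amounts to a lookup from a social category and a situation directly to an expected behaviour, with no mental-state attribution appearing anywhere in the mapping:

```python
# Toy sketch of a purely associative behaviour-predictor (invented contents).
# Predictions are read off stored (social category, situation) -> behaviour
# links, without attributing any beliefs, desires, or intentions to the target.

associations = {
    ("police officer", "donut shop"): "eating a donut",
    ("uncle", "restaurant"): "leaving a large tip",
}

def predict_behaviour(category, situation):
    """Return the associated behaviour, if any such link has been stored."""
    return associations.get((category, situation), "no prediction")

print(predict_behaviour("police officer", "donut shop"))  # eating a donut
print(predict_behaviour("uncle", "restaurant"))           # leaving a large tip
```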

One problem for this sort of view comes from its invocation of “situations.” What information, one might wonder, is contained within the scope of a particular “situation”? Surely, a situation does not include everything about the state of the world at a given moment. Situations are probably meant to pick out local states of affairs. But not all the facts about a local state of affairs will be relevant to behavior prediction. The presence of mice in the kitchen of a restaurant, for instance, will not affect your predictions about the size of your uncle’s tip. It might, however, affect your predictions about the behavior of the health inspector, should one suddenly arrive. Which local facts are predictively useful will ultimately depend upon their relevance to the agent whose behavior we are predicting. But whether or not a fact is relevant to an agent will depend upon that agent’s beliefs about the local state of affairs, as well as her goals and desires. If this is how representations of predictively useful situations are computed, then the purportedly non-mentalistic proposal given above really includes a tacit appeal to mindreading. If this is not how situations are computed, then we are owed an explanation of how the non-mentalistic behavior-predictor arrives at predictively useful representations of situations that do not depend upon considerations of relevance.

*          *          *

Rather than treating mindreading and stereotypes as separate forms of behavior-prediction and interpretation, we might instead explore the ways in which stereotypes might inform mindreading. The key to this approach, I suggest, lies in the fact that stereotypes encode information about personality traits. In many ways, personality traits are like mental states: they are unobservable mental properties of individuals, and they are causally related to behavior. But they also differ in one key respect: their temporal stability. Beliefs and desires are inherently unstable: a belief that P can be changed by the observation of not-P; a desire for Q can be extinguished by the attainment of Q. Personality traits, in contrast, cannot be extinguished or abandoned based on everyday events. Rather, they tend to last throughout a person’s lifetime, and manifest themselves in many different ways across many different situations. A unique feature of personality traits, in other words, is that they are highly stable mental entities (Doris 2002). So when stereotypes ascribe traits to groups, they are ascribing a property that one could reasonably expect to remain consistent across many different situations.

The temporal properties of mental states are extremely relevant for mindreading, especially in models that employ Bayesian Predictive Coding (Kilner & Frith 2007; Koster-Hale & Saxe 2013; Hohwy & Palmer 2014; Hohwy 2013; Clark 2015). To see why, let’s start with an example:

Suppose that we believe that Laura is thirsty, and have attributed to her the goal of getting a drink (G). As goals go, this one is relatively short-term (unlike, say, the goal of getting a PhD). But in order to achieve (G), we predict that Laura must form a number of even shorter-term sub-goals: (G1) get the juice from the fridge, and (G2) pour herself a glass of juice. But each of these requires the formation of still shorter-term sub-sub-goals: (G1a) walk over to the kitchen, (G1b) open the fridge door, (G1c) remove the juice container, (G2a) remove a cup from the cupboard, (G2b) pour juice into the cup. Predicting Laura’s behavior in this context thus begins with the ascription of a longer-duration mental state (G), followed by the ascription of successively shorter-term mental-state attributions (G1, G2, G1a, G1b, G1c, G2a, G2b).

As mindreaders, we can use attributions of more abstract, temporally extended mental states to make top-down inferences about more transient mental states. At each level in this action-prediction hierarchy, we use higher-level goal-attributions to constrain the space of possible sub-goals that the agent might form. We then use our prior experience to select the most likely sub-goal from the hypothesis space, and the process repeats itself. Ultimately, this yields fairly fine-grained expectations about motor-intentions, which manifest themselves as mirror-neuron activity (Kilner & Frith 2007; Csibra 2008). Action-prediction thus plays out as a descent from more stable mental-state attributions to more transient ones, which ultimately bottom out in highly concrete expectations about behavior.

Personality traits, which are distinguished by their high degree of temporal stability, fit naturally into the upper levels of this action-prediction hierarchy. Warmth traits, for instance, can tell us about the general preferences of an agent: a generous person probably has a general preference for helping others, while a greedy person probably has a general desire to enrich herself. These broad preference-attributions can in turn inform more immediate goal-attributions, which can then be used to predict behavior.
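Sketched very roughly (illustrative Python only; the hierarchy and its contents are invented for this post rather than drawn from any of the cited models), the suggestion is that temporally stable trait attributions sit at the top of the action-prediction hierarchy and successively narrow the space of preferences, goals, and finally concrete behavioural expectations:

```python
# Toy sketch of top-down action prediction (contents invented for illustration).
# More temporally stable attributions (traits) constrain the hypothesis space
# for less stable ones (preferences, goals), ending in concrete behaviours.

hierarchy = {
    "generous": {                          # stable personality trait
        "prefers helping others": {        # broad, standing preference
            "leave a large tip": ["add 25% to the bill", "leave cash on table"],
            "pay for the meal": ["reach for wallet", "call the waiter over"],
        }
    }
}

def predict(trait):
    """Descend the hierarchy, taking the first candidate at each level."""
    preferences = hierarchy[trait]
    preference = next(iter(preferences))
    goals = preferences[preference]
    goal = next(iter(goals))
    return preference, goal, goals[goal]

print(predict("generous"))
# ('prefers helping others', 'leave a large tip',
#  ['add 25% to the bill', 'leave cash on table'])
```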

This role for representations of personality traits in mental-state inference fits well with what we know about how we reason about traits more generally. For instance, we often make extremely rapid judgments about the warmth and competence traits of individuals based on fairly superficial evidence, such as facial features (Todorov et al. 2008); we also tend to over-attribute the causes of behavior to personality traits, rather than situational factors – a phenomenon commonly known as the “fundamental attribution error” or the “correspondence bias” (Gawronski 2004; Ross 1977; Gilbert et al. 1995). Prioritizing personality traits makes a lot of sense if they form the inferential basis for more complex forms of behavior prediction. It also makes sense that this aspect of mindreading would need to rely on fast, rough-and-ready heuristics, since personality trait information would need to be inferred very quickly in order to be useful in action-planning.

From a computational perspective, then, using personality traits to make inferences about behavior makes a lot of sense, and might make mindreading more efficient. But in exchange for this efficiency, we make a very disturbing trade. Stereotypes, which can be activated rapidly on the basis of easily available perceptual cues, provide the mindreading system with a ready means of storing trait information (Mason et al. 2006; Macrae et al. 1994). With this speed comes one of the most morally pernicious forms of human social cognition, one that helps to perpetuate discrimination and social inequality.

*          *          *

 The picture I’ve painted in this post is, admittedly, rather pessimistic. But just because the roots of discrimination are cognitively deep, we should not conclude that it is inevitable. More recent work from McGlothlin and Killen (2010) should give us some hope: while children from racially homogeneous schools (who had little direct contact with members of other races) tended to show signs of biased intention-attribution, McGlothlin and Killen also found that children from racially heterogeneous schools (who had regular contact with members of other races) did not display such signs of bias. Evidently, intergroup contact is effective in curbing the development of stereotypes – and, by extension, biased mindreading.

 

References:

Amodio, D.M., 2014. The neuroscience of prejudice and stereotyping. Nature Reviews: Neuroscience, 15(10), pp.670–682.

Andrews, K., 2012. Do apes read minds?: Toward a new folk psychology, Cambridge, MA: MIT Press.

Burnham, D.K. & Harris, M.B., 1992. Effects of Real Gender and Labeled Gender on Adults’ Perceptions of Infants. Journal of Genetic Psychology, 15(2), pp.165–183.

Clark, A., 2015. Surfing uncertainty: Prediction, action, and the embodied mind, Oxford: Oxford University Press.

Condry, J.C. et al., 1985. Sex and Aggression: The Influence of Gender Label on the Perception of Aggression in Children. Child Development, 56(1), pp.225–233.

Csibra, G., 2008. Action mirroring and action understanding: an alternative account. In P. Haggard, Y. Rossetti, & M. Kawato, eds. Sensorymotor Foundations of Higher Cognition. Attention and Performance XXII. Oxford: Oxford University Press, pp. 435–459.

Cuddy, A.J.C. et al., 2009. Stereotype content model across cultures: Towards universal similarities and some differences. British Journal of Social Psychology, 48(1), pp.1–33.

Cuddy, A.J.C., Fiske, S.T. & Glick, P., 2007. The BIAS map: behaviors from intergroup affect and stereotypes. Journal of personality and social psychology, 92(4), pp.631–48.

Doris, J.M., 2002. Lack of character: Personality and moral behavior, Cambridge, UK: Cambridge University Press.

Fiebich, A. & Coltheart, M., 2015. Various Ways to Understand Other Minds: Towards a Pluralistic Approach to the Explanation of Social Understanding. Mind and Language, 30(3), pp.235–258.

Fiske, S.T., 2015. Intergroup biases: A focus on stereotype content. Current Opinion in Behavioral Sciences, 3(April), pp.45–50.

Fiske, S.T., Cuddy, A.J.C. & Glick, P., 2002. A Model of (Often Mixed) Stereotype Content: Competence and Warmth Respectively Follow From Perceived Status and Competition. Journal of Personality and Social Psychology, 82(6), pp.878–902.

Gawronski, B., 2004. Theory-based bias correction in dispositional inference: The fundamental attribution error is dead, long live the correspondence bias. European Review of Social Psychology, 15(1), pp.183–217.

Gilbert, D.T. et al., 1995. The Correspondence Bias. Psychological Bulletin, 117(1), pp.21–38.

Hohwy, J., 2013. The predictive mind, Oxford University Press.

Hohwy, J. & Palmer, C., 2014. Social Cognition as Causal Inference: Implications for Common Knowledge and Autism. In M. Gallotti & J. Michael, eds. Perspectives on Social Ontology and Social Cognition. Dordrecht: Springer Netherlands, pp. 167–189.

Kilner, J.M. & Frith, C.D., 2007. Predictive coding: an account of the mirror neuron system. Cognitive Processing, 8(3), pp.159–166.

Koster-Hale, J. & Saxe, R., 2013. Theory of Mind: A Neural Prediction Problem. Neuron, 79(5), pp.836–848.

Macrae, C.N., Stangor, C. & Milne, A.B., 1994. Activating Social Stereotypes: A Functional Analysis. Journal of Experimental Social Psychology, 30(4), pp.370–389.

Mason, M.F., Cloutier, J. & Macrae, C.N., 2006. On construing others: Category and stereotype activation from facial cues. Social Cognition, 24(5), p.540.

McGlothlin, H. & Killen, M., 2010. How social experience is related to children’s intergroup attitudes. European Journal of Social Psychology, 40(4), pp.625–634.

McGlothlin, H. & Killen, M., 2006. Intergroup Attitudes of European American Children Attending Ethnically Homogeneous Schools. Child Development, 77(5), pp.1375–1386.

Ross, L., 1977. The Intuitive Psychologist And His Shortcomings: Distortions in the Attribution Process. Advances in Experimental Social Psychology, 10(C), pp.173–220.

Sagar, H.A. & Schofield, J.W., 1990. Racial and behavioral cues in Black and White children’s perceptions of ambiguously aggressive acts. Journal of Personality and Social Psychology, 39(October), pp.590–598.

Todorov, A. et al., 2008. Understanding evaluation of faces on social dimensions. Trends in Cognitive Sciences, 12(12), pp.455–460.

 

Thanks to Melanie Killen and Joan Tycko for permission to use images of experimental stimuli from McGlothlin & Killen (2006, 2010).

 

Delusions as Explanations


by Matthew Parrott – Lecturer in the Department of Philosophy at King’s College London

One idea that has been extremely influential within cognitive neuropsychology and neuropsychiatry is that delusions arise as intelligible responses to highly irregular experiences. For example, we might think that the reason a subject adopts the belief that a house has inserted a thought into her head is because she has in fact had an extremely bizarre experience representing a house pushing a thought into her head (the case comes from Saks 2007; see Sollberger 2014 for an account of thought insertion along these lines). If this were to happen, then delusions would arise for reasons that are familiar from cases of ordinary belief. A delusional subject would simply be endorsing or taking on board the content of her experience.

However, the notion that a delusion is an understandable response to an irregular experience need not be construed along the lines of a subject accepting the content of her experience. Over a number of years, Brendan Maher advocated an influential alternative proposal, according to which an individual adopts a delusional belief because it serves as an explanation of her ‘strange’ or ‘significant’ experience (see Maher 1974, 1988, 1999). Crucially, for Maher, the content of the subject’s experience is not identical to the content of her delusional belief. Rather, the latter is determined in part by contextual factors, such as cultural background or what Maher calls ‘general explanatory systems’ (cf. 1974). Maher’s approach is often referred to as the ‘explanationist’ approach to understanding delusions (Bayne and Pacherie 2004).

Explanationist accounts have been especially popular with respect to the Capgras delusion that one’s friend or relative is really an imposter (e.g., Stone and Young 1997) and delusions of alien control (e.g., Blakemore et al. 2002). Yet, despite its prevalence, the explanationist approach has been called into question by a number of philosophers on the grounds that delusions are quite obviously very bad explanations.

For instance, Davies and colleagues argue:

‘The suggestion that delusions arise from the normal construction and adoption of an explanation for unusual features of experience faces the problem that delusional patients construct explanations that are not plausible and adopt them even when better explanations are available. This is a striking departure from the more normal thinking of non-delusional subjects who have similar experiences.’ (Davies, et. al. 2001, pg. 147; but see also Bayne and Pacherie 2004, Campbell 2001, Pacherie, et. al. 2006)

Indeed, since delusions strike most of us as highly implausible, it is hard to see how they could explain any experience, no matter how unusual. So if we want to understand delusional cognition along Maher’s lines, we will need to clarify the cognitive transition from anomalous experience to delusional belief in a way that illuminates how it could be a genuinely explanatory transition.

In what follows, I would like to distinguish three distinct ways in which a delusional belief might be thought to be explanatorily inadequate, each of which I think poses a distinct challenge for the explanationist approach.

The first concerns the phenomenal character of a delusional subject’s anomalous experience. Maher claims that the strange experiences we find in cases of delusion ‘demand’ explanations. But why is that? If the experiences that give rise to delusions do not themselves represent highly unusual states of affairs (as Maher seems to think), what is it about them that calls for or ‘demands’ an explanation? And does the particular phenomenal character of a ‘strange’ experience ‘demand’ a specific form of explanation, or are all ‘strange’ experiences relatively equal when it comes to their demands? The challenge for the explanationist is to clarify the phenomenal character of a delusional subject’s anomalous experience in such a manner that makes clear how it could be the explanandum of a delusion. Let’s call this the Phenomenal Challenge.

I actually think some very influential neuropsychological accounts have difficulty with the Phenomenal Challenge. To briefly take one example, Ellis and Young (1990) proposed that the Capgras delusion arises because of a lack of responsiveness to familiar faces in the autonomic nervous system. In non-delusional subjects, an experience of a familiar face is associated with an affective response in the autonomic nervous system, but Capgras subjects fail to have this response. Ellis and Young’s theory predicted that there would be no significant difference in the skin conductance responses of Capgras subjects when they are shown familiar versus unfamiliar faces, which has subsequently been confirmed by a number of studies. Thus it seems there is good evidence that a typical Capgras subject’s autonomic nervous system is not sensitive to familiar faces.

This seems promising but I don’t think it answers the Phenomenal Challenge because it doesn’t tell us anything about what a Capgras subject’s experience of a face is like. As John Campbell notes, ‘the mere lack of affect does not itself constitute the perception’s having a particular content.’ (2001, pg. 96) Moreover, individuals are not normally conscious of their autonomic nervous system (see Coltheart 2005). So it isn’t clear how diminished sensitivity within that system constitutes an experience that ‘demands’ an explanation involving imposters. To really understand why an anomalous experience of a familiar face calls for a delusional explanation, we need to get a better sense of what that experience is like.

A second worry raised in the previous passage is that delusional subjects adopt delusional explanations ‘even when better explanations are available’. Why does this happen? Why does a delusional subject select an inferior hypothesis from the set of those available to her? Let’s call this the Abductive Challenge.

To illustrate, let’s stick with Capgras. The explanationist proposal is that a subject adopts the belief that her friend has been replaced by an imposter in order to explain some odd experience. But even if we suppose the imposter hypothesis is empirically adequate, it is highly unlikely to be the best explanation available. As Davies and Egan remark, ‘one might ask whether there is an alternative to the imposter hypothesis that provides a better explanation of the patient’s anomalous experience. There is, of course, an obvious candidate for such a proposition.’ (2013, pg. 719) In fact, there seems to be a number of better available hypotheses; for example, that one has suffered brain injury or any hypothesis that appealed to more familiar changes affecting facial appearance, such as hair-style or illness.

Put simply, the Abductive Challenge is that even if we assume the cognitive transition from unusual experience to delusion involves something like abductive reasoning or inference to the best explanation, delusional subjects select poor explanations instead of better available alternatives. The explanationist needs to tell us why this happens (for some attempts see Coltheart et. al. 2010, Davies and Egan 2013, McKay 2012, Parrott and Koralus, 2015).

The final challenge for explanationism is, in my view, the most problematic. In the above passage, Davies and colleagues remark that delusions are extremely implausible. Along these lines, we might naturally wonder why a subject would even consider one to be a candidate explanation of her unusual experience. Why would she not instead immediately rule out a delusional hypothesis on the grounds that it is far too implausible to be given serious consideration? This concern is echoed by Fine and colleagues:

‘They explain the anomalous thought in a way that is so far-fetched as to strain the notion of explanation. The explanations produced by patients with delusions to account for their anomalous thoughts are not just incorrect; they are nonstarters. Appealing to the notion of explanation, therefore, does not clarify how the delusional belief comes about in the first place because the explanations of the delusional patients are nothing like explanations as we understand them.’ (Fine, et. al. 2005, pg. 160)

The task of explaining some target phenomenon demands cognitive resources, and the idea that delusions are explanatory ‘nonstarters’ means that they would normally be immediately rejected. We know that, when engaged in an explanatory task, a person considers only a restricted set of hypotheses, and it seems quite natural to exclude ones that are inconsistent with one’s background knowledge. Since delusions seem to be in conflict with our background knowledge, this is perhaps why we find it difficult to understand how someone could think a delusion is even potentially explanatory (for further discussion, see Parrott 2016).

So why do subjects consider delusional explanations as candidate hypotheses? This is the final challenge for the explanationist. Let’s call it the Implausibility Challenge. Notice that whereas the Abductive Challenge asks why a subject eventually adopts one hypothesis instead of another from among a fixed set of available alternatives, the Implausibility Challenge is more general. It asks where these hypotheses, the ones subject to further investigation, come from in the first place.

Can these three challenges be overcome? I am optimistic and have tried to address them for the case of thought insertion (see Parrott forthcoming). However, I also think much more work needs to be done.

First, as I mentioned above, it is not clear that we have a good understanding of what it is like for an individual to have the sorts of experiences we think are implicated in many cases of delusion. Without such understanding, I think it is hard to see why some experiences make demands on a person’s cognitive explanatory resources. I also suspect that understanding what various anomalous experiences are like might shed more light on why delusional individuals tend to adopt very similar explanations.

Second, I think that addressing the Implausibility Challenge requires us to obtain a far better understanding of how hypotheses are generated than we currently have. In both delusional and non-delusional cognition, an explanatory task presents a computational problem. Which candidate hypotheses should be selected for further empirical testing? Although I have suggested that epistemically impossible hypotheses are normally ruled out, that doesn’t tell us how candidates are ruled in. Plausibly, there is some selection function (or set of functions) that chooses candidate explanations of a target phenomenon, but, as Thomas and colleagues note, we have very little sense of how this might work:

‘Hypothesis generation is a fundamental component of human judgment. However, despite hypothesis generation’s importance in understanding judgment, little empirical and even less theoretical work has been devoted to understanding the processes underlying hypothesis generation.’ (Thomas et al. 2008, pg. 174)

The Implausibility Challenge strikes me as especially puzzling because I think we can easily see that certain strategies for hypothesis generation would be bad. For instance, it wouldn’t generally be good to consider hypotheses only if they have a prior probability above a certain threshold, because a hypothesis with a low prior probability might best explain a new piece of evidence.
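A small worked example (with numbers invented purely for illustration) makes the point: suppose a low-prior hypothesis H1 and a high-prior hypothesis H2, with P(H1) = 0.01, P(H2) = 0.5, and likelihoods P(E|H1) = 0.9 and P(E|H2) = 0.005 for some new piece of evidence E. The unnormalised posteriors are then P(E|H1)P(H1) = 0.009 and P(E|H2)P(H2) = 0.0025, so the hypothesis that a prior-probability threshold would have screened out in advance turns out to be the better explanation of E.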

Delusional cognition raises quite a few deep and interesting questions, many of which bear on how we think about belief formation and reasoning. And I have only scratched the surface when it comes to the kinds of puzzles that arise when we start thinking about the origins of delusion. But I hope that distinguishing these explanatory challenges will help us in thinking about the questions which need to be pursued if we are to assess the plausibility of the explanationist strategy.

 

References:

Bayne, T. and E. Pacherie. 2004. “Bottom-up or Top-down?: Campbell’s Rationalist Account of Monothematic Delusions.” Philosophy, Psychiatry, and Psychology, 11: 1-11.

Blakemore, S., D. Wolpert, and C. Frith. 2002. “Abnormalities in the Awareness of Action.” Trends in Cognitive Sciences, 6: 237-242.

Campbell, J. 2001. “Rationality, Meaning and the Analysis of Delusion.” Philosophy, Psychiatry and Psychology, 8: 89-100.

Coltheart, M., P. Menzies, and J. Sutton. 2010. “Abductive Inference and Delusional Belief.” Cognitive Neuropsychiatry, 15: 261-87.

Coltheart, M. 2005. “Conscious Experience and Delusional Belief.” Philosophy, Psychiatry and Psychology, 12: 153-57.

Davies, M., M. Coltheart, R. Langdon, and N. Breen. 2001. “Monothematic Delusions: Towards a Two-Factor Account.” Philosophy, Psychiatry and Psychology, 8: 133-158.

Davies, M. and Egan, A. 2013. “Delusion: Cognitive Approaches, Bayesian Inference and Compartmentalization.” In K.W.M. Fulford, M. Davies, R.G.T. Gipps, G. Graham, J. Sadler, G. Stanghellini and T. Thornton (eds.), The Oxford Handbook of Philosophy of Psychiatry. Oxford: Oxford University Press.

Ellis, H. and A. Young. 1990. “Accounting for Delusional Misidentifications.” British Journal of Psychiatry, 157: 239-48.

Fine, C., J. Craigie, and I. Gold. 2005. “The Explanation Approach to Delusion.” Philosophy, Psychiatry, and Psychology, 12 (2): 159-163.

Maher, B. 1974. “Delusional Thinking and Perceptual Disorder.” Journal of Individual Psychology, 30: 98-113.

Maher, B. 1988. “Anomalous Experience and Delusional Thinking: The Logic of Explanations.” In T. Oltmanns and B. Maher (eds.), Delusional Beliefs, Chichester: John Wiley and Sons, pp. 15-33.

Maher, B. 1999. “Anomalous Experience in Everyday Life: Its Significance for Psychopathology.” The Monist, 82: 547-570.

McKay, R. 2012. “Delusional Inference.” Mind and Language, 27: pp. 330-55.

Pacherie, E., M. Green, and T. Bayne. 2006. “Phenomenology and Delusions: Who Put the ‘Alien’ in Alien Control?” Consciousness and Cognition, 15: 566-577.

Parrott, M. 2016. “Bayesian Models, Delusional Beliefs, and Epistemic Possibilities.” The British Journal for the Philosophy of Science, 67: 271-296.

Parrott, M. forthcoming. “Subjective Misidentification and Thought Insertion.” Mind and Language.

Parrott, M. and P. Koralus. 2015. “The Erotetic Theory of Delusional Thinking.” Cognitive Neuropsychiatry, 20 (5): 398-415.

Saks, E. 2007. The Center Cannot Hold. New York: Hyperion.

Sollberger, M. 2014. “Making Sense of an Endorsement Model of Thought Insertion.” Mind and Language, 29: 590-612.

Stone, T. and A. Young. 1997. “Delusions and Brain Injury: the Philosophy and Psychology of Belief.” Mind and Language, 12: 327-364.

Thomas, R., M. Dougherty, A. Sprenger, and J. Harbison. 2008. “Diagnostic Hypothesis Generation and Human Judgment.” Psychological Review, 115(1): 155-185.

How much of an animal are you?


by Léa Salje – Lecturer in Philosophy of Mind and Language at the University of Leeds

I’m an animal, and so are you. We might be rather special animals, but we are animals all the same: biological organisms operating in a particular ecological niche. For most of us, this is something we’ve known for a long time, probably since primary school. It’s perhaps surprising, then, how little it seems to permeate our everyday thinking about ourselves, for many of us at least. I’m hardly minded to earnestly contemplate the fact of my animality in my dealings with myself as I go about my daily business of coffee-ordering and Facebook-posting.

There’s also a question about how deeply the fact of our animality genuinely penetrates the conception of ourselves that guides our philosophy of mind, even among those of us happy to accept it on its surface. This was the question at the heart of the Persons as Animals project – an AHRC-funded project at the University of Leeds that I’ve been working on for the last year led by Helen Steward, that aims to explore the ways in which certain areas in philosophy of mind might be illuminated by a perspective that forefronts the fact that we are animals. A couple of things (at least) follow from taking such a perspective seriously. The first is that if we are animals, we are thereby not Cartesian egos, or brains, or systems of information, or functional systems, or bundles of mental states. We are entire embodied wholes, such that an understanding of ourselves requires a much more holistic perspective than that which is often taken in philosophy of mind. And second, if we are animals then our powers and capacities must be related in an evolutionary way to those of other creatures. This means that a decent understanding of those powers and capacities – even relatively hifalutin powers like language and the capacity to make choices – should benefit from a perspective that takes account of what is known of animal perception, cognition and agency.

Clearly, mere knowledge of the biological fact of our animality is not enough to mobilise these sorts of changes. One of the central planks of the project was that we need new and better ways to articulate our place in the animal kingdom if we are to make philosophical progress in these areas. And before we can do that, we need to understand what sorts of obstacles might have so far prevented such an animalistic self-conception from really taking hold.

To this end, the Persons as Animals project came together earlier this year with conservation social scientist Andy Moss from the education department at Chester Zoo to run a series of semi-structured focus groups, designed to explore how we think of ourselves and our relation to the animal world. What sorts of things get in the way of animalistic thinking about ourselves? How might it be encouraged? We ran 12 groups in all, 6 made up of zoo visitors, and another 6 of students from Leeds University.

What we found was a striking absence of any univocal narrative about our sense of our own animality. Instead, we found a deeply fractured and uneasy picture: we do see ourselves as animals, and we don’t. And many of us struggle to reconcile these two viewpoints.

Interestingly, this sense of unease came out in different ways for different participants. Some began with a firm sense of their own animality, often accompanied by expressions of indignation at the very suggestion that we might think otherwise. (Of course we’re animals; how dare we count ourselves as special?) The discussion of these participants tended to highlight the intelligent behaviours of other animals, and to downplay our own behaviours and capacities as largely instinct-driven under a flimsy veneer of civility.

This is, of course, to forefront the fact of our animality in a way. But by so magnifying our continuity with the rest of the animal world, these participants seemed to face a special challenge: they struggled to absorb into that animalistic self-image our alienation from and – even more troublingly – domination over the natural world around us. How can we reconcile this self-conceived status as one species of animal among others on the one hand, with the eye-watering extent of our damaging impositions on the world around us on the other? It’s one thing to think of ourselves as a special category of being, perhaps one that has the right (or even the duty) to organise things for the whole of the natural world. But that option is ruled out by a robust insistence on our lack of specialness, on our continuity with other animals. The only option remaining, however, seems to be infinitely more disturbing – that we are mere animals who have simply spiralled out of control. In the end, we often found these participants adopting the rather ingenious solution of moving from first personal locutions to speaking in generalisations when discussing power asymmetries with the rest of the natural world; ‘I don’t think we’re special, but the problem is that people do’.

Others, by contrast, began from a heightened sense of fundamental distinctness from other animals. Even if we’re animals (sotto voce), we’re obviously special. No danger among these groups of failing to celebrate the special complexity of human beings. But these participants faced another challenge, of reconciling this self-conception as fundamentally different from other animals with knowledge of the biological fact of our animality.

Typically participants expressing this sort of view reported that their knowledge of their animality is highly muted or recessive as they go about their daily lives. Indeed, some reported not only that it normally faded into the background, but more strongly that it took considerable cognitive effort to bring it to mind and make it fit with how they really see themselves. In one particularly memorable articulation of this feeling, one participant recalled finding out that she was an animal, and thinking of it ‘as more of a classification like fitting everything into bubbles, like when I realised the sun was a star. It has all the same properties as the other stars and that’s weird to you because you regard them very differently in your everyday life.’ Our animality, the idea seems to be, is a matter of indisputable scientific fact which is nevertheless somehow completely at odds with our everyday conceptualisations and categorisations.

Through discussion, these groups too found creative ways of dissolving the tension. An extreme minority reaction was to give up on the claim that we are animals as simply ‘not ringing true’. Another strategy, observed in an extended discussion by a group of physics students, was to redraw the conceptual boundaries of what it is to be an animal. If we abandon the idea that animals must be biological organisms, then we create more space to comfortably hold together both the fact that we are animals and the conviction that we are importantly different from other members of the animal kingdom. To say that we are animals, after all, might now be to position ourselves just as closely to computers as to caterpillars. A third sort of resolution was to associate animality with a very basic form of existence; one that we have, by now, transcended. We might once have been animals, the idea is, but we’ve now moved beyond it. With this response, participants were able to bracket out uncomfortable facts about our animal natures as part of our evolutionary history, rather than as something calling for incorporation into our live self-conceptions. For the most part, however, these responses were given with observable unease and frank statements of felt difficulty in incorporating the fact of our animality into their everyday self-conceptions.

Among yet other participants there emerged a quite different viewpoint, this time one that seemed much better able to accommodate our claims both to animality and to distinctness. For this group, the traits, behaviours and capacities that might at first glance seem to separate us from the rest of the animal kingdom are really just the results of evolutionary processes, like any other. Cinemas, religion, prog rock, iPads, sarcasm, nuclear weapons, cryptic crosswords and Shoreditch apartments don’t cut us off from the natural world; they are part of it. We are, on this view, placed unflinchingly alongside other animals in the natural world, but not at the cost of a denial or deprecation of human complexity.

One of the central aims of the Persons as Animals project was to better understand our relationship to our own animality, so that we might in turn better understand how to instill more deep-rooted ways of thinking of ourselves as animals into our philosophy of mind. Our results seem to suggest that for many of us the answer is that the relationship is a profoundly awkward one; we seem to be far from finding a stable resting place for our sense of position in the animal world. This finding ought to put us on our guard in our philosophical practices. We are not insulated, as philosophers, from the uneasy and conflicted animalistic self-conceptions that seemingly underlie our everyday thinking about ourselves.

Is implicit cognition bad cognition?


by Sophie Stammers – incoming postdoctoral fellow on project PERFECT

A significant body of research in cognitive science holds that human cognition comprises two kinds of processes: explicit and implicit. According to this research, explicit processes operate slowly, requiring attentional guidance, whilst implicit processes operate quickly, automatically and without attentional guidance (Kahneman, 2012; Gawronski and Bodenhausen, 2014). A prominent example of implicit cognition that has seen much recent discussion in philosophy is that of implicit social bias, where associations between (often) stigmatized social groups and (often) negative traits manifest in behaviour, resulting in discrimination (see Brownstein and Saul, 2016a; 2016b). This is the case even though the individual in question isn’t directing their behaviour to be discriminatory with the use of attentional guidance, and is apparently unaware that they’re exhibiting any kind of disfavouring treatment at the time (although see Holroyd 2015 for the suggestion that individuals may be able to observe bias in their behaviour).

Examples of implicit social bias manifesting in behaviour include exhibiting greater signs of social unease, less smiling and more speech errors when conversing with a black experimenter compared to when the experimenter is white (McConnell and Leibold, 2001); less eye contact and increased blinking in conversations with a black experimenter versus their white counterpart (Dovidio et al., 1997); and reduced willingness for skin contact with a black experimenter versus a white one (Wilson et al., 2000). Implicit social biases also arise in more deliberative scenarios: Swedish recruiters who harbour implicit racial associations are less likely to interview applicants perceived to be Muslim, as compared to applicants with a Swedish name (Rooth, 2007), and doctors who harbour implicit racial associations are less likely to offer treatment to black patients with the clinical presentation of heart disease than to white patients with the same clinical presentation of the disease (Green et al., 2007). In these studies, participants’ discriminatory behaviour comes apart from the beliefs and values that they profess to have when questioned.

Both the mechanisms of implicit bias, and implicit processes more generally, are often characterised in the language of the sub-optimal. Variously, they deliver “a more inflexible form of thinking” than explicit cognition (Pérez, 2016: 28), they are “arational” compared to the rational processes that govern belief update (Gendler, 2008a: 641; 2008b: 557), and their content is “disunified” with our set of explicit attitudes (Levy, 2014: 101-103). As such, one might be tempted to think of implicit cognition as regularly, or even necessarily, bad cognition. A strong interpretation of that value-laden assessment might mean that the processes in question deliver objectively bad outputs, however these are to be defined, but we could also mean something a bit weaker, such as that the outputs are not aligned with the agent’s goals, or similar. It’s easy to see why one might apply this value-laden assessment to the mechanisms which result in implicitly biased behaviour: individuals simply have no reason to discriminate against already marginalized people in the ways outlined above, and yet they do anyway – that seems like a good candidate for bad cognition. That implicitly biased behaviours are the product of what appears to be a suboptimal processing system might motivate the argument that we’re not the agents of our implicitly biased behaviours, as well as arguments that might follow from this, such as that it is not appropriate to hold people morally responsible for their implicit biases (Levy, 2014).

But I think it would be wrong to conclude that implicit cognition necessarily delivers suboptimal outputs, and that implicit bias is an example of bad cognition simply for the reason that it is implicit. Moreover, as I’ll argue below, maintaining the former claim may well do a disservice to the project of reducing implicit social biases.

Whilst explicit processes may be ‘better’ at some cognitive tasks, research suggests that implicit processes can actually deliver a more favourable performance than explicit processes in a variety of domains. For instance, non-attentional, automatic processes govern the fast motor reactions employed by skilled athletes (Kibele, 2006). Trying to bring these processes under attentional control can actually disrupt sporting performance: Flegal and Anderson (2008) show that directing attention to their action performance significantly impairs the ability of high-skill golfers on a putting task, whilst high-skill footballers perform less proficiently when directing attention to their execution of dribbling (Beilock et al., 2002). Engaging attentional processes when learning new motor skills can also disrupt performance (McKay et al., 2015).

Meanwhile, functional MRI studies suggest that improvisation implicates non-attentional processes. One study shows that when professional jazz pianists improvise, they do so in the absence of central processes implicated in attentional guidance (Limb and Braun, 2008). Another study demonstrates that trained musicians inhibit networks associated with attentional processing during improvisation (Berkowitz and Ansari, 2010).

Further, deliberately disengaging attentional resources can facilitate creativity, a process known as ‘incubation’. Subjects who return to work on a creative task after a period directing attentional resources to something unrelated to the task at hand often deliver enhanced outputs compared with those who continually engage their attentional resources (Dodds et al., 2003). It has been proposed that task-relevant implicit processes remain active during the incubation period and contribute to enhanced creative output (Ritter and Dijksterhuis, 2014).

So it would be wrong to suggest that implicit processes necessarily, or even typically, deliver sub-optimal outputs compared with their explicit cousins. And pertinent to our discussion of implicit social bias, implicit processes themselves can actually be recruited to inhibit the manifestation of bias. Research demonstrates that participants with genuine long-term egalitarian commitments (Moskowitz et al. 1999), as well as those in whom egalitarian commitments are activated during an experimental task (Moskowitz and Li, 2011), actually manifest less implicit bias than those without such commitments. Crucially, the processes which bring implicit responses in line with an agent’s long-term commitments are not driven by attentional guidance, instead operating automatically to prevent the facilitation of stereotypic categories in the presence of the relevant social concepts (Moskowitz et al. 1999: 168). The suggestion here is that developing genuine commitments to egalitarian values and treatment can actually recalibrate implicit processes to deliver value-consistent behaviour (see Holroyd and Kelly, 2016), without needing to effortfully override implicit responses each time one encounters social concepts that might otherwise trigger biased reactions. It would seem that the profile of implicit processes as inflexible, arational and disunified with explicit values and commitments is ill-fitted to account for this example.

So, in a number of cases it seems that implicit processes can serve our goals and values. If this is right, then we should perhaps be more willing to locate ourselves as agents not just in the behavior that arises from our explicit processes, but in that which arises from our implicit ones as well.

I think this has an important implication for practices related to implicit bias training. We should be wary of the rhetoric that distances us as agents from our implicit processes: for instance, characterizing implicit bias as “racism without racists” [1] might be comforting for those of us with implicit racial biases, but disowning the implicit processes that lead to racial discrimination, while not disowning those that lead to skilled musical improvisation or creativity as above, seems somewhat inconsistent. I wonder whether greater willingness to accept one’s implicit processes as aspects of one’s agency (not necessarily as central, defining aspects of one’s agency – but somewhere in there nonetheless) might help to motivate the project of realigning one’s implicitly biased responses?

 

Footnotes:

  1. In U.S. Department of Justice. 2016. “Implicit Bias.” Community Oriented Policing Services report, page 1. Accessed 27/07/16, URL: https://uploads.trustandjustice.org/misc/ImplicitBiasBrief.pdf

 

References:

Berkowitz, A. L. and D. Ansari. 2010. “Expertise-Related Deactivation of the Right Temporoparietal Junction during Musical Improvisation.” NeuroImage 49 (1): 712–19.

Brownstein, M and J. Saul. 2016a. Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, New York: Oxford University Press.

Brownstein, M and J. Saul. 2016b. Implicit Bias and Philosophy, Volume 2: Moral Responsibility, Structural Injustice, and Ethics, New York: Oxford University Press.

Dodds R. D., T. B. Ward and S. M. Smith. 2003. “Incubation in problem solving and creativity.” in The Creativity Research Handbook, edited by Runco M. A., Cresskill, NJ: Hampton Press.

Dovidio, J. F., K. Kawakami, C. Johnson, B. Johnson and A. Howard. 1997. “On the Nature of Prejudice: Automatic and Controlled Processes.” Journal of Experimental Social Psychology 33 (5): 510–40.

Gawronski, B. and G. V. Bodenhausen. 2014. “Implicit and Explicit Evaluation: A Brief Review of the Associative-Propositional Evaluation Model: APE Model.” Social and Personality Psychology Compass 8 (8): 448–62.

Gendler, T. S. 2008a. “Alief and Belief.” The Journal of Philosophy 105 (10): 634–63.

———. 2008b. “Alief in Action (and Reaction).” Mind & Language 23 (5): 552– 85.

Green, A. R., D. R. Carney, D. J. Pallin, L. H. Ngo, K. L. Raymond, L. I. Iezzoni and M. R. Banaji. 2007. “Implicit Bias among Physicians and Its Prediction of Thrombolysis Decisions for Black and White Patients.” Journal of General Internal Medicine 22 (9): 1231–38.

Holroyd, J. 2015. “Implicit Bias, Awareness and Imperfect Cognitions.” Consciousness and Cognition 33 (May): 511–23.

Holroyd, J. and D. Kelly. 2016. “Implicit Bias, Character, and Control.” In From Personality to Virtue, edited by A. Masala and J. Webber, Oxford: Oxford University Press.

Kahneman, D. 2012. Thinking, Fast and Slow, London: Penguin Books.

Kibele, A. 2006. “Non-Consciously Controlled Decision Making for Fast Motor Reactions in sports—A Priming Approach for Motor Responses to Non-Consciously Perceived Movement Features.” Psychology of Sport and Exercise 7 (6): 591–610.

Levy, N. 2014. Consciousness and Moral Responsibility, Oxford; New York: Oxford University Press.

Limb, C. J. and A. R. Braun. 2008. “Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation.” Edited by E. Greene. PLoS ONE 3 (2): e1679.

McConnell, A. R. and J. M. Leibold. 2001. “Relations among the Implicit Association Test, Discriminatory Behavior, and Explicit Measures of Racial Attitudes.” Journal of Experimental Social Psychology 37 (5): 435–42.

McKay, B., G. Wulf, R. Lewthwaite and A. Nordin. 2015. “The Self: Your Own Worst Enemy? A Test of the Self-Invoking Trigger Hypothesis.” The Quarterly Journal of Experimental Psychology 68 (9): 1910–19.

Moskowitz, G. B., P. M. Gollwitzer, W. Wasel and B. Schaal. 1999. “Preconscious Control of Stereotype Activation Through Chronic Egalitarian Goals.” Journal of Personality and Social Psychology 77 (1): 167–184

Moskowitz, G. B., and P. Li. 2011. “Egalitarian Goals Trigger Stereotype Inhibition: A Proactive Form of Stereotype Control.” Journal of Experimental Social Psychology 47 (1): 103–16.

Pérez, E. O. 2016. Unspoken Politics: Implicit Attitudes and Political Thinking, New York, NY: Cambridge University Press.

Ritter, S. M. and A. Dijksterhuis. 2014. “Creativity–the Unconscious Foundations of the Incubation Period.” Frontiers in Human Neuroscience 8: 22–31.

Rooth, D-O. 2007. “Implicit Discrimination in Hiring: Real World Evidence.” (IZA Discussion Paper No. 2764). Bonn, Germany: Forschungsinstitut Zur Zukunft Der Arbeit (Institute for the Study of Labor).

Wilson, T. D., S. Lindsey and T. Y. Schooler. 2000. “A Model of Dual Attitudes.” Psychological Review 107 (1): 101–26.

 

 

Trusting the Uncanny Valley: Exploring the Relationship Between AI, Mental State Ascriptions, and Trust.


Henry Powell – PhD Candidate in Philosophy at the University of Warwick

Interactive artificial agents such as social and palliative robots have become increasingly prevalent in the educational and medical fields (Coradeschi et al. 2006). Different kinds of robots, however, seem to engender different kinds of interactive experiences from their users. Social robots, for example, tend to afford positive interactions that look analogous to the ones we might have with one another. Industrial robots, on the other hand, rarely, if ever, are treated in the same way. Some very lifelike humanoid robots seem to fit somewhere outside of these two spheres, inspiring feelings of discomfort or disgust in people who come into contact with them. One way of understanding why this phenomenon obtains is via a conjecture developed by the Japanese roboticist Masahiro Mori in 1970 (Mori, 1970, pp. 33-35). This so-called “uncanny valley” conjecture has a number of potentially interesting theoretical ramifications. Most importantly, it may help us to understand a set of conditions under which humans could potentially ascribe mental states to beings without minds – in this case, that trusting an artificial agent can lead one to do just that. With this in mind, the aims of this post are two-fold. Firstly, I wish to provide an introduction to the uncanny valley conjecture, and secondly, I want to raise doubts concerning its ability to shed light on the conditions under which mental state ascriptions occur – specifically, in experimental paradigms that see subjects as trusting their AI coactors.

Mori’s uncanny valley conjecture proposes that as robots increase in their likeness to human beings, their familiarity likewise increases. This trend continues up to a point at which their lifelike qualities are such that we become uncomfortable interacting with them. At around 75% human likeness, robots are seen as uncannily like human beings and are viewed with discomfort, or, in more extreme cases, disgust, significantly hindering their potential to galvanise positive social interactions.

[Figure: Mori’s uncanny valley curve, plotting familiarity against human likeness]

This effect has been explained in a number of ways. For instance, Saygin et al. (2011, 2012) have suggested that the uncanny valley effect is produced when there is a perceived incongruence between an artificial agent’s form and its motion. If an agent is seen to be clearly robotic but to move in a very human-like way, or vice-versa, there is an incompatibility effect in the predictive, action-simulating cognitive mechanisms that seek to pick out and forecast the actions of humanlike and non-humanlike objects. This predictive coding mechanism is provided with contradictory information by the visual system ([human agent] with [nonhuman movement]), which prevents it from carrying out predictive operations to its normal degree of accuracy (Urgen & Miller, 2015). I take it that the output of this cognitive system is presented in our experience as being uncertain, and that this uncertainty accounts for the feelings of unease that we experience when interacting with these uncanny artificial agents.

Of particular philosophical interest in this regard is a strand of research that has suggested that humans can be seen to make mental state ascriptions to artificial agents that fall outside the uncanny valley in given situations. This story was posited in studies by Kurt Gray & Daniel Wegner (2012) and by Maya Mathur & David Reichling. As I believe that the latter contains the most interesting evidential basis for thinking along these lines, I will limit my discussion here to that experiment.

Mathur & Reichling’s study saw subjects partake in an “investment game” (Berg et al. 1995) – a generally accepted experimental standard for measuring trust – with a number of artificial agents whose facial features varied in their human likeness. This was to test whether subjects were willing to trust different kinds of artificial agents depending on where they fell on the uncanny valley scale. What they found was that subjects played the game in a way that indicated that they trusted robots with certain kinds of facial features to act so as to reach an outcome that was mutually beneficial, rather than one favouring either party. The authors surmised that because the subjects seemed to trust these artificial agents, in a way that suggested that they had thought about what the artificial agent’s intentions might be, the subjects had ascribed mental states to their robotic partners in these cases.
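For readers unfamiliar with the paradigm, the sketch below (in Python) illustrates the payoff structure of the canonical Berg et al. (1995) investment game. The endowment and tripling multiplier follow the standard version of that game, and the function itself is only my illustration; Mathur & Reichling’s implementation with robot partners may differ in its details.

```python
def investment_game_round(amount_sent: float, amount_returned: float,
                          endowment: float = 10.0, multiplier: float = 3.0):
    """One round of a Berg et al. (1995)-style investment game.

    The investor sends some portion of an endowment to a partner; the sent
    amount is multiplied before it arrives, and the partner then chooses how
    much of the enlarged pot to return. The amount sent is the standard
    behavioural index of trust in the partner.
    """
    assert 0 <= amount_sent <= endowment
    pot = amount_sent * multiplier
    assert 0 <= amount_returned <= pot
    investor_payoff = endowment - amount_sent + amount_returned
    partner_payoff = pot - amount_returned
    return investor_payoff, partner_payoff


# Sending everything and splitting the tripled pot benefits both players...
print(investment_game_round(10.0, 15.0))  # (15.0, 15.0)
# ...whereas sending nothing guarantees the investor only the original endowment.
print(investment_game_round(0.0, 0.0))    # (10.0, 0.0)
```

The point of this structure is that the investor does better by staking money only if the partner can be relied upon to return a fair share, which is why the amounts subjects were willing to stake are read as a behavioural measure of trust in the robot.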

It was proposed that subjects had believed that the artificial agents had mental states encompassing intentional propositional attitudes (beliefs, desires, intentions etc.). This was because subjects seemed to assess the artificial agent’s decision-making processes in terms of what the robots’ “interests” in the various outcomes might be. This result is potentially very exciting, but I think that it jumps to conclusions rather too quickly. I’d now like to briefly give reasons for my thinking along these lines.

Mathur and Reichling seem to be making two claims in the discussion of their study’s results.

  i) That subjects trusted the artificial agents.
  ii) That this trust implies the ascription of mental states.

My objections here are the following. I think that i) is more complicated than the authors make it out to be and that ii) is just not at all obvious and does not follow from i) when i) is analysed in the proper way. Let us address i) first as it leads into the problem with ii).

When elaborated, I think that i) is making a claim that the subjects believed that the artificial agents would act in a certain way and that this action would be satisfactorily reliable. I think that this is plausible, but I also think that the form of trust here is not that which is intended by Mathur and Reichling and is thus uninteresting in relation to ii). There are, as far as I can tell, at least two ways in which we can trust things. The first and perhaps most interesting form of trust is the one expressible in sentences like “I trust my brother to return the money that I lent him”. This implies that I think of my brother as the sort of person who would not, given the opportunity and upon rational reflection, do something contrary to what he had told me he would do. The second form of trust is that which we might have towards a ladder or something similar. We might say of such objects that “I trust that if I walk up this ladder it will not collapse because I know that it is sturdy”. The difference here should be obvious. I trust the ladder because I can infer from its physical state that it will perform its designated function. It has no loose fixtures, rotting parts or anything else that might make it collapse when I walk up it. To trust the ladder in this way I do not think that it has to make commitments to the action expected of it based on a given set of ethical standards. In the case of trusting my brother, my trust in him is expressible as a belief that, given the opportunity to choose not to do what I have asked of him, he will choose in favour of that which I have asked. The trust that I have in my brother requires that I believe that he has mental states that inform and help him to choose to act in favour of my asking him to do something. One form of trust implies the existence of mental states, the other does not. In regards to ii) then, as has just been argued, trust only implies mental states if it is of the form that I would ascribe to my brother in the example just given, but not if it is of the sort that we would normally ascribe to reliably functional objects like ladders. So ii) only follows from i) if the former kind of trust is evinced and not otherwise.

This analysis suggests that if we are to believe that the subjects in this experiment ascribed mental states to the artificial agents (or indeed subjects in any other experiment that reaches the same conclusions), then we need sufficient reasons for thinking that the subjects were treating the artificial agents as I would treat my brother, and not as I would treat the ladder, with respect to ascriptions of trust. Mathur and Reichling are silent on this point, and thus we have no good reason for thinking that mental state ascriptions were taking place in the minds of the subjects in their experiment. While I do not think that it is entirely impossible that such a thing might obtain in some circumstances, it is just not clear from this experiment that it obtains in this instance.

What I have hopefully shown in this post is that it is important to proceed with caution when making claims about our willingness to ascribe minds to certain kinds of objects and agents (artificial or otherwise). Specifically, it is important to do so in relation to our ability to hold such things in seemingly special kinds of relations with ourselves, trust being an important example.

 

References:

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, Reciprocity, and Social History. Games and Economic Behavior, 10, 122-142.

Coradeschi, S., Ishiguro, H., Asada, M., Shapiro, S. C., Thielscher, M., Breazeal, C., … Ishida, H. (2006). Human-inspired robots. IEEE Intelligent Systems, 21(4), 74–85.

Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130.

MacDorman, K. F. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it. In CogSci-2005 workshop: toward social mechanisms of android science (pp. 106–118).

Mathur, M. B., & Reichling, D. B. (2009). An uncanny game of trust: Social trustworthiness of robots inferred from subtle anthropomorphic facial cues. Human-Robot Interaction (HRI), 2009 4th ACM/IEEE International Conference on, La Jolla, CA, pp. 313-314.

Saygin, A. P. (2012). What can the Brain Tell us about Interactions with Artificial Agents and Vice Versa? In Workshop on Teleoperated Androids, 34th Annual Conference of the Cognitive Science Society.

Saygin, A. P., & Stadler, W. (2012). The role of appearance and motion in action prediction. Psychological Research, 76(4), 388–394. http://doi.org/10.1007/s00426-012-0426-z

Urgen, B. A., & Miller, L. E. (2015). Towards an Empirically Grounded Predictive Coding Account of Action Understanding. Journal of Neuroscience, 35(12), 4789–4791.

 

 

 

Split Brains and the Compositional Metaphysics of Consciousness


Luke Roelofs – Postdoctoral Fellow in Philosophy at the Australian National University

The mammalian brain has an odd sort of redundancy: it has two hemispheres, each capable of supporting more-or-less normal human consciousness without the other. We know this because destroying, incapacitating, or removing one hemisphere leaves a patient who, despite some difficulties with particular activities, is clearly lucid and conscious. The puzzling implications of this redundancy are best brought out by considering the unusual phenomenon called the ‘split-brain’.

The hemispheres are connected by a bundle of nerve fibres called the corpus callosum, as well as both being linked to the non-hemispheric parts of the brain (the ‘brainstem’). To control the spread of epileptic seizures, some patients had their corpus callosum severed while leaving both hemispheres, and the brainstem, intact (Gazzaniga et al. 1962, Sperry 1964). These patients appear normal most of the time, with no abnormalities in thought or action, but when experimenters manage to present stimuli to sensory channels which will take them exclusively to one hemisphere or the other, strange dissociations appear. For example, when we show the word ‘key’ to the right hemisphere (such as by flashing it in the left half of the patient’s visual field), it cannot be verbally reported (because the left hemisphere controls language), but if we ask the patient to pick up the object they saw the word for, they will readily pick out a key – but only if they can use their left hand (controlled by the right hemisphere). Moreover, for example, if the patient is shown the word ‘keyring’, with ‘key’ going to the right hemisphere and ‘ring’ going to the left, they will pick out a key (with their left hand) and a ring (with their right hand), but not a keyring. They will even report having seen only the word ‘ring’, and deny having seen either ‘key’ or ‘keyring’.

Philosophical discussion of the split-brain phenomenon takes two forms: arguing in support of a particular account of what is going on (e.g. Marks 1980, Hurley 1998, Tye 2003, pp.111-129, Bayne & Chalmers 2003, pp.111-112, Bayne 2008, 2010, pp.197-220), or exploring how the case challenges the very way that we frame such accounts. A seminal example of the latter form is Nagel (1971) which reviews several ways to make sense of the split-brain patient – as one person, as two people, as one person who occasionally splits into two people, etc. – and rejects them all for different reasons, concluding that we have found a case where our ordinary concept of ‘a person’ breaks down and cannot be coherently applied. My work develops an idea in the vicinity of Nagel’s: that our ordinary concept of ‘a person’ can handle the split-brain phenomenon if we transform it to allow for composite subjectivity – something which we have independent arguments for.

Start with what Nagel says about one of the proposed interpretations of the split-brain patient: as two people inhabiting one body. Pointing out that when not in experimental situations, the patient shows fully integrated behaviour, he asks whether we can really refuse to ascribe all their behaviour to a single person, “just because of some peculiarities about how the integration is achieved”(Nagel 1971, p.406). Of course sometimes two people do seem to work ‘as one’, as in “pairs of individuals engaged in a performance requiring exact behavioral coordination, like using a two-handed saw, or playing a duet.” Perhaps the two hemispheres are like this? But Nagel worries that this position is unstable:

“If we decided that they definitely had two minds, then [why not] conclude on anatomical grounds that everyone has two minds, but that we don’t notice it except in these odd cases because most pairs of minds in a single body run in perfect parallel?” (Nagel 1971, p.409)

Nagel’s worry here is cogent: if we accept that there can be two distinct subjects despite it appearing for all the world as though there was only one, we seem to lose any basis for confidence that the same thing is not happening in other cases. He continues:

“In case anyone is inclined to embrace the conclusion that we all have two minds, let me suggest that the trouble will not end there. For the mental operations of a single hemisphere, such as vision, hearing, speech, writing, verbal comprehension, etc. can to a great extent be separated from one another by suitable cortical deconnections; why then should we not regard each hemisphere as inhabited by several cooperating minds with specialized capacities? Where is one to stop?” (Nagel 1971, Fn11)

Where indeed? If one apparently unified mind could be really a collection of interacting minds, why not think that all apparently unified minds are really such collections? What evidence could decide one way or the other? Taking this line seems to leave us with empirically undecidable questions about every mind we encounter.

What is striking is that this way of thinking isn’t problematic for anything other than minds – indeed it is platitudinous. Most things can be equally well understood as one or as many, because we are happy to regard them simultaneously as a collection of parts and as a single whole. What makes the split-brain phenomenon so perplexing is our difficulty in extending this attitude to minds.

Consider, for instance, the physical brain. Do we have one brain, or do we have several billion neurones, or even 8-or-so lobes? The answer of course is ‘all of the above’: the brain is nothing separate from the billions of neurones, in the right relationships, and neither are the 8 lobes anything separate from the brain (which they compose) or the neurones (which compose them). And as a result of the ease with which we shift between one-whole and many-parts modes of description, we can be sanguine about the question ‘how many brains does the split-brain patient have?’ There is some basis for saying ‘one’, and some basis for saying ‘two’, but it’s fine if we can’t settle on a single answer, because the question is ultimately a verbal one. There are all the normal parts of a brain, standing in some but not all of their normal relations, and so not fitting the criteria for being ‘a brain’ as well as they normally would. And there are two overlapping subsystems within the one whole, which individually fit the criteria for being ‘a brain’ moderately well. But there is no further fact about which form of description – calling the whole a brain or calling the two subsystems each a brain – is ultimately correct.

The challenge is to take the same relaxed attitude to the question ‘how many people?’ Here is what I would like to say: the two hemispheres are conscious, and the one brain that they compose is conscious in virtue of their consciousness and the relations between them. Under normal circumstances their interactions ensure that the composite consciousness of the whole brain is well-unified: in the split-brain experiments, their interactions are different and establish a lesser degree of unity. And each hemisphere is itself a composite of smaller conscious parts. This amounts to embracing what Nagel views as a reductio.

There is something very difficult to think through about the composite consciousness view. It seems as though if each hemisphere is someone, that’s one thing, and if the whole brain is someone, that’s another: they cannot be just two equivalent ways of describing the same state of affairs. And this intuitive resistance to seeing conscious minds as composed of others (call it the ‘Anti-Combination intuition’) goes well beyond the split-brain phenomenon. It has a long history in the form of the ‘simplicity argument’, which anti-materialist philosophers from Plotinus (1956, pp.255-258, 342-356) to Descartes (1985, Volume 2, p.59) to Brentano (1987, pp.290-301) have used to show the immateriality of the soul. In a nutshell, this argument says that since minds cannot be thought of as composite, they must be indivisible, and since all material things are divisible, the mind cannot be material (for further analysis see Mijuskovic 1984, Schachter 2002, Lennon & Stainton 2008). Nor is the significance of this difficulty just historical: many recent materialist theories either stipulate that no conscious being can be part of another (Putnam 1965, p.163; Tononi 2012, pp.59-68), or else advance arguments based on the intuitive absurdity of consciousness in a being composed of other conscious beings (Block 1978, cf. Barnett 2008, Schwitzgebel 2015).

All of the just-cited authors take the Anti-Combination intuition as a datum, and draw conclusions from it about the nature of consciousness – conclusions up to and including substance dualism. I prefer the opposite approach: to see the Anti-Combination intuition as a fact about humans which impedes our understanding of how consciousness fits into the natural world, and thus as something which philosophers should seek to analyse, understand, and ultimately move beyond. As it happens, there is a group of contemporary philosophers engaged in just this task: constitutive panpsychists. Panpsychists think that the best explanation for human consciousness is that consciousness is a general feature of matter, and constitutive panpsychists see human consciousness as constituted out of simpler consciousnesses just as the human brain is constituted out of simpler physical structures. The most pressing objection to this view, which has received extensive recent discussion, is the ‘combination problem’: can multiple simple consciousnesses really compose a single complex consciousness (Seager 1995, p.280; Goff 2009; Coleman 2013; Mørch 2014; Roelofs 2014, Forthcoming-a, Forthcoming-b; Chalmers Forthcoming)? And this is at bottom the same issue as we have been grappling with concerning the split-brain phenomenon. In my research, I try to explore the Anti-Combination intuition, its basis, and how to move past it, with an eye both to the general metaphysical questions raised by constitutive panpsychism, and to particular neuroscientific phenomena like the split-brain.

 

References:

Barnett, David. 2008. ‘The Simplicity Intuition and Its Hidden Influence on Philosophy of Mind.’ Noûs 42(2): 308-335

Bayne, Timothy. 2008. ‘The Unity of Consciousness and the Split-Brain Syndrome.’ The Journal of Philosophy 105(6): 277-300.

Bayne, Timothy. 2010. The Unity of Consciousness. Oxford: Oxford University Press

Bayne, Timothy, & Chalmers, David. 2003. ‘What is the Unity of Consciousness?’ In Cleeremans, A. (ed.), The Unity of Consciousness: Binding, Integration, Dissociation, Oxford: Oxford University Press: 23-58

Block, Ned. 1978. ‘Troubles with Functionalism.’ In Savage, C. W. (ed.), Perception and Cognition: Issues in the Foundations of Psychology¸ University of Minneapolis Press: 261-325

Brentano, Franz. 1987. The Existence of God: Lectures given at the Universities of Würzburg and Vienna, 1868-1891. Ed. and trans. Krantz, S., Nijhoff International Philosophy Series

Chalmers, David. Forthcoming­. ‘The Combination Problem for Panpsychism.’ In Bruntrup, G. and Jaskolla, L. (eds.), Panpsychism, Oxford: Oxford University Press

Coleman, Sam. 2014. ‘The Real Combination Problem: Panpsychism, Micro-­Subjects, and Emergence.’ Erkenntnis 79:19-44

Descartes, René. 1985. ‘Meditations on First Philosophy.’ Originally published 1641. In Cottingham, John, Stoothoff, Robert, and Murdoch, Dugald, (trans and eds.) The Philosophical Writings of Descartes, 2 vols., Cambridge: Cambridge University Press

Gazzaniga, Michael, Bogen, Joseph, and Sperry, Roger. 1962. ‘Some Functional Effects of Sectioning the Cerebral Commissures in Man.’ Proceedings of the National Academy of Sciences 48(10): 1765-1769

Goff, Philip. 2009. ‘Why Panpsychism doesn’t Help us Explain Consciousness.’ Dialectica 63(3): 289-311

Hurley, Susan. 1998. Consciousness in Action. Harvard University Press.

Lennon, Thomas, and Stainton, Robert. (eds.) 2008. The Achilles of Rationalist Psychology. Studies In The History Of Philosophy Of Mind, V7, Springer

Marks, Charles. 1980. Commissurotomy, Consciousness, and Unity of Mind. MIT Press

Mijuskovic, Benjamin. 1984. The Achilles of Rationalist Arguments: The Simplicity, Unity, and Identity of Thought and Soul From the Cambridge Platonists to Kant: A Study in the History of an Argument. Martinus Nijhoff.

Mørch, Hedda Hassel. 2014. Panpsychism and Causation: A New Argument and a Solution to the Combination Problem. Doctoral Dissertation, University of Oslo

Nagel, Thomas. 1971. ‘Brain Bisection and the Unity of Consciousness.’ Synthese 22:396-413.

Plotinus. 1956. Enneads. Trans. and eds. Mackenna, Stephen, and Page, B. S. London: Faber and Faber Ltd.

Putnam, Hilary. 1965. ‘Psychological predicates’. In Capitan, William, and Merrill, Daniel. (eds.), Art, Mind, and Religion. Pittsburgh: University of Pittsburgh Press

Roelofs, Luke. 2014. ‘Phenomenal Blending and the Palette Problem.’ Thought 3:59–70.

Roelofs, Luke. Forthcoming-a. ‘The Unity of Consciousness, Within and Between Subjects.’ Philosophical Studies.

Roelofs, Luke. Forthcoming-b. ‘Can We Sum Subjects? Evaluating Panpsychism’s Hard Problem.’ In Seager, William (ed.), The Routledge Handbook of Panpsychism, Routledge.

Schachter, Jean-Pierre. 2002. ‘Pierre Bayle, Matter, and the Unity of Consciousness.’ Canadian Journal of Philosophy 32(2): 241-265

Seager, William. 1995. ‘Consciousness, Information and Panpsychism.’ Journal of Consciousness Studies 2(3): 272-288

Sperry, Roger. 1964. ‘Brain Bisection and Mechanisms of Consciousness.’ In Eccles, John (ed.), Brain and Conscious Experience. Springer-Verlag

Tye, Michael. 2003. Consciousness and Persons: Unity and Identity. MIT Press

Tononi, Giulio. 2012. ‘Integrated information theory of consciousness: an updated account.’ Archives Italiennes de Biologie 150(2-3): 56-90

Investigating the Stream of Consciousness

Oliver Rashbrook-Cooper – British Academy Postdoctoral Fellow in Philosophy at the University of Oxford

There are a number of different ways in which we can fruitfully study our streams of consciousness. We might try to provide a detailed characterisation of how conscious experience seems ‘from the inside’, and closely scrutinize the phenomenology. We might try to uncover the structure of consciousness by focussing upon our temporal acuity, and examining when and how we are subject to temporal illusions. Or we might focus upon investigating the neural mechanisms upon which conscious experience depends.

Sometimes, these different approaches appear to yield contradictory results. In particular, the deliverances of introspection sometimes appear to be at odds with what is revealed both by certain temporal illusions and by research into neural mechanisms. When this occurs, what should we do? We can begin by considering two features of how consciousness phenomenologically seems.

It is natural to think of experience as unfolding in step with its objects. Over a ten-second interval, for instance, I might watch someone sprint 100 metres. If I watch this event, my experience will unfold over a ten-second interval. First I will hear the pistol fire, see the race begin, and so on, until I see the leader cross the finish line. My experience of the race has two features. Firstly, it seems to unfold in step with the race itself; secondly, it seems to unfold smoothly – it seems as if I am continuously aware of the race, rather than my awareness of it being fragmented into discrete episodes.

Can this characterisation of how things seem be reconciled with what we learn from other ways of investigating the stream of consciousness? To answer this question we can consider two different cases: the case of the colour phi phenomenon, and the case of discrete neural processing.

The colour phi phenomenon is a case in which the presentation of two static stimuli gives rise to an illusory experience of motion. When two coloured dots that are sufficiently close to one another are illuminated successively in a sufficiently brief window of time, one is left with the impression that there is a single dot moving from one location to the other.

This phenomenon generates a puzzle about whether experience really unfolds in step with its objects. In order for us to experience apparent motion between the two locations, we need to register the occurrence of the second dot. This makes it seem as if the experience of motion can only occur after the second dot has flashed, for without registering the second dot, we wouldn’t experience motion at all. So it seems that, in this case, the experience of motion doesn’t unfold in step with its apparent object at all. If this is right, then we have reason to doubt that experience normally unfolds in step with its objects, for if we can be wrong about this in the colour phi case, perhaps we are wrong about it in all cases.

The second kind of case is the case of discrete neural processing. There is reason to think that the neural mechanisms underpinning conscious perception are discrete (see, for example, VanRullen and Koch, 2003). This looks to be in tension with the second feature we noted earlier – that our awareness of things appears to be continuous. As in the case of colour phi, it might be tempting to think that this tells us that our impression of how things seem ‘from the inside’ is mistaken.

However, when we consider how things really strike us phenomenologically, it becomes clear that there is an alternative way to reconcile these apparently contradictory results. We can begin by noting that when we introspect, it isn’t possible for us to focus our attention upon conscious experience without focussing upon a temporally extended portion of experience – there is always a minimal interval upon which we are able to focus.

The claims that experience seems to unfold in step with its objects and seems continuous apply to these temporally extended portions of experience that we are able to focus upon when we introspect. If this is right, then we have a different way of thinking about the colour phi case. On this approach, over an interval, we have an experience of apparent motion that unfolds over the time it takes the two dots to flash. The phenomenology is, however, neutral about what occurs over the sub-intervals of this experience.

The claim that this experience unfolds over an extended interval of time isn’t inconsistent with what goes on in the colour phi case. The apparent inconsistency only arises if we think that the claim that experience seems to unfold in step with its object applies to all of the sub-intervals of this experience, no matter how short (for development and discussion of this point, see Hoerl (2013), Phillips (2014), and Rashbrook (2013a)).

Likewise, in the case of discrete neural processing, in order for the case to generate a clash with how experience appears ‘from the inside’, our characterisation of how consciousness seems must apply not only to some temporally extended portions of consciousness, but to all of them, no matter how brief. Again, we might question whether this is really how things seem.

While experience doesn’t seem to be fragmented into discrete episodes, this certainly doesn’t mean that it seems to fill every interval for which we are conscious, no matter how brief (for discussion, see Rashbrook, 2013b). As in the case of the colour phi, perhaps our characterisation of how things seem applies only to temporally extended portions of experience – so the deliverances of introspection are simply neutral about whether conscious experience fills every instant of the interval it occupies.

There is more than one way, then, to reconcile the psychological and the phenomenological strategies of enquiring about conscious experience. Rather than taking non-phenomenological investigation to reveal the phenomenology to be misleading, perhaps we should take it as an invitation to think more carefully about how things seem ‘from the inside’.

 

References:

Hoerl, Christoph. 2013. ‘A Succession of Feelings, in and of Itself, is Not a Feeling of Succession’. Mind 122:373-417.

Phillips, Ian. 2014. The Temporal Structure of Experience. In Subjective Time: The Philosophy, Psychology, and Neuroscience of Temporality, ed. Dan Lloyd and Valtteri Arstila, 139-159. MIT Press.

Rashbrook, Oliver. 2013a. An Appearance of Succession Requires a Succession of Appearances. Philosophy and Phenomenological Research 87:584-610.

Rashbrook, Oliver. 2013b. The continuity of consciousness. European Journal of Philosophy 21:611-640.

VanRullen, Rufin. and Koch, Christoph. 2003. Is perception discrete or continuous? Trends in Cognitive Sciences 7:207-13.

Infant Number Knowledge: Analogue Magnitude Reconsidered

Alex Green, University of Warwick Philosophy

Following Stanislas Dehaene’s The Number Sense (1997) there has been a surge in interest in number knowledge, especially the development of number knowledge in infants. This research has broadly focused on answering the following questions: What numerical abilities do infants possess, and how do these work? How are they different from the numerical abilities of adults, and how is the gap bridged in cognitive development?

The aim of this post is to provide a general introduction to infant number knowledge by focusing on the first two of these questions. There is much evidence indicating that there are two distinct systems by which infants are able to track and represent numerosity – parallel individuation and analogue magnitude. I will begin by briefly explaining what these numerical capacities are. I will then focus my discussion on the analogue magnitude system, and raise some doubts about the way in which this system is commonly understood to work.

Firstly, consider parallel individuation. This system allows infants to differentiate between sets of different quantities by tracking multiple individual objects at the same time (see Feigenson & Carey 2003; Feigenson et al 2002; Hyde 2011). For example, if an infant were presented with three objects, parallel individuation would allow the tracking of the individual objects ({object 1, object 2, object 3}) rather than allowing the tracking of total set-size ({three objects}). There are two further points of interest about parallel individuation. Firstly, parallel individuation only represents numerosity indirectly, because it tracks individuals rather than total set-size. Secondly, it is limited to sets of fewer than four individuals.

Secondly, consider analogue magnitude. This system allows infants to discriminate between set sizes provided that the ratio is sufficiently large (see Xu & Spelke 2000; Feigenson et al 2004; Xu et al 2005). More specifically, analogue magnitude allows infants to differentiate between different sets provided that the ratio is at least 2:1. Interestingly, the precise cardinal value of the sets seems to be irrelevant as long as the ratio remains constant (i.e. it applies equally to a case of two versus four as to one of twenty versus forty). Thus the limitations of the analogue magnitude system are determined by ratio, in contrast to the parallel individuation system, whose limitations are determined by specific set-size.
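To make the contrast between the two systems’ signature limits concrete, here is a minimal sketch in Python. The function names are mine, the 2:1 ratio limit is the one cited above, and the capacity of three items follows from the ‘fewer than four’ limit on parallel individuation.

```python
def analogue_magnitude_discriminates(n1: int, n2: int, ratio_limit: float = 2.0) -> bool:
    """Ratio-limited discrimination: only the ratio of the two set sizes matters,
    not their absolute values (2 vs 4 behaves just like 20 vs 40)."""
    if n1 == n2:
        return False
    larger, smaller = max(n1, n2), min(n1, n2)
    return larger / smaller >= ratio_limit


def parallel_individuation_discriminates(n1: int, n2: int, capacity: int = 3) -> bool:
    """Set-size-limited discrimination: both sets must fall within the small
    tracking limit (fewer than four individuals)."""
    return n1 != n2 and n1 <= capacity and n2 <= capacity


print(analogue_magnitude_discriminates(2, 4))      # True  (2:1 ratio)
print(analogue_magnitude_discriminates(20, 40))    # True  (same ratio, larger sets)
print(analogue_magnitude_discriminates(8, 10))     # False (ratio below 2:1)
print(parallel_individuation_discriminates(2, 3))  # True  (within the tracking limit)
print(parallel_individuation_discriminates(4, 8))  # False (exceeds the tracking limit)
```

The contrast in what bounds each function – a ratio in one case, an absolute set size in the other – is the behavioural signature that the infant studies cited above rely on.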

So how does analogue magnitude work? I will argue that the most recent answer to this question is incorrect. This is because contemporary authors rightly reject the original characterisation of analogue magnitude (the accumulator model), yet fail to reject its implications.

The accumulator model of analogue magnitude is introduced by Dehaene, by way of an analogy with Robinson Crusoe (1997, p.28). Suppose that Crusoe must count coconuts. To do this he might dig a hole next to a river, and dig a trench which links the river to this hole. He also creates a dam, such that he can control when the river flows into the hole. For every coconut Crusoe counts, he diverts some given amount of water into the hole. However as Crusoe diverts more water into the hole, it becomes more difficult to differentiate between consecutive numbers of coconuts (i.e. the difference between one and two diversions of water is easier to see than between twenty and twenty-one).

Dehaene supposes that analogue magnitude representations are given by a similar iconic format, i.e. by representing a physical magnitude proportional to the number of individuals in the set. Consider the following example: one object is represented by ‘_’, two objects are represented by ‘__’, three are represented by ‘___’, and so on. Under this model, analogue magnitude is understood to represent the approximate cardinal value of a set by the use of an iterative counting method (Dehaene 1997, p.29). This partly reflects the empirical data: subjects are able to represent differences in set size (with longer lines indicating larger sets), and the importance of ratio for differentiation is accounted for (because it is more difficult to differentiate between sets which differ by smaller ratios).
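The sequential character of the accumulator model can be made explicit with a toy implementation. The per-item noise term here is my own illustrative assumption (standing in for the increasing difficulty of telling nearby totals apart), while the core commitment – one increment added per object, yielding a magnitude proportional to set size – comes from the model as described above.

```python
import random


def accumulator_estimate(set_size: int, increment: float = 1.0,
                         noise_sd: float = 0.15) -> float:
    """Dehaene-style accumulator: 'pour' one increment per counted item.

    The loop runs once per object, so the model is inherently iterative, and
    the final level is (roughly) proportional to the number of items in the set.
    """
    level = 0.0
    for _ in range(set_size):                       # one iteration per object
        level += random.gauss(increment, noise_sd)  # illustrative per-item variability
    return level


# Nearby totals become harder to tell apart as the magnitudes grow,
# while well-separated ratios remain easy to distinguish.
print(accumulator_estimate(2), accumulator_estimate(4))
print(accumulator_estimate(20), accumulator_estimate(21))
```

Because the estimate is built up one object at a time, the model predicts that larger sets should take longer to encode – exactly the prediction that the evidence discussed below calls into question.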

More recently this accumulator model of analogue magnitude has come to be rejected, however. This model entails that each object in a set must be individually represented in turn (the first object produces the representation ‘_’, the second produces the representation ‘__’, etc). This suggests that it would take longer for a larger number to be represented than a smaller one (as the quantity of objects to be individually represented differs). However there are empirical reasons to reject this.

For example, there is evidence suggesting that the speed of forming analogue magnitude representations doesn’t vary between different set sizes (Wood & Spelke 2005). Additionally, infants are still able to discriminate between different set sizes in cases where they are unable to attend to the individual objects of a set in sequence (Intriligator & Cavanagh 2001). These findings suggest that it is incorrect to claim that analogue magnitude representations are formed by responding to individual objects in turn.

Despite these observations, many authors continue to advocate the implications of this accumulator model, even though there isn’t empirical evidence to support them. The implications that I am referring to are that analogue magnitude represents approximate cardinal value, and that it does so by the aforementioned iconic format. For example, consider Carey’s discussions of analogue magnitude (2001, 2009). Carey takes analogue magnitude to enable infants to ‘represent the approximate cardinal value of sets’ (2009, p.127). As a result, the above iconic format (in which infants represent a physical magnitude proportional to the number of relevant objects) is still advocated (Carey 2001, p.38). This characterisation of analogue magnitude is typical of many authors (e.g. Feigenson et al 2004; Slaughter et al 2006; Feigenson et al 2002; Lipton & Spelke 2003; Condry & Spelke 2008).

Given the rejection of the accumulator model, this characterisation seems difficult to justify. Analogue magnitude allows infants to differentiate between sets of different quantities, but there seems no reason why this would require anything over and above the representation of ordinal value (i.e. ‘greater than’ and ‘less than’). Consequently the claim that analogue magnitude represents approximate cardinal value seems to be both unjustified and unnecessary. Given this, there also seems to be no justification for the Crusoe-analogy iconic format, because this doesn’t contribute anything other than allowing analogue magnitude to represent approximate cardinal value, which, as we have seen, is empirically undermined.

In this post I have discussed the abilities of parallel individuation and analogue magnitude, in answer to the question: what numerical abilities do infants possess, and how do these work? Parallel individuation allows infants to differentiate between small quantities of objects (fewer than four), and analogue magnitude allows differentiation between quantities if the ratio is sufficiently large. I have also advanced a negative argument against the dominant understanding of analogue magnitude. Many authors have rejected the iterative accumulator model without rejecting its implications (analogue magnitude as representing approximate cardinal value, and its doing so by iconic format). This suggests that the literature requires a new understanding of how the analogue magnitude system works.

 

References:

Carey, S. 2001. ‘Cognitive Foundations of Arithmetic: Evolution and Ontogenesis’. Mind & Language. 16(1): 37-55.

Carey, S. 2009. The Origin of Concepts. New York: OUP.

Condry, K., & Spelke, E. 2008. ‘The Development of Language and Abstract Concepts: The Case of Natural Number.’ Journal of Experimental Psychology: General. 137(1): 22-38.

Dehaene, S. 1997. The Number Sense: How the Mind Creates Mathematics. Oxford: OUP.

Feigenson, L., Carey, S., & Hauser, M. 2002. ‘The Representations Underlying Infants’ Choice of More: Object Files versus Analog Magnitudes’. Psychological Science. 13(2): 150-156.

Feigenson, L., & Carey, S. 2003. ‘Tracking Individuals via Object-Files: Evidence from Infants’ Manual Search’. Developmental Science. 6(5): 568-584.

Feigenson, L., Dehaene, S., & Spelke, E. 2004. ‘Core Systems of Number’. Trends in Cognitive Sciences. 8(7): 307-314.

Hyde, D. 2011. ‘Two Systems of Non-Symbolic Numerical Cognition’. Frontiers in Human Neuroscience. 5: 150.

Intriligator, J., & Cavanagh, P. 2001. ‘The Spatial Resolution of Visual Attention’. Cognitive Psychology. 43: 171-216.

Lipton, J., & Spelke, E. 2003. ‘Origins of Number Sense: Large-Number Discrimination in Human Infants’. Psychological Science. 14(5): 396-401.

Slaughter, V., Kamppi, D., & Paynter, J. 2006. ‘Toddler Subtraction with Large Sets: Further Evidence for an Analog-Magnitude Representation of Number’. Developmental Science. 9(1): 33-39.

Wagner, J., & Johnson, S. 2011. ‘An Association between Understanding Cardinality and Analog Magnitude Representations in Preschoolers’. Cognition. 119(1): 10-22.

Wood, J., & Spelke, E. 2005. ‘Chronometric Studies of Numerical Cognition in Five-Month-Old Infants’. Cognition. 97(1): 23-29.

Xu, F., & Spelke, E. 2000. ‘Large Number Discrimination in 6-Month-Old Infants’. Cognition. 74(1): B1-B11.

Xu, F., Spelke, E., & Goddard, S. 2005. ‘Number Sense in Human Infants’. Developmental Science. 8(1): 88-101.