iCog Blog

An interesting time for the study of moral judgment and cognition

Veljko Dubljevic – Banting Postdoctoral Fellow in the Neuroethics Research Unit at the Institut de recherches cliniques de Montréal and the Department of Neurology and Neurosurgery at McGill University; Co-Editor of the Springer Book Series “Advances in Neuroethics”

What is moral? Is it always good to save lives? Is killing always wrong? Is being caring always a virtue? Are there various factors that collectively affect moral judgments? Are these factors self-standing, or do they interact?

Our moral judgments and moral intuitions suggest answers to some of these questions. This is true for experts, such as moral philosophers and psychologists, who study morality in their different ways, and laypersons alike. The study of morality among moral philosophers has long been marked by disagreement between utilitarians, deontologists and virtue theorists on normative issues (such as whether consequences, duties or virtues should take priority in moral judgment), as well as between cognitivists and non-cognitivists, realists and anti-realists (to name just a few opposing views) on meta-ethical issues.

Moral psychology—the empirical and scientific study of human morality—has, by contrast, long shown considerable convergence in its approach to moral judgment. Despite some variation in the details, it is striking that Kohlberg’s (1968) developmental model has simply been adopted, even where it is criticised (see e.g., Gilligan 1982). According to the developmental model, moral judgment is simply the application of moral reasoning – the deliberate, effortful use of moral knowledge (a system 2 process, in today’s parlance). This is not to disregard the variety of viewpoints in moral philosophy – moral psychology has taken these to reflect distinct stages in the development of a ‘mature’ morality.

This all changed with a paradigm shift in moral psychology towards a more diverse ‘intuitive paradigm’, according to which moral judgment is most often automatic and effortless (a system 1 process). Studies revealing automatism in everyday behaviour (Bargh and Chartrand 1999), cognitive illusions, and subliminal influences such as ‘priming’ (Tulving and Schacter 1990), ‘framing’ (Tversky and Kahneman 1981), and ‘anchoring’ effects (Ariely 2008) provide ample empirical evidence that moral cognition, decision-making and judgment are often a product of associative, holistic, automatic and quick processes which are cognitively undemanding (see Haidt 2001). This, along with the ‘moral dumbfounding’ effect – the fact that most people make quick moral judgments but are hard pressed to offer a reasoned explanation for them – led to a shift away from the developmental model, which struggled to accommodate these findings.

As a result, moral psychologists now agree that moral judgment is not driven solely by system 2 reasoning. However, they disagree on almost everything else. A range of competing theories and models offer explanations of how moral judgment takes place. Some claim that moral judgments are nothing more than basic emotional responses, perhaps followed by rationalizations (Haidt 2001); others claim that competing emotional and rational processes pull moral judgment in one direction or the other (Greene 2008); and still others think that moral judgment is intuitive, but not necessarily emotional (see, e.g., Mikhail 2007; Gigerenzer 2010; Dubljevic & Racine 2014).

Here, I will summarize some relevant information and conclude by considering which models are still viable and which are not, based on currently available evidence.

Let’s start with the basic emotivist model. As mentioned earlier, it was espoused by Jonathan Haidt (2001) in pioneering work that offered a constructive synthesis of social and cognitive psychological work on automaticity, intuition and emotion, and it has also been championed by influential moral philosophers, such as Walter Sinnott-Armstrong et al. (2010). However, it has been called into question by work that successfully dissociated emotion from moral judgment. For example, consider the ‘torture case’ study (Batson et al. 2009; Batson 2011). In this study, American respondents were asked to rate the moral wrongness of specific cases of torture and their own emotional arousal. The experimental group was presented with a vignette in which an American soldier is tortured by militants, while a control group read a text in which a Sri Lankan soldier is tortured by Tamil rebels. Even though there was no significant difference in the intensity of moral judgment, the respondents were ‘riled up’ emotionally only in the case of a member of their in-group being tortured. This does not put moral emotions per se in question, but it neatly undermines a crude ‘moral judgment is just emotion’ model.

Now, let’s take a look at the ‘dual-process’ model of moral judgment. Pioneering research in the neuroscience of ethics (e.g. Greene et al. 2001) formulated a classification of dilemmas into so-called impersonal dilemmas, such as the original trolley dilemma (whether to throw a switch that saves five people but kills one), and personal dilemmas, such as the footbridge dilemma (whether to push one man to his death in order to save five people). Proponents of the view take their data to show that the pattern of responses in the trolley dilemma favours a “utilitarian” view of morality based on abstract thinking and calculation, while responses in the footbridge dilemma suggest that emotional reactions drive answers. The purported upshot is that rational processes (driving utilitarian calculation) and emotional processes (driving aversion to personally causing injury) compete for dominance.

Even though some initial studies seemed to corroborate this hypothesis, it remains controversial, with certain empirical findings appearing to be at odds with the dual-process approach. In particular, if utilitarian, outcome-based judgment is caused by abstract thinking (system 2), whereas non-consequentialist, intent- or duty-based judgment is intuitive (system 1) and thus irrational, why do children aged 4 to 10 focus more on outcome than on intent (see Cushman 2013)? Given that abstract thought develops only after age 12, ‘fully rational’ utilitarian judgments should not be observable in children. And yet they are not only observed, but seem to dominate immature and dysfunctional moral cognition.

It is safe to say, then, that recent research has called the dual-process model into question. Recent studies have shown that favouring the “utilitarian” option is actually linked to anti-social personality traits, such as Machiavellianism (Bartels & Pizarro 2011) and psychopathy (Koenigs 2012), as well as to temporary conditions (increased anger, decreased responsibility, experimentally lowered levels of serotonin; Crockett & Rini 2015) and permanent conditions, such as vmPFC damage (Koenigs 2007) and frontotemporal dementia (Mendez 2009), that are probably not conducive to “rational” decision making. Perhaps the most damning piece of evidence is a recent study (Duke & Begue 2015) establishing a correlation between study participants’ blood alcohol concentrations and utilitarian preferences. All in all, the empirical evidence seems to suggest that impaired social cognition, rather than intact deliberative reasoning, better predicts utilitarian responses in the trolley dilemma, which in turn suggests that the dual-process model is on thin ice.

So which model is true? The data seem to suggest that an intuitionist model of moral judgment is most likely; however, there are at least three competitors: moral foundations theory (Haidt & Graham 2007), universal moral grammar (Mikhail 2007, 2011) and the ADC approach (Dubljevic & Racine 2014).

For reasons of space I will not go into the specifics of all three models beyond mentioning them and their feasibility, and since I am an interested party in this debate, I will briefly canvass only the ADC approach.

The Agent-Deed-Consequence framework offers an insight into the types of simple and fast intuitive processes involved in moral appraisals. Namely, the heuristic principle of attribute substitution – quickly and efficiently substituting a complex and intractable problem with more accessible information – is applied to specific information relevant for moral appraisal. I argued (along with my co-author, Eric Racine) that there are three kinds of moral intuitions stemming from three kinds of heuristic processes that simultaneously modulate moral judgments. We posited that they also form the basis of three distinct kinds of moral theory by substituting the global attribute of moral praiseworthiness/blameworthiness with the simpler attributes of virtue/vice of the agent or character (as in virtue theory), right/wrong deed or action (as in deontology) and good/bad consequences or outcomes (as in consequentialism).

The Agent-Deed-Consequence framework provides a vocabulary for breaking down moral judgment into cognitive components, which could increase the explanatory and predictive power of future work on moral judgment in general and moral heuristics in particular. Furthermore, this research clarifies a wide set of findings from empirical and theoretical moral psychology (e.g., the “intuitiveness” and “counter-intuitiveness” of certain judgments, moral “dumbfoundedness”, “ethical blind spots” of traditional moral principles, etc.). The framework offers a description of how moral judgment takes place (three aspects are computed at the same time), but it also offers normative guidance on dissociating and clarifying the relevant normative components.

An example might help to put things into perspective. Consider this (real-life) case:

In October 2002, policemen in Frankfurt, Germany were faced with a chilling dilemma. They had in custody the man who they suspected had kidnapped a banker’s 11-year-old son and asked for ransom. Although the man was arrested while trying to take the ransom money, he maintained his innocence and denied having any knowledge of the whereabouts of the child. In the meantime, time was running out – if the kidnapper was in custody, who will feed and hydrate the child? The police officer in charge finally decided to use coercion to make the suspect talk. He had threatened to inflict serious pain upon the suspected kidnapper if he did not reveal where he had hidden the child. The threat worked – however, the child was already dead. (Dubljevic & Racine 2014, p. 12)

The ADC approach allows us to analyze the normative cues of the case. Here it is safe to assume that the evaluation of the agent is positive (as a virtuous person), the evaluation of the deed or action is negative (torture is wrong), whereas the consequences are unclear ([A+] [D-] [C?] = [MJ?]).

Modulating any of the elements of the case can result in a different intuitive judgment, and the public controversy in Germany following this case created two camps: one stressing the uncertainty of guilt and the precedent of committing torture in police work, and the other stressing the potential to save a child by any means necessary. If the case is changed so that the consequence component is clearly bad (e.g., the suspect is innocent AND the child died), the intuitive responses would be specific, precise and negative ([A+] [D-] [C-] = [MJ-]). And vice versa: if we modulate the case so that the consequences are clearly good (e.g., the suspect is guilty AND a life has been saved), the intuitive responses would be specific, precise and clearly positive ([A+] [D-] [C+] = [MJ+]).
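To make the bracket notation concrete, here is a minimal sketch, in Python, of how the three cues might combine into an overall verdict. The numerical encoding (+1/-1/0/None) and the additive combination rule are illustrative assumptions of mine; Dubljevic and Racine do not commit to any particular formal model.

```python
from typing import Optional

def adc_judgment(agent: Optional[int], deed: Optional[int],
                 consequence: Optional[int]) -> Optional[int]:
    """Combine Agent, Deed and Consequence cues into an intuitive verdict.

    Each cue is +1 (positive), -1 (negative), 0 (neutral) or None
    (unclear); the return value mirrors the [A][D][C] = [MJ] notation.
    """
    cues = [agent, deed, consequence]
    if None in cues:
        # An unsettled component leaves the overall judgment contested,
        # as in the Frankfurt case: [A+] [D-] [C?] = [MJ?]
        return None
    total = sum(cues)
    if total > 0:
        return 1
    if total < 0:
        return -1
    return None  # evenly balanced cues: no clear intuitive verdict

# The three variants of the kidnapping case discussed above:
print(adc_judgment(+1, -1, None))  # [A+] [D-] [C?] -> None (contested)
print(adc_judgment(+1, -1, -1))    # [A+] [D-] [C-] -> -1 (negative)
print(adc_judgment(+1, -1, +1))    # [A+] [D-] [C+] -> 1 (positive)
```

On this toy encoding, flipping only the consequence cue flips the overall verdict, which is just the modulation pattern described in the paragraph above.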

This is just one example of the frugality of the ADC framework. However, it would be premature to conclude that this model is obviously true or better than the remaining competitors, moral foundations theory and universal moral grammar. Ultimately, it is most likely that evidence will force all models to accommodate new data and insights, but one thing is clear: this is an interesting time for the study of moral judgment and cognition.


References:

Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. New York, NY: Harper.

Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist 54: 462–479.

Bartels, D. M., & Pizarro, D. (2011). The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition 121: 154–161.

Batson, C. D. (2011). What’s wrong with morality? Emotion Review 3(3): 230–236.

Batson, C. D., Chao, M. C., & Givens, J. M. (2009). Pursuing moral outrage: Anger at torture. Journal of Experimental Social Psychology 45: 155–160.

Crockett, M. J., Clark, L., Hauser, M. D., & Robbins, T. W. (2010). Serotonin selectively influences moral judgment and behavior through effects on harm aversion. PNAS 107(40): 17433–17438.

Crockett, M. J., & Rini, R. A. (2015). Neuromodulators and the instability of moral cognition. In Decety, J., & Wheatley, T. (Eds.), The Moral Brain: A Multidisciplinary Perspective. Cambridge, MA: MIT Press, pp. 221–235.

Dubljević, V., & Racine, E. (2014). The ADC of moral judgment: Opening the black box of moral intuitions with heuristics about agents, deeds and consequences. AJOB Neuroscience 5(4): 3–20.

Duke, A. A., & Begue, L. (2015). The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas. Cognition 134: 121–127.

Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science 2(3): 528–554.

Greene, J. D. (2008). The secret joke of Kant’s soul. In Sinnott-Armstrong, W. (Ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality. Cambridge, MA: MIT Press, pp. 35–79.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105–2108.

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4): 814–834.

Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research 20(1): 98–116.

Hauser, M., Young, L., & Cushman, F. (2008). Reviving Rawls’s linguistic analogy. In Sinnott-Armstrong, W. (Ed.), Moral Psychology, Vol. 2. Cambridge, MA: MIT Press, pp. 107–144.

Knoch, D., Pascual-Leone, A., Meyer, K., Treyer, V., & Fehr, E. (2006). Diminishing reciprocal fairness by disrupting the right prefrontal cortex. Science 314: 829–832.

Knoch, D., Nitsche, M. A., Fischbacher, U., Eisenegger, C., Pascual-Leone, A., & Fehr, E. (2008). Studying the neurobiology of social interaction with transcranial direct current stimulation—the example of punishing unfairness. Cerebral Cortex 18: 1987–1990.

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature 446: 908–911.

Koenigs, M., Kruepke, M., Zeier, J., & Newman, J. P. (2012). Utilitarian moral judgment in psychopathy. Social Cognitive and Affective Neuroscience 7(6): 708–714.

Kohlberg, L. (1968). The child as a moral philosopher. Psychology Today 2: 25–30.

Mendez, M. F. (2009). The neurobiology of moral behavior: Review and neuropsychiatric implications. CNS Spectrums 14(11): 608–620.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences 11(4): 143–152.

Mikhail, J. (2011). Elements of Moral Cognition. New York: Cambridge University Press.

Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.

Sinnott-Armstrong, W., Young, L., & Cushman, F. (2010). Moral intuitions. In Doris, J. M. (Ed.), The Moral Psychology Handbook. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199582143.003.0008

Terbeck, S., Kahane, G., McTavish, S., Savulescu, J., Levy, N., Hewstone, M., & Cowen, P. J. (2013). Beta adrenergic blockade reduces utilitarian judgment. Biological Psychology 92: 323–328.

Tulving, E., & Schacter, D. L. (1990). Priming and human memory systems. Science 247(4940): 301–306.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211(4481): 453–458.

Young, L., Camprodon, J. A., Hauser, M., Pascual-Leone, A., & Saxe, R. (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgements. PNAS 107: 6753–6758.

In the mind’s eye: cognitive science and the riddle of cave art

Eveline Seghers – PhD student in the Department of Art, Music and Theatre Studies, Ghent University

In his 2002 book The Mind in the Cave, the archaeologist David Lewis-Williams remarked that ‘art’ is a concept that everyone assumes they grasp, “until asked to define” it (2002: 41). This inevitably clouds our insights into its origins. While most researchers tend to agree that parietal and portable imagery from the Upper Palaeolithic constitutes the earliest known art in human history, there is very little convergence on its function, if any, for our ancestors. One fairly popular view describes cave paintings and small objects as instances of religious practices and beliefs, although opinions still differ as to what links there may have been between Prehistoric art and religion. While Lewis-Williams (2002) thinks cave paintings are the outcomes of shamanistic hallucinations, the explanation of ‘hunting magic’ also remains widely cited (for a discussion, see Bahn and Vertut, 1997). More secular interpretations have been proposed by authors such as the archaeologist John Halverson, who endorsed an art-for-art’s-sake explanation, and the archaeologist R. Dale Guthrie, who suggests that the cave paintings may have been made by teens with graffiti-like intent, rather than by skilled artists with symbolic purposes, as is often assumed (Guthrie, 2005; Halverson, 1987). As such, figurative art, impressive as it appears to us, may not be a symbolic breakthrough after all (Currie, 2011).

Few authors have investigated the origins of figurative art from the obvious, yet surprisingly underexplored perspective of the human mind and cognitive science. Although developmental stages of the mind are sometimes inferred from figurative and abstract cave paintings and portable art — the proposed link between figurative imagery and symbolic cognition, for example — cognitive, and by extension neuroscientific, frameworks are rarely systematically applied to interpret the record at our disposal. A notable exception is Steven Mithen’s cognitive fluidity approach, which provides a hypothesis concerning both the evolution of cognition and the emergence of complex human culture (1996). The general reluctance to approach cave art from a cognitive perspective may be partly due to the fact that brains do not fossilize, which means we can only study the remaining fossil crania. This poses a number of methodological challenges to researchers, who are left wondering about the mental lives and behaviour of our ancestors without the elaborate toolkit of present-day cognitive psychologists, and in the obvious absence of study subjects whose behaviour they are trying to assess.

Despite these challenges, many researchers have come up with creative approaches to address our lack of actual ancestral brains and behaviour to examine. By making endocasts — modelled reconstructions of Prehistoric brains based on skulls available in the palaeoanthropological record — it becomes possible to estimate the size and surface structure of the brain. Another method involves making comparative analyses of overall brain volume and the volume of particular areas in extant primate species, to then make inferences about the brain structures of our ancestors. Admittedly, such methods leave a great deal to be estimated when it comes to the actual internal organization of ancestral brains, as, for example, precise volumes associated with particular neural functions cannot be derived from endocasts or comparisons of extant primate brains. To remedy this, a new method was recently developed that involves a comparative analysis of the visual system — both eye and orbit size and the visual cortex — of Neanderthals and Anatomically Modern Humans, enabling inferences about how the brains of these two species may have been internally organized and how they may have differed, in turn sparking new insights into matters such as their socio-cognitive abilities and their behavioural repertoires (Pearce, Stringer and Dunbar, 2013).

But what does cognitive science in itself contribute to our understanding of Prehistoric art? Assuming that the goal of investigating the latter through the lens of the former is not too ambitious for the aforementioned methodological reasons, how can we apply research from present-day cognitive science to questions concerning the emergence and function of culture? Are we necessarily confined to archaeological and anthropological methods such as those mentioned above in order to formulate hypotheses about the cognitive machinery present in our ancestors’ brains, or can we perhaps also apply insights from cognitive science more directly? Many researchers will agree that we can. The field of cognitive archaeology, notably developed and endorsed by authors such as Colin Renfrew, Steven Mithen, Thomas Wynn and Frederick Coolidge, undertakes investigations of the archaeological record with the help of the conceptual and methodological toolbox of cognitive science, investigating subjects as far apart as the evolution of consciousness and its role in artefact production, linguistic evolution in relation to tool manufacturing, mental modularity and the convergence of cognitive domains in the realm of art, the capacity of working memory in the creation of visual representations, and the cognitive nature of innovation and its reflection in the emergence of artefacts (e.g. Coolidge and Wynn, 2009; de Beaune, Coolidge and Wynn, 2009; Mithen, 1996; Renfrew and Zubrow, 1994). In addition, neuroscience has proven to be a loyal companion to the field, resulting in new lines of research that can be referred to, collectively, as ‘neuroarchaeology’ (e.g. Malafouris, 2013).

In a talk given at the inaugural iCog conference, we investigated an interesting case study at the intersection of Prehistoric archaeology and cognitive science. In 1998, the psychologist Nicholas Humphrey suggested that we may have been wrong to see figurative cave art as evidence of the breakthrough of fully modern cognition; something that others have described as the ultimate exponent and first unequivocal evidence of our ability to think symbolically. Perhaps, he argued, cave art is rather “the swan song of the old” (1998: 165), reflecting stages of cognitive evolution that precede the attainment of levels of cognitive and behavioural modernity that rival our present minds. Tying together research on language evolution, theory of mind, autism, and the evolution of social cognition, Humphrey attempted to pave the way for a new view of cave art. Methodologically, he proposed that we might understand the developmental trajectories of the human mind at the time of the Upper Palaeolithic transition by studying present-day individuals with autism spectrum disorders. This suggestion elicited much controversy: paralleling a present-day autistic child with human ancestors who may have been in earlier developmental phases of cognitive evolution was seen as ethically unjustifiable given the overall speculative nature of Humphrey’s hypothesis. As a consequence, his ideas did not receive much support among researchers of Prehistoric cave art. In follow-up research, we therefore reassessed Humphrey’s original hypothesis by taking a fresh perspective that combines a cognitive anthropological framework, focussing on the emergence of metarepresentational ability, with sound empirical evidence produced by cognitive psychological studies on the relationship between visual imagery and theory of mind (e.g. Charman and Baron-Cohen, 1992, 1995; Leslie, 1987; Sperber, 1994). This primary analysis can also be anchored in other fields of research. By gathering recent findings on, for example, the evolution of spoken language and the patterns of our ancestors’ migratory movements across the globe — elements which turn out to be highly relevant when discussing the evolution of human social cognition — it becomes possible to establish a renewed and empirically founded cognitive view of cave art, which may provide a starting point for other cognitively based analyses of this subject.

Overall, exciting times lie ahead for archaeologists researching the emergence and nature of cave art. Will cognitive science provide us with the ultimate key to understanding art’s origins? Probably not, as a complex evolutionary occurrence such as the emergence of art can only be understood better by combining insights from a wide variety of relevant scientific disciplines. But as cognitive scientists and humanities scholars interested in this approach continue to join forces to shed light on the nature of Prehistoric art, our knowledge can only increase, sparking new questions and hypotheses, the boundaries of which are currently not even in view.


REFERENCES

Bahn, P.G. and Vertut, J. (1997) Journey Through the Ice Age, Berkeley: University of California Press.

Charman, T. and Baron-Cohen, S. (1992) ‘Understanding drawings and beliefs: a further test of the metarepresentation theory of autism: a research note’, Journal of Child Psychology and Psychiatry, vol. 33, pp. 1105–1112.

Charman, T. and Baron-Cohen, S. (1995) ‘Understanding photos, models, and beliefs: a test of the modularity thesis of theory of mind’, Cognitive Development, vol. 10, pp. 287–298.

Coolidge, F.L. and Wynn, T. (2009) The Rise of Homo Sapiens. The Evolution of Modern Thinking, Malden: Wiley-Blackwell.

Currie, G. (2011) ‘The master of the Masek Beds: handaxes, art, and the minds of early humans’, in Schellekens, E. and Goldie, P. (eds.) The Aesthetic Mind. Philosophy and Psychology, Oxford: Oxford University Press.

De Beaune, S.A., Coolidge, F.L. and Wynn, T. (eds.) (2009) Cognitive Archaeology and Human Evolution, Cambridge: Cambridge University Press.

Guthrie, R.D. (2005) The Nature of Paleolithic Art, Chicago: University of Chicago Press.

Halverson, J. (1987) ‘Art for art’s sake in the Paleolithic’, Current Anthropology, vol. 28, no. 1, February, pp. 63–71.

Humphrey, N. (1998) ‘Cave art, autism, and the evolution of the human mind’, Cambridge Archaeological Journal, vol. 8, no. 2, pp. 165–191.

Leslie, A.M. (1987) ‘Pretense and representation: the origins of “theory of mind”’, Psychological Review, vol. 94, no. 4, pp. 412–426.

Lewis-Williams, D. (2002) The Mind in the Cave. Consciousness and the Origins of Art, London: Thames and Hudson.

Malafouris, L. (2013) How Things Shape the Mind. A Theory of Material Engagement, Cambridge: MIT Press.

Mithen, S. (1996) The Prehistory of the Mind. A Search for the Origins of Art, Religion, and Science, London: Thames and Hudson.

Pearce, E., Stringer, C. and Dunbar, R.I.M. (2013) ‘New insights into differences in brain organization between Neanderthals and anatomically modern humans’, Proceedings of the Royal Society B, vol. 280, 20130168. http://dx.doi.org/10.1098/rspb.2013.0168

Renfrew, A.C. and Zubrow, E.B.W. (eds.) (1994) The Ancient Mind: Elements of Cognitive Archaeology, Cambridge: Cambridge University Press.

Seghers, E. and Blancke, S. (2013) ‘Metarepresentational ability and the emergence of figurative cave art’, paper presented at the iCog Inaugural Conference: Interdisciplinarity in Cognitive Science, University of Sheffield, 30 November – 1 December 2013.

Sperber, D. (1994) ‘The modularity of thought and the epidemiology of representations’, in Hirschfeld, L.A. and Gelman, S.A. (eds.) Mapping the Mind. Domain Specificity in Cognition and Culture, Cambridge: Cambridge University Press.

Arguments from developmental order

Dr. Richard Stöckle-Schobel – Postdoctoral Fellow in the Mercator Research Group “Structure of Memory” at the Chair for Philosophy of Language and Cognition, Ruhr-University Bochum

One topic that I have thought about ever since writing my PhD thesis on concept learning is the role of developmental order in arguing for or against a given psychological (or philosophical, for that matter) claim. In this blog post, I will explain the problem of developmental order and I will introduce the two main positions one can adopt in response. I don’t have a clear position on the matter, but I hope to provoke some thoughts and a discussion about the issue in what follows.

Consider any given mental capacity M that has a developmental trajectory: the ways in which humans use M change over their lifespan. The consensus is that, at a given point in early childhood, children use a process P1 to exercise M. Over the course of development, P1 gets replaced by, or supported by, further processes P2, P3, and P4. The typical adult will use a given set of mental processes, which might contain P3 and P4, evolved forms of P1 and P4, just P4, or any combination of these.

So, for the cognitive scientist, the question is: which mental process is fundamental for understanding human cognition with regard to M? As an example domain, consider the structure of human concepts – here, M stands for conceptual thinking. I discussed this issue with Edouard Machery during his 2012 visit to Edinburgh, where we held a Q&A on Machery’s book “Doing without Concepts” (Machery 2009). One of the main claims Machery makes in his book is the Heterogeneity Hypothesis: there are multiple kinds of concepts – at least prototypes, exemplars, and theories – independently at work in human cognition, and there are no good reasons to privilege any of them over the others (i.e., there are no reasons to regard any one of them as more fundamental than the others).

There are two directions in which one can argue here. One could, first, argue that the developmentally early process P1 is indicative of how the mind works with regard to M. Theoretical accounts of M should, on that view, keep the initial state and the first processes for M at the centre of their models. Later developments do not replace these initial processes, one might say, because they might be merely more elaborate versions of P1, or because P1 would already suffice to perform the key role that P4 has in adult cognition. Without this starting point, the whole development of M might never have gotten off the ground, or it might have evolved into a very different mental capacity.

In our example, one might say: developmental research can show us that one type of concept is the first one used in infancy. The initial stock of concepts an infant forms takes the form of prototypes, or exemplars, or maybe theories. All other kinds of concepts are later developments and thus cannot be part of the foundations of conceptual thinking.

One could, second, also argue that the developmentally late process P4 is fundamental for explaining M: P4 is the correct anchor for a theory of M. After all, it is the mature form of the process. It might have many advantages, such as better integration with other mental processes; P1 could be a very simple form of associative thought, for instance, and P4 could be a reasoning heuristic that is integrated with a large store of background knowledge. Also, P4 appears to have “normative force” on its side: it appears that this is how M is done right, i.e., it is the best that human cognition has come up with to solve the given problem, or perform the given task, and it should therefore be considered fundamental or central.

To return to the example of concepts: one can argue that developmental priority by itself wouldn’t be a good reason to privilege one type of concept over others, as the cognitive power that comes from having the diverse kinds of concepts at one’s disposal is a stronger reason for denying any one of them such priority.

In my own past and current research, I have been drawn to both versions of the argument, and I think there is an issue of methodological interest connected to it. Indeed, similar issues have been in the spotlight of developmental research for quite some time; one example is the question of (dis)continuity in conceptual change (cf. Carey 2009 for a proponent of the discontinuity view, i.e., the view that many concepts are radically different in meaning before and after an important step in conceptual development). Another related issue is whether giving an explanation of the developmental trajectory of a cognitive process is a special (or especially important) virtue of a theory. These are important questions at the intersection of psychology, philosophy of science, and philosophy of mind in general that could feed into a “philosophy of development” more generally. I would be very interested in discussing the issues introduced above further, and in hearing other examples and considerations.

REFERENCES

Carey, S. (2009). The Origin of Concepts. Oxford University Press.

Machery, E. (2009). Doing without Concepts. Oxford University Press.



Pathology Based Philosophy of Mind

Dr Craig French – Postdoctoral Research Fellow on the John Templeton Foundation’s New Directions in the Study of Mind Project in the Faculty of Philosophy, University of Cambridge, and Research Fellow, Trinity Hall, University of Cambridge


Is all vision colour vision? What about in type 2 blindsight, where there is limited consciousness of movement and form without colour? Does seeing an object require seeing it as spatially located and seeing some of the space it occupies? What about in Bálint’s Syndrome, where at any given time a single object and nothing else is seen? How can such pathologies inform our theorizing about mental phenomena such as vision?

Pathology based philosophy of mind can take negative and positive forms. On the negative side, reflection upon pathological conditions can give us empirical counterexamples. We start by highlighting a core claim involved in a philosophical theory of some mental phenomenon, for instance, a claim about the nature of thought, or consciousness, or perception, or whatever. But then we look to a pathological case, where we find the phenomenon in question, but in such a form that the initial claim cannot be sustained. However, such reflection doesn’t have to take this negative form. It may force us to reconsider the starting claim, but this might instead prompt an attempt to articulate a more subtle version of it. Looked at in this way, reflection upon pathology can play a positive role: it can help us to develop and finesse philosophical theories.

Thus, pathology based philosophy of mind can be fruitful in different ways. And the significance of such work is not confined to philosophy. For it might also yield interesting perspectives on pathologies themselves.

An excellent example of pathology based philosophy of mind is to be found in Fiona Macpherson’s “The Structure of Experience, the Nature of the Visual, and Type 2 Blindsight”. Macpherson considers a traditional claim in the philosophy of perception: the claim that all vision is colour vision. Call this the Colour Claim. The claim is as old as Aristotle’s contention that colour is the proper object of sight. And it does seem plausible. Although various forms of colour blindness are possible, these are typically cases in which colour vision is severely degraded, not cases of visual experience without any experience of colour (including achromatic colours). And for many of us it will be very difficult to imagine a visual experience which is not a colour experience. Moreover, the Colour Claim earns its theoretical keep. Combined with the idea that only visual experiences are colour experiences, we can use it to explain what distinguishes vision from the other senses.

What does Macpherson have to say about the Colour Claim, and what has it got to do with pathology based philosophy of mind? Though she doesn’t frame her discussion in quite this way, Macpherson effectively argues that the Colour Claim comes under pressure from a certain pathological condition: type 2 blindsight. When we look at the form that visual experience takes in this sort of pathological case, we see that the Colour Claim is unsustainable (if, that is, we accept the conception of type 2 blindsight which Macpherson argues is defensible, on which it involves visual experience). Let me briefly spell out some of the details of Macpherson’s discussion.

The phenomenon of blindsight involves a puzzling combination of conditions. On the one hand, a subject with blindsight seems to lack conscious experience of stimuli presented in a certain portion of their visual field. Such subjects report that they are blind in that area of the visual field – the area thus gets called the blind field. On the other hand, such subjects are remarkably accurate at determining the presence of stimuli and certain features presented in the blind field, when forced to guess in experimental conditions (Weiskrantz 2009).

Now it turns out that some individuals with blindsight have limited consciousness in their blind fields. Thus, Weiskrantz (1998) introduced a distinction between type 1 and type 2 blindsight. The former is as described above; the latter encompasses cases where there is limited consciousness in the blind field. In what way is the consciousness involved in type 2 blindsight limited? It is limited in involving consciousness of just movement and form. (More carefully: a plausible understanding of the consciousness of one type 2 blindsight patient – GY – is that he has consciousness of just movement and form (Macpherson 2015, p. 115). Other cases may differ, but we can take this as a model.)

The question Macpherson considers is how we are to understand such consciousness. Should we understand it as visual experience? As conscious thought? As conscious feeling? Or somehow else? Suppose we understand it as visual experience. Well, then we have a pathology based counterexample to the Colour Claim. Since the consciousness involved in type 2 blindsight is consciousness of just movement and form, such visual experience would have to be visual experience of just movement and form, and so visual experience without colour experience. It is, as Macpherson brings out, a matter of hot debate whether the consciousness involved in type 2 blindsight should be understood as visual experience. But Macpherson defends the idea that it can be so understood (without pretending to offer a decisive case for this). Thus what we effectively get are (defeasible) grounds for the claim that type 2 blindsight is a pathology based counterexample to the Colour Claim.

Macpherson’s work is a particularly excellent example of pathology based philosophy of mind. Not only does it advance a philosophical issue (the status of the Colour Claim), it furthers our understanding of a pathological condition itself (type 2 blindsight).

Some of my own work – “Bálint’s Syndrome and the Structure of Visual Experience” – falls into the mould of pathology based philosophy of mind. One traditional philosophical claim about seeing an object is that it must involve (a) seeing the object as spatially located, and (b) seeing some of the space occupied by the object. I call this the spatial perception requirement (SPR). (There is instructive discussion of some such requirement and how it finds support in Kant, Husserl, and Wittgenstein in Schwenkler (2012).)

(SPR) comes under pressure from Bálint’s Syndrome – a pathological spatial perceptual disorder defined in terms of three main deficits: simultanagnosia, optic ataxia, and optic apraxia. Simultanagnosia is the inability to see more than one object simultaneously, optic ataxia is an inability to reach accurately for seen objects, and optic apraxia is a condition whereby gaze remains fixated even though there is no problem with eye movement as such.

The pressure on (SPR) comes from the dominant interpretation of Bálint’s Syndrome, on which subjects can see objects (though only one at a time) yet cannot see either space or spatial location. The dominant interpretation has it that Bálint’s Syndrome is a radical form of spatial blindness. Lynn Robertson and colleagues have extensively tested a patient with Bálint’s Syndrome, RM, and their findings support this interpretation. Thus, they describe Bálint’s Syndrome and its manifestation in RM as follows:

These patients perceive a single object… yet they have no idea where it is located. It is not mislocated. Instead it seems to have no position at all (2004, p. 108)… During early testing of his [RM’s] extrapersonal spatial abilities he often made statements like, “See, that’s my problem. I can’t see where it is.”… objects that popped into his view were not mislocated per se. Rather, they simply had no location in his perceptual experience (pp. 158–159)… RM had… completely lost his ability explicitly to represent space (1997, p. 302).

A defender of (SPR) might be tempted to claim that RM just doesn’t see objects. But this is empirically implausible, as argued convincingly by Schwenkler. Not only can RM see (and identify) objects, he can see them as coherent wholes, as having shape, size, and colour (though his colour perception is often erroneous, as emphasized in Campbell (2007)).

What I argue, however, is that Bálint’s Syndrome doesn’t give a counterexample to (SPR) (see §4 of my paper). Instead, it helps us to see how sometimes seeing objects involves only very limited space and spatial location awareness.

There are two aspects to my discussion on this point. First, I argue that the evidence doesn’t rule out that individuals with Bálint’s Syndrome are capable of a severely limited form of location awareness. The empirical evidence points to various problems with egocentric and allocentric location awareness – seeing objects as located relative to oneself and other things. But it doesn’t rule out purely object-centric location awareness: seeing an object as there, where ‘there’ picks out just the space occupied by, and defined in terms of the boundaries of, whatever object is seen.

Second, I argue that the idea that individuals with Bálint’s Syndrome can see space is empirically consistent. Again, such space perception would have to be severely limited: confined to just object-space. Bálint’s patients may not be able to see the object-space as a space in a larger region of space, nor will their perception of such space be independent of and dissociable from their perception of the object (as it might be for us when we see a figure as upside down, where its top points to the bottom of the space in which it’s seen), but, for all that, they can still clap their eyes upon regions of space delimited by the objects they see. I argue further that we have positive reason to suppose that Bálint’s patients do see object-spaces. RM, for instance, can see objects as shaped and extended. He can thus see objects as taking form in and extending into space – as occupying space. This suggests that he can see the object-spaces of the objects he sees.

If I am right, we can resist the idea that in Bálint’s Syndrome visual awareness of space and spatial location goes missing. In the final section of my paper I suggest, instead, that we can characterize the experiences of those with Bálint’s Syndrome in terms of the idea that the visual field goes missing (on the particular conception of the visual field we find in Martin (1992, 1993), Richardson (2010), and Soteriou (2013); see also Mac Cumhaill (2015)).

The visual field involved in ordinary visual experience delimits a cone of physical space, which we are aware of, and in which things are seen as located in relation to oneself and other things (including regions of space). Such experience is limited, and its limits can be specified in terms of the boundaries of the space delimited by the visual field: the boundaries seem to be boundaries beyond which things can’t now be seen. But they seem to be boundaries beyond which things could be seen, if one altered one’s point of view. As Richardson and Soteriou emphasize, such limits come not from any object or space one perceives, but are rather manifestations of one’s own sensory limitations. Thus a change in the limitation strikes one as a change in one’s own sensory limitations, and not as a change in any object or space in the world (as presented in experience). Suppose I gradually come to have a narrower field of view. The way in which my experience is limited thus changes: it becomes more limited. Before the change, the limitation was to a region of space of such-and-such a size, but now it is to a region of space of a smaller size. In reflecting upon my experience, this strikes me as a change in my sensory limitations, not as a change in the (presented) world, such as the shrinking of some object or space.

Now even if the experiences of Bálint’s patients involve awareness of space and spatial location, they are nothing like this. Suppose RM sees an apple in apple-space. RM can be aware only of what falls within the apple-space he sees. But this limitation is set by the apple he sees, and its spatial structure. If RM goes from being aware of the apple to being aware of, say, a church or a banana, the way in which his experience is limited will change accordingly. The spatial specification of the limitation will now be set by the spatial structure of the church or the banana. The spatial structure of RM’s experience is beholden to whichever object he happens to see, and this is quite unlike how things are in experiences which involve a visual field.


REFERENCES

Campbell, John (2007). “What’s the role of spatial awareness in visual perception of objects?” In: Mind and Language 22.5, pp. 548–562.

French, Craig (2015). “Bálint’s Syndrome and the Structure of Visual Experience”. Unpublished Manuscript. http://craigafrench.github.io/assets/CFBalintsPaper.pdf

Mac Cumhaill, Clare (2015). “Perceiving Immaterial Paths”. In: Philosophy and Phenomenological Research 90.3, pp. 687–715.

Macpherson, Fiona (2015). “The Structure of Experience, the Nature of the Visual, and Type 2 Blindsight”. In: Consciousness and Cognition 32, pp. 104–128.

Martin, Michael G. F. (1992). “Sight and Touch”. In: The Contents of Experience. Ed. by Tim Crane. Cambridge: Cambridge University Press, pp. 196–215.

— (1993). “Sense Modalities and Spatial Properties”. In: Spatial Representation: Problems in Philosophy and Psychology. Ed. by Naomi Eilan, Rosaleen McCarthy, and Bill Brewer. Oxford: Oxford University Press, pp. 206–217.

Richardson, Louise (2010). “Seeing Empty Space”. In: European Journal of Philosophy 18.2, pp. 227–243.

Robertson, Lynn (2004). Space, Objects, Minds and Brains. Hove, East Sussex: Psychology Press.

Robertson, Lynn et al. (1997). “The Interaction of Spatial and Object Pathways: Evidence from Balint’s Syndrome”. In: The Journal of Cognitive Neuroscience 9.3, pp. 295–317.

Schwenkler, John (2012). “Does Visual Spatial Awareness Require the Visual Awareness of Space?” In: Mind and Language 27.3, pp. 308–329.

Soteriou, Matthew (2013). The Mind’s Construction: The Ontology of Mind and Mental Action. Oxford: Oxford University Press.

Weiskrantz, Lawrence (1998). Blindsight: A Case Study and Implications. Paperback Edition. Oxford: Oxford University Press.

— (2009). Blindsight: A Case Study Spanning 35 Years and New Developments. Oxford: Oxford University Press.

A Philosophical Perspective on the Infant Mindreading Puzzle 

Dr John Michael – Postdoctoral Fellow in the Department of Cognitive Science, Central European University


‘I know a small child who replied when asked “Are you learning the violin?” with a denial: “No, I already know how to play the violin; I’m learning how to play it better”’ (Millikan 2000: 55).

Over the past 35 years or so, there has been a continuous and intense focus within the cognitive sciences upon the nature of our everyday, commonsense psychology. How is it that we can effortlessly predict and understand other people’s behavior in most everyday situations? If I see my colleague hiding behind a statue of Gustav Mahler while our mutual boss walks by on the way to a meeting that we should all be going to, I understand that she does not want to go to the meeting, that she does not want our boss to see her because our boss might then order her to go to the meeting, and that she believes that hiding behind a statue will prevent our boss from becoming aware of her presence because our boss cannot see through statues. The tactic makes sense to me. What this case illustrates is that we often make sense of others’ behavior by exercising an ability to ascribe mental states to them, such as beliefs, desires and intentions. This capacity has variously been dubbed ‘theory of mind’, ‘mentalizing’, or ‘mindreading’ (I will use the latter term here, as it is currently the most prevalent).

One general strategy that has been pursued in investigating the nature of mindreading has been to compare different kinds of agents with different mindreading abilities – i.e. different species, and also human children at different stages of development – and to attempt to correlate these differences with other cognitive and/or social differences in those agents. The task that has most commonly been used in assessing whether an agent is capable of mindreading is the so-called false belief task. In a standard verbal false belief task, a child (or any other test subject) observes as an agent places an object in location A and then temporarily departs, whereupon a second agent arrives on the scene and transfers the object to location B. When the first agent returns, the child is asked where s/he is likely to search for the object. The correct answer, of course, is that the agent is likely to search in location A, because that is where s/he falsely believes the object to be located (Wimmer and Perner, 1983; Baron-Cohen et al., 1985). This task – first proposed by the philosopher Daniel Dennett (Dennett 1978) – has been regarded as a litmus test for the capacity to represent the beliefs of other agents, because the child can’t use her own knowledge concerning the location of the object to predict where the agent will search. Rather, the child must distinguish the agent’s belief about the location of the object from her own knowledge, and generate a prediction about where the agent is going to search on the basis of this belief.
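To fix ideas, here is a toy sketch, in Python, of the inference the task demands. The dictionary encoding and function names are illustrative assumptions of mine, not a model anyone in this debate has proposed; the point is only that passing requires reading the prediction off a stored belief state rather than off one’s own knowledge of the world.

```python
world = {"toy": "B"}           # where the object really is, after the move
agents_belief = {"toy": "A"}   # where the departed agent last saw it

def predict_search(use_own_knowledge: bool) -> str:
    """Predict where the returning agent will look for the toy."""
    if use_own_knowledge:
        # Answering from the child's own knowledge of reality:
        # the classic error of children under about four.
        return world["toy"]
    # Passing the task: deploy the agent's belief, held apart
    # from what the child herself knows.
    return agents_belief["toy"]

print(predict_search(False))  # 'A' -- correct, belief-based prediction
print(predict_search(True))   # 'B' -- reality-based error
```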

For many years, it has been a robust finding that children under the age of four tend to fail this task (Wellman et al. 2001), which has motivated the widespread view that children younger than about four don’t represent beliefs. And yet, more recently, non-verbal studies employing various implicit measures have produced a wealth of evidence that infants are sensitive to others’ false beliefs soon after their first birthdays (Onishi and Baillargeon 2005; Buttelmann et al. 2009; Southgate et al. 2010) or even by the middle of their first year (Kovacs et al. 2011; Southgate et al. 2014; for an overview, see Christensen and Michael 2015).

The debate about how to account for this puzzling pattern of findings (i.e. the ‘infant mindreading puzzle’) has been structured by a contrast between rich and lean accounts. Rich accounts (e.g. Baillargeon et al. 2010; Kovacs et al. 2011) maintain that infants represent others’ beliefs by around one year or earlier, and then offer some explanation to account for the lag in performance on explicit verbal false belief tasks, while lean accounts (Perner and Ruffman 2005; Zawidzki 2013) deny that children represent beliefs before about four, and offer deflationary explanations of the infant data that do not impute the ability to represent beliefs to infants. Midway between these two extremes, Apperly and Butterfill (2009; see also Butterfill and Apperly 2013) have proposed a two-systems account of mindreading, which explains the infant data by positing an early-emerging system (i.e. system 1) that enables infants and young children to track and reason about some belief-like cognitive states of other agents, but not beliefs per se (until around the age of four, when the more sophisticated system 2 is in place).

Arguably, all three of these theoretical options are internally consistent, independently motivated by sensible theoretical considerations, and consistent with at least most of the existing data. How, then, should we adjudicate among them? In situations such as this, philosophy can be of use in identifying theoretical commitments which implicitly shape a debate or a methodology, in making the terms of the debate clear, and in opening up overlooked alternatives. In this vein, I would propose taking a fresh philosophical look at the infant mindreading debate against the background of philosophical conceptions of mental content.

Specifically, I would like to point out that this debate has been and continues to be shaped by an implicit commitment to a descriptivist conception of mental content. The basic idea underlying descriptivism is that the contents of a mental representation are individuated by the set of descriptions with which that representation is associated, and which underpin the role that the representation plays in categorization and in inferential reasoning (Harman 1987; Block 1987; Frege 1892; Russell 1905). So, for example, a representation may count as a representation of foxes if it is associated with a set of descriptions that are satisfied by foxes and which thereby enable the bearer of the representation to categorize foxes correctly and to draw appropriate inferences about them (e.g. they are mammals, they have four legs, etc.).

To see how the current debate is shaped by a commitment to descriptivism, let us consider Butterfill and Apperly’s strategy (2013; cf. Apperly and Butterfill 2009). They propose that it should be possible to corroborate a theory about the cognitive mechanisms underlying an ability (such as infants’ ability to succeed at implicit false belief tasks) by finding evidence of particular limits on that ability that one would expect on the basis of the theory in question (but which one should not otherwise expect). Thus, their theory about the cognitive mechanisms underlying infants’ mindreading abilities entails that those abilities should be subject to the following signature limits: infants should be unable to attribute beliefs to an agent when doing so would require the agent to draw inferences involving identity relations; they should be unable to track beliefs that involve quantifiers (e.g. the belief that there is no object at a location); they should be unable to perform level‑2 perspective-taking (i.e. to take modes of presentation into account); and they should lack an understanding of the interactions between beliefs and other psychological states in producing actions. In contrast, if one endorses a theory according to which infants represent beliefs per se, according to Butterfill and Apperly, one should not expect their mindreading abilities to exhibit these signature limits.

For example, imagine that an infant knows that some agent has observed that an object O is currently at a location L, that the infant also knows that the agent has witnessed evidence that O is identical with O′, but that the infant fails to infer that the agent believes that O′ is currently at L. In such a case, we would have evidence that the infant lacked some core knowledge about how beliefs combine with each other to form other beliefs and/or to guide action. Butterfill and Apperly go further, though, and submit that in such a case we would have evidence that the infant is unable to represent beliefs at all. The underlying thought here is that the content of a representation is fixed by the knowledge that is associated with that representation (i.e. the knowledge about how beliefs combine with other mental states and thereby contribute to forming other beliefs and/or to guiding action). In other words, rather than conclude that infants make faulty inferences about beliefs in some cases, Butterfill and Apperly conclude that infants do not make inferences about beliefs at all.

Thus, it appears that the research strategy pro­posed by Butterfill and Apperly depends upon a pri­or decision to link the judg­ment as to wheth­er chil­dren are able to rep­res­ent beliefs at a par­tic­u­lar age to the ques­tion of how well chil­dren under­stand beliefs at that age (I believe that the oth­er the­or­et­ic­al altern­at­ives are sim­il­arly shaped by descriptiv­ism, although I do not have the space to argue for this claim here). Although there is noth­ing wrong with this decision, it is import­ant to real­ize that it is a non-mandatory decision, to which there are legit­im­ate alternatives.

For example, altern­at­ive con­cep­tions of men­tal con­tent with­in the broad class of causal-historical views do not appeal to the know­ledge asso­ci­ated with rep­res­ent­a­tions in order to determ­ine their con­tents. Instead, what is decis­ive on such accounts is that the rep­res­ent­a­tion in ques­tion stand, or have in the past stood, in a par­tic­u­lar kind of caus­al rela­tion with the object, prop­erty, rela­tion or kind that is its con­tent. It is import­ant to note that causal-historical accounts dif­fer sub­stan­tially in how they spe­cify the par­tic­u­lar kind of caus­al rela­tion that determ­ines con­tent, and also with respect to the fur­ther con­di­tions they intro­duce. According to one par­tic­u­lar causal-historical the­ory, namely Ruth Millikan’s (1984; 2000; 2013) tele­ose­mant­ic approach, what is decis­ive is that the cur­rent exist­ence of a rep­res­ent­a­tion be explained by the fact that it, at some point in phylo- or onto­geny, had the func­tion of co-varying with, and thereby provid­ing inform­a­tion about, its con­tent. As a res­ult, Millikan’s approach does not demand that the rep­res­ent­a­tion cur­rently stand in any kind of caus­al rela­tion with its con­tent what­so­ever, and need not ever have stood in a reli­able caus­al rela­tion – as long as it co-varied with it often enough for some con­sumer sys­tem in the organ­ism to be able to bene­fit from using it as a source of inform­a­tion about that content.

When the infant mindread­ing debate is approached from a causal-historical per­spect­ive, and more spe­cific­ally from a tele­ose­mant­ic per­spect­ive, the ques­tion of wheth­er infants and young chil­dren rep­res­ent oth­ers’ beliefs does not hinge upon their pos­sess­ing or deploy­ing some par­tic­u­lar know­ledge about beliefs. Rather, it hinges upon their hav­ing rep­res­ent­a­tion­al vehicles that have the func­tion of co-varying with oth­er agents’ beliefs, and thereby serve as a source of inform­a­tion about oth­ers’ beliefs. Thus, infants and young chil­dren may be said to rep­res­ent beliefs even though they fail to draw infer­ences about oth­ers’ beha­vi­or that one would draw as an adult who fully mas­ters the concept of belief. As a res­ult, causal-historical the­or­ies – in par­tic­u­lar of the tele­ose­mant­ic sort – make it pos­sible to take the infant mindread­ing data at face value, and accord­ingly to con­clude that infants have the abil­ity to rep­res­ent and reas­on about beliefs by as early as six months, and yet at the same time to acknow­ledge that this abil­ity under­goes sub­stan­tial fur­ther con­struc­tion through­out child­hood. One way of under­stand­ing this point would be to note that abil­it­ies can be improved without thereby becom­ing dif­fer­ent abil­it­ies, as is illus­trated by Ruth Millikan’s charm­ing anec­dote (quoted above) about the child learn­ing to play the viol­in bet­ter.

 

REFERENCES

 

Adams, F. and Aizawa, K., (2010). “Causal Theories of Mental Content”, The Stanford Encyclopedia of Philosophy (Spring 2010 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2010/entries/content-causal/>.

Apperly, I. A., & Butterfill, S. A. (2009). Do humans have two sys­tems to track beliefs and belief-like states? Psychological Review, 116(4), 953–970. doi:10.1037/a0016923

Baillargeon, R. (1998). Infants’ understanding of the physical world. In M. Sabourin, F. Craik, & M. Robert (Eds.), Advances in psychological science, Vol. 2: Biological and cognitive aspects (pp. 503–529). Hove, England: Psychology Press/Erlbaum (UK) Taylor & Francis.

Baillargeon, R., Scott, R. M., & He, Z. (2010). False-belief understanding in infants. Trends in Cognitive Sciences, 14, 110–118.

Block, N. (1987). Functional Role and Truth Conditions. Proceedings of the Aristotelian Society, LXI, 157–181.

Buttelmann, D., Carpenter, M., & Tomasello, M. (2009). Eighteen-month-old infants show false belief under­stand­ing in an act­ive help­ing paradigm. Cognition, 112(2), 337–342. doi:10.1016/j.cognition.2009.05.006

Butterfill, S. A., & Apperly, I. A. (2013). How to Construct a Minimal Theory of Mind. Mind & Language, 28(5), 606–637. doi:10.1111/mila.12036.

Call, J., & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192. doi:10.1016/j.tics.2008.02.010

Carlson, S. M. & Moses, L. J. (2001). Individual differences in inhibitory control and children’s theory of mind. Child Development, 72, 1032–1053.

Carey, S. (2009). The Origin of Concepts. Oxford University Press.

Christensen, W. & Michael, J. (2015) From two sys­tems to a multi-systems archi­tec­ture for mindread­ing. New Ideas in Psychology. Accepted on 1 January 2015. DOI: http://dx.doi.org/10.1016/j.newideapsych.2015.01.003

Clements, W. A., & Perner, J. (1994). Implicit understanding of belief. Cognitive Development, 9(4), 377–395. doi:10.1016/0885-2014(94)90012-4

Dennett, D. C. (1978). Beliefs about beliefs. Behavioral and Brain Sciences, 1, 568–570.

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.

— (1988). Explaining Behavior: Reasons in a World of Causes. Cambridge: The MIT Press.

Frege, G. 1892. Über Sinn und Bedeutung. Zeitschrift für Philosophie und philo­soph­is­che Kritik, C: 25–50.

Harman, G. (1987). (Non-solipsistic) Conceptual Role Semantics. In New Directions in Semantics, ed. Lepore, E. London: Academic Press.

Kovács, Á. M., Téglás, E., & Endress, A. D. (2010). The Social Sense: Susceptibility to Others’ Beliefs in Human Infants and Adults. Science, 330 (6012), 1830–1834. doi:10.1126/science.1190792

Millikan, R. (1984). Language, Thought, and Other Biological Categories. Cambridge: MIT Press.

— (2000). On Clear and Confused Ideas: An Essay about Substance Concepts. Cambridge: Cambridge University Press.

— (2013). An Epistemology for Phenomenology? In: Richard Brown (ed.), Consciousness Inside and Out: Phenomenology, Neuroscience, and the Nature of Experience, Springer’s series Studies in Brain and Mind.

Onishi, K. H., & Baillargeon, R. (2005). Do 15-Month-Old Infants Understand False Beliefs? Science, 308(5719), 255–258. doi:10.1126/science.1107621

Papineau, D. (1984). Representation and Explanation, Philosophy of Science 51: 550–72.

Perner, J., and T. Ruffman. (2005). Infants’ Insight into the Mind: How Deep? Science 308:212–14.

Russell, B. 1905. On Denoting. Mind 14:479–93.

Scott, R. M., Baillargeon, R., Song, H., & Leslie, A. M. (2010). Attributing false beliefs about non-obvious prop­er­ties at 18 months. Cognitive Psychology, 61(4), 366–395. doi:10.1016/j.cogpsych.2010.09.001

Southgate, V., Chavalier, C., & Csibra, G. (2010). Seventeen-month-olds appeal to false beliefs to inter­pret oth­ers’ ref­er­en­tial com­mu­nic­a­tion. Developmental Science, 13, 907–912.

Southgate, V., Johnson, M. H., Osborne, T., & Csibra, G. (2009). Predictive motor activ­a­tion dur­ing action obser­va­tion in human infants. Biology Letters, 5(6), 769–772.

Southgate, V., Johnson, M., El Karoui, I., & Csibra, G. (2010). Motor sys­tem activ­a­tion reveals infants’ online pre­dic­tion of oth­ers’ goals. Psychological Science, 21, 355–359.

Wellman, H. M., Cross, D., & Watson, J. (2001). Meta-analysis of theory-of-mind development: The truth about false belief. Child Development, 72(3), 655–684.

Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition, 13(1), 103–128. doi:10.1016/0010-0277(83)90004-5

Zawidzki, T. (2013). Mindshaping, Cambridge: MIT Press.

Problems with Placebos

Dr. Jennifer Corns—Postdoctoral Research Fellow—The Value of Suffering Project—University of Glasgow

Consider the fol­low­ing case:

Headache: You have a head­ache. I give you a yel­low sug­ar pill while telling you that it is an aspir­in that will make you feel bet­ter. You take it and your head­ache gets better.

This case is taken to be a paradig­mat­ic case of a placebo (the sug­ar pill) and a placebo effect (your head­ache gets bet­ter). It is, more par­tic­u­larly, a paradig­mat­ic case of placebo anal­gesia or placebo for pain—the most well-studied and well-accepted type of placebo effect.


We might just think it’s obvi­ous, but what is it about this case that makes the effect a placebo effect?

Consider anoth­er case:

 Disappointment: You are dis­ap­poin­ted. I give you a yel­low sug­ar cook­ie while telling you that it is a well-deserved reward that will make you feel bet­ter. You eat it and your dis­ap­point­ment gets better.

Your feel­ing bet­ter, in this case, is not a placebo effect, right? Why not? In both of these cases, sug­ar and a kind word have caused you to feel better.

Maybe it’s because we think that emo­tion­al effects are just not the sort of things that can be placebo effects. But many exper­i­ences, espe­cially emo­tion­al exper­i­ences, are unpleas­ant, and that unpleas­ant­ness appears to be what changes in the typ­ic­al pain placebo cases.

It’s the nasty awful­ness of your pain that placebos make bet­ter and placebos can make the nasty awful­ness of emo­tions bet­ter, too. As Petrovic et al (2005) put it in their influ­en­tial paper (p.963) “… the mod­u­lat­ory pro­cesses in placebo are not spe­cif­ic for placebo anal­gesia, but are rather part of the mech­an­isms involved in the reg­u­la­tion of the emo­tion pro­cesses in gen­er­al.” Along with oth­ers, they have argued that placebos for pain are just an instance of placebo for emo­tion. The unpleas­ant­ness of emo­tions, it is increas­ingly accep­ted, is as sub­ject to placebo treat­ments as the unpleas­ant­ness of pain—and through the same mechanisms.

Do you still think Headache is a case of the placebo effect, but Disappointment is not? Why?

Consider a third case:

Aspirin: You have a head­ache. I give you a yel­low aspir­in while telling you that it is an aspir­in that will make you feel bet­ter. You take it and your head­ache gets better.

If any­thing is not a placebo effect, we might think, then Aspirin is it. Identifying why, how­ever, turns out to again be difficult.

It has turned out that many of the same pathways, neurochemicals, and brain areas involved in paradigmatic cases of analgesia, as in Aspirin, are likewise involved in paradigmatic cases of placebo analgesia, as in Headache. And these similarities, as reviewed in Benedetti’s (2009) important book, are not limited to the ultimate effects on symptoms, but extend to the mechanistic means by which symptom changes come about. When you take a yellow aspirin and a yellow sugar pill, much the same physiological things (relevant to the nasty awfulness of pain) happen.

Still, we prob­ably think that Headache is a placebo effect, but Disappointment and Aspirin are not. Why?

Placebo research­ers, along with most people, assume that there is some dif­fer­ence between placebo effects and non-placebo effects. A good char­ac­ter­iz­a­tion of the placebo effect will be broad enough to include (at least most of) the placebo effects, and nar­row enough to exclude (at least most of) the non-placebo effects. It will, for instance, include Headache as a placebo effect, but it won’t include Disappointment or Aspirin. Problems giv­ing a char­ac­ter­iz­a­tion of the placebo effect that can do this are called scope problems.


The traditional way to characterize the placebo effect was to distinguish between active treatment interventions and inactive or inert ones. A placebo intervention was considered to be an inactive or inert treatment, and a placebo effect was the effect of a placebo intervention. Using this characterization, we would say that a sugar pill and a kind word are inactive or inert. So, when a sugar pill and a kind word cause you to feel better, that’s a placebo effect. An aspirin, however, is an active and potent drug. So, when an aspirin causes you to feel better, that’s not a placebo effect.

The prob­lem, as people even­tu­ally real­ized, is that if the sug­ar pill and kind word have made you feel bet­ter, then they are not inact­ive or inert. After all, they made you feel better!

This prob­lem with the tra­di­tion­al char­ac­ter­iz­a­tion of the placebo effect is called the placebo para­dox. As Koshi and Short (2007) put it (p.10): “If placebos are inert sub­stances, they can­not cause an effect. If an effect occurs, the placebos are not inert.” In the face of this prob­lem, most people have giv­en up the tra­di­tion­al char­ac­ter­iz­a­tion of the placebo effect.

But the same prob­lem arises for many of the oth­er attempts to char­ac­ter­ize the placebo effect. Some try think­ing of placebo effects as the effects of fake, illus­ory, or sham inter­ven­tions. But if the inter­ven­tions work, why aren’t they real? And if they aren’t real, how can they cause improvement?

Placebo researchers are still looking for a successful characterization of the placebo effect. Though I lack the space to discuss them all here, all characterizations require that we are able to neatly separate out some interventions and outcomes from others. They require, that is, that we can separate out the legitimate treatment interventions and outcomes—like taking an aspirin and feeling better—from the supposedly illegitimate ones—like taking a sugar pill and feeling better.

It seems to me that we should, instead, accept as equally legit­im­ate those inter­ven­tions that work when we test them. If an inter­ven­tion works, then we should use it and drop whatever biases we might have had against its legit­im­acy before we tested it.

We might think, however, that we need to identify placebos and placebo effects precisely because we need to test which interventions are legitimate.

Randomized con­trolled tri­als (RCTs) are the gold stand­ard for test­ing treat­ment inter­ven­tions. In an RCT, people are ran­dom­ized to at least two inter­ven­tions called treat­ment arms, and the out­comes in these arms are com­pared. RCTs can vary (among oth­er ways) in the num­ber and types of inter­ven­tions com­pared, the types of factors used to select the people, and the type of ana­lys­is used to eval­u­ate the outcomes.

Important for us is that RCTs almost always involve a “placebo arm”: some people are randomly assigned to receive what is supposed to be a placebo intervention. The outcome for those receiving the placebo intervention is taken to be the placebo effect, and it is taken to be a sign of a treatment’s legitimacy that the outcome(s) in its arm(s) exceed those measured in the placebo arm(s). Any treatment intervention that is not more effective than a placebo intervention is considered to be ineffective.


The issues here are complicated, but it seems to me that we do not need the notion of placebos or placebo effects for RCTs—we simply need to stop thinking of the “placebo arms” and the outcomes in those arms as placebos. Instead, we can and should re-conceptualize RCTs as involving the comparison of outcomes of interventions that are similar and different in specified ways. This we can do without distinguishing a distinct placebo effect or class of such effects.
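To illustrate what such a comparison looks like without any placebo labels, here is a minimal Python sketch (the condition descriptions, the outcome scores, and the choice of a permutation test are all invented for illustration): two described conditions are compared directly, and we simply ask whether the difference in outcomes is bigger than chance relabelling would produce.

```python
# A minimal sketch of comparing "something with something else": two groups,
# two described conditions, one direct comparison. All numbers are invented.
import random

random.seed(0)

# Hypothetical symptom-improvement scores under two described conditions.
arm_a = [3.1, 2.4, 2.9, 3.5, 2.2, 3.0]  # e.g. "yellow pill plus verbal suggestion"
arm_b = [1.9, 2.1, 1.5, 2.6, 1.8, 2.0]  # e.g. "yellow pill, no suggestion"

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(arm_a) - mean(arm_b)

# Permutation test: how often does randomly relabelling the participants
# produce a difference at least as large as the one we observed?
pooled = arm_a + arm_b
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:len(arm_a)]) - mean(pooled[len(arm_a):]) >= observed:
        extreme += 1

print(f"difference in means: {observed:.2f}; permutation p = {extreme / trials:.3f}")
```

Nothing in this analysis requires either arm to be called a placebo; the report just describes the two conditions and compares their outcomes.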

It is import­ant to com­pare the out­comes of inter­ven­tions both to under­stand how they work and to inform decisions about which inter­ven­tions are safe, effect­ive, and worth devel­op­ing. All this we can, and should, do without identi­fy­ing any­thing as a placebo or placebo effect. Nunn (2009) nicely puts the point this way (p.338):

In a post-placebo era, exper­i­ments will simply com­pare some­thing with some­thing else. That is, they will com­pare exper­i­ment­al con­di­tions: one group gets these con­di­tions and anoth­er group gets those con­di­tions. The report of every meth­od­o­lo­gic­ally accept­able exper­i­ment will describe the con­di­tions that have been com­pared… Eventually we will have stand­ard descrip­tions for com­monly com­pared things. Legislation will reflect those stand­ards. We gain trans­par­ency, hon­esty, and clarity.

If an inter­ven­tion is effect­ive, then we should use it. If it works, then it should be accep­ted as being as real, potent, and legit­im­ate as any oth­er inter­ven­tion. Calling some effect­ive treat­ment inter­ven­tion a placebo and a real effect a placebo effect under­mines the legit­im­acy of those treat­ments and effects.

Many inter­ven­tions appear to be lumped in as placebos, even though they are effect­ive, because we think they are all ille­git­im­ate. If it’s not a pill, a needle, or a knife, then it is med­ic­ally sus­pect. This lump­ing mat­ters since some inter­ven­tions are inef­fect­ive; some sup­posed placebos are more effect­ive, for some things, than oth­ers. These lumped inter­ven­tions should be sep­ar­ated out and more thor­oughly and trans­par­ently tested so that we can bet­ter under­stand if, when, and how they work, and so that we can make decisions about when they are safe, effect­ive, and worth devel­op­ing. Not all “placebos” are cre­ated equal. But if we keep dis­tin­guish­ing placebo treat­ments from all oth­er treat­ments, then our biases against their legit­im­acy, des­pite their effic­acy, are likely to persist.

Our beliefs about legit­im­acy are, I think, the biggest con­trib­ut­or to explain­ing why we think some inter­ven­tions and effects are placebos while oth­ers are not. Think again of Headache, Disappointment, and Aspirin. A cook­ie and a kind word is, we might think, a legit­im­ate inter­ven­tion for disappointment—so, not a placebo effect. An aspir­in, we might think, is a legit­im­ate inter­ven­tion for a headache—so, not a placebo effect either. A sug­ar pill and a sug­ges­tion for a head­ache is, how­ever, illegitimate—so, any effect caused by this inter­ven­tion is a placebo effect.


Sometimes, how­ever, maybe what you need is a cook­ie and a kind word, and not anoth­er aspir­in. Not just for dis­ap­point­ment, but for a head­ache, too. If we stop think­ing of cook­ies and kind­ness as placebos, then maybe we’ll start to bet­ter under­stand how they help. Sometimes they really, legit­im­ately, do.

 

REFERENCES

Benedetti, F. 2009. Placebo Effects: Understanding the Mechanisms in Health and Disease. New York: Oxford University Press.

Koshi, E.B. and Short, C.A. 2007. Placebo the­ory and its implic­a­tions for research and clin­ic­al prac­tice: a review of the recent lit­er­at­ure. Pain Practice 7(1), pp. 4–20.

Nunn, R. 2009. It’s time to put the placebo out of our misery. British Medical Journal 338, b1568.

Petrovic, P., Dietrich, T., Fransson, P., Andersson, J., Carlsson, K. and Ingvar, M. 2005. Placebo in Emotional Processing—Induced Expectations of Anxiety Relief Activate a General Modulatory Network. Neuron 46(6), pp. 957–969.

 

Multisensory Perception

Alisa Mandrigin is a post-doctoral research­er on the AHRC Science in Culture Theme Large Grant, Rethinking the Senses: Uniting the Neuroscience and Philosophy of Perception. She is based in the Department of Philosophy at the University of Warwick.

We see, hear, touch, taste and smell. This is what com­mon sense tells us about per­cep­tion. The view that the nature and breadth of our per­cep­tu­al exper­i­ence is accur­ately cap­tured by those five verbs and that the sens­ory sys­tems are dis­crete and isol­ated from one anoth­er has gov­erned much of the philo­soph­ic­al research into per­cep­tion. Recently, though, research­ers in psy­cho­logy and neur­os­cience have star­ted to focus on the myri­ad inter­ac­tions between the senses and the many oth­er kinds of sens­ory inform­a­tion avail­able to the nervous system.

How do we know that the sens­ory sys­tems inter­act with one anoth­er? Some inter­ac­tions res­ult in effects that fea­ture in every­day exper­i­ence. For example, if we are presen­ted with a light flash and a beep at dif­fer­ent loc­a­tions but at the same time and then asked to loc­ate the beep, we judge it to be at, or at least close to, the loc­a­tion of the light flash (Bertelson, 1999). Judgements about loc­a­tion are one of the meas­ures of the Ventriloquism effect: the mis-location in per­cep­tu­al exper­i­ence of aud­it­ory objects or events as a res­ult of see­ing some­thing at a dif­fer­ent location.

We can meas­ure the effect in the labor­at­ory with visu­al and aud­it­ory stim­uli, but in every­day life we are often in situ­ations in which we acquire spa­tially dis­crep­ant inform­a­tion about what is appar­ently the same source. Ventriloquists make use of the effect in their acts, hence the name giv­en to the effect. At the cinema the film’s soundtrack is played through speak­ers spread out around the screen­ing room, and not from behind the parts of the screen on which visu­al images of mov­ing lips, explo­sions, and so on are presented.

Another visual-auditory interaction gives rise to the McGurk effect. If we see a video clip of lip movements that should produce the phoneme /ga/ with an audio recording of the phoneme /ba/ dubbed over the top, the result is perception of the phoneme /da/ (McGurk & MacDonald, 1976). Again, it seems that processing in the visual system influences processing in the auditory system. We can appreciate this by listening to the same auditory stimulus with our eyes closed: without the visual stimulus there is no effect. You can try it for yourself with any of the many McGurk effect demonstration videos available online.

Multisensory inter­ac­tions are not lim­ited to vis­ion and audi­tion. We have evid­ence of inter­ac­tions between all five of the sens­ory sys­tems, as tra­di­tion­ally con­ceived. It’s only now, though, that philo­soph­ers are begin­ning to take prop­er notice of the implic­a­tions of these dis­cov­er­ies for per­cep­tu­al exper­i­ence. This brings us to a ques­tion that seems to be import­ant if we are to make sense of these inter­ac­tions and their con­sequences for per­cep­tu­al exper­i­ence: how do we dis­tin­guish the senses? Can we dis­tin­guish the senses on the basis of the dif­fer­ent kinds of exper­i­ence pro­duced, or should we dis­tin­guish them on the basis of dis­tinct sens­ory pro­cessing sys­tems in the brain, or by means of the nature of the prox­im­al stim­u­lus of the exper­i­ence, or by some­thing else entirely (Macpherson, 2011)? There’s an ana­log­ous ques­tion about kinds of exper­i­ence: how can we dis­tin­guish exper­i­ences from one anoth­er as being, for example, visu­al or auditory?

Settling on an answer to these ques­tions seems to be neces­sary if we are to make any head­way in clas­si­fy­ing inter­ac­tions as multi­s­ens­ory and decid­ing wheth­er these inter­ac­tions res­ult in mul­timod­al per­cep­tu­al experiences.

For example, our perceptual experience when we eat and drink involves retro-nasal smell—the sensing of odours when we breathe out—as well as taste. When you’ve had a blocked nose you’ve probably noticed this, finding food to be temporarily flavorless and insipid. One response to this has been to claim that we have a distinct kind of flavor experience, resulting from interactions between the olfactory and taste systems (Smith, 2013). This view conceives of the experience as multisensory in so far as it involves processing in two distinct sensory systems, but the experience itself is not taken to be multimodal, since it is thought of as a kind of experience in its own right, distinct from either smelling or tasting. The matter is complicated further by evidence that what we see and hear, and tactile sensations within the mouth, also contribute to our perceptual experience when we eat (Auvray & Spence, 2008).

Even if we can settle on a way of dis­tin­guish­ing the senses, there are fur­ther ques­tions about the kinds of inter­ac­tions that take place between the sens­ory sys­tems. One kind of inter­ac­tion might involve mere mod­u­la­tion of pro­cessing in one sens­ory sys­tem by pro­cessing in the oth­er. Another kind of inter­ac­tion might involve the integ­ra­tion of redund­ant inform­a­tion across the senses. A fur­ther kind of inter­ac­tion might involve the bind­ing togeth­er of inform­a­tion about dif­fer­ent prop­er­ties of the same object. For example, when you look at a key that you hold in your hand, visu­al inform­a­tion about col­our might be bound togeth­er with tact­ile inform­a­tion about tex­ture, gen­er­at­ing a multi­s­ens­ory rep­res­ent­a­tion of the key as smooth and sil­ver (O’Callaghan, 2014). These dif­fer­ent kinds of inter­ac­tion may have dif­fer­ent kinds of impact on per­cep­tu­al exper­i­ence. What, for example, is the nature of the inter­ac­tion between vis­ion and audi­tion in vent­ri­lo­quism and how does it impact per­cep­tu­al experience?

One approach to vent­ri­lo­quism explains the effect in terms of the mod­u­la­tion of inform­a­tion in audi­tion by inform­a­tion in vis­ion. Ventriloquism is often meas­ured by sub­jects’ point­ing responses to the aud­it­ory stim­u­lus. Subjects point to a pos­i­tion in between the actu­al loc­a­tions of the aud­it­ory and the visu­al stim­uli. How can we explain this in terms of mod­u­la­tion? We can say that the con­flict­ing visu­al inform­a­tion about loc­a­tion mod­i­fies the aud­it­ory inform­a­tion about loc­a­tion (and vice versa). The res­ult is that sub­jects hear the aud­it­ory stim­u­lus as being in between the actu­al pos­i­tion of the aud­it­ory and the visu­al stim­uli. This explan­a­tion of vent­ri­lo­quism is con­sist­ent with per­cep­tu­al exper­i­ences remain­ing modality-specific throughout.

There is, how­ever, an altern­at­ive explan­a­tion of the mis-localisation of aud­it­ory stim­uli in vent­ri­lo­quism. This altern­at­ive explains sub­jects’ point­ing beha­vi­or in terms of the integ­ra­tion of con­flict­ing spa­tial inform­a­tion. If sens­ory inform­a­tion is integ­rated, it seems pos­sible that this integ­ra­tion will res­ult in a single mul­timod­al exper­i­ence of an object at a loc­a­tion in space, in this case an audio-visual exper­i­ence. If there is integ­ra­tion (or bind­ing) of inform­a­tion across the senses, then we need to give some account of how the sens­ory sys­tems determ­ine that inform­a­tion belongs together.
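One standard way of making the integration story precise, drawn from the wider cue-integration literature rather than from the studies discussed above, treats each modality as delivering a noisy estimate of location and combines the estimates weighted by their reliability. The sketch below is illustrative only; the locations and variances are invented, and nothing in this post commits us to this particular model.

```python
# Reliability-weighted (maximum-likelihood) cue integration: each modality
# supplies a noisy location estimate; the combined estimate weights each cue
# by its inverse variance. All numbers are invented for illustration.

def integrate(loc_v, var_v, loc_a, var_a):
    """Combine visual and auditory location estimates, weighting each by its
    reliability (inverse variance)."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    return w_v * loc_v + (1 - w_v) * loc_a

# Vision is spatially precise (low variance), audition less so, so the
# integrated location is pulled towards the visual stimulus. Subjects who
# point on the basis of this estimate will point in between the two stimuli,
# much closer to the light flash, as in the ventriloquism effect.
print(integrate(loc_v=0.0, var_v=1.0, loc_a=10.0, var_a=9.0))  # 1.0
```

On this way of putting things, the question in the text becomes whether the pointing data reflect a single integrated estimate of this kind or merely modality-specific estimates that modulate one another.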

The issues I’ve men­tioned here offer just one aven­ue that we can pur­sue in rethink­ing and revis­ing our views of per­cep­tu­al exper­i­ence in light of empir­ic­al dis­cov­er­ies about multi­s­ens­ory pro­cessing. Another aven­ue con­cerns cross­mod­al cor­res­pond­ences. We reli­ably match, for example, high pitch sounds with bright lights, high spa­tial elev­a­tions or small objects (Spence, 2011). How, though, are these asso­ci­ations between what seem to be dif­fer­ent kinds of prop­er­ties estab­lished? Are pairs or groups of appar­ently unre­lated fea­tures of objects, or dimen­sions of stim­uli, encoded in the brain in the same way?

A fur­ther line of research con­cerns syn­aes­thesia. In some cases of syn­aes­thesia an exper­i­ence in one sens­ory mod­al­ity seems to induce an exper­i­ence in anoth­er, non-stimulated sens­ory mod­al­ity. For instance, for some syn­aes­thetes, hear­ing sounds causes them to have col­our exper­i­ences. Franz Liszt and Olivier Messiaen reportedly exper­i­enced col­ours when they heard par­tic­u­lar tones in this way. As with cross­mod­al cor­res­pond­ences, syn­aes­thet­ic exper­i­ence is reli­able and robust: hear­ing par­tic­u­lar tones con­sist­ently induces exper­i­ences of par­tic­u­lar col­our hues. How do we explain the phe­nomen­on? Do syn­aes­thetes have two dis­tinct modality-specific experiences—an aud­it­ory exper­i­ence and a col­our exper­i­ence, for example—or are their exper­i­ences alto­geth­er dif­fer­ent, exper­i­ences of col­oured sounds, for instance (Deroy, in press)?

We are just now starting to understand the many and varied interactions that occur across the sensory systems and their impact on perceptual experience. What is clear, though, is that the multisensory nature of perceptual experience is relevant not just to those who work on the philosophy of perception or in psychology, or to those who work in the arts or in marketing, but to all of us, simply because we are perceivers.

 

 

REFERENCES

Auvray, M. & Spence, C. (2008). The multi­s­ens­ory per­cep­tion of fla­vour. Consciousness & Cognition. 17. p. 1016–1031. doi:10.1016/j.concog.2007.06.005

Bertelson, P. (1999). Ventriloquism: a case of cross-modal per­cep­tu­al group­ing. In Aschersleben, G., Bachmann, T., & Müsseler, J. (eds.). Cognitive con­tri­bu­tions to the per­cep­tion of spa­tial and tem­por­al events. Amsterdam: Elsevier.

Deroy, O. (in press). Can sounds be red? A new account of syn­aes­thesia as enriched exper­i­ence. In Coates, P. & Coleman, S. (eds.). Phenomenal qual­it­ies. Oxford: Oxford University Press.

Macpherson, F. (2011). Cross-modal experiences. Proceedings of the Aristotelian Society. 111 (3). p. 429–468. doi:10.1111/j.1467-9264.2011.00317.x

McGurk, H. & MacDonald, J. (1976). Hearing lips and seeing voices. Nature. 264. p. 746–748. doi:10.1038/264746a0

O’Callaghan, C. (2014). Not all per­cep­tu­al exper­i­ence is mod­al­ity spe­cif­ic. In Stokes, D., Matthen, M. & Biggs, S. (eds.) Perception and Its Modalities. Oxford: OUP.

Smith, B. C. (2013). Philosophical Perspectives on Taste. In Pashler, H. (ed.). The Encyclopaedia of Mind. Newbury Park, CA.: Sage.

Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics. 73. p. 971–995. doi:10.3758/s13414-010-0073-7

Does Action-oriented Predictive Processing offer an enactive account of sensory substitution?

Krzysztof Dołęga — PhD stu­dent — Ruhr-Universität Bochum, Institut für Philosophie

Action-oriented Predictive Processing (PP for short, also known as Prediction Error Minimization or Predictive Coding) is an excit­ing con­cep­tu­al frame­work emer­ging at the cross­roads of cog­nit­ive sci­ence, stat­ist­ic­al mod­el­ing, inform­a­tion the­ory, and philo­sophy of mind. Aimed at obtain­ing a uni­fied explan­a­tion of the pro­cesses respons­ible for cog­ni­tion, per­cep­tion and action, it is based on the hypo­thes­is that the brain’s archi­tec­ture con­sists of hier­arch­ic­ally organ­ized neur­al pop­u­la­tions per­form­ing stat­ist­ic­al infer­ence. Rather than accu­mu­lat­ing and com­pound­ing incom­ing inform­a­tion, the neur­al hier­arch­ies con­tinu­ously form hypo­theses about their future input. Thus, the tra­di­tion­al bottom-up approach to explain­ing cog­ni­tion is sub­sumed by a top-down organ­iz­a­tion in which only sens­ory inform­a­tion diver­ging from the pre­dicted pat­terns of activ­a­tion is propag­ated up the cor­tic­al hier­arch­ies. Due to its diver­gence from the ‘pre­dicted’ pat­terns, this inform­a­tion is often referred to as “pre­dic­tion error” (Clark, 2013). Minimizing error is pos­tu­lated to be the main func­tion of the brain; by accom­mod­at­ing unex­pec­ted inform­a­tion in its pre­dic­tions the brain can fine-tune its future hypo­theses regard­ing sens­ory input and track the states of the world caus­ing this input more accurately.

However, much of the framework’s appeal lies with the relatively recent proposal that the brain can minimize error not only by revising and constructing new hypotheses about the input patterns, but also by interacting with the environment in order to remove the source of mismatch between the best hypothesis and the patterns of sensory activation. This feature, referred to as “active inference” (e.g. Hohwy, 2012), underwrites the framework’s promise of a unified theory of brain organization, explaining how information about many different cognitive functions is encoded and processed in the brain (Friston, 2010).

In my poster presentation at the first iCog conference, I tried to draw out similarities between Predictive Processing and another radical proposal about the nature of perception and cognition – enactivism. Because of the relative novelty of the PP framework, its relationship to embodied and sensorimotor approaches to cognition and perception has not been well defined (at least at the time of the conference; see the bibliography below for several recent articles tackling these issues). What struck me as an interesting avenue for research was the similarity between the notion of active inference in the PP framework and the enactive focus on the role that sensorimotor contingencies and possibilities for action play in shaping perception and phenomenology. Pursuing this correspondence is especially valuable for PP, as it does not yet offer a clear account of how phenomenology fits within its probabilistic architecture. Because perception is such a broad topic, I decided to focus on the very particular case of sensory substitution devices.

Sensory substitution devices emerged from the laboratory led by Paul Bach-y-Rita in the ’60s. Bach-y-Rita (1983) set out to prove the extent of lifelong brain plasticity (a highly contested thesis at the time) by devising gadgets that would help handicapped people restore lost senses by substituting them with inputs coming from different sensory modalities. The idea behind the project was to use the brain’s natural ability to adapt to the inputs it receives from different sensory channels in order to train it to recognize information specific to the lost modality in patterns delivered through a different sensory modality. Bach-y-Rita’s work focused on tactile-visual sensory substitution (TVSS for short), in which visual information from a video camera was translated into vibro-tactile input on the subject’s skin. Despite its limitations, this method proved to be a huge success, as subjects were able to learn to extract vision-like information (e.g. the presence of a white X in front of the camera) from tactile stimulation after a surprisingly short adaptation time. This discovery jumpstarted a whole new field of research; videos of recent TVSS devices are readily available online.

TVSS proved to be a fertile testing ground for enactivism due to limitations and problems inherent to the project of sensory substitution. Very early into his research, Bach-y-Rita discovered that substitution is mostly unsuccessful when subjects do not have control over the camera movements. The critical importance of exploration and active sampling for TVSS fits well with the core enactive claim that perception consists in ‘exercising a mastery of sensorimotor contingencies’ (O’Regan & Noë 2001: 85), understood as practical know-how about the possible changes in perception of objects caused by our actions (O’Regan & Noë 2001: 99). Moreover, O’Regan & Noë have argued that the contingencies of how our sensory modalities sample and interact with particular objects explain certain features of the brain’s plasticity. For example, it is because of the dynamics particular to our visual involvement with the world that blind TVSS subjects show increased activity in the visual cortex and can be said to genuinely see (although in some impoverished way; TVSS does not allow for color perception). To support this controversial claim they point to neurological data, as well as experiments demonstrating subjects’ susceptibility to distinctively visual illusions exploiting the basic properties of visual engagement with the environment, such as making perceptual contact only with the surfaces facing the observer (Hurley & Noë 2003: 143).

Let us now return to Action-oriented Predictive Processing and how the frame­work can accom­mod­ate sens­ory sub­sti­tu­tion. PP assumes that the main func­tion of the brain is min­im­iz­a­tion of pre­dic­tion error res­ult­ing from com­par­ing actu­al sens­ory inputs with pre­dicted pat­terns of activ­a­tions gen­er­ated by the sys­tem. To pre­dict the sens­ory states effi­ciently, the brain tracks pat­terns of stat­ist­ic­al reg­u­lar­it­ies present in the incom­ing sig­nals and tries to infer the caus­al struc­ture of the world respons­ible for these reg­u­lar­it­ies. Thus the brain con­structs and main­tains a mod­el of the world, which it uses to pre­dict (i.e. gen­er­ate) its own sens­ory states. The par­tic­u­lars of this pro­pos­al are much more com­plex (I recom­mend Jakob Hohwy’s 2013 mono­graph for details), but these core ideas are suf­fi­cient to under­stand how PP can explain sens­ory substitution.
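As a deliberately minimal illustration of the error-minimization loop just described (a toy with a one-dimensional ‘world’, not the hierarchical architecture detailed in Hohwy 2013), consider the following sketch:

```python
# A toy prediction-error-minimization loop: the system predicts its sensory
# input, measures the divergence from what actually arrives, and uses that
# prediction error to revise its model. All values are invented.

world_state = 5.0        # the hidden cause of the sensory input
model_estimate = 0.0     # the system's current hypothesis about that cause
learning_rate = 0.3      # how strongly error revises the hypothesis

for step in range(10):
    prediction = model_estimate    # predicted sensory input
    sensation = world_state        # actual input (noise-free, for simplicity)
    prediction_error = sensation - prediction
    model_estimate += learning_rate * prediction_error  # accommodate the error
    print(f"step {step}: prediction error = {prediction_error:.3f}")
```

The error shrinks on every pass, which is the sense in which the model comes to track the state of the world causing its input. Active inference adds a second route to the same end: instead of revising model_estimate, the system can act so as to change world_state until the input matches the prediction.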

On Action-oriented Predictive Processing, what hap­pens dur­ing TVSS is a res­ult of the hier­arch­ic­al sys­tem recog­niz­ing and sub­sequently track­ing a set of stat­ist­ic­al reg­u­lar­it­ies spe­cif­ic to the visu­al mod­al­ity in sens­ory pat­terns delivered through tact­ile stim­u­la­tion. The sub­jects need to have con­trol over the input device (here a cam­era) in order to learn how the reg­u­lar­it­ies in the sens­ory stim­u­lus change with the sampling of the envir­on­ment. Having done this, the brain can pre­dict how the stim­u­lus will change in response to par­tic­u­lar actions, updat­ing its gen­er­at­ive mod­el accordingly.

The core of the PP explanation of TVSS is thus very similar to the enactive treatment. In both cases it is the system’s ability to recognize possibilities for action, and to predict how these actions will change the states of the sensory input, that makes sensory substitution possible. One could try to push this similarity further by saying that in PP, just as in enactivism, it is the sensorimotor contingencies that shape the phenomenal quality of the substitution. After all, the system tracks distinctively visual regularities obtaining between the body and the world. From the perspective of the brain, the manner of their presentation (via tactile stimulation rather than the optic nerve) plays a secondary role to their contents (in both cases the brain has to infer the causes behind the patterns of sensory activations).

Despite these sim­il­ar­it­ies, one should be care­ful about cast­ing PP as sub­scrib­ing to an enact­ive under­stand­ing of per­cep­tion and sens­ory sub­sti­tu­tion. Though the views in ques­tion do over­lap in their explan­at­ory ambi­tions, they are built on dia­met­ric­ally oppos­ing assump­tions. In the pre­vi­ous para­graph I tried to speak about ‘the sys­tem’ rather than the brain or agent as a whole. This is because PP is usu­ally under­stood as a neuro­centric view (Hohwy, 2014), while enact­iv­ism instead stresses the situ­ated and embod­ied nature of cog­ni­tion (Noë, 2005). Moreover, PP is based on an infer­en­tial archi­tec­ture, often asso­ci­ated with rich rep­res­ent­a­tion­al con­tents – some­thing widely eschewed by enactivists.

The divide between the two positions is not insurmountable, and much of Andy Clark’s current work is focused on bridging the gap between these radical views (see Clark’s forthcoming book). This post does not give me the space to dive into the nuances of both views and how similarly they treat TVSS and analogous cases; however, I hope I have managed to spark some interest in these radical views about perception and how they may be related. Below is a list of references, some of which were unavailable at the time of the original presentation of this material.

 

REFERENCES AND BIBLIOGRAPHY

Bach-y-Rita, P. (1983). Tactile Vision Substitution: Past and Future. International Journal of Neuroscience 19: 29–36.

Briscoe, R. (forth.). Bodily Action and Distal Attribution in Sensory Substitution. [online] Available from: http://philpapers.org/archive/BRIBAA [Retrieved: 21 Nov. 2013].

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3): 181–204.

Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience 11(2): 127–138.

Friston, K. (2008). Hierarchical Models in the Brain. PLoS Computational Biology 4(11). doi:10.1371/journal.pcbi.1000211.

Hohwy, J. (2014). The Self-Evidencing Brain. Noûs 48(1). doi: 10.1111/nous.12062

Hohwy, J. (2013). The Predictive Mind. Oxford: Oxford University Press.

Hohwy, J. (2012). Attention and conscious perception in the hypothesis testing brain. Frontiers in Psychology 3(96). doi: 10.3389/fpsyg.2012.00096.

Hurley, S. & Noë, A. (2003). Neural Plasticity and Consciousness. Biology and Philosophy 18: 131–168.

Noë, A. (2005). Action in Perception. Cambridge, MA: MIT Press.

O’Regan, J.K. & Noë, A. (2001). What it is like to see: A sensorimotor theory of perceptual experience. Synthese 129(1): 79–103.

Pepper, K. (2013). Do Sensorimotor Dynamics Extend The Conscious Mind? Adaptive Behavior 22(2): 99–108.

Pickering, M., Clark, A. (2014). Getting Ahead: Forward Models and their place in Cognitive Architecture. Trends in Cognitive Sciences 18(9): 451–456.

Prinz, J. (2009). Is Consciousness Embodied? In Robbins, P. & Aydede, M. (Eds.) Cambridge Handbook of Situated Cognition. Cambridge: Cambridge University Press.

Rietveld, E., Bruineberg, J. (2014). Self-organization, free energy minimization, and optimal grip on a field of affordances. Front. Hum. Neurosci 8(599). doi: 10.3389/fnhum.2014.00599.

Ward, D. (2012). Enjoying the Spread: Conscious Externalism Reconsidered. Mind 121(483): 731–751.

On what makes delusions pathological

Dr Kengo Miyazono – Research Fellow – University of Birmingham

Delusional beliefs are typ­ic­ally patho­lo­gic­al. Being patho­lo­gic­al is not the same as being false or being irra­tion­al. A woman might falsely believe that Istanbul is the cap­it­al of Turkey, but it might just be a simple mis­take. A man might believe without good evid­ence that he is smarter than his col­leagues, but it might just be a healthy self-deceptive belief. On the oth­er hand, when a patient with brain dam­age caused by a car acci­dent believes that his fath­er was replaced by an imposter, or when anoth­er patient with schizo­phrenia believes that ‘The Organization’ painted the doors of the houses on a street as a mes­sage to him, these beliefs are not merely false or irra­tion­al. They are pathological.

What makes delu­sion­al beliefs patho­lo­gic­al? One might think, for example, that delu­sions are patho­lo­gic­al because of their extreme irra­tion­al­ity. The prob­lem with this view, how­ever, is that it is not obvi­ous that delu­sion­al beliefs are extremely irra­tion­al. Maher (1974), for example, argues that delu­sions are reas­on­able explan­a­tions of abnor­mal experience.

“[T]he explan­a­tions (i.e. the delu­sions) of the patient are derived by cog­nit­ive activ­ity that is essen­tially indis­tin­guish­able from that employed by non-patients, by sci­ent­ists, and by people gen­er­ally. The struc­tur­al coher­ence and intern­al con­sist­ency of the explan­a­tion will be a reflec­tion of the intel­li­gence of the indi­vidu­al patient.” (Maher 1974, 103)

Similarly, Coltheart and colleagues (2010) argue that it is rational, from a Bayesian point of view, for a person with the Capgras delusion to adopt the delusional hypothesis given his neuropsychological deficits. Bayes’s theorem prescribes a mathematical procedure for updating the probability of a hypothesis on the basis of prior beliefs and new observations. Coltheart and colleagues claim that the delusional hypotheses receive higher probabilities than competing non-delusional hypotheses, given the relevant prior beliefs and the observation of the neuropsychological deficits.

“The delu­sion­al hypo­thes­is provides a much more con­vin­cing explan­a­tion of the highly unusu­al data than the nondelu­sion­al hypo­thes­is; and this fact swamps the gen­er­al implaus­ib­il­ity of the delu­sion­al hypo­thes­is. So if the sub­ject with Capgras delu­sion uncon­sciously reas­ons in this way, he has up to this point com­mit­ted no mis­take of ration­al­ity on the Bayesian mod­el.” (Coltheart, Menzies, & Sutton 2010, 278)
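A worked toy calculation shows how a likelihood can ‘swamp’ a prior in the way Coltheart and colleagues describe. All of the numbers below are invented for illustration; nothing hangs on the particular values.

```python
# Toy Bayesian calculation for the Capgras case. Hd: "this person is an
# impostor"; Hn: "this person is my father". All probabilities are invented.

p_hd = 0.01              # prior: the impostor hypothesis is very implausible
p_hn = 0.99              # prior: the non-delusional hypothesis
p_data_given_hd = 0.9    # the strange data (no affective response to the
                         # familiar face) are expected if Hd is true...
p_data_given_hn = 0.001  # ...but extremely surprising if Hn is true

p_data = p_data_given_hd * p_hd + p_data_given_hn * p_hn
posterior_hd = p_data_given_hd * p_hd / p_data

print(f"posterior probability of the delusional hypothesis: {posterior_hd:.2f}")
# ~0.90: the likelihood ratio swamps the low prior.
```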

The claim by Coltheart and col­leagues is, how­ever, con­tro­ver­sial. In response, McKay (2012) argues that adopt­ing delu­sion­al hypo­theses is due to the irra­tion­al bias of dis­count­ing the ratio of pri­or prob­ab­il­it­ies. Even if McKay is cor­rect, how­ever, it is not clear that delu­sion­al beliefs are extremely irra­tion­al since sim­il­ar biases might be found among nor­mal people as well.

For instance, in the famous experiment by Kahneman and Tversky (1973), normal subjects first received base-rate information about a hypothetical group of people (e.g., “30 engineers and 70 lawyers”). Then the personality description of a particular person in the group was provided, and the subjects were asked to predict the occupation (e.g., engineer or lawyer) of that person. The crucial finding was that manipulating the base-rate information, which provides the prior probability of the hypotheses at issue (e.g., the hypothesis that this person is a lawyer), had almost no effect on the subjects’ predictions (“base-rate neglect”). The finding suggests that the bias of discounting prior probabilities can be seen among normal people. As Bortolotti (2010) points out, the irrationality that we find in people with delusions might not be very different from the irrationality we find in normal people.
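For contrast, here is the calculation that Bayes’s theorem prescribes for a case like this (the base rates are from the experimental set-up as described; the likelihoods for the personality description are assumptions I have made up for the example):

```python
# What attending to the base rate should look like. The 30/70 split is from
# the experiment as described; the likelihoods below are invented.

p_engineer = 0.30            # base rate given to subjects
p_lawyer = 0.70
p_desc_given_engineer = 0.8  # the description sounds engineer-like...
p_desc_given_lawyer = 0.2    # ...and lawyer-unlike

posterior = (p_desc_given_engineer * p_engineer) / (
    p_desc_given_engineer * p_engineer + p_desc_given_lawyer * p_lawyer
)
print(f"P(engineer | description) = {posterior:.2f}")  # 0.63
# With the base rate flipped to 70/30, the same description yields ~0.90,
# yet subjects' predictions barely moved between the two conditions.
```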

It is even conceivable that people with delusions are more rational than normal people in some respects. In the well-known experiment by Huq and colleagues, subjects were asked to determine whether a given jar was jar A, which contains 85 pink beads and 15 green beads, or jar B, which contains 15 pink beads and 85 green beads, on the basis of observing beads drawn from it. It was found that the subjects with delusions needed less evidence (i.e., fewer beads drawn from the jar) before coming to a conclusion than the subjects in the control groups (the “jumping-to-conclusions bias”). Interestingly, Huq and colleagues do not take this to show that the subjects with delusions are irrational. Rather, they note: “it may be argued that the deluded sample reached a decision at an objectively ‘rational’ point. It may further be argued that the two control groups were somewhat overcautious” (Huq et al. 1988, 809) (but see Van Der Leer et al. 2015).
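A quick calculation shows why deciding after only a couple of beads need not be objectively irrational. The jar contents are as in the experiment; treating 95% as a reasonable decision point is my own gloss.

```python
# Bayesian posterior for jar A (85% pink) after a run of pink draws.

def posterior_jar_a(pink, green, prior=0.5):
    """Posterior probability of jar A after observing the given draws."""
    like_a = 0.85 ** pink * 0.15 ** green
    like_b = 0.15 ** pink * 0.85 ** green
    return like_a * prior / (like_a * prior + like_b * (1 - prior))

for n in range(1, 5):
    print(f"after {n} pink bead(s): P(jar A) = {posterior_jar_a(n, 0):.3f}")
# after 1: 0.850; after 2: 0.970; after 3: 0.995; after 4: 0.999
# Two pink beads already push the posterior past 95%.
```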

In my paper, Delusions as Harmful Malfunctioning Beliefs (http://www.sciencedirect.com/science/article/pii/S1053810014002001), I also examine the views according to which delusional beliefs are pathological because of (1) their strange content, (2) their resistance to folk-psychological explanations, and (3) impaired responsibility-grounding capacities. I present counterexamples to, as well as difficulties for, these proposals.

I argue, fol­low­ing Wakefield’s (1992a, 1992b) harm­ful dys­func­tion ana­lys­is of dis­order, that delu­sion­al beliefs are patho­lo­gic­al because they involve some kinds of harm­ful mal­func­tions. In oth­er words, they have a sig­ni­fic­ant neg­at­ive impact on well­being (harm­ful) and, in addi­tion, some psy­cho­lo­gic­al mech­an­isms, dir­ectly or indir­ectly related to them, fail to per­form the func­tions for which they were selec­ted (mal­func­tion­ing).

There can be two types of objections to the proposal. The first type of objection is that delusional beliefs might not involve any harmful malfunctions; for example, delusional beliefs might be playing psychological defence functions. The second type of objection is that involving harmful malfunctions is not sufficient for a mental state to be pathological. For example, false beliefs might involve some malfunctions according to teleosemantics (Dretske 1991; Millikan 1989), but there could be harmful false beliefs that are not pathological. The paper defends the proposal from these objections.

 

REFERENCES

Bortolotti, L. 2010. Delusions and oth­er irra­tion­al beliefs. Oxford: Oxford University Press.

Coltheart, M., Menzies, P. and Sutton, J. 2010. Abductive infer­ence and delu­sion­al belief. Cognitive Neuropsychiatry 15(1–3), pp. 261–287.

Dretske, F. I. 1991. Explaining beha­vi­or: Reasons in a world of causes. Cambridge, MA: The MIT Press.

Huq, S., Garety, P. and Hemsley, D. 1988. Probabilistic judge­ments in deluded and non-deluded sub­jects. The Quarterly Journal of Experimental Psychology 40(4), pp. 801–812.

Kahneman, D. and Tversky, A. 1973. On the psychology of prediction. Psychological Review 80(4), pp. 237–251.

Maher, B. A. 1974. Delusional think­ing and per­cep­tu­al dis­order. Journal of Individual Psychology 30, pp. 98–113.

McKay, R. 2012. Delusional infer­ence. Mind & Language 27(3), pp. 330–355.

Millikan, R. G. 1989. Biosemantics. The Journal of Philosophy 86, pp. 281–297.

Van Der Leer, L., Hartig, B., Goldmanis, M. and McKay, R. 2015. Delusion-proneness and ‘Jumping to Conclusions’: Relative and absolute effects. Psychological Medicine 19(3), pp. 257–267.

Wakefield, J. C. 1992a. The Concept of Mental Disorder: On the bound­ary between bio­lo­gic­al facts and social val­ues. American Psychologist 47(3), pp. 373–388.

Wakefield, J. C. 1992b. Disorder as harm­ful dys­func­tion: A con­cep­tu­al cri­tique of DSM-III‑R’s defin­i­tion of men­tal dis­order. Psychological Review 99(2), pp. 232–247.