An interesting time for the study of moral judgment and cognition

Veljko Dubljevic, Banting Postdoctoral Fellow in the Neuroethics Research Unit at the Institut de recherches cliniques de Montréal and the Department of Neurology and Neurosurgery at McGill University; Co-Editor of the Springer book series “Advances in Neuroethics”

What is moral? Is it always good to save lives? Is killing always wrong? Is being caring always a virtue? Are there various factors that collectively affect moral judgments? Are these factors self-standing or do they interact?

Our moral judgments and moral intuitions suggest answers to some of these questions. This is true for experts, such as the moral philosophers and psychologists who study morality in their different ways, and laypersons alike. The study of morality among moral philosophers has long been marked by disagreement between utilitarians, deontologists and virtue theorists on normative issues (such as whether we should give priority to consequences, duties or virtues in moral judgment), as well as between cognitivists and non-cognitivists, realists and anti-realists (to name just a few opposing views) on meta-ethical issues.

Moral psychology, the empirical and scientific study of human morality, has by contrast long shown considerable convergence in its approach to moral judgment. Despite some variation in the details, it is striking that Kohlberg’s (1968) developmental model has simply been adopted, even where it is criticised (see, e.g., Gilligan 1982). According to the developmental model, moral judgment is simply the application of moral reasoning: the deliberate, effortful use of moral knowledge (a system 2 process, in today’s parlance). This is not to disregard the variety of viewpoints in moral philosophy; moral psychology has taken these to reflect distinct stages in the development of a ‘mature’ morality.

This all changed with a paradigm shift in moral psychology towards a more diverse ‘intuitive paradigm’, according to which moral judgment is most often automatic and effortless (a system 1 process). Studies revealing automatism in everyday behaviour (Bargh and Chartrand 1999), cognitive illusions, and subliminal influences such as ‘priming’ (Tulving and Schacter 1990), ‘framing’ (Tversky and Kahneman 1981) and ‘anchoring’ effects (Ariely 2008) provide ample empirical evidence that moral cognition, decision-making and judgment are often the product of associative, holistic, automatic and quick processes which are cognitively undemanding (see Haidt 2001). This, along with the ‘moral dumbfounding’ effect, the fact that most people make quick moral judgments yet are hard pressed to offer a reasoned explanation for them, led to a shift away from the developmental model, which struggled to accommodate these findings.

As a result, moral psychologists now agree that moral judgment is not driven solely by system 2 reasoning. However, they disagree on almost everything else. A range of competing theories and models offer explanations of how moral judgment takes place. Some claim that moral judgments are nothing more than basic emotional responses, perhaps followed by rationalizations (Haidt 2001); others claim that competing emotional and rational processes pull moral judgment in one or the other direction (Greene 2008); still others think that moral judgment is intuitive, but not necessarily emotional (see, e.g., Mikhail 2007, Gigerenzer 2010, Dubljevic & Racine 2014).

Here, I will summarize some relevant information and conclude by considering which models are still viable and which are not, based on currently available evidence.

Let’s start with the basic emotivist model. As mentioned earlier, it was espoused by Jonathan Haidt (2001) in pioneering work that offered a constructive synthesis of social and cognitive psychological work on automaticity, intuition and emotion, and it has also been championed by influential moral philosophers, such as Walter Sinnott-Armstrong et al. (2010). However, it has been called into question by work that successfully dissociated emotion from moral judgment. Consider, for example, the ‘torture case’ study (Batson et al. 2009, Batson 2011). In this study, American respondents were asked to rate the moral wrongness of specific cases of torture, as well as their own emotional arousal. The experimental group was presented with a vignette in which an American soldier is tortured by militants, while a control group read a text in which a Sri Lankan soldier is tortured by Tamil rebels. Even though there was no significant difference in the intensity of moral judgment, respondents were ‘riled up’ emotionally only when a member of their in-group was being tortured. This does not put moral emotions per se in question, but it neatly undermines a crude ‘moral judgment is just emotion’ model.

Now, let’s take a look at the ‘dual-process’ model of moral judgment. Pioneering research in the neuroscience of ethics (e.g., Greene et al. 2001) formulated a classification of dilemmas into so-called impersonal dilemmas, such as the original trolley dilemma (whether to throw a switch to save five people while killing one), and personal dilemmas, such as the footbridge dilemma (whether to push one man to his death in order to save five people). Proponents of the view take their data to show that the patterns of responses in the trolley dilemma favour a “utilitarian” view of morality based on abstract thinking and calculation, while responses in the footbridge dilemma suggest that emotional reactions drive answers. The purported upshot is that rational processes (driving utilitarian calculation) and emotional processes (driving aversion to personally causing injury) compete for dominance.

Even though some initial studies seemed to corroborate this hypothesis, it remains controversial, with certain empirical findings appearing to be at odds with the dual-process approach. In particular, if utilitarian, outcome-based judgment is caused by abstract thinking (system 2), whereas non-consequentialist, intent- or duty-based judgment is intuitive (system 1) and thus supposedly irrational, why do children ages 4 to 10 focus more on outcome than on intent (see Cushman 2013)? Given that abstract thought develops only after age 12, ‘fully rational’ utilitarian judgments should not be observable in children. And yet they are not only observed, but seem to dominate immature and dysfunctional moral cognition.

It is safe to say, then, that recent research has called the dual-process model into question. Recent studies have linked favouring the “utilitarian” option to anti-social personality traits, such as Machiavellianism (Bartels & Pizarro 2011) and psychopathy (Koenigs et al. 2012), as well as to temporary conditions (increased anger, decreased responsibility, induced lower levels of serotonin; Crockett & Rini 2015) and permanent conditions, such as vmPFC damage (Koenigs et al. 2007) and frontotemporal dementia (Mendez 2009), that probably do not facilitate “rational” decision making. Perhaps the most damning piece of evidence is a recent study (Duke & Begue 2015) establishing a correlation between study participants’ blood alcohol concentrations and utilitarian preferences. All in all, the empirical evidence seems to suggest a stronger role for impaired social cognition than for intact deliberative reasoning in predicting utilitarian responses in the trolley dilemma, which in turn leads to the conclusion that the dual-process model is on thin ice.

So which model is true? The data seem to suggest that an intuitionist model of moral judgment is most likely; however, there are at least three competitors: moral foundations theory (Haidt & Graham 2007), universal moral grammar (Mikhail 2007, 2011) and the ADC approach (Dubljevic & Racine 2014).

For reasons of space, I will not go into the specifics of all three models beyond mentioning them and their feasibility; instead, since I am an interested party in this debate, I will briefly canvass the ADC approach.

The Agent-Deed-Consequence (ADC) framework offers insight into the types of simple and fast intuitive processes involved in moral appraisals. Namely, the heuristic principle of attribute substitution, quickly and efficiently substituting a complex and intractable problem with more accessible information, is applied to specific information relevant for moral appraisal. I argued (along with my co-author, Eric Racine) that there are three kinds of moral intuitions stemming from three kinds of heuristic processes that simultaneously modulate moral judgments. We posited that these also form the basis of three distinct kinds of moral theory by substituting the global attribute of moral praiseworthiness/blameworthiness with the simpler attributes of the virtue/vice of the agent or character (as in virtue theory), the right/wrong deed or action (as in deontology) and the good/bad consequences or outcomes (as in consequentialism).

The Agent-Deed-Consequence framework provides a vocabulary for starting to break down moral judgment into cognitive components, which could increase the explanatory and predictive power of future work on moral judgment in general and moral heuristics in particular. Furthermore, this research clarifies a wide set of findings from empirical and theoretical moral psychology (e.g., the “intuitiveness” and “counter-intuitiveness” of certain judgments, moral “dumbfoundedness”, the “ethical blind spots” of traditional moral principles, etc.). The framework offers a description of how moral judgment takes place (three aspects are computed at the same time), but it also offers normative guidance on dissociating and clarifying the relevant normative components.

An example might help to put things into perspective. Consider this (real-life) case:

In October 2002, policemen in Frankfurt, Germany, were faced with a chilling dilemma. They had in custody the man who they suspected had kidnapped a banker’s 11-year-old son and asked for ransom. Although the man was arrested while trying to take the ransom money, he maintained his innocence and denied having any knowledge of the whereabouts of the child. In the meantime, time was running out – if the kidnapper was in custody, who would feed and hydrate the child? The police officer in charge finally decided to use coercion to make the suspect talk. He threatened to inflict serious pain upon the suspected kidnapper if he did not reveal where he had hidden the child. The threat worked – however, the child was already dead. (Dubljevic & Racine 2014, p. 12)

The ADC approach allows us to analyze the normative cues of the case. Here it is safe to assume that the evaluation of the agent is positive (a virtuous person), the evaluation of the deed or action is negative (torture is wrong), whereas the consequences are unclear ([A+] [D-] [C?] = [MJ?]).

Modulating any of the elements of the case can result in a different intuitive judgment, and the public controversy in Germany following this case created two camps: one stressing the uncertainty of guilt and the precedent of committing torture in police work, and the other stressing the potential to save a child by any means necessary. If the case is changed so that the consequence component is clearly bad (e.g., the suspect is innocent AND the child died), the intuitive responses would be specific, precise and negative ([A+] [D-] [C-] = [MJ-]). And vice versa: if we modulate the case so that the consequences are clearly good (e.g., the suspect is guilty AND a life has been saved), the intuitive responses would be specific, precise and clearly positive ([A+] [D-] [C+] = [MJ+]).
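The bracket notation above can be read almost mechanically: each component carries a valence, and the overall intuition follows from how the valences combine. The following minimal sketch illustrates that reading. Note that the numeric scale (+1/-1, with None for “unclear”) and the sign-of-the-sum combination rule are my own simplifying assumptions for illustration, not part of the published ADC model, which treats the three appraisals as simultaneous heuristic processes rather than arithmetic.

```python
def adc_judgment(agent, deed, consequence):
    """Combine Agent, Deed and Consequence valences into an intuitive
    moral judgment. Each valence is +1 (positive), -1 (negative), or
    None (unclear). Returns '+', '-', or '?'.

    The sum-then-sign rule is an illustrative assumption only.
    """
    components = [agent, deed, consequence]
    if None in components:
        return "?"  # an unclear component leaves the judgment ambiguous
    total = sum(components)
    if total > 0:
        return "+"
    if total < 0:
        return "-"
    return "?"  # balanced cues: no clear intuition either way


# The three variants of the Frankfurt kidnapping case discussed above:
print(adc_judgment(+1, -1, None))  # [A+] [D-] [C?] = [MJ?]
print(adc_judgment(+1, -1, -1))    # [A+] [D-] [C-] = [MJ-]
print(adc_judgment(+1, -1, +1))    # [A+] [D-] [C+] = [MJ+]
```

Even this toy version captures the point of the modulation exercise: holding the agent and deed fixed while flipping the consequence component is enough to move the overall intuition from ambiguous to clearly negative or clearly positive.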

This is just one example of the frugality of the ADC framework. However, it would be premature to conclude that this model is obviously true or better than the remaining competitors, moral foundations theory and universal moral grammar. Ultimately, it is most likely that evidence will force all models to accommodate new data and insights, but one thing is clear: this is an interesting time for the study of moral judgment and cognition.


References

Ariely, D. (2008): Predictably Irrational: The Hidden Forces That Shape Our Decisions, New York, NY: Harper.

Bargh, J.A. & Chartrand, T.L. (1999): The unbearable automaticity of being, American Psychologist 54: 462–479.

Bartels, D.M. & Pizarro, D. (2011): The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas, Cognition 121: 154–161.

Batson, C.D. (2011): What’s wrong with morality?, Emotion Review 3(3): 230–236.

Batson, C.D., Chao, M.C. & Givens, J.M. (2009): Pursuing moral outrage: Anger at torture, Journal of Experimental Social Psychology 45: 155–160.

Crockett, M.J., Clark, L., Hauser, M.D. & Robbins, T.W. (2010): Serotonin selectively influences moral judgment and behavior through effects on harm aversion, PNAS 107(40): 17433–17438.

Crockett, M.J. & Rini, R.A. (2015): Neuromodulators and the instability of moral cognition, in Decety, J. & Wheatley, T. (Eds.): The Moral Brain: A Multidisciplinary Perspective, Cambridge, MA: MIT Press, pp. 221–235.

Dubljević, V. & Racine, E. (2014): The ADC of moral judgment: Opening the black box of moral intuitions with heuristics about agents, deeds and consequences, AJOB Neuroscience 5(4): 3–20.

Duke, A.A. & Begue, L. (2015): The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas, Cognition 134: 121–127.

Gigerenzer, G. (2010): Moral satisficing: Rethinking moral behavior as bounded rationality, Topics in Cognitive Science 2(3): 528–554.

Greene, J.D. (2008): The secret joke of Kant’s soul, in Sinnott-Armstrong, W. (Ed.): Moral Psychology, Vol. 3: The Neuroscience of Morality, Cambridge, MA: MIT Press, pp. 35–79.

Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M. & Cohen, J.D. (2001): An fMRI investigation of emotional engagement in moral judgment, Science 293: 2105–2108.

Haidt, J. (2001): The emotional dog and its rational tail: A social intuitionist approach to moral judgment, Psychological Review 108(4): 814–834.

Haidt, J. & Graham, J. (2007): When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize, Social Justice Research 20(1): 98–116.

Hauser, M., Young, L. & Cushman, F. (2008): Reviving Rawls’s linguistic analogy, in Sinnott-Armstrong, W. (Ed.): Moral Psychology, Vol. 2, Cambridge, MA: MIT Press, pp. 107–144.

Knoch, D., Pascual-Leone, A., Meyer, K., Treyer, V. & Fehr, E. (2006): Diminishing reciprocal fairness by disrupting the right prefrontal cortex, Science 314: 829–832.

Knoch, D., Nitsche, M.A., Fischbacher, U., Eisenegger, C., Pascual-Leone, A. & Fehr, E. (2008): Studying the neurobiology of social interaction with transcranial direct current stimulation—The example of punishing unfairness, Cerebral Cortex 18: 1987–1990.

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M. & Damasio, A. (2007): Damage to the prefrontal cortex increases utilitarian moral judgements, Nature 446: 908–911.

Koenigs, M., Kruepke, M., Zeier, J. & Newman, J.P. (2012): Utilitarian moral judgment in psychopathy, Social Cognitive and Affective Neuroscience 7(6): 708–714.

Kohlberg, L. (1968): The child as a moral philosopher, Psychology Today 2: 25–30.

Mendez, M.F. (2009): The neurobiology of moral behavior: Review and neuropsychiatric implications, CNS Spectrums 14(11): 608–620.

Mikhail, J. (2007): Universal moral grammar: Theory, evidence and the future, Trends in Cognitive Sciences 11(4): 143–152.

Mikhail, J. (2011): Elements of Moral Cognition, New York: Cambridge University Press.

Persson, I. & Savulescu, J. (2012): Unfit for the Future: The Need for Moral Enhancement, Oxford: Oxford University Press.

Sinnott-Armstrong, W., Young, L. & Cushman, F. (2010): Moral intuitions, in Doris, J.M. (Ed.): The Moral Psychology Handbook, Oxford: Oxford University Press, DOI: 10.1093/acprof:oso/9780199582143.003.0008.

Terbeck, S., Kahane, G., McTavish, S., Savulescu, J., Levy, N., Hewstone, M. & Cowen, P.J. (2013): Beta adrenergic blockade reduces utilitarian judgment, Biological Psychology 92: 323–328.

Tulving, E. & Schacter, D.L. (1990): Priming and human memory systems, Science 247(4940): 301–306.

Tversky, A. & Kahneman, D. (1981): The framing of decisions and the psychology of choice, Science 211(4481): 453–458.

Young, L., Camprodon, J.A., Hauser, M., Pascual-Leone, A. & Saxe, R. (2010): Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgements, PNAS 107: 6753–6758.
