Seeking social connection: How children recover from social exclusion

Amanda Mae Woodward, PhD candidate, Department of Psychology, University of Maryland

Think of a time that you met up with a friend at a coffee shop. The two of you sat at a table, drank coffee, and filled each other in on your lives. Over the course of the discussion, you may have experienced positive emotions like happiness, and you left the café with a sense of social connection. Positive social interactions, like the one just described, correspond with our overall well-being and help fulfill a fundamental human need: the need to belong with others (Baumeister & Leary, 1995; Wesselman & Williams, 2013). However, as we all know, not all social interactions are positive. Imagine another scenario. You call one of your friends to make dinner plans. Your friend explains that he already has plans for dinner and will not be able to join you. You ask about his plans and learn that he is going to dinner with all of your mutual friends and no one has extended an invitation to you. How would you feel? You may, expectedly, experience negative emotions and feel lonely.

This interaction, and others like it, is an instance of social exclusion. Being excluded negatively impacts social, cognitive, and physiological processing (Baumeister, Twenge, & Nuss, 2002; Blackhart, Eckel, & Tice, 2007; DeWall, Deckman, Pond, & Bonser, 2011). Exclusion leads to experiences of negative affect, decreases in mood, lowered self-esteem, and feelings of isolation (Leary & Cottrell, 2013; Maner, DeWall, Baumeister, & Schaller, 2007). If social exclusion occurs chronically, the repercussions of exclusion compound and become more severe over time (Richman, 2013; Williams, 2007). Even young children are subject to the negative effects of social exclusion. Socially excluded middle school children report more negative emotions and have decreased feelings of belonging when compared to their included counterparts (Abrams, Weick, Thomas, Colbe, & Franklin, 2011; Wölfer & Scheithauer, 2013). Four- to six-year-old children exclude each other frequently, and being excluded has a negative influence on their future social behaviors (Fanger, Frankel, & Hazen, 2012; Stenseng, Belsky, Skalicka, & Wichstrøm, 2014). Given social exclusion's documented harmful consequences across the lifespan, finding ways to mitigate its effects is important for children's overall wellbeing. This post will explore some of the main strategies children use to mitigate such effects.

How do children ameliorate the consequences of social exclusion? One effective strategy involves the excluded child reestablishing a social connection (Maner et al., 2007). Connecting with others satisfies children's need to belong and reduces negative affect. To use this strategy, children must find potential social partners with whom they are likely to have positive interactions. If they think future interactions with the person who excluded them are likely, children may seek to reconnect with the excluder through the use of ingratiating behavior (e.g., mimicry or conforming to another's opinions). In other cases, such as when reconnecting with the excluder is unlikely, children may look for new approachable social partners or contexts with which to form positive relationships (Molden & Maner, 2013).

Young children's responses to exclusion support the use of both strategies. Five-year-olds who are excluded by group members imitate other in-group members with more fidelity than children who were not excluded (Watson-Jones, Whitehouse, & Legare, 2015). Imitation is a type of flattery, so by mimicking the behavior of potential social partners, children signal that they will be a good person with whom to interact (Over & Carpenter, 2009). Excluded children also demonstrate their openness to new social interaction in other ways. For instance, 5-year-olds who are excluded have been shown to engage in more mentalizing and to attend to the feelings of others more often than included children (White et al., 2016). Even witnessing exclusion leads children to strategically seek social partners. After observing a peer experience exclusion, children have been shown to display behaviors that facilitate social connection, including imitating others more frequently, drawing more affiliative pictures, and sitting physically closer to others (Marinovic, Wahl, & Träuble, 2017; Over & Carpenter, 2009; Song, Over, & Carpenter, 2015).

Less work has examined other strategies children may use to reduce the harmful effects of social exclusion, particularly when they have, or believe that they have, restricted means by which to reestablish a social connection. When the perceived likelihood of social reconnection is low, excluded people may react aggressively in order to establish feelings of control over their own lives (Wesselman & Williams, 2013). For instance, adults respond to social exclusion in antisocial ways when they are unlikely to reconnect with others (Molden & Maner, 2013). Indeed, adults have been shown to behave more aggressively and to engage in less prosocial behavior after being excluded (DeWall & Twenge, 2013; Twenge, Baumeister, Tice, & Stucke, 2001). Some recent research has explored children's aggressive behavior after exclusion and has found similar evidence for the use of an aggressive strategy: children who were already high in aggression demonstrated increases in aggression following exclusion (Fanger, Frankel, & Hazen, 2012; Ostrov, 2010).

A final strategy to avoid or alleviate the harmful effects of social exclusion involves avoiding social interactions with people who are likely to exclude you. It is reasonable to infer that people who have excluded you in the past may exclude you again in the future, so you could circumvent the experience of social exclusion by refraining from interacting with them in the first place. Using this strategy requires excluded children to track social excluders and remember previous interactions. Our lab, the Lab for Early Social Cognition at the University of Maryland, College Park, is currently working on a series of experiments to establish if and when children can use this strategy to effectively reduce the odds of experiencing social exclusion in the future.

Overall, social exclusion is harmful and can lead to devastating effects, the consequences of which apply to both adults and young children. It is thus essential to understand when children begin to experience instances of social exclusion and to establish how they can respond in order to prevent harm to themselves. This work may also have implications for the construction and implementation of interventions designed to help children reduce instances of social exclusion that they may carry with them into adulthood.

References

Abrams, D., Weick, M., Thomas, D., Colbe, H., & Franklin, K. M. (2011). On-line ostracism affects children differently from adolescents and adults. The British Journal of Developmental Psychology, 29(Pt 1), 110–123. http://doi.org/10.1348/026151010X494089

Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin, 117(3), 497–529.

Baumeister, R. F., Twenge, J. M., & Nuss, C. K. (2002). Effects of social exclusion on cognitive processes: Anticipated aloneness reduces intelligent thought. Journal of Personality and Social Psychology, 83(4), 817.

Blackhart, G. C., Eckel, L. A., & Tice, D. M. (2007). Salivary cortisol in response to acute social rejection and acceptance by peers. Biological Psychology, 75(3), 267–276. doi: 10.1016/j.biopsycho.2007.03.005

DeWall, C. N., Deckman, T., Pond, R. S., & Bonser, I. (2011). Belongingness as a core personality trait: How social exclusion influences social functioning and personality expression. Journal of Personality, 79(6), 1281–1314. doi: 10.1111/j.1467-6494.2010.00695.x

DeWall, C. N., & Twenge, J. M. (2013). Rejection and aggression: Explaining the paradox. In C. N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (pp. 3–8). Oxford: Oxford University Press.

Fanger, S. M., Frankel, L. A., & Hazen, N. (2012). Peer exclusion in preschool children's play: Naturalistic observations in a playground setting. Merrill-Palmer Quarterly, 58(2), 224–254.

Leary, M. R., & Cottrell, C. A. (2013). Evolutionary perspectives on interpersonal acceptance and rejection. In C. N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (pp. 9–19). Oxford: Oxford University Press.

Maner, J. K., DeWall, C. N., Baumeister, R. F., & Schaller, M. (2007). Does social exclusion motivate interpersonal reconnection? Resolving the "porcupine problem." Journal of Personality and Social Psychology, 92(1), 42–55. doi: 10.1037/0022-3514.92.1.42

Molden, D. C., & Maner, J. K. (2013). How and when exclusion motivates social reconnection. In C. N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (pp. 121–131). Oxford: Oxford University Press.

Marinovic, V., & Träuble, B. (2018). Vicarious social exclusion and memory in young children. Developmental Psychology, 54(11), 2067–2076. doi: 10.1037/dev0000593

Marinovic, V., Wahl, S., & Träuble, B. (2017). "Next to you" – Young children sit closer to a person following vicarious ostracism. Journal of Experimental Child Psychology, 156, 179–185. doi: 10.1016/j.jecp.2016.11.011

Over, H., & Carpenter, M. (2009). Priming third-party ostracism increases affiliative imitation in children. Developmental Science, 12(3), 1–8. doi: 10.1111/j.1467-7687.2008.00820.x

Ostrov, J. (2010). Prospective associations between peer victimization and aggression. Child Development, 81(6), 1670–1677.

Richman, L. S. (2013). The multi-motive model of responses to rejection-related experiences. In C. N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (pp. 9–19). Oxford: Oxford University Press.

Song, R., Over, H., & Carpenter, M. (2015). Children draw more affiliative pictures following priming with third-party ostracism. Developmental Psychology, 51(6), 831–840. doi: 10.1037/a0039176

Stenseng, F., Belsky, J., Skalicka, V., & Wichstrøm, L. (2014). Social exclusion predicts impaired self-regulation: A 2-year longitudinal panel study including the transition from preschool to school. Journal of Personality, 83(2), 213–220. doi: 10.1111/jopy.12096

Twenge, J. M., Baumeister, R. F., Tice, D. M., & Stucke, T. S. (2001). If you can't join them, beat them: Effects of social exclusion on aggressive behavior. Journal of Personality and Social Psychology, 81(6), 1058–1069. doi: 10.1037/0022-3514.81.6.1058

Watson-Jones, R. E., Whitehouse, H., & Legare, C. H. (2015). In-group ostracism increases high-fidelity imitation in early childhood. Psychological Science, 27(1), 34–42. doi: 10.1177/0956797615607205

Wesselman, E. D., & Williams, K. D. (2013). Ostracism and stages of coping. In C. N. DeWall (Ed.), The Oxford Handbook of Social Exclusion (pp. 20–30). Oxford: Oxford University Press.

White, L. O., Klein, A. M., von Klitzing, K., Graneist, A., Otto, Y., Hill, J., Over, H., Fonagy, P., & Crowley, M. J. (2016). Putting ostracism into perspective: Young children tell more mentalistic stories after exclusion, but not when anxious. Frontiers in Psychology, 7, 1–15. doi: 10.3389/fpsyg.2016.01926

Williams, K. D. (2007). Ostracism. Annual Review of Psychology, 58, 425–452. doi: 10.1146/annurev.psych.58.110405.085641

Wölfer, R., & Scheithauer, H. (2013). Ostracism in childhood and adolescence: Emotional, cognitive, and behavioral effects of social exclusion. Social Influence, 8(4), 217–236. doi: 10.1080/15534510.2012.706233

Understanding others’ minds: Social context matters

Paula Fischer — PhD Candidate, Cognitive Development Centre, Department of Cognitive Science, Central European University

Imagine that you are walking with your friend through the forest, and suddenly you find yourselves next to a bush filled with red berries. Let's suppose that you know a lot about different plants, and you immediately recognise that these berries are not only red berries, but that they are also dangerous. In fact, they are poisonous. However, you can see the sparkle in your friend's eyes, and that he is already reaching towards the berries to replenish his energy levels after the long walk. What do you do? Well, if you would like to save the life of your friend, or at least spare him an unpleasant experience, you would warn him. You would do this because you understand that he believes that these berries are good to eat, and you know that he wouldn't go for these berries if he knew that they were dangerous.

From this example and other everyday experiences, we can see that humans possess highly sophisticated abilities to 'read' others' minds. This ability, called Theory of Mind (ToM), enables us to attribute mental states to others, and to make predictions and draw inferences from their behavior and actions to their mental states. It is therefore essential for social interactions, because it underpins our ability to effectively coordinate and communicate with others. Researchers have been investigating this ability's characteristics for decades, and much of this research has focused on when and how it develops. In this post, I will propose that progress on open questions about the development of ToM can be made by attending to when we use ToM.

Since Dennett (1978) pointed out that attributing true beliefs to others cannot be empirically distinguished from agents simply making predictions about the actions of others on the basis of their own knowledge and beliefs about the world, the conventional test for ToM has been probing false belief (FB) understanding. One typical way to test for the understanding of false beliefs in children is the location-change task (Wimmer & Perner, 1983; Baron-Cohen, Leslie, & Frith, 1985). In such a standard false belief task, participants are exposed to a story in which the main character has a false belief regarding the location of an object (because a second character changed its location while she was absent). When asked to explicitly indicate where the first character will look for the object, children typically fail to take into account her false belief before the age of 4, answering (or pointing) towards the new (actual) location of the object (Wimmer & Perner, 1983; Perner, Leekam, & Wimmer, 1987).

There has been an ongoing debate as to whether the ability to understand others' (false) beliefs is early developing, or whether it develops only from around the age of 4 with the emergence of other abilities, e.g. executive function and language (see for example Slade & Ruffman, 2005). Two main lines of research have collected evidence either for or against these claims. One line of research, which uses implicit measures of false-belief understanding and is mostly influenced by Leslie's theory of pretence (Leslie, 1987), suggests that infants are sensitive to others' beliefs from very early on. For example, Onishi and Baillargeon (2005) found evidence of false-belief understanding in 15-month-olds using a violation-of-expectation paradigm (see Scott & Baillargeon, 2017 for a review of this research). The other line of research instead suggests that full-blown ToM develops only after the age of 4. This line of research attempts to explain positive findings with younger infants by appealing to either low-level cues (e.g. Heyes, 2014) or a minimal ToM account (Apperly & Butterfill, 2009), which proposes that an early developing system is rich enough to represent belief-like states only (but not beliefs per se).

How can this puzzle regarding early mind reading be solved? One may ask: if there is a conceptual change around the age of 4, then what exactly happens around that time that allows or triggers such change? I will suggest that focusing on why ToM is crucial in several aspects of our everyday social lives (from language development and communication, to cooperation and altruistic behaviour) may provide a means of answering this question.

Can the basic ability to track others' mental states contribute to language acquisition? Some experimental evidence supports the hypothesis that, from a relatively early age, infants are sensitive to semantic incongruity. That is, they understand when an object is labelled incongruently with its real meaning (e.g. Friedrich & Friederici, 2005, 2008). A study by Forgács and colleagues (2018) investigated whether infants would track such semantic incongruities from others' perspectives. They measured 14-month-olds' event-related potential (ERP) signals, and found that infants show an N400 response (a well-established neuropsychological indicator of semantic incongruity) not only when objects are incongruently labelled from their own viewpoint, but also from their communicative partner's point of view (see also Kutas & Federmeier, 2011; Kutas & Hillyard, 1980). These findings suggest that infants track the mental states of social partners, keep such attributed representations updated, and use them to assess others' semantic processing. This study can further be taken as indicating that the representational capacities required for belief ascription are present in 14-month-olds in a communicative context.

Such belief attribution in similarly young infants can also be observed in ostensive-communicative inferential contexts. In a study by Tauzin and Gergely (2018), infants' looking time was measured while they observed unfamiliar communicative agents; children needed to interpret a turn-taking exchange of variable tone sequences, which was indicative of the communicative transfer of goal-relevant information from a knowledgeable to a naïve agent. In their experiments, infants observed the following interaction: one of the agents placed a ball in a certain location, and later saw the ball moving to a different location. The other agent, who had not observed the location-switch, later tried to retrieve the ball. Based on their looking times, infants only expected the ball-retrieving agent to go to where the ball really was if the first agent (who observed the location-switch) communicated the transfer. Based on these findings, the authors suggested that 13-month-old infants recognised these turn-taking exchanges as communicative information transfer, and that they can attribute communication-based beliefs to other agents if they can infer the relevant information being transmitted.

Besides playing a role in children coming to understand important aspects of communication, ToM may play a crucial part in cooperation and altruistic behaviour. The question as to how ToM relates to, for instance, instrumental helping has received relatively little attention. One of the first studies probing the relationship between false belief understanding and helping comes from Buttelmann, Carpenter and Tomasello (2009). During their experiments, infants observed a protagonist struggling to open a box in order to obtain a toy. In the critical part of the experiment, the toy was moved by another agent from its initial box to a different box. The protagonist either observed this move or had left the room. When the main protagonist had left the room and then tried to open the box which initially contained the toy, infants spontaneously helped him by indicating that he should try to open the alternative box instead. However, when the main protagonist had observed the location-switch, infants helped him open the initial box. This suggests that by 18 months of age, helping behaviour is guided by the beliefs of the helpee. This study, amongst others (see also Matsui & Miura, 2008), supports the hypothesis that representing others' mental states is a key feature of helping and cooperating, and that infants are capable of taking into account others' beliefs when helping spontaneously from very early on.

The ability to represent others' mental states plays a crucial part in our social lives. Understanding what others think is important not only for high-level cooperative or competitive problem solving, but even in smaller day-to-day social interactions when we need to act fast (e.g., preventing our friends from coming to harm during a walk). The studies discussed here suggest that from a relatively early age, humans are able to adjust their helping behaviour on the basis of others' beliefs, and that the beliefs of others may shape children's understanding of communicative episodes. Future research may do well to keep in mind that when it comes to ToM, social context seems to matter.

References

Apperly, I. A., & Butterfill, S. A. (2009). Do humans have two systems to track beliefs and belief-like states? Psychological Review, 116(4), 953.

Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a "theory of mind"? Cognition, 21(1), 37–46.

Buttelmann, D., Carpenter, M., & Tomasello, M. (2009). Eighteen-month-old infants show false belief understanding in an active helping paradigm. Cognition, 112(2), 337–342.

Dennett, D. C. (1978). Beliefs about beliefs [P&W, SR&B]. Behavioral and Brain Sciences, 1(4), 568–570.

Forgács, B., Parise, E., Csibra, G., Gergely, G., Jacquey, L., & Gervain, J. (2018). Fourteen-month-old infants track the language comprehension of communicative partners. Developmental Science, e12751.

Friedrich, M., & Friederici, A. D. (2005). Lexical priming and semantic integration reflected in the event-related potential of 14-month-olds. Neuroreport, 16(6), 653–656.

Friedrich, M., & Friederici, A. D. (2008). Neurophysiological correlates of online word learning in 14-month-old infants. Neuroreport, 19(18), 1757–1761.

Heyes, C. (2014). Submentalizing: I am not really reading your mind. Perspectives on Psychological Science, 9(2), 131–143.

Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621–647.

Kutas, M., & Hillyard, S. A. (1980). Reading senseless sentences: Brain potentials reflect semantic incongruity. Science, 207(4427), 203–205.

Leslie, A. M. (1987). Pretense and representation: The origins of "theory of mind." Psychological Review, 94(4), 412.

Matsui, T., & Miura, Y. (2008). Pro-social motive promotes early understanding of false belief.

Onishi, K. H., & Baillargeon, R. (2005). Do 15-month-old infants understand false beliefs? Science, 308(5719), 255–258.

Perner, J., Leekam, S. R., & Wimmer, H. (1987). Three-year-olds' difficulty with false belief: The case for a conceptual deficit. British Journal of Developmental Psychology, 5(2), 125–137.

Scott, R. M., & Baillargeon, R. (2017). Early false-belief understanding. Trends in Cognitive Sciences, 21(4), 237–249.

Tauzin, T., & Gergely, G. (2018). Communicative mind-reading in preverbal infants. Scientific Reports, 8(1), 9534.

Slade, L., & Ruffman, T. (2005). How language does (and does not) relate to theory of mind: A longitudinal study of syntax, semantics, working memory and false belief. British Journal of Developmental Psychology, 23(1), 117–141.

Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception. Cognition, 13(1), 103–128.

Representing the Self in Predictive Processing

Elmarie Venter — PhD candidate, Department of Philosophy, Ruhr-Universität Bochum

Who do you think you are? Or, less confrontationally, what ingredients (e.g. memories, beliefs, desires) go into the model of your self? In this post, I explore different conceptions of how the self is represented in the predictive processing (PP) framework. At the core of PP is the notion that the brain is in the business of making predictions about the world, and that the brain is primarily an organ that functions to minimize prediction error (i.e. the difference between predictions about the state of the world and the observed state of the world) (Clark, 2017, p.727). Predictive processing necessitates modeling the causes of our sensory perturbations, and since agents themselves are also such causes, a self-model is required under PP. The internal models of the self will include "…representations of the agent's own body and its trajectories and interactions with other causes in the world" (Hohwy & Michael, 2017, p.367).
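To make the core idea concrete, here is a minimal numerical sketch of prediction error minimization. It is only an illustration of the general idea, not a model drawn from the PP literature: the function name, learning rate, and all numbers are invented for the example. An agent keeps an internal estimate of a hidden cause, predicts the sensory input that cause would produce, and nudges its estimate to shrink the gap between predicted and observed input.

```python
# Toy illustration of prediction error minimization (illustrative only,
# not a specific model from the predictive processing literature).
# A hidden cause generates noisy sensory samples; the agent keeps an
# internal estimate of that cause and updates it to reduce the gap
# between predicted and observed input.

import random

def minimize_prediction_error(samples, estimate=0.0, learning_rate=0.1):
    """Incrementally revise an internal estimate of a hidden cause."""
    for observed in samples:
        predicted = estimate               # prediction: input equals the estimated cause
        error = observed - predicted       # prediction error
        estimate += learning_rate * error  # revise the model to reduce future error
    return estimate

random.seed(0)
true_cause = 5.0  # the actual state of the world, hidden from the agent
sensory_input = [true_cause + random.gauss(0, 0.5) for _ in range(200)]

print(minimize_prediction_error(sensory_input))  # converges near 5.0
```

On this toy picture, the agent's settled estimate is whatever value remains once prediction error has been driven down; the disagreement discussed below concerns what such a model says about the body and the self.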

In this post I will discuss accounts of how the self is modelled under two PP camps: Conservative PP and Radical PP. Broadly speaking, Conservative PP holds that the mind is inferentially secluded from the environment — the body also forms part of the external environment. All prediction error minimization occurs behind an 'evidentiary boundary', which implies that the brain reconstructs the state of the world (Hohwy, 2016, p.259). In contrast, Radical PP holds that representations of the world are a matter of embodied and embedded cognition (Dolega, 2017, p.6). Perceiving my self, other agents, and the world is not a process of reconstruction but rather a coupled process between perception and action. How does the view of a self-model align with these versions of predictive processing? I will argue that Radical PP's account of self-modelling is preferable because it avoids two key concerns that arise from Conservative PP's modeling of the self.

On the side of Conservative PP, Hohwy & Michael (2017) conceive of the self-model as one that captures "…representations of the agent's own body…" as well as hidden, endogenous causes, such as "…character traits, biases, reaction patterns, affections, standing beliefs, desires, intentions, base-level internal states, and so on" (Hohwy & Michael, 2017, p.369). On this view, the self is just another set of causes that is modeled in order to minimize prediction error. This view likens the model of the self to models of the environment and other people (and their mental states), and is in line with the Conservative PP account advocated by Hohwy (2016), under which there is an 'evidentiary boundary' between mind and world, behind which prediction error minimization takes place. Any parts of our body "…that are not functionally sensory organs are beyond the boundary… [and are] just the kinds of states that should be modeled in internal, hierarchical models of a (prediction error minimization) system" (Hohwy, 2016, p.269).

As I see it, Conservative PP's self-modeling (as described by Hohwy & Michael, 2017) is problematic in two ways:

1) Our access to information about our own body is neglected by Conservative PP. Agents typically have access to certain information about their body that is immune to error through misidentification; this immunity does not extend to information about the world and other agents.

2) Conservative PP ignores the marked difference in how we represent ourselves and other agents. Other agents can only enter our intentional states as part of the content, whereas we ourselves can also enter our intentional states in another way.

In dealing with these concerns, I propose that the self is represented along two dimensions: as-subject and as-object (a distinction that can be traced back to Wittgenstein's Blue Book, and which can be fleshed out by appeal to debates on reference and intentionality). The fundamental idea here is that there is a certain kind of error — in identifying the person that something is true of (e.g. a bodily position or a mental state) — that can occur when identifying the self as-object but cannot occur when identifying the self as-subject (Longuenesse, 2017, p.20; Evans, 1982). Imagine that I perceive a coffee mug in front of me, and once I have seen it I reach out my hand to grasp the mug in order to drink from it. Now envision a similar situation, in which I am acting like this while at the same time looking at myself in a mirror. In the latter situation I have two sources of information for obtaining knowledge about myself grasping the cup of coffee. One source of information is proprioceptive and kinesthetic, and therefore provides me with information about myself from the inside. The other source of information is visual, and provides me with information from the outside. The latter source could provide me with information about the actions of other agents as well, whereas the former can only be a source of information about my own self.

Since I am represented in the content of my visual experience in the mirror scenario, I can misrepresent myself as the intentional object of that very visual experience. I could be mistaken with respect to whom I am seeing in the mirror grasping the coffee mug; I may mistakenly believe that I am in fact observing someone else grasping the cup. No such mistake is possible in the contrast case, in which I gain information about grasping the mug from a proprioceptive and kinesthetic source. A more radical example of this distinction between self as-object and as-subject comes from individuals with somatoparaphrenia. Such individuals do not identify some parts of their body as their own, e.g. they may believe that their arm belongs to someone else, but they are not mistaken about who is identifying their arm as belonging to someone else (Kang, 2016; Vallar & Ronchi, 2009). Recanati (2007, pp.147–148) spells out this difference by distinguishing between the content and mode of an intentional state: "The content is a relativized proposition, true at a person, and the internal mode determines the person relative to which that relativized content is evaluated: myself". With this distinction in mind, the problems with Conservative PP become clear: the agent and their body are not represented in the same way as any other distal state in the world. Instead of the agent and their body only forming part of the content of an intentional state (as Hohwy & Michael's account would imply), they enter the state through the mode of perception as well.

Clark (2017, p.729) provides an analogy that illustrates the first problem with self-modeling under Conservative PP: "The predicting brain seems to be in somewhat the same predicament as the imprisoned agents in Plato's 'allegory of the cave'." That is, under Conservative PP, distal states can only be inferred by the secluded brain, just as the prisoners in the cave can only infer what the shadows on the walls are shadows of. The consequence of this is that we have no direct (and, therefore, error-immune) access to our own bodies. However, as has been illustrated above, the self enters intentional states through mode (perceiving, imagining, remembering, etc.) as well as content, and this provides us with certain information that is immune from error. In contrast, Radical PP does not conceive of the body as a distal object. Instead, the agent's body plays an active role in determining the sensory information that we have access to; it plays a fundamental role in how we sample, and act in, the world. This active role is such that certain information is available to us error free – even if I am mistaken about another agent grasping the cup, I cannot be mistaken that it is me that is seeing someone grasp the cup. In this sense, Radical PP provides us with a preferable story about how whole embodied agents are models of the environment and minimize prediction error through a variety of adaptive strategies (Clark, 2017, p.742).

The two dimensions of self can also shed light on the second concern with Conservative PP, because this distinction illustrates how we perceive and interact with other agents. As discussed above, the self as-object enters intentional states as part of the content, and the self as-subject enters such states through mode. The world, including other agents and their mental states, only ever forms part of the content of our intentional states. Referring back to the example spelled out above: another agent can only ever play the same role in perception as I do in the mirror case, i.e. as content of the intentional structure. I do not have access to other agents "from the inside," however. For instance, I do not have the same access to the reasons behind others' actions (are they grasping the cup to drink from it, to clear it from the table, to see if there is still coffee in it?), nor do I have access to whether the other agent will successfully grasp the mug (is their grip wide enough, do they have enough strength in their wrist?). There is thus a dimension of the self to which one has privileged access. We only have access to other agents through perceptual inference (i.e. by observing their behavior and inferring its causes), whereas we have both perceptual and active inferential access to our own behaviours. Though Conservative PP proponents maintain that the secluded brain only has perceptual inferential access to our own body (Hohwy, 2016, p.276), there is something markedly different in what enables us to model the causes of our own behavior and mental states compared to those of other agents. I have proprioceptive, kinesthetic, and interoceptive access to information about myself; I only have exteroceptive information about other agents.

For Conservative PP, the body (and by extension, the self) is just another object in the world that receives commands to act in service of prediction error minimization. I have highlighted two concerns about this view: the body is treated as a distal object, and the body (and self) is placed on the same side of the evidentiary boundary as other agents. This means that the dimension of self which is immune to error through misidentification is not accommodated, and the marked difference in our access to information about our own states and those of other agents is ignored. Radical PP, however, avoids both concerns by taking into account the two representational dimensions of the self and employing an embodied approach to cognition. The Radical PP account therefore provides a more refined version of self-modeling. My beliefs, desires, and bodily shape can all be inferred in the model of self-as-object, but self-as-subject captures the part of the self that is not inferred: it contains information about me and my body from the inside, which is an essential part of who we think we are.

References:

Clark, A., 2017. Busting Out: Predictive Brains, Embodied Minds, and The Puzzle of The Evidentiary Veil. Noûs, 51(4): 727–753.

Dolega, K., 2017. Moderate Predictive Processing. In T. Metzinger & W. Wiese (Eds.) Philosophy and Predictive Processing. Frankfurt Am Main: MIND Group.

Evans, G., 1982. The Varieties of Reference. Oxford: Clarendon Press.

Friston, K. J. and Stephan, K. E., 2007. Free-energy and the Brain. Synthese, 159(3): 417–458.

Hohwy, J., 2016. The Self-Evidencing Brain. Noûs, 50(2): 259–285.

Hohwy, J. and Michael, J., 2017. Why Should Any Body Have A Self? In F. de Vignemont & A. Alsmith (Eds.) The Subject’s Matter: Self-Consciousness And The Body. Cambridge, Massachusetts: MIT Press.

Kang, S. P., 2016. Somatoparaphrenia, the Body Swap Illusion, and Immunity to Error through Misidentification. The Journal of Philosophy, 113(9): 463–471.

Longuenesse, B., 2017. I, Me, Mine: Back To Kant, And Back Again. New York: Oxford University Press.

Michael, J. and De Bruin, L., 2015. How Direct is Social Perception. Consciousness and Cognition, 36: 373–375.

Recanati, F., 2007. Perspectival Thought: A Plea For (Moderate) Relativism. Clarendon Press.

Thompson, E. and Varela, F. J., 2001. Radical Embodiment: Neural Dynamics and Consciousness. Trends in Cognitive Sciences, 5(10): 418–425.

Vallar, G. and Ronchi, R., 2009. Somatoparaphrenia: A Body Delusion. A Review of the Neuropsychological Literature. Experimental Brain Research, 192(3): 533–551.

Wittgenstein, L. 1960. Blue Book. Oxford: Blackwell.

The frustrating family of pain

Sabrina Coninx — PhD candidate, Department of Philosophy, Ruhr-Universität Bochum

What is pain? At first glance this question seems straightforward — almost everyone knows what it feels like to be in pain. We have all felt that shooting sensation when hitting the funny bone, or the dull throb of a headache after a stressful day. There is also much common ground within the scientific community with respect to this question. Typically, pain is taken to be best defined as a certain kind of mental phenomenon experienced by subjects as pain. For instance, this corresponds to the (still widely accepted) definition of pain given by the International Association for the Study of Pain (1986). Most cognitive scientists are not merely interested in knowing that various phenomenal experiences qualify as pain from a first-person perspective, however. Instead, pain researchers primarily focus on searching for necessary and sufficient conditions for pain, such that a theory can be developed which allows for informative discriminations and ideally far-reaching generalizations. Pain has proven to be a surprisingly frustrating object of research in this regard. In the following, I will outline one of the main reasons for this frustration, namely the lack of a sufficient and necessary neural correlate for pain. Subsequently, I will briefly review three solutions to this challenge, arguing that the third is the most promising option.

Neuroscientifically speaking, pain is typically understood as an integrated phenomenon which emerges with the interaction of simultaneously active neural structures that are widely distributed across cortical and subcortical areas (e.g. Apkarian et al., 2005; Peyron et al., 1999). Interestingly, and perhaps surprisingly, the activation of these neural structures is neither sufficient nor necessary for the experience of pain (Wartolowska, 2011). Those neural structures that are highly correlated with the experience of pain are not pain-specific (e.g. Apkarian, Bushnell, & Schweinhardt, 2013), and even the activation of the entire neural network is not sufficient for pain. For instance, itch and pain are processed in the same anatomically defined network (Mochizuki & Kakigi, 2015). There also does not seem to be any neural structure whose activation is necessary for pain (Tracey, 2011). Even patients with substantial lesions in those neural structures that are often regarded as most central for pain processing are still able to experience pain (e.g. Starr et al., 2009).

Figure 1. Cortical and subcortical regions involved in pain perception, their inter-connectivity and ascending pathways (retrieved from Apkarian et al., 2005). The six areas used in their meta-analysis are the primary and secondary somatosensory cortices (SI, SII), anterior cingulate cortex (ACC), insula, thalamus, and prefrontal cortex; other regions indicated include the motor cortices, posterior parietal and posterior cingulate cortex, basal ganglia, hypothalamus, amygdala, parabrachial nuclei, and periaqueductal grey.

Given the difficulties of characterizing pain by appeal to unique neural structures or a specialized network, some researchers have attempted to characterize pain by appeal to neurosignatures. 'Neurosignature' refers to the spatio-temporal activity pattern generated by a network of interacting neural structures (Melzack, 2001). Neurosignatures are thus less about the mere involvement of an anatomically defined neural network and more about how the involved structures are activated and how their activity is coordinated (Reddan & Wager, 2017). Most interestingly, it has been shown that the neurosignature of pain differs from the neurosignature of other somatosensory stimulations, such as itch and warmth (Forster & Handwerker, 2014; Wager et al., 2013).
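A toy sketch may help to bring out the difference between which regions are involved and how their activity is patterned. The region labels below are borrowed from the figure caption above, but the activation values are entirely invented; this is not real data or any published signature. The point is simply that two conditions can engage exactly the same anatomical network while differing in their pattern of activation, and only the pattern-level description behaves like a 'signature'.

```python
# Made-up activation values for the same five regions under two conditions.
# Both conditions "involve" every region, so a list of active regions cannot
# tell them apart; the spatial pattern of activation levels can.

pain_pattern = {"SI": 0.9, "SII": 0.7, "ACC": 0.8, "insula": 0.9, "thalamus": 0.6}
itch_pattern = {"SI": 0.5, "SII": 0.6, "ACC": 0.4, "insula": 0.7, "thalamus": 0.8}

def active_regions(pattern, threshold=0.3):
    """Region-level description: which areas are active at all."""
    return sorted(region for region, activation in pattern.items() if activation > threshold)

def pattern_distance(p, q):
    """Pattern-level description: how the activation profiles differ."""
    return sum((p[r] - q[r]) ** 2 for r in p) ** 0.5

print(active_regions(pain_pattern) == active_regions(itch_pattern))  # True: same network involved
print(round(pattern_distance(pain_pattern, itch_pattern), 2))        # > 0: distinct activity patterns
```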

Unfortunately, different kinds of pain substantially differ with respect to their underlying neurosignatures. For instance, neurosignatures found in patients with chronic pain substantially differ from those of healthy subjects experiencing acute pain (Apkarian, Baliki, & Geha, 2009), because the central nervous system of subjects who live in persisting pain is continuously reorganized as the brain's morphology, plasticity and chemistry change over time (Kuner & Flor, 2016; Schmidt-Wilcke, 2015). At most, therefore, we can state that a particular coordination of neural activity is sufficient to distinguish a particular kind of pain from certain non-pain phenomena. However, there seems to be no single neurosignature that is necessary for pain to emerge.

We have arrived at the dilemma that makes pain such a frustrating object of research. On one hand, researchers mostly agree that all and only pains are best defined by means of their being subjectively experienced as pains. On the other hand, cognitive scientists are unable to identify a single set of neural processes that captures the circumstances under which all and only pains are experienced as such. Thus, the scientific community has been unable to provide an informative and generalizable account of pain. Two solutions to this dilemma have been offered in the literature.

The first solution involves relinquishing the notion of pain as a certain kind of phenomenal experience, which is a starting assumption for most cognitive scientists. Instead, neuroscientific data alone are supposed to be the primary criterion for the identification of pain (e.g. Hardcastle, 2015). This solution therefore eliminates the first part of the dilemma. There are two main problems faced by this solution. Firstly, neural data do not reveal the function of neural structures, networks or signatures by themselves. The function of these neural characteristics is only revealed by their being correlated with some sort of additional data (which, in the case of pain, is typically the subject's qualification of their own experience as pain). Thus, removing the subjective aspect from pain is analogous to biting the hand that feeds you. Secondly, serious ethical problems arise when subjective experience is no longer treated as the decisive criterion for the identification of pain. Because neural data may diverge from the subjective qualification, this approach may lead to a rejection of medical support for patients who undergo a phenomenal experience of pain. This is a consequence that the majority of contemporary researchers are — for good reasons — unwilling to accept (Davis et al., 2018).

As a second solution, one might relinquish the idea that it is possible to develop a single theory of pain. Instead, researchers should focus on the development of separate theories for separate kinds of pain (see, for instance, Corns, 2016, 2017). An analogy might illustrate this approach. The gem class 'jade' is unified due to the apparent properties of the respective stones, such as color and texture. However, in scientific terms the class of jade is composed of jadeite and nephrite, which are of different chemical compositions. Thus, it is possible to develop a theory that enables a distinct characterization with far-reaching generalizations for either jadeite or nephrite, but not for jade itself (which lacks a sufficient and necessary chemical composition). Similarly, this solution to the pain dilemma holds that all and only pains are unified due to their phenomenal experience as pain, but they cannot be captured in terms of a single scientific theory. Instead, we need a multiplicity of theories for pain which refer to those subclasses that reveal a necessary and sufficient neural profile.

This solution avoids the methodological and ethical problems faced by the first solution because it is compatible with pains being defined as a certain subjective mental phenomenon. However, because this solution denies that it is possible to develop a single theory of pain, the phenomenon that the scientific community is interested in studying could not thereby be completely accounted for. If we did develop multiple theories of pain (one for acute pain and one for chronic pain, say), it is far from clear that these theories could explain why all and only pains are subjectively experienced as pain. At best, this might explain why certain cases are acute or chronic pains, but not why they are both pains. What is missing is a theoretical link that connects the different kinds of pain that, according to this solution, emerge only as independent neural phenomena in separate theories. In terms of the previous analogy, we need something which plays the role of the resemblances in chemical composition between jadeite and nephrite that explain why both of them appear as 'jade'.

I would like to offer a third solution to the dilemma which avoids the concerns faced by the first solution, and which provides the missing theoretical link required by the more promising second solution. This is to hold a family resemblance theory of pain. The idea of family resemblance comes from Ludwig Wittgenstein (1953) (although he develops this idea with respect to the meaning of concepts rather than the properties of natural phenomena). A family resemblance theory of pain takes the phenomenal character of pain to unify all and only pains; one's own subjective experience of pain as such is the criterion of identification that picks out members of the 'family' of pain. Moreover, the family resemblance theory of pain denies the presence of an underlying sufficient and necessary neural condition for pain; there is no neural process that distinctively and essentially characterizes pain. Thus, the subjective qualification identifies all and only cases of pain, although they do not share any further necessary or sufficient neural feature. Nonetheless, a family resemblance theory further claims that it is still possible to develop a scientifically useful, neurally based theory of pain that accounts for the phenomenon that the scientific community is interested in.

For this third solution, all and only those phenomena that are experienced as pain are connected through a structure of systematic resemblances that hold between their divergent neural profiles. For instance, consider, again, acute and chronic pain. Both are experienced as pain, and they are substantially different from each other from a neural perspective when directly compared. However, the transformation from acute to chronic pain is a gradual process, whereby the respective duration of pain correlates with the extent of differences in their neural profile (Apkarian, Baliki, & Geha, 2009). Thus, the neural process of a pain's first occurrence is relatively similar to its second occurrence, which itself only slightly differs from its third occurrence, and so forth, until it has transformed into some completely different neural phenomenon. This connection of resemblances over time enables us, however, to explain why subjects experience all of these kinds of pain as pain: acute and chronic pain are bound together under the family resemblance theory through the resemblance relations that hold between the variety of pains that connect them.
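The chain-like structure that this appeal to resemblance relies on can be illustrated with a toy calculation. The 'neural profiles' and numbers below are invented and purely schematic, not empirical data: successive stages in the transition from acute to chronic pain each resemble their neighbours, even though the first and last stages, compared directly, barely resemble each other at all.

```python
# Toy sketch of family resemblance between neural profiles (invented numbers).
# Each profile is a vector of activation levels; adjacent stages in the
# transition from acute to chronic pain are similar, but the endpoints,
# compared directly, are not.

def similarity(p, q):
    """Cosine similarity between two activation profiles (1 = identical direction)."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = lambda v: sum(a * a for a in v) ** 0.5
    return dot / (norm(p) * norm(q))

stages = [
    [0.9, 0.1, 0.1, 0.0],  # first occurrence of the pain (acute)
    [0.7, 0.4, 0.1, 0.1],
    [0.4, 0.7, 0.3, 0.2],
    [0.2, 0.6, 0.6, 0.4],
    [0.1, 0.2, 0.8, 0.9],  # fully chronic profile
]

# Every stage strongly resembles its neighbour...
print([round(similarity(a, b), 2) for a, b in zip(stages, stages[1:])])
# ...but the endpoints no longer resemble each other much.
print(round(similarity(stages[0], stages[-1]), 2))
```

The unifying claim is then carried by the chain of pairwise resemblances rather than by any single feature shared across all stages.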

Moreover, the family resemblance theory motivates the investigation of pain's resemblance relations, which might prove theoretically as well as practically useful. In further developing research projects of this kind, it appears plausible that, for instance, pains that are more similar to each other are more responsive to the same kind of treatment, even though they do not share a necessary and sufficient neural core property. Understanding the gradual transition within the resemblance relations that lead from acute to chronic pain might also offer new possibilities of intervention. Thus, instead of developing a separate theory for different kinds of pain, this third approach motivates the investigation of the diversity of neural profiles that occur within the family of pain and of the exact structure of their resemblance relations, and indeed first steps in this direction are already being taken (e.g. Roy & Wager, 2017).

In sum, when it comes to mental phenomena, such as pain, the underlying neural substrate reaches a complexity and diversity which prevents the identification of necessary and sufficient neural conditions. The family of pain therefore constitutes a frustrating research object. However, we do not need to throw out the baby with the bathwater and relinquish the definition of pain as a certain kind of mental phenomenon, or the idea of a scientifically useful theory of pain. Of course, a family resemblance theory will be limited with respect to its discriminative and predictive value, since it acknowledges that there is no necessary or sufficient neural substrate for pain. However, it is the most reductive theory of pain that can be developed in accordance with recent empirical data, and it can account for the fact that all and only pains are experienced as pain.

References

Apkarian, A. V., Bushnell, M. C., Treede, R.-D., & Zubieta, J.-K. (2005). Human brain mechanisms of pain perception and regulation in health and disease. European Journal of Pain, 9(4), 463–484.

Apkarian, A. V., Baliki, M. N., & Geha, P. Y. (2009). Towards a theory of chronic pain. Progress in Neurobiology, 87(2), 81–97.

Apkarian, A. V., Bushnell, M. C., & Schweinhardt, P. (2013). Representation of pain in the brain. In S. B. McMahon, M. Koltzenburg, I. Tracey, & D. C. Turk (Eds.), Wall and Melzack's Textbook of Pain (6th ed., pp. 111–128). Philadelphia: Elsevier Ltd.

Corns, J. (2016). Pain eliminativism: scientific and traditional. Synthese, 193(9), 2949–2971.

Corns, J. (2017). Introduction: pain research: where we are and why it matters. In J. Corns (Ed.), The Routledge Handbook of Philosophy of Pain (pp. 1–13). London; New York: Routledge.

Davis, K. D., Flor, H., Greely, H. T., Iannetti, G. D., Mackey, S., Ploner, M., Pustilnik, A., Tracey, I., Treede, R.-D., & Wager, T. D. (2018). Brain imaging tests for chronic pain: medical, legal and ethical issues and recommendations. Nature Reviews Neurology, in press.

Forster, C., & Handwerker, H. O. (2014). Central nervous processing of itch and pain. In E. E. Carstens & T. Akiyama (Eds.), Itch: Mechanisms and Treatment (pp. 409–420). Boca Raton (FL): CRC Press/Taylor & Francis.

Hardcastle, V. G. (2015). Perception of pain. In M. Matthen (Ed.), The Oxford Handbook of Philosophy of Perception (pp. 530–542). Oxford: Oxford University Press.

IASP Subcommittee on Classification. (1986). Pain terms: a current list with definitions and notes on usage. Pain, 24(suppl. 1), 215–221.

Kuner, R., & Flor, H. (2016). Structural plasticity and reorganization in chronic pain. Nature Reviews Neuroscience, 18(1), 20–30.

Melzack, R. (2001). Pain and the neuromatrix in the brain. Journal of Dental Education, 65(12), 1378–1382.

Mochizuki, H., & Kakigi, R. (2015). Central mechanisms of itch. Clinical Neurophysiology, 126(9), 1650–1660.

Peyron, R., García-Larrea, L., Grégoire, M. C., Costes, N., Convers, P., Lavenne, F., Maugière, F., Michel, D., & Laurent, B. (1999). Haemodynamic brain responses to acute pain in humans: Sensory and attentional networks. Brain, 122(9), 1765–1779.

Reddan, M. C., & Wager, T. D. (2017). Modeling pain using fMRI: from regions to biomarkers. Neuroscience Bulletin, 34(1), 208–215.

Roy, M., & Wager, T. D. (2017). Neuromatrix theory of pain. In J. Corns (Ed.), The Routledge Handbook of Philosophy of Pain (pp. 87–97). London; New York: Routledge.

Schmidt-Wilcke, T. (2015). Neuroimaging of chronic pain. Best Practice and Research: Clinical Rheumatology, 29(1), 29–41.

Starr, C. J., Sawaki, L., Wittenberg, G. F., Burdette, J. H., Oshiro, Y., Quevedo, A. S., & Coghill, R. C. (2009). Roles of the insular cortex in the modulation of pain: insights from brain lesions. The Journal of Neuroscience, 29(9), 2684–2694.

Tracey, I. (2011). Can neuroimaging studies identify pain endophenotypes in humans? Nature Reviews Neurology, 7(3), 173–181.

Wager, T. D., Atlas, L. Y., Lindquist, M. A., Roy, M., Woo, C.-W., & Kross, E. (2013). An fMRI-based neurologic signature of physical pain. The New England Journal of Medicine, 368(15), 1388–1397.

Wartolowska, K. (2011). How neuroimaging can help us to visualize and quantify pain? European Journal of Pain Supplements, 5(2), 323–327.

Wittgenstein, L. (1953). Philosophical investigations. G. E. M. Anscombe & R. Rhees (Eds.). Oxford: Blackwell Publishing.

How can I credibly commit to others?

Francesca Bonalumi — PhD candidate, Department of Cognitive Science, Central European University

Imagine that you plan to go to the gym with your friend Kate. You decide together to meet in the locker room at 6pm. Why would you expect that Kate will honour this agreement to meet you at the gym? Now, imagine that at 5.30pm you discover that some other friends are gathering at 6pm, and you would love to join them. What restrains you from joining them, even if this is now your preferred option? Your answers to these kinds of dilemmas that are faced in everyday life will probably involve some reference to the fact that a commitment was in place between you and Kate.

The notion of a commitment is worth investigating, in part, because it applies to such a wide variety of cases: we are committed to our partners, our faith, our work, our promises, our goals, and even ourselves. Although there is an obvious similarity between all these situations, I will restrict this post to instances of interpersonal commitment, namely those commitments that are made by one individual to another individual (cf. Clark, 2006). According to a standard philosophical definition of interpersonal commitment, a commitment is a relation among one committed agent, one agent to whom the commitment has been made, and an action which the committed agent is obligated to perform (Searle, 1969; Scanlon, 1998).

The abil­ity to make and assess inter­per­son­al com­mit­ments is cru­cial in sup­port­ing our proso­cial beha­viour: being motiv­ated to com­ply with those courses of action that we have com­mit­ted to, and being able to assess wheth­er we can rely on oth­ers’ com­mit­ments, enables us to per­form a wide range of jointly coordin­ated and inter­per­son­al activ­it­ies that wouldn’t oth­er­wise be feas­ible (Michael & Pacherie, 2015). This abil­ity requires psy­cho­lo­gic­al mech­an­isms that induce indi­vidu­als to fol­low rules or plans even when it is not in their short-term interests: this can sus­tain phe­nom­ena from the inhib­i­tion of short-term self-interested actions to the motiv­a­tion for mor­al beha­viour. I will focus on one key, yet under­ap­pre­ci­ated, aspect of this rela­tion which sus­tains the whole act of com­mit­ting: how the com­mit­ted agent gives assur­ance to the oth­er agent that she will per­form the rel­ev­ant action. That is, how she makes her com­mit­ment cred­ible.

Making a com­mit­ment can be defined as an act that aims to influ­ence anoth­er agent’s beha­viour by chan­ging her expect­a­tions (e.g. my com­mit­ting to help a friend influ­ences my friend’s beha­viour, inso­far as she can now rely on my help), and by this act the com­mit­ter gains addi­tion­al motiv­a­tion for per­form­ing the action that she com­mit­ted to (Nesse 2001; Schelling 1980). The key ele­ment in all of this is cred­ib­il­ity: how do I cred­ibly per­suade someone that I will do some­thing that I wouldn’t do oth­er­wise? And why would I remain motiv­ated to do some­thing that is no longer in my interest to do? Indeed, a dilemma faced by recip­i­ents in any com­mu­nic­at­ive inter­ac­tion is determ­in­ing wheth­er they can rely on the sig­nal of the sender (i.e. how to rule out the pos­sib­il­ity that the sender is send­ing a fake sig­nal) (Sperber et al., 2010). Likewise, in a cooper­at­ive con­text the prob­lem for any agent is how to dis­tin­guish between a cred­ible com­mit­ment and a fake com­mit­ment, and how to sig­nal a cred­ible com­mit­ment without being mis­taken for a defect­or (Schelling, 1980).

The most per­suas­ive way to make my com­mit­ment cred­ible is to dis­card altern­at­ive options in order to change my future incent­ives, such that com­pli­ance with my com­mit­ments will remain in my best interests (or be my only pos­sible choice). Odysseus instruct­ing his crew to tie him to the mast of the ves­sel and to ignore his future orders is one strong example of com­mit­ting to res­ist the Sirens’ call in this man­ner; avoid­ing cof­fee while try­ing to quit smoking (when hav­ing a cigar­ette after a cof­fee was a well-established habit) is anoth­er example.

How can we persuade others that our commitments are credible when incentives are less tangible, and alternative options cannot be completely removed? Consider a marriage, in which both partners rely on the fact that the other will remain faithful even if future incentives change. Emotions might be one way of signalling my willingness to guarantee the execution of the commitment (Frank 1988; Hirshleifer 2001). If two individuals decide to commit to a relationship, the emotional ties that they form ensure that neither will reconsider the costs and benefits of the relationship[1]. Likewise, if, during a fight, one individual displays uncontrollable rage, she is giving her audience reason to believe that she won’t give up the fight even if continuing to fight is to her disadvantage. One reason that emotions are taken to be credible is that they are allegedly hard to fake convincingly: some studies suggest that humans are intuitively able to recognize the appropriate emotions when observing a face (Elfenbein & Ambady, 2002), and to some extent humans are able to effectively discriminate between genuine and fake emotional expressions (Ekman, Davidson, & Friesen, 1990; Song, Over, & Carpenter, 2016).

Formalising a com­mit­ment by mak­ing prom­ises, oaths or vows is anoth­er way of increas­ing the cred­ib­il­ity of your com­mit­ment. Interestingly, with such form­al­ised declar­a­tions people not only mani­fest an emo­tion­al attach­ment to the object of the com­mit­ment; they also sig­nal a will­ing­ness to put their repu­ta­tion at risk. This is because the more pub­lic the com­mit­ment is (and the more people are aware of the com­mit­ment), the high­er the repu­ta­tion­al stakes will be for the com­mit­ted indi­vidu­al.

These ways of securing a commitment, whether by altering your incentives, by risking your reputation, or by expressing it via emotional displays, are importantly similar: the original set of material payoffs for performing each action changes, because now the costs of smoking or untying yourself from the mast of a vessel are too high (if it is even still possible to pay these costs). But we can imagine the emotional costs paid in case of a failure (e.g. the disappointment from slipping back into our undesirable habit of smoking), as well as the social costs (e.g. damage to our reputation as a reliable individual), as incentives to comply with the action that was committed to (Fessler & Quintelier 2014).

 

 

                          Cheating               Non-cheating
Before the commitment     p                      -p
After the commitment      p - (m + r + e)        -p

Fig. 1 Payoff matrix for the decision to cheat on your partner: p is the pleasure you get out of cheating, whereas m is the material cost paid in such cases (e.g. a costly divorce), r is the reputational cost, and e is the emotional burden that will be paid. When p is not higher than the sum of m, r, and e, and the individual accurately predicts the likelihood of these outcomes, we’ll have a situation in which breaking a commitment is not worthwhile.
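
To make that comparison concrete, here is a minimal sketch in Python of the two rows of Fig. 1. The numerical values of p, m, r and e are invented purely for illustration; the post itself reports no numbers.

```python
def payoffs(p, m=0.0, r=0.0, e=0.0):
    """Return the (cheating, non-cheating) payoffs from Fig. 1.

    p is the pleasure gained from cheating; m, r and e are the material,
    reputational and emotional costs a committed individual pays for
    cheating (all zero before any commitment is made).
    """
    return p - (m + r + e), -p

# Invented illustrative values -- the post itself reports no numbers.
p, m, r, e = 2.0, 3.0, 2.0, 4.0

cheat_before, faithful_before = payoffs(p)          # before the commitment
cheat_after, faithful_after = payoffs(p, m, r, e)   # after the commitment

print("Before:", cheat_before, "vs", faithful_before)   # 2.0 vs -2.0
print("After: ", cheat_after, "vs", faithful_after)     # -7.0 vs -2.0
print("Cheating still pays after committing?", cheat_after > faithful_after)  # False
```

With these toy numbers, cheating pays before the commitment but not after it, which is exactly the switch that the added costs m, r and e are meant to produce.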

 

Consistent with the idea that commitments change your payoff matrix (see Fig. 1), several studies have shown that commitments facilitate coordination and cooperation in multiple economic games. Promises were found to increase an agent’s trustworthy behaviour as well as her partner’s predictions about her behaviour in a trust game (Charness and Dufwenberg 2006), and they were found to increase one’s rate of donations in a dictator game (Sally 1995; Vanberg 2008). Spontaneous promises have also been found to be predictive of cooperative choices in a Prisoner’s Dilemma game (Belot, Bhaskar & Van de Ven 2010). The willingness to be bound to a specific course of action (e.g. as Odysseus was) has also been found to be highly beneficial in Hawk-Dove and Battle of the Sexes games, as committed players are more likely to obtain their preferred outcomes (Barclay 2017).

Interestingly, the payoff structure that an agent faces when making a commitment is similar to the payoff structure of a threat: if you are involved in a drivers’ game of chicken, the outcome you want is the one in which you don’t swerve. But your partner prefers the outcome in which she does not swerve, and the worst outcome would be the one in which the two cars crash because neither of you swerved. The key factor is, again, whether you can credibly signal to the other driver that you won’t turn the wheel, no matter what.

Some of the same means by which cred­ib­il­ity can be con­veyed in cases com­mit­ment apply to threats as well. For instance, one effic­a­cious way by which you can cred­ibly per­suade the oth­er driver is by remov­ing the steer­ing wheel and throw­ing it out of the win­dow, thereby phys­ic­ally pre­vent­ing your­self from chan­ging the dir­ec­tion of your car (Kahn 1965); anoth­er is by play­ing a war of nerves, con­vey­ing the idea that you are so emo­tion­ally con­nec­ted to your goal that you would be will­ing to pay the highest cost if neces­sary.
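
A toy payoff matrix makes this logic explicit. The sketch below, in Python, uses made-up payoffs that respect the standard chicken-game ordering (a crash is worst; “winning” by going straight while the other swerves is best) and shows that once one driver has visibly discarded the steering wheel, the other driver’s best response is to swerve.

```python
# Chicken: each driver chooses "swerve" or "straight". Made-up ordinal
# payoffs with the standard ordering: crashing is worst, "winning" by
# going straight while the other swerves is best.
PAYOFFS = {  # (my move, other driver's move) -> my payoff
    ("swerve", "swerve"): 0,
    ("swerve", "straight"): -1,
    ("straight", "swerve"): 1,
    ("straight", "straight"): -10,
}

def best_response(other_move):
    """Return my payoff-maximising move, given the other driver's move."""
    return max(("swerve", "straight"), key=lambda mine: PAYOFFS[(mine, other_move)])

# A driver who has thrown away the steering wheel can only go straight,
# so a credible "no swerving" signal makes swerving my best response.
print(best_response("straight"))  # swerve
# Against a driver who may still swerve, going straight is the tempting choice.
print(best_response("swerve"))    # straight
```

Removing the wheel works precisely because it makes the “I will not swerve” signal impossible to fake.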

Threat is an interesting phenomenon to consider when investigating the role of credibility in commitment because it might help us to understand how commitment works, and how threat and commitment might have evolved in a similar fashion. What leads a non-human animal to credibly signal an intention to behave in a certain way to its audience, and what leads its audience to rely on this signal, is highly relevant for investigating commitment. It is still uncertain just how threat signals have stabilized evolutionarily, given that a selective pressure for faking the threat would also be evolutionarily advantageous (Adams & Mesterton-Gibbons 1995). The same selective pressures apply to human threats and commitments: if the goal is to signal future compliance with an action in order to change the audience’s behaviour (by changing her expectations), what motivates us to then comply with that signal instead of, say, simply taking advantage of the change in our audience’s behaviour?

In oth­er words, the phe­nomen­on of com­mit­ment is intrins­ic­ally tied to the prob­lem of recog­nising (and maybe even pro­du­cing) fake sig­nals, and deceiv­ing oth­ers, just as in the case of mak­ing a threat. That being said, it is worth keep­ing in mind that the phe­nomen­on of threat dif­fers import­antly from the phe­nomen­on of com­mit­ment, inso­far as the former does not entail any motiv­a­tion for proso­cial beha­viour. In this respect, the phe­nom­ena of quiet calls and nat­al attrac­tion, in which anim­als sig­nal poten­tial cooper­a­tion or a dis­pos­i­tion not to engage in a fight, are also worth invest­ig­at­ing fur­ther for the sake of bet­ter under­stand­ing how cred­ib­il­ity can be estab­lished in the case of com­mit­ment (Silk 2001).

Most of our social life is built upon commitments that are either implicit or explicitly expressed. We expect people to do things even in the absence of a verbal agreement to do so, and we act in accordance with these expectations. Investigating the factors that carry this motivational force, such as credibility, is the next big challenge in better grasping the complexities of this important notion, and doing so would help us to better understand its ontogenetic and phylogenetic development.

 

REFERENCES

Adams, E. S., & Mesterton-Gibbons, M. (1995). The cost of threat dis­plays and the sta­bil­ity of decept­ive com­mu­nic­a­tion. Journal of Theoretical Biology, 175(4), 405–421.

Barclay, P. (2017). Bidding to Commit. Evolutionary Psychology, 15(1), 1474704917690740.

Belot, M., Bhaskar, V., & van de Ven, J. (2010). Promises and cooper­a­tion: Evidence from a TV game show. Journal of Economic Behavior & Organization, 73(3), 396–405.

Charness, G., & Dufwenberg, M. (2006). Promises and Partnership. Econometrica, 74, 1579–1601.

Clark, H. H. (2006). Social actions, social com­mit­ments. In S.C. Levinson, N.J. Enfield (Eds.), Roots of human social­ity: Culture, cog­ni­tion and inter­ac­tion, (pp. 126–150). New York: Bloomsbury.

Ekman, P., Davidson, R. J., & Friesen, W. V. (1990). The Duchenne smile: Emotional expres­sion and brain physiology: II. Journal of Personality and Social Psychology, 58(2), 342–353.

Elfenbein, H. A., & Ambady, N. (2002). On the uni­ver­sal­ity and cul­tur­al spe­cificity of emo­tion recog­ni­tion: A meta-analysis. Psychological Bulletin, 128(2), 203–235.

Fessler, D. M. T., & Quintelier, K. (2014). Suicide Bombers, Weddings, and Prison Tattoos. An Evolutionary Perspective on Subjective Commitment and Objective Commitment. In R. Joyce, K. Sterelny, & B. Calcott (Eds.), Cooperation and its evol­u­tion (pp. 459–484). Cambridge, MA: The MIT Press.

Frank, R. H. (1988). Passions within reason: The strategy of the emotions. New York, NY: W.W. Norton & Company.

Hirshleifer, J. (2001). On the Emotions as Guarantors of Threats and Promises. In The Dark Side of the Force: Economic Foundations of Conflict Theory (pp. 198–219). Cambridge: Cambridge University Press.

Kahn, H. (1965). On Escalation: Metaphors and Scenarios. New York, NY: Praeger Publ. Co.

Michael, J., & Pacherie, E. (2015). On Commitments and Other Uncertainty Reduction Tools in Joint Action. Journal of Social Ontology, 1(1).

Nesse, R. M. (2001). Natural Selection and the Capacity for Subjective Commitment. In R. M. Nesse (Ed.), Evolution and the Capacity for Commitment (pp. 1–43). New York, NY: Russell Sage Foundation.

Sally, D. (1995). Conversation and cooperation in social dilemmas: A meta-analysis of experiments from 1958 to 1992. Rationality and Society, 7(1), 58–92.

Scanlon, T. M. (1998). What We Owe to Each Other. Cambridge, MA: Harvard University Press.

Schelling, T. C. (1980). The Strategy of Conflict. Cambridge, MA: Harvard University Press.

Searle, J. R. (1969). Speech Acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.

Silk, J. B. (2001). Grunts, Girneys, and Good Intentions: The Origins of Strategic Commitment in Nonhuman Primates. In R. M. Nesse (Ed.), Evolution and the Capacity for Commitment (pp. 138–157). New York, NY: Russell Sage Foundation.

Song, R., Over, H., & Carpenter, M. (2016). Young chil­dren dis­crim­in­ate genu­ine from fake smiles and expect people dis­play­ing genu­ine smiles to be more proso­cial. Evolution and Human Behavior, 37(6), 490–501.

Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigil­ance. Mind and Language, 25(4), 359–93.

Vanberg, C. (2008). Why Do People Keep Their Promises? An Experimental Test of Two Explanations. Econometrica, 76(6), 1467–1480.

 

[1] Indeed, mar­riage itself may be a way of increas­ing the like­li­hood that a com­mit­ment will be respec­ted in the future. This is because form­al­ising the rela­tion­ship in this man­ner increases the exit costs of a rela­tion­ship.

Is the Future More Valuable than the Past?

Alison Fernandes — Post-Doctoral Fellow on the AHRC pro­ject ‘Time: Between Metaphysics and Psychology’, Department of Philosophy, University of Warwick

 

We dif­fer markedly in our atti­tudes towards the future and past. We look for­ward in anti­cip­a­tion to tonight’s tasty meal or next month’s sunny hol­i­day. While we might fondly remem­ber these pleas­ant exper­i­ences, we don’t hap­pily anti­cip­ate them once they’re over. Conversely, while we might dread the meet­ing tomor­row, or doing this year’s taxes, we feel a dis­tinct sort of relief when they’re done. We seem to also prefer pleas­ant exper­i­ences to be in the future, and unpleas­ant exper­i­ences to be in the past. While we can’t swap tomorrow’s meet­ing and make it have happened yes­ter­day, we might prefer that it had happened yes­ter­day and was over and done with.

Asymmetries like these in how we care about the past and future can seem to make a lot of sense. After all, what’s done is done, and can’t be changed. Surely we’re right to focus our care, effort and atten­tion on what’s to come. But do we some­times go too far in valu­ing past and future events dif­fer­ently? In this post I’ll con­sider one par­tic­u­lar tem­por­al asym­metry of value that doesn’t look so ration­al, and how its appar­ent irra­tion­al­ity speaks against cer­tain meta­phys­ic­al ways of explain­ing the asym­metry.

Eugene Caruso, Daniel Gilbert, and Timothy Wilson (2008) investigated a temporal asymmetry in how we value past and future events. Suppose that I ask you how much compensation would be fair to receive for undertaking 5 hours of data entry work. The answer that you give seems to depend crucially on when the work is described as taking place. Subjects judged that they should receive 101% more money if the work is described as taking place one month in the future ($125.04 USD on average), compared to one month in the past ($62.20 USD on average). Even for purely hypothetical scenarios, where no one actually expects the work to take place, we judge future work to be worth much more than past work.
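
As a quick sanity check, the two reported means do reproduce the 101% premium. Here is a minimal Python sketch that uses only the numbers given above (from Caruso et al., 2008):

```python
# The two means reported in the post (Caruso et al., 2008).
future_mean = 125.04  # fair compensation (USD) for 5 hours of future work
past_mean = 62.20     # fair compensation (USD) for 5 hours of past work

premium = (future_mean - past_mean) / past_mean
print(f"Future work is valued {premium:.0%} more than past work")  # -> 101%
```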

This value asymmetry appears in other scenarios as well (Caruso et al., 2008). Say your friend is letting you borrow their vacation home for a week. How expensive a bottle of wine do you buy as a thank-you gift? If the holiday is described as taking place in the future, subjects select wine that is 37% more expensive. Suppose that you help your neighbour move. What would be an appropriate thank-you gift for you to receive? Subjects judge that they should receive a bottle of wine that is 71% more expensive for help given in the future, compared to the past. Say you’re awarding damages for the suffering of an accident victim. Subjects judge that victims should be awarded 42% more compensation when they imagine their suffering as taking place in the future, compared to the past.

Philosophers like Craig Callender (2017) have become increas­ingly inter­ested in the value asym­metry stud­ied by Caruso and his col­leagues. This is partly because there has been a long his­tory of using asym­met­ries in how we care about past and future events to argue for par­tic­u­lar meta­phys­ic­al views about time (Prior, 1959). For example, say you hold a ‘grow­ing block’ view of time, accord­ing to which the present and past exist (and are there­fore ‘fixed’) while future events are not yet real (so the future is unsettled and ‘open’). One might argue that a meta­phys­ic­al pic­ture with an open future like this is needed to make sense of why we care about future events more than past events. If past events are fixed, they’re not worth spend­ing our time over—so we value them less. But because future events are up for grabs, we reas­on­ably place great­er value in them in the present.

Can one argue from the value asym­metry Caruso and his team stud­ied, to a meta­phys­ic­al view about time? Much depends on what fea­tures the asym­metry has, and how these might be explained. When it comes to explain­ing the tem­por­al value asym­metry, Caruso and his team dis­covered that it is closely aligned to anoth­er asym­metry: a tem­por­al emo­tion­al asym­metry. More spe­cific­ally, we tend to feel stronger emo­tions when con­tem­plat­ing future events, com­pared to con­tem­plat­ing past events.

These asym­met­ries are cor­rel­ated in such a way as to sug­gest the emo­tion­al asym­metry is a cause of the value asym­metry. Part of the evid­ence comes from the fact that the emo­tion­al and value asym­metry share oth­er fea­tures in com­mon. For example, we tend to feel stronger emo­tions when con­tem­plat­ing our own mis­for­tunes, or those of oth­ers close to us, than we do con­tem­plat­ing the mis­for­tune of strangers. The value asym­metry shares this fea­ture. It is also much more strongly pro­nounced for events that con­cern one­self, com­pared to oth­ers. Subjects judge their own 5 hours of data entry work to be worth nearly twice as much money when it takes place in the future, com­pared to the past. But they judge the equi­val­ent work of a stranger to be worth sim­il­ar amounts of money, inde­pend­ently of wheth­er the work is described as tak­ing place in the future or in the past.

The same fea­tures that point towards an emo­tion­al explan­a­tion of the value asym­metry also point away from a meta­phys­ic­al explan­a­tion. The value asym­metry is, in a cer­tain sense, ‘perspectival’—it is strongest con­cern­ing one­self. But if meta­phys­ic­al facts were to explain why future events were more valu­able than past ones, it would make little sense for the asym­metry to be per­spectiv­al. After all, on meta­phys­ic­al views of time like the grow­ing block view, events are either future or not. If future events being ‘open’ is to explain why we value them more, the asym­metry in value shouldn’t depend on wheth­er they con­cern one­self or oth­ers. Future events are not only open when they con­cern me – they are also open when they con­cern you. So the meta­phys­ic­al explan­a­tion of the value asym­metry does not look prom­ising.

If we instead explain the value asym­metry by appeal to an emo­tion­al asym­metry, we can also trace the value asym­metry back to fur­ther asym­met­ries. Philosophers and psy­cho­lo­gists have giv­en evol­u­tion­ary explan­a­tions of why we feel stronger emo­tions towards future events than past events (Maclaurin & Dyke, 2002; van Boven & Ashworth, 2007). Emotions help focus our ener­gies and atten­tion. If we gen­er­ally need to align our efforts and atten­tion towards the future (which we can con­trol) rather than being overly con­cerned with the past (which we can’t do any­thing about), then it makes sense that we’re geared to feel stronger emo­tions when con­tem­plat­ing future events than past ones. Note that this evol­u­tion­ary explan­a­tion requires that our emo­tion­al responses to future and past events ‘overgen­er­al­ise’. Even when we’re asked about future events we can’t con­trol, or purely hypo­thet­ic­al future events, we still feel more strongly about them than com­par­at­ive past events, because feel­ing more strongly about the future in gen­er­al is so use­ful when the future events are ones that we can con­trol.

A final nail in the coffin for a meta­phys­ic­al explan­a­tion of the value asym­metry comes from think­ing about wheth­er sub­jects take the value asym­metry to be ration­al. I began with some examples of asym­met­ries that do seem ration­al. It seems ration­al to prefer past pains to future ones, and to feel relief when unpleas­ant exper­i­ences are over. Whether asym­met­ries like these are in fact ration­al is a top­ic of con­tro­versy in philo­sophy (Sullivan, forth.; Dougherty, 2015). Regardless, there is strong evid­ence that the value asym­metry that Caruso stud­ied is taken to be irra­tion­al, even by sub­jects whose judge­ments dis­play the asym­metry.

The methodology Caruso used involved ‘counterbalancing’: some subjects were asked about the future event first, some were asked about the past event first. When the results within any single group were considered, no value asymmetry was found. That is, when you ask a single person how they value an event (say, using a friend’s vacation home for a week), they think its value now shouldn’t depend on whether the event is in the past or the future. It is only when you compare results across the two groups that the asymmetry emerges (see Table 1). It’s as if we apply a consistency judgement and think that future and past events should be worth the same. But when we can’t make the comparison, we value them differently. This strongly suggests that the asymmetry is not being driven by a conscious judgement that the future really is worth more than the past, or by a metaphysical picture according to which it is. If it were, we would expect the asymmetry to be more pronounced when subjects were asked about both the past and the future. Instead, the asymmetry disappears.

 

                                      Order of evaluation
Use of a friend’s vacation home       Past event first      Future event first
Past event                            $89.17                $129.06
Future event                          $91.73                $121.98

Table 1: Average amount of money (USD) that sub­jects judge they would spend on a thank you gift for using a friend’s vaca­tion home in the past or future (Caruso et al., 2008).
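
To see how the between-group comparison produces the asymmetry while the within-group comparison does not, here is a minimal Python sketch that simply recomputes the differences from the four Table 1 means; nothing beyond those reported values is assumed.

```python
# Mean gift spending (USD) from Table 1 (Caruso et al., 2008), keyed by the
# order condition and by which event is being valued.
table = {
    "past_first": {"past": 89.17, "future": 91.73},
    "future_first": {"past": 129.06, "future": 121.98},
}

# Within each group, past and future valuations are nearly identical.
for order, values in table.items():
    print(order, "future - past =", round(values["future"] - values["past"], 2))

# The asymmetry only shows up between groups, comparing each group's
# first-asked event: future-first subjects value the (future) stay far more
# than past-first subjects value the (past) stay.
between = table["future_first"]["future"] - table["past_first"]["past"]
print("between-group difference =", round(between, 2))  # 32.81
```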

 

Investigations into how tem­por­al asym­met­ries in value arise are allow­ing philo­soph­ers and psy­cho­lo­gists to build up a much more detailed pic­ture of how we think about time. It can seem intu­it­ive to think of the past as fixed, and the future as open. Such intu­itions have long been used to sup­port cer­tain meta­phys­ic­al views about time. But, while meta­phys­ic­al views might seem to ration­al­ise asym­met­ries in our atti­tudes, their actu­al explan­a­tion seems to lie else­where, in much deep­er evolution-driven responses. We may even be adopt­ing meta­phys­ic­al views as ration­al­isers of our much more basic emo­tion­al responses. If this is right, the value asym­metry not only provides a case study of how we can get by explain­ing asym­met­ric fea­tures of our exper­i­ence without appeal to meta­phys­ics. It sug­gests that psy­cho­logy can help explain why we’re so temp­ted towards cer­tain meta­phys­ic­al views in the first place.

 

REFERENCES

Callender, Craig. 2017. What Makes Time Special. Oxford: Oxford University Press.

Caruso, Eugene M., Gilbert, D. T., and Wilson, T. D. 2008. A wrinkle in time: Asymmetric valuation of past and future events. Psychological Science 19(8): 796–801.

Dougherty, Tom. 2015. Future-Bias and Practical Reason. Philosophers’ Imprint. 15(30): 1−16.

Maclaurin, James & Dyke, Heather. 2002. ‘Thank Goodness That’s Over’: The Evolutionary Story. Ratio 15 (3): 276–292.

Prior, Arthur. N. 1959. Thank Goodness That’s Over. Philosophy. 34(128): 12−17.

Sullivan, Meghan. forth. Time Biases: A Theory of Rational Planning and Personal Persistence. New York: Oxford University Press.

Van Boven, Leaf & Ashworth, Laurence. 2007. Looking Forward, Looking Back: Anticipation Is More Evocative Than Retrospection. Journal of Experimental Psychology. 136(2): 289–300.

What hand gestures tell us about the evolution of language

Suzanne Aussems — Post-Doctoral Fellow/Early Career Fellow, Language & Learning Group, Department of Psychology, University of Warwick

Imagine that you are visiting a food market abroad and you want to buy a slice of cake. You know how to say “hello” in the native language, but otherwise your knowledge of the language is limited. When it is your turn to order, you greet the vendor and point at the cake of your choice. The vendor then places his knife on the cake and looks at you to see if you approve of the size of the slice. You quickly shake both of your hands and indicate with your thumb and index finger that you would like a thinner slice. The vendor then cuts a smaller piece for you and you happily pay for your cake. In this example, you achieved successful communication with the help of three gestures: a pointing gesture, a conventional gesture, and an iconic gesture.

As humans, we are the only species that engages in the communication of complex and abstract ideas. This abstractness is present even in a seemingly simple example such as indicating the size of the slice of cake you desire. After all, size concepts such as ‘small’ and ‘large’ are learnt during development. What makes this sort of communication possible are the language and gestures that we have at our disposal. How is it that we came to develop language when other animals did not, and what is the role of gesture in this? In this blog post, I introduce one historically dominant theory about the origins of human language: the gesture-primacy hypothesis (see Hewes, 1999, for a historical overview).

According to the gesture-primacy hypothesis, humans first communicated in a symbolic way using gesture (i.e. movements of the hands and body that express meaning). Symbolic gestures are, for example, pantomimes that signify actions (e.g., shooting an arrow) or emblems (e.g., raising an index finger to your lips to indicate “be quiet”) that facilitate social interactions (McNeill, 1992; 2000). The gesture-primacy hypothesis suggests that spoken language emerged through adaptation of gestural communication (Corballis, 2002; Hewes, 1999). Central to this view is the idea that gesture and speech emerged sequentially.

Much of the evid­ence in favour of the gesture-primacy hypo­thes­is comes from stud­ies on non­hu­man prim­ates and great apes. Within each mon­key or ape spe­cies, indi­vidu­als seem to have the same basic vocal rep­er­toire. For instance, indi­vidu­als raised in isol­a­tion and indi­vidu­als raised by anoth­er spe­cies still pro­duce calls that are typ­ic­al for their own spe­cies, but not calls that are typ­ic­al for the foster spe­cies (Tomasello, 2008, p. 16). This sug­gests that these vocal calls are not learned, but are innate in non­hu­man prim­ates and great apes. Researchers believe that con­trolled, com­plex verbal com­mu­nic­a­tion (such as that found in humans) could not have evolved from these lim­ited innate com­mu­nic­at­ive rep­er­toires (Kendon, 2017). This line of think­ing is partly con­firmed by failed attempts to teach apes how to speak, and failed attempts to teach them to pro­duce their own calls on com­mand (Tomasello, 2008, p. 17).

However, the repertoire of ape gestures seems to vary much more across individuals than the vocal repertoire does (Pollick & de Waal, 2007), and researchers have succeeded in teaching chimpanzees symbolic manual gestures derived from American Sign Language (Gardner & Gardner, 1969). Moreover, bonobos have been observed to use gestures to communicate more flexibly than they can use calls (Pollick & de Waal, 2007). The degree of flexibility in the production and understanding of gestures, especially in great apes, makes this communicative tool seem a more plausible medium through which language could have first emerged than vocalisation.

In this regard, it is not­able that great apes that have been raised by humans point at food, objects, or toys they desire. For example, some human-raised apes point to a locked door when they want access to what’s behind it, so that the human will open it for them (Tomasello, 2008). It is thus clear that human-raised apes under­stand that humans can be led to act in bene­fi­cial ways via attention-directing com­mu­nic­at­ive ges­tures. Admittedly, there does seem to be an import­ant type of point­ing that apes seem incap­able of; namely, declar­at­ive point­ing (i.e., point­ing for the sake of shar­ing atten­tion, rather than merely dir­ect­ing atten­tion) (Kendon, 2017). Nonetheless, ges­ture seems to be a flex­ible and effect­ive com­mu­nic­at­ive medi­um that is avail­able to non-human prim­ates. This fact, and the fact that vocal­isa­tions seem to be rel­at­ively inflex­ible in these spe­cies, play a sig­ni­fic­ant role in mak­ing the gesture-primacy hypo­thes­is a com­pel­ling the­ory for the ori­gins of human lan­guage.

What about human evidence that might support the gesture-primacy hypothesis? Studies on the emergence of speech and gesture in human infants show that babies produce pointing gestures before they produce their first words (Butterworth, 2003). Shortly after their first birthday, when most children have already started to produce some words, they produce combinations of pointing gestures (point at bird) and one-word utterances (“eat”). These gesture and speech combinations appear roughly three months before children produce two-word utterances (“bird eats”). From an ontogenetic standpoint, then, referential behaviour appears in pointing gestures before it appears in speech. Many researchers therefore consider gesture to pave the way for early language development in babies (Butterworth, 2003; Iverson & Goldin-Meadow, 2005).

Further evid­ence con­cerns the spon­tan­eous emer­gence of sign lan­guage in deaf com­munit­ies (Senghas, Kita, & Özyürek, 2004). When sign lan­guage is passed on to new gen­er­a­tions, chil­dren use rich­er and more com­plex struc­tures than adults from the pre­vi­ous gen­er­a­tion, and so they build upon the exist­ing sign lan­guage. This phe­nomen­on has led some research­ers to believe that the devel­op­ment of sign lan­guage over gen­er­a­tions could be used as a mod­el for the evol­u­tion of human lan­guage more gen­er­ally (Senghas, Kita, & Özyürek, 2004). The fact that deaf com­munit­ies spon­tan­eously devel­op fully func­tion­al lan­guages using their hands, face, and body, fur­ther sup­ports the gesture-primacy hypo­thes­is.

Converging evid­ence also comes from the field of neur­os­cience. Xu and col­leagues (2009) used func­tion­al MRI to invest­ig­ate wheth­er sym­bol­ic ges­ture and spoken lan­guage are pro­cessed by the same sys­tem in the human brain. They showed par­ti­cipants mean­ing­ful ges­tures, and the spoken lan­guage equi­val­ent of these ges­tures. The same spe­cif­ic areas in the left side of the brain lit up for map­ping sym­bol­ic ges­tures and spoken words onto com­mon, cor­res­pond­ing con­cep­tu­al rep­res­ent­a­tions. Their find­ings sug­gest that the core of the brain’s lan­guage sys­tem is not exclus­ively used for lan­guage pro­cessing, but func­tions as a modality-independent semi­ot­ic sys­tem that plays a broad­er role in human com­mu­nic­a­tion, link­ing mean­ing with sym­bols wheth­er these are spoken words or sym­bol­ic ges­tures.

In this post, I have dis­cussed com­pel­ling evid­ence in sup­port of the gesture-primacy hypo­thes­is. An intriguing ques­tion that remains unanswered is why our closest evol­u­tion­ary rel­at­ives, chim­pan­zees and bonobos, can flex­ibly use ges­ture, but not speech, for com­mu­nic­a­tion. Further com­par­at­ive stud­ies could shed light on the evol­u­tion­ary his­tory of the rela­tion between ges­ture and speech. One thing is cer­tain: ges­ture plays an import­ant com­mu­nic­at­ive role in our every­day lives, and fur­ther study­ing the phylo­geny and onto­geny of ges­ture is import­ant for under­stand­ing how human lan­guage emerged. And it may also come in handy when order­ing some cake on your next hol­i­day!

 

REFERENCES

Butterworth, G. (2003). Pointing is the roy­al road to lan­guage for babies. In S. Kita (Ed.) Pointing: Where Language, Culture, and Cognition Meet (pp. 9–34). Mahwah, NJ: Lawrence Erlbaum Associates.

Corballis, M. C. (2002). From hand to mouth: The ori­gins of lan­guage. Princeton, NJ: Princeton University Press.

Gardner, R. A., & Gardner, B. (1969). Teaching sign lan­guage to a chim­pan­zee. Science, 165, 664–672.

Hewes, G. (1999). A his­tory of the study of lan­guage ori­gins and the ges­tur­al primacy hypo­thes­is. In: A. Lock, & C.R. Peters (Eds.), Handbook of human sym­bol­ic evol­u­tion (pp. 571–595). Oxford, UK: Oxford University Press, Clarendon Press.

Iverson, J. M., & Goldin-Meadow, S. (2005). Gesture paves the way for language development. Psychological Science, 16(5), 367–371. Doi: 10.1111/j.0956-7976.2005.01542.x

Kendon, A. (2017). Reflections on the “gesture-first” hypothesis of language origins. Psychonomic Bulletin & Review, 24(1), 163–170. Doi: 10.3758/s13423-016-1117-3

McNeill, D. (1992). Hand and mind. Chicago, IL: Chicago University Press.

McNeill, D. (Ed.). (2000). Language and ges­ture. Cambridge, UK: Cambridge University Press.

Pollick, A., & de Waal, F. (2007). Ape ges­tures and lan­guage evol­u­tion. PNAS, 104(19), 8184–8189. Doi: 10.1073/pnas.0702624104

Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305(5691), 1779–1782. Doi: 10.1126/science.1100199

Tomasello, M. (2008). The ori­gins of human com­mu­nic­a­tion. Cambridge, MA: MIT Press.

Xu, J., Gannon, P. J., Emmorey, K., Smith, J. F., & Braun, A. R. (2009). Symbolic ges­tures and spoken lan­guage are pro­cessed by a com­mon neur­al sys­tem. PNAS, 106(49), 20664–20669. Doi: 10.1073/pnas.0909197106