Reading Minds & Reading Maps

Do non-human animals know they’re not alone? Of course they must know there are lots of things in the world around them – rocks, water, trees, other creatures and what have you. But do they know that they inhabit a world populated by minded creatures – that the animals around them see and know things, that they have beliefs, intentions and desires? Can they attribute mental states to other animals, and use those attributions to predict or explain their behaviour? If so, then they’re what philosophers and psychologists call ‘mindreaders’.

Whether animals are mindreaders has been a contested question in comparative cognition for around forty years (beginning with Premack & Woodruff, 1978), and it remains controversial. My interest in this post is not so much whether animals are mindreaders but rather, if animals are mindreaders, what kind of mindreaders might they be? The motivating thought is this: even if animals do represent and reason about the mental states of others, their understanding of those mental states might be somewhat different from ours.

The idea that animals might have a limited or ‘minimal’ understanding of mental states has been explored in a number of places (see, for instance, Bermúdez, 2011; Butterfill & Apperly, 2013; Call & Tomasello, 2008). These proposals differ, but they have in common the idea that animals don’t construe mental states as representations – that is, as states which represent the world, and which can do so accurately or inaccurately. If these proposals are right, animals might be able to represent others as having factive mental states like seeing or knowing, but would not be able to make sense of another agent having a false belief, or any state that misrepresents the world.

Recent work on mindreading in chimpanzees puts pressure on this sort of proposal. Christopher Krupenye and colleagues (Krupenye, Kano, Hirata, Call, & Tomasello, 2016) found that chimpanzees were able to predict the behaviour of a human with a false belief. That finding is not uncontroversial (see Andrews, 2018 for discussion), but for the sake of argument let’s say that it is indeed evidence that chimps understand false beliefs, as states that misrepresent the world. Does that mean that chimps’ understanding of mental states is essentially the same as our own?

I’ve argued that it doesn’t. That’s because there are important ways in which mindreaders might differ from one another, even if they represent mental states as representational. To see that, let’s think a bit more about representations. A representation has a content – how it represents the world as being – which can be accurate or inaccurate. The sentence ‘Santa is in the chimney’ is a representation whose content is that Santa is in the chimney. It’s accurate if Santa is in the chimney, and inaccurate if he’s somewhere else. But a representation also has a format – it exploits a particular representational system in order to represent what it represents. ‘Santa is in the chimney’ is a representation with a sentential, linguistic format. But we could represent the same content in a number of other formats. For instance, we might represent it pictorially by drawing Santa in the chimney, as in Figure 1. Or we might draw up a map representing the same thing, as in Figure 2.

Given that representations may differ with respect to the representational format they exploit, mindreaders might differ with respect to the representational format they take mental states to have. Some might treat beliefs as something like ‘sentences in the head’. Others might treat them as more picture-like. Still others might be what I’ve called ‘mindmappers’ (Boyle, 2019) – they might take literally the idea that a belief is a ‘map of the neighbouring space by which we steer’ (Ramsey, 1931).

This matters, because the representational format one takes mental states to have has a significant impact on one’s mindreading abilities: different representational formats themselves differ from one another in systematic ways.

Take maps. As I’m using the term, a map makes use of a lexicon of icons, each of which stands for a particular (type of) thing, and combines them according to the principle of spatial isomorphism. Simply put, by placing two icons in a particular spatial relationship on a map, one thereby represents that the two things denoted by the icons stand in an isomorphic spatial relationship in reality. That’s all there is to it.

If you want to represent the spatial layout of a number of objects in a particular region of space, there are lots of advantages to using a map: it’s a very natural and user-friendly way to represent that kind of information. A single map can contain an awful lot of information about the spatial layout of a region. To convey the content of a map in language would usually require a large and unwieldy set of sentences (or a very lengthy sentence). And updating information in a map without introducing inconsistency is easy to do. Updating the represented location of an object by moving an icon thereby also updates the represented relationships between that object and everything else on the map, keeping the whole consistent. If one represented all of this spatial information sententially, it would be easy to introduce inconsistencies. (See Camp, 2007 for a fuller discussion of maps’ representational features.)
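Purely as an illustration (the names and toy structures here are my own, not anything from the mindreading literature), the contrast can be sketched in code. A toy ‘map’ stores only the positions of icons; spatial relations are derived from those positions rather than stored, so moving one icon updates every relation it enters into at a stroke. A ‘sentential’ store, which lists relation facts one by one, has to be edited fact by fact after the same change, or it becomes inconsistent.

```python
# Toy 'map': a lexicon of icons, each stored only with a position.
# Spatial relations are never stored directly; they are read off the
# positions, so the map cannot represent inconsistent relations.
spatial_map = {"santa": (0, 5), "chimney": (0, 5), "tree": (3, 1)}

def left_of(m, a, b):
    """Derived relation: icon a lies to the left of icon b."""
    return m[a][0] < m[b][0]

def distance(m, a, b):
    """Derived relation: straight-line distance between icons a and b."""
    (ax, ay), (bx, by) = m[a], m[b]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

# Moving the tree is a single change; every relation involving the
# tree is thereby updated too -- consistency comes for free.
spatial_map["tree"] = (-2, 1)

# A 'sentential' store, by contrast, lists relation facts one by one.
# After the tree moves, each affected sentence must be found and
# rewritten individually, or the store ends up inconsistent.
sentential_store = {
    ("left_of", "santa", "tree"),
    ("left_of", "chimney", "tree"),
}
sentential_store -= {s for s in sentential_store if "tree" in s}
sentential_store |= {
    ("left_of", "tree", "santa"),
    ("left_of", "tree", "chimney"),
}
```

The design point is the one Camp (2007) makes: in the map, relations are implicit in the geometry, so no update can leave two stored relations contradicting each other; in the sentential store, each relation is an independent fact, which is exactly what makes inconsistency possible.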

For all that, maps are an extremely limiting representational format: all they can really represent is the spatial layout of objects in a region. If you want to represent that Christmas is coming, that the goose is getting fat, or that Santa is really your dad, a map would be a poor format to choose. These are not the kinds of contents that a map can express. For that kind of thing, you need a more expressively powerful format – like language.

The point is that the distinctive strengths and weaknesses of representational formats will show up in mindreaders’ abilities and behaviour. Humans can ascribe an apparently unlimited range of beliefs – beliefs about Santa’s true identity, about death and resurrection, about possible presents with no known location. I think this is good evidence that we take mental states to be linguistic, or at least to have a format which mirrors language’s expressive power.

But animals might not be like us in that respect: they might think of beliefs as maps in the head. If they do, they would be able to capture what others think about where things are to be found, but they wouldn’t be able to make sense of beliefs about object identities or about non-spatial properties – nor could they make sense of someone having a belief about an object whilst having no belief about its location. To my knowledge, whether animals can represent these non-spatial beliefs has not been investigated. So it remains an open empirical question whether they treat beliefs as map-like, as linguistic, or as having some other format. But it’s a question worth investigating. If animals construed mental states as having a non-linguistic format, there would remain a significant sense in which their mindreading abilities differed qualitatively from ours.


References

Andrews, K. (2018). Do chimpanzees reason about belief? In K. Andrews & J. Beck (Eds.), The Routledge Handbook of Philosophy of Animal Minds. Abingdon: Routledge.

Bermúdez, J. L. (2011). The force-field puzzle and mindreading in non-human primates. Review of Philosophy and Psychology, 2(3), 397–410. https://doi.org/10.1007/s13164-011-0077-9

Boyle, A. (2019). Mapping the minds of others. Review of Philosophy and Psychology. https://doi.org/10.1007/s13164-019-00434-z

Butterfill, S. A., & Apperly, I. A. (2013). How to construct a minimal theory of mind. Mind & Language, 28(5), 606–637.

Call, J., & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5), 187–192.

Camp, E. (2007). Thinking with maps. Philosophical Perspectives, 21, 145–182.

Krupenye, C., Kano, F., Hirata, S., Call, J., & Tomasello, M. (2016). Great apes anticipate that other individuals will act according to false beliefs. Science, 354(6308), 110–114. https://doi.org/10.1126/science.aaf8110

Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526.

Ramsey, F. P. (1931). The Foundations of Mathematics. London: Kegan Paul.