Trusting the Uncanny Valley: Exploring the Relationship Between AI, Mental State Ascriptions, and Trust.

[Image: an uncanny-valley humanoid android pictured with its creator]

Henry Powell, PhD Candidate in Philosophy at the University of Warwick

Interactive artificial agents such as social and palliative robots have become increasingly prevalent in the educational and medical fields (Coradeschi et al. 2006). Different kinds of robots, however, seem to elicit different kinds of interactive experiences from their users. Social robots, for example, tend to afford positive interactions that look analogous to the ones we might have with one another. Industrial robots, on the other hand, are rarely, if ever, treated in the same way. Some very lifelike humanoid robots seem to fall somewhere outside these two spheres, inspiring feelings of discomfort or disgust in the people who come into contact with them. One way of understanding why this phenomenon obtains is via a conjecture developed by the Japanese roboticist Masahiro Mori in 1970 (Mori, 1970, pp. 33–35). This so-called “uncanny valley” conjecture has a number of potentially interesting theoretical ramifications. Most importantly, it may help us to understand a set of conditions under which humans could potentially ascribe mental states to beings without minds – in this case, the suggestion that trusting an artificial agent can lead one to do just that. With this in mind, the aims of this post are two-fold. Firstly, I wish to provide an introduction to the uncanny valley conjecture; secondly, I want to raise doubts concerning its ability to shed light on the conditions under which mental state ascriptions occur, specifically in experimental paradigms that see subjects as trusting their AI co-actors.

Mori’s uncanny valley conjecture proposes that as robots increase in their likeness to human beings, their familiarity likewise increases. This trend continues up to a point at which their lifelike qualities are such that we become uncomfortable interacting with them. At around 75% human likeness, robots are seen as uncannily like human beings and are viewed with discomfort or, in more extreme cases, disgust, significantly hindering their potential to galvanise positive social interactions.

[Figure: Mori's uncanny valley graph, plotting affinity against human likeness]
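For readers who find a worked example helpful, the short Python sketch below generates a purely illustrative affinity curve of the kind Mori drew: affinity rises with human likeness but dips sharply around roughly 75% likeness. The functional form (a linear trend minus a narrow Gaussian dip) and all of the numbers are my own assumptions chosen only to reproduce the qualitative shape; they are not drawn from Mori or from any empirical data.

```python
# Purely illustrative "uncanny valley" curve: affinity rises with human
# likeness, dips sharply around ~75% likeness, then recovers near 100%.
# The functional form and constants are assumptions for illustration only.
import math

def toy_affinity(likeness: float) -> float:
    """Map human likeness (0..1) to a stylised affinity score."""
    rising_trend = likeness  # overall increase with likeness
    valley_dip = 0.9 * math.exp(-((likeness - 0.75) ** 2) / (2 * 0.05 ** 2))
    return rising_trend - valley_dip  # subtract a narrow Gaussian "dip"

for likeness in (0.0, 0.25, 0.5, 0.7, 0.75, 0.8, 0.9, 1.0):
    print(f"likeness {likeness:4.2f} -> affinity {toy_affinity(likeness):+.2f}")
```

Printed out, the scores climb steadily, crash to a negative value around 0.75, and then recover as likeness approaches 1, which is the qualitative pattern the conjecture describes.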

This effect has been explained in a number of ways. For instance, Saygin et al. (2011, 2012) have suggested that the uncanny valley effect is produced when there is a perceived incongruence between an artificial agent’s form and its motion. If an agent is seen to be clearly robotic but to move in a very human-like way, or vice versa, there is an incompatibility effect in the predictive, action-simulating cognitive mechanisms that seek to pick out and forecast the actions of humanlike and non-humanlike objects. This predictive coding mechanism is supplied with contradictory information by the visual system ([human agent] with [nonhuman movement]), which prevents it from carrying out its predictive operations to their normal degree of accuracy (Urgen & Miller, 2015). I take it that the output of this cognitive system is presented in our experience as uncertain, and that this uncertainty accounts for the feelings of unease we experience when interacting with these uncanny artificial agents.
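To make the incongruence idea a little more concrete, here is a deliberately simplified sketch of the form/motion mismatch: the observer's expectation about how an agent should move is read off its appearance, and the "prediction error" is just the gap between the expected and observed human-likeness of its motion. The 0-to-1 scales, the example agents, and the absolute-difference error measure are my own illustrative assumptions, not Saygin et al.'s model.

```python
# Toy sketch of the form/motion incongruence account: appearance sets an
# expectation about motion, and "prediction error" is the mismatch between
# the expected and the observed degree of human-like (biological) motion.
# Scales, numbers, and the error measure are illustrative assumptions only.

def prediction_error(humanlike_form: float, humanlike_motion: float) -> float:
    """Both inputs lie on a 0 (clearly mechanical) to 1 (clearly human) scale."""
    expected_motion = humanlike_form  # expectation driven by appearance alone
    return abs(expected_motion - humanlike_motion)

example_agents = {
    "industrial robot (robotic form, robotic motion)": (0.1, 0.1),
    "human actor (human form, human motion)": (1.0, 1.0),
    "android (human form, robotic motion)": (0.9, 0.2),
}

for label, (form, motion) in example_agents.items():
    print(f"{label}: prediction error = {prediction_error(form, motion):.2f}")
```

On this toy measure the congruent cases (robot/robot, human/human) produce no error, while the android's mismatched pairing produces a large one, which is the pattern the predictive coding account associates with uncanny agents.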

Of particular philosophical interest in this regard is a strand of research suggesting that, in certain situations, humans can be seen to make mental state ascriptions to artificial agents that fall outside the uncanny valley. This story was posited in two studies, published in 2012 and 2015 by Kurt Gray and Daniel Wegner and by Maya Mathur and David Reichling respectively. As I believe it contains the most interesting evidential basis for thinking along these lines, I will limit my discussion here to the latter experiment.

Mathur & Reichling’s study saw subjects partake in an “investment game” (Berg et al. 1995) – a generally accepted experimental standard for measuring trust – with a number of artificial agents whose facial features varied in their human likeness. This was to test whether subjects were willing to trust different kinds of artificial agents depending on where they fell on the uncanny valley scale. What they found was that subjects played the game in a way that indicated they trusted robots with certain kinds of facial features to act so as to reach an outcome that was mutually beneficial to both parties, rather than favouring one or the other. The authors surmised that because the subjects seemed to trust these artificial agents, in a way that suggested they had thought about what the artificial agents’ intentions might be, the subjects had ascribed mental states to their robotic partners in these cases.
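Because the investment game carries much of the evidential weight here, a minimal one-round version is sketched below. In the standard Berg et al. (1995) design the investor sends some portion of an endowment, the experimenter triples it, and the trustee returns whatever share they choose. The endowment of 10 units and the example transfer and return values below are illustrative choices of mine; Mathur and Reichling's exact stakes, framing, and partner behaviour may differ in detail.

```python
# One round of a Berg-style investment game: the investor's transfer is
# tripled, and the trustee decides what fraction of the tripled sum to
# return. The tripling follows Berg et al. (1995); the endowment of 10 and
# the example transfer/return values are illustrative assumptions only.

def investment_round(endowment: float, sent: float, return_fraction: float):
    """Return (investor_payoff, trustee_payoff) for a single round."""
    assert 0 <= sent <= endowment and 0 <= return_fraction <= 1
    tripled = 3 * sent
    returned = return_fraction * tripled
    investor_payoff = endowment - sent + returned
    trustee_payoff = tripled - returned
    return investor_payoff, trustee_payoff

# A subject who trusts the partner sends most of the endowment, betting
# that enough will be returned for both players to come out ahead.
print(investment_round(endowment=10, sent=8, return_fraction=0.5))  # -> (14.0, 12.0)
```

The relevance for the argument is that sending a large amount only pays off if the partner reciprocates, which is why generous transfers are standardly read as behavioural evidence of trust.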

It was proposed that subjects had believed that the artificial agents had mental states encompassing intentional propositional attitudes (beliefs, desires, intentions, etc.). This was because subjects seemed to assess the artificial agents’ decision-making processes in terms of what the robots’ “interests” in the various outcomes might be. This result is potentially very exciting, but I think it jumps to conclusions rather too quickly. I’d now like to briefly give my reasons for thinking so.

Mathur and Reichling seem to be making two claims in the discussion of their study’s results.

  i) That subjects trusted the artificial agents.
  ii) That this trust implies the ascription of mental states.

My objections are the following. I think that i) is more complicated than the authors make it out to be, and that ii) is not at all obvious and does not follow from i) when i) is analysed in the proper way. Let us address i) first, as it leads into the problem with ii).

When elaborated, i) amounts to the claim that the subjects believed the artificial agents would act in a certain way and that this behaviour would be satisfactorily reliable. I think this is plausible, but I also think that the form of trust involved is not the one intended by Mathur and Reichling, and it is thus uninteresting in relation to ii).

There are, as far as I can tell, at least two ways in which we can trust things. The first, and perhaps most interesting, form of trust is the one expressible in sentences like “I trust my brother to return the money that I lent him”. This implies that I think of my brother as the sort of person who would not, given the opportunity and upon rational reflection, do something contrary to what he had told me he would do. The second form of trust is the one we might have towards a ladder or something similar. We might say of such objects, “I trust that if I walk up this ladder it will not collapse, because I know that it is sturdy”. The difference here should be obvious. I trust the ladder because I can infer from its physical state that it will perform its designated function. It has no loose fixtures, rotting parts, or anything else that might make it collapse when I walk up it. To trust the ladder in this way I do not need to think that it has made commitments to the action expected of it based on a given set of ethical standards. In the case of trusting my brother, my trust in him is expressible as a belief that, given the opportunity to choose not to do what I have asked of him, he will choose in favour of what I have asked. The trust that I have in my brother requires that I believe he has mental states that inform and help him to choose to act in favour of my request. One form of trust implies the existence of mental states; the other does not. As regards ii), then, trust only implies mental states if it is of the form that I would extend to my brother in the example just given, and not if it is of the sort that we would normally extend to reliably functional objects like ladders. So ii) only follows from i) if the former kind of trust is evinced, and not otherwise.

This analysis suggests that if we are to believe that the subjects in this experiment ascribed mental states to the artificial agents (or indeed that subjects in any other experiment reaching the same conclusions did), then we need sufficient reasons for thinking that the subjects were treating the artificial agents as I would treat my brother, and not as I would treat the ladder, with respect to ascriptions of trust. Mathur and Reichling are silent on this point, and thus we have no good reason for thinking that mental state ascriptions were taking place in the minds of the subjects in their experiment. While I do not think it is impossible that such a thing might obtain in some circumstances, it is just not clear from this experiment that it obtains in this instance.

What I have hopefully shown in this post is that it is important to proceed with caution when making claims about our willingness to ascribe minds to certain kinds of objects and agents (artificial or otherwise). Specifically, it is important to do so in relation to our ability to hold such things in seemingly special kinds of relations with ourselves, trust being an important example.

 

References:

Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, Reciprocity, and Social History. Games and Economic Behavior, 10, 122–142.

Coradeschi, S., Ishiguro, H., Asada, M., Shapiro, S. C., Thielscher, M., Breazeal, C., … Ishida, H. (2006). Human-inspired robots. IEEE Intelligent Systems, 21(4), 74–85.

Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125–130.

MacDorman, K. F. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? In CogSci-2005 workshop: Toward social mechanisms of android science (pp. 106–118).

Mathur, M. B., & Reichling, D. B. (2009). An uncanny game of trust: Social trustworthiness of robots inferred from subtle anthropomorphic facial cues. In Human-Robot Interaction (HRI), 2009 4th ACM/IEEE International Conference on (pp. 313–314). La Jolla, CA.

Saygin, A. P. (2012). What can the Brain Tell us about Interactions with Artificial Agents and Vice Versa? In Workshop on Teleoperated Androids, 34th Annual Conference of the Cognitive Science Society.

Saygin, A. P., & Stadler, W. (2012). The role of appearance and motion in action prediction. Psychological Research, 76(4), 388–394. http://doi.org/10.1007/s00426-012-0426-z

Urgen, B. A., & Miller, L. E. (2015). Towards an Empirically Grounded Predictive Coding Account of Action Understanding. Journal of Neuroscience, 35(12), 4789–4791.