Does Action-oriented Predictive Processing offer an enactive account of sensory substitution?

Krzysztof Dołęga — PhD student — Ruhr-Universität Bochum, Institut für Philosophie

Action-oriented Predictive Processing (PP for short, also known as Prediction Error Minimization or Predictive Coding) is an exciting conceptual framework emerging at the crossroads of cognitive science, statistical modeling, information theory, and philosophy of mind. Aimed at obtaining a unified explanation of the processes responsible for cognition, perception and action, it is based on the hypothesis that the brain’s architecture consists of hierarchically organized neural populations performing statistical inference. Rather than accumulating and compounding incoming information, the neural hierarchies continuously form hypotheses about their future input. Thus, the traditional bottom-up approach to explaining cognition is subsumed by a top-down organization in which only sensory information diverging from the predicted patterns of activation is propagated up the cortical hierarchies. Due to its divergence from the ‘predicted’ patterns, this information is often referred to as “prediction error” (Clark, 2013). Minimizing error is postulated to be the main function of the brain; by accommodating unexpected information in its predictions the brain can fine-tune its future hypotheses regarding sensory input and track the states of the world causing this input more accurately.
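
For readers who like to see the moving parts, here is a deliberately minimal numerical sketch of this scheme. A single level of hidden causes, a linear generative model, and the learning rates are all invented for illustration; nothing here is a claim about cortical detail. The point is only that the system predicts its input top-down and uses the mismatch, the prediction error, to revise both its hypothesis and its model:

```python
import numpy as np

# Toy predictive-coding loop: a single level of hidden causes generates a
# top-down prediction of the input; only the mismatch (prediction error) is
# propagated upward and used to revise the hypothesis and the model itself.
rng = np.random.default_rng(0)
W_true = rng.normal(scale=0.5, size=(4, 2))  # how the world actually generates input
W = rng.normal(scale=0.1, size=(4, 2))       # the system's generative model
mu = rng.normal(scale=0.1, size=2)           # current hypothesis about hidden causes
lr_mu, lr_W = 0.1, 0.05

true_cause = np.array([1.0, -0.5])
for t in range(2000):
    x = W_true @ true_cause + rng.normal(scale=0.01, size=4)  # noisy sensory input
    error = x - W @ mu               # bottom-up prediction error
    mu += lr_mu * (W.T @ error)      # revise the hypothesis about the causes
    W += lr_W * np.outer(error, mu)  # slowly fine-tune the generative model

print(f"squared prediction error after learning: {np.sum(error**2):.5f}")
```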

However, much of the framework’s appeal lies with the relatively recent proposal that the brain can minimize error not only by revising and constructing new hypotheses about the input patterns, but also by interacting with the environment in order to erase the source of mismatch between the best hypothesis and patterns of sensory activation. It is this feature, referred to as “active inference” (e.g. Hohwy, 2012), that underwrites the framework’s promise of a unified theory of brain organization, explaining how information about many different cognitive functions is encoded and processed in the brain (Friston, 2010).

In my poster presentation at the first iCog conference, I tried to draw similarities between Predictive Processing and another radical proposal about the nature of perception and cognition – enactivism. Because of the relative novelty of the PP framework, its relationship to the embodied and sensorimotor approaches to cognition and perception has not been well defined (at least at the time of the conference; see the bibliography below for several recent articles tackling these issues). What struck me as an interesting avenue for research was the similarity between the notion of active inference in the PP framework and the enactive focus on the role that sensorimotor contingencies and possibilities for action play in shaping perception and phenomenology. Pursuing this correspondence is especially valuable for PP, as it does not yet offer a clear account of how phenomenology fits within its probabilistic architecture. Because perception is such a broad topic, I decided to focus on the very particular case of Sensory Substitution Devices.

Sensory substitution devices emerged from the laboratory led by Paul Bach-y-Rita in the 1960s. Bach-y-Rita (1983) set out to prove the extent of lifelong brain plasticity (a highly contested thesis at the time) by devising gadgets that would help handicapped people restore lost senses by substituting them with inputs coming from different sensory modalities. The idea behind the project was to use the brain’s natural ability to adapt to the inputs it receives from different sensory channels in order to train it to recognize information specific to the lost modality in patterns delivered through a different sensory modality. Bach-y-Rita’s work focused on tactile-visual sensory substitution (TVSS for short), in which visual information from a video camera was translated into vibro-tactile input on the subject’s skin. Despite its limitations, this method proved to be a huge success, as subjects were able to learn to extract vision-like information (e.g. the presence of a white X in front of the camera) from tactile stimulation after a surprisingly short adaptation time. This discovery jumpstarted a whole new field of research.

TVSS proved to be a fertile study ground for enactivism due to limitations and problems inherent to the project of sensory substitution. Very early into his research, Bach-y-Rita discovered that substitution is mostly unsuccessful when subjects do not have control over the camera movements. The critical importance of exploration and active sampling for TVSS fits well with the core enactive claim that perception consists in ‘exercising a mastery of sensorimotor contingencies’ (O’Regan & Noë 2001: 85), understood as practical know-how about the possible changes in perception of objects caused by our actions (O’Regan & Noë 2001: 99). Moreover, O’Regan & Noë have argued that the contingencies of how our sensory modalities sample and interact with particular objects explain certain features of the brain’s plasticity. For example, it is because of the dynamics particular to our visual involvement with the world that blind TVSS subjects show increased activity in the visual cortex and can be said to genuinely see (although in some impoverished way; TVSS does not allow for color perception). To support this controversial claim they point to neurological data, as well as to experiments demonstrating subjects’ susceptibility to distinctively visual illusions that exploit basic properties of visual engagement with the environment, such as making perceptual contact only with the surfaces facing the observer (Hurley & Noë 2003: 143).

Let us now return to Action-oriented Predictive Processing and how the framework can accommodate sensory substitution. PP assumes that the main function of the brain is minimization of prediction error resulting from comparing actual sensory inputs with predicted patterns of activations generated by the system. To predict the sensory states efficiently, the brain tracks patterns of statistical regularities present in the incoming signals and tries to infer the causal structure of the world responsible for these regularities. Thus the brain constructs and maintains a model of the world, which it uses to predict (i.e. generate) its own sensory states. The particulars of this proposal are much more complex (I recommend Jakob Hohwy’s 2013 monograph for details), but these core ideas are sufficient to understand how PP can explain sensory substitution.

On Action-oriented Predictive Processing, what happens during TVSS is a result of the hierarchical system recognizing and subsequently tracking a set of statistical regularities specific to the visual modality in sensory patterns delivered through tactile stimulation. The subjects need to have control over the input device (here a camera) in order to learn how the regularities in the sensory stimulus change with the sampling of the environment. Having done this, the brain can predict how the stimulus will change in response to particular actions, updating its generative model accordingly.
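
The division of labour between perception and action described above can be caricatured in a few lines of code. In the sketch below the scalar ‘world’, the camera variable and the step sizes are all invented for illustration; the point is that prediction error can be reduced both by revising the hypothesis and by moving the sensor so that the input comes to match the prediction:

```python
# The system's input depends both on the world and on how it samples it.
world = 5.0     # the state causing the input (e.g. where the white X is)
belief = 0.0    # the system's hypothesis about that state
camera = 0.0    # an effector under the system's control

def sense():
    # What the sensor currently reports: a function of world AND action.
    return world - camera

for t in range(25):
    error = sense() - belief   # prediction error
    belief += 0.3 * error      # perceptual inference: revise the hypothesis
    camera += 0.2 * error      # active inference: resample the world instead

print(f"belief = {belief:.2f}, camera = {camera:.2f}, "
      f"residual error = {sense() - belief:.2f}")
```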

The core of the PP explanation of TVSS is thus very similar to the enactive treatment. In both cases it is the system’s ability to recognize possibilities for action and the ability to predict how these actions will change the states of the sensory input that make sensory substitution possible. One could try to push this similarity further by saying that in PP, just like in enactivism, it is the sensorimotor contingencies that shape the phenomenal quality of the substitution. After all, the system tracks distinctively visual regularities obtaining between the body and the world. From the perspective of the brain, the manner of their presentation (via tactile stimulus vs ocular nerves) plays a secondary role to their contents (in both cases the brain has to infer the causes behind the patterns of sensory activations).

Despite these similarities, one should be careful about casting PP as subscribing to an enactive understanding of perception and sensory substitution. Though the views in question do overlap in their explanatory ambitions, they are built on diametrically opposed assumptions. In the previous paragraphs I tried to speak about ‘the system’ rather than the brain or the agent as a whole. This is because PP is usually understood as a neurocentric view (Hohwy, 2014), while enactivism instead stresses the situated and embodied nature of cognition (Noë, 2004). Moreover, PP is based on an inferential architecture, often associated with rich representational contents – something widely eschewed by enactivists.

The divide between the two positions is not insurmountable, and much of Andy Clark’s current work is focused on bridging the gap between these radical views (see Clark’s forthcoming book). This post does not allow me to dive into the nuances of both views and how similarly they treat TVSS and analogous cases; however, I hope I have managed to spark some interest in these radical views about perception and how they may be related. Below is a list of references, some of which were unavailable at the time of the original presentation of this material.

 

REFERENCES AND BIBLIOGRAPHY

Bach-y-Rita, P. (1983). Tactile Vision Substitution: Past and Future. International Journal of Neuroscience 19: 29–36.

Briscoe, R. (forthcoming). Bodily Action and Distal Attribution in Sensory Substitution. [online] Available from: http://philpapers.org/archive/BRIBAA [Retrieved: 21 Nov. 2013].

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3): 181–204.

Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience 11(2): 127–138.

Friston, K. (2008). Hierarchical Models in the Brain. PLoS Computational Biology 4(11). doi:10.1371/journal.pcbi.1000211.

Hohwy, J. (2014). The Self-Evidencing Brain. Noûs 48(1). doi: 10.1111/nous.12062.

Hohwy, J. (2013). The Predictive Mind. Oxford: Oxford University Press.

Hohwy, J. (2012). Attention and conscious perception in the hypothesis testing brain. Frontiers in Psychology 3(96). doi: 10.3389/fpsyg.2012.00096.

Hurley, S. & Noë, A. (2003). Neural Plasticity and Consciousness. Biology and Philosophy 18: 131–168.

Noë, A. (2004). Action in Perception. Cambridge, MA: MIT Press.

O’Regan, J. K. & Noë, A. (2001). What it is like to see: A sensorimotor theory of perceptual experience. Synthese 129(1): 79–103.

Pepper, K. (2013). Do Sensorimotor Dynamics Extend the Conscious Mind? Adaptive Behavior 22(2): 99–108.

Pickering, M. & Clark, A. (2014). Getting Ahead: Forward Models and their Place in Cognitive Architecture. Trends in Cognitive Sciences 18(9): 451–456.

Prinz, J. (2009). Is Consciousness Embodied? In Robbins, P. & Aydede, M. (Eds.), Cambridge Handbook of Situated Cognition. Cambridge: Cambridge University Press.

Rietveld, E. & Bruineberg, J. (2014). Self-organization, free energy minimization, and optimal grip on a field of affordances. Frontiers in Human Neuroscience 8(599). doi: 10.3389/fnhum.2014.00599.

Ward, D. (2012). Enjoying the Spread: Conscious Externalism Reconsidered. Mind 121(483): 731–751.

On what makes delusions pathological

Dr Kengo Miyazono – Research Fellow – University of Birmingham

Delusional beliefs are typically pathological. Being pathological is not the same as being false or being irrational. A woman might falsely believe that Istanbul is the capital of Turkey, but it might just be a simple mistake. A man might believe without good evidence that he is smarter than his colleagues, but it might just be a healthy self-deceptive belief. On the other hand, when a patient with brain damage caused by a car accident believes that his father was replaced by an imposter, or when another patient with schizophrenia believes that ‘The Organization’ painted the doors of the houses on a street as a message to him, these beliefs are not merely false or irrational. They are pathological.

What makes delusional beliefs pathological? One might think, for example, that delusions are pathological because of their extreme irrationality. The problem with this view, however, is that it is not obvious that delusional beliefs are extremely irrational. Maher (1974), for example, argues that delusions are reasonable explanations of abnormal experience.

“[T]he explanations (i.e. the delusions) of the patient are derived by cognitive activity that is essentially indistinguishable from that employed by non-patients, by scientists, and by people generally. The structural coherence and internal consistency of the explanation will be a reflection of the intelligence of the individual patient.” (Maher 1974, 103)

Similarly, Coltheart and colleagues (2010) argue that it is rational, from the Bayesian point of view, for a person with the Capgras delusion to adopt the delusional hypothesis given his neuropsychological deficits. Bayes’s theorem prescribes a mathematical procedure for updating the probability of a hypothesis on the basis of prior beliefs and new observations. Coltheart and colleagues claim that the delusional hypotheses get higher probabilities than competing non-delusional hypotheses given relevant prior beliefs and the observations of the neuropsychological deficits.

“The delusional hypothesis provides a much more convincing explanation of the highly unusual data than the nondelusional hypothesis; and this fact swamps the general implausibility of the delusional hypothesis. So if the subject with Capgras delusion unconsciously reasons in this way, he has up to this point committed no mistake of rationality on the Bayesian model.” (Coltheart, Menzies, & Sutton 2010, 278)
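
The arithmetic behind this ‘swamping’ is easy to reproduce. In the back-of-the-envelope calculation below every number is invented for illustration (Coltheart and colleagues commit to no particular values); the point is simply that a sufficiently lopsided likelihood ratio can overwhelm even a very low prior:

```python
# Invented numbers for illustration only; the authors give no specific values.
prior_imposter = 0.001   # the delusional hypothesis starts out very implausible
prior_spouse = 0.999

# Likelihood of the unusual data (a familiar face with no autonomic response):
lik_imposter = 0.9       # exactly what you would expect if facing a stranger
lik_spouse = 0.0001      # deeply surprising if this really is the spouse

posterior_imposter = (lik_imposter * prior_imposter) / (
    lik_imposter * prior_imposter + lik_spouse * prior_spouse
)
print(f"P(imposter | data) = {posterior_imposter:.3f}")  # approximately 0.90
```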

The claim by Coltheart and colleagues is, however, controversial. In response, McKay (2012) argues that adopting delusional hypotheses is due to the irrational bias of discounting the ratio of prior probabilities. Even if McKay is correct, however, it is not clear that delusional beliefs are extremely irrational, since similar biases might be found among normal people as well.

For instance, in the famous experiment by Kahneman and Tversky (1973), normal subjects first received base-rate information about a hypothetical group of people (e.g., “30 engineers and 70 lawyers”). Then the personality description of a particular person in the group was provided, and the subjects were asked to predict the occupation (e.g., engineer or lawyer) of that person. The crucial finding was that manipulating the base-rate information, which provides the prior probability of the hypotheses at issue (e.g., the hypothesis that this person is a lawyer), had almost no effect on the subjects’ predictions (“base-rate neglect”). The finding suggests that the bias of discounting prior probabilities can be seen among normal people. As Bortolotti (2010) pointed out, the irrationality that we find in people with delusions might not be very different from the irrationality we find in normal people.
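
A quick calculation shows what Bayes’s theorem prescribes in this setup (the 3:1 likelihood ratio for the personality sketch is an invented illustration): the two base-rate conditions ought to yield quite different answers, yet subjects gave roughly the same answer in both:

```python
# Assume, purely for illustration, that the personality sketch is three times
# as likely for an engineer as for a lawyer (likelihood ratio 3:1).
def posterior_engineer(base_rate_engineer, likelihood_ratio=3.0):
    p = base_rate_engineer
    return (likelihood_ratio * p) / (likelihood_ratio * p + (1 - p))

print(posterior_engineer(0.30))  # ~0.56 in the "30 engineers, 70 lawyers" group
print(posterior_engineer(0.70))  # ~0.88 in the "70 engineers, 30 lawyers" group
# Bayes says the answers should differ sharply; subjects' answers barely moved.
```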

It is even conceivable that people with delusions are more rational than normal people. In the well-known experiment by Huq and colleagues (1988), subjects were asked to determine whether a given jar was jar A, which contains 85 pink beads and 15 green beads, or jar B, which contains 15 pink beads and 85 green beads, on the basis of observing beads drawn from it. It was found that subjects with delusions needed less evidence (i.e., fewer beads drawn from the jar) before coming to a conclusion than the subjects in control groups (the “jumping-to-conclusions bias”). Interestingly, Huq and colleagues do not take this to show that the subjects with delusions are irrational. Rather, they note: “it may be argued that the deluded sample reached a decision at an objectively ‘rational’ point. It may further be argued that the two control groups were somewhat overcautious” (Huq et al. 1988, 809) (but see Van der Leer et al. 2015).
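
Sequential Bayesian updating makes that ‘objectively rational point’ vivid: with jars this lopsided, the posterior saturates after only a bead or two, so deciding early need not be a sign of irrationality. A minimal sketch:

```python
# Jar A: 85 pink / 15 green. Jar B: 15 pink / 85 green. Start undecided.
p_pink = {"A": 0.85, "B": 0.15}
posterior_A = 0.5

for bead in ["pink", "pink"]:
    lik_A = p_pink["A"] if bead == "pink" else 1 - p_pink["A"]
    lik_B = p_pink["B"] if bead == "pink" else 1 - p_pink["B"]
    posterior_A = (lik_A * posterior_A) / (
        lik_A * posterior_A + lik_B * (1 - posterior_A)
    )
    print(f"after drawing a {bead} bead: P(jar A) = {posterior_A:.3f}")
# after one pink bead:  P(jar A) = 0.850
# after two pink beads: P(jar A) = 0.970
```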

In my paper, Delusions as Harmful Malfunctioning Beliefs (http://www.sciencedirect.com/science/article/pii/S1053810014002001), I also examine the views according to which delusional beliefs are pathological because of (1) their strange content, (2) their resistance to folk psychological explanations, and (3) their impairment of responsibility-grounding capacities. I provide some counterexamples to, as well as difficulties for, these proposals.

I argue, following Wakefield’s (1992a, 1992b) harmful dysfunction analysis of disorder, that delusional beliefs are pathological because they involve some kinds of harmful malfunctions. In other words, they have a significant negative impact on wellbeing (harmful) and, in addition, some psychological mechanisms, directly or indirectly related to them, fail to perform the functions for which they were selected (malfunctioning).

There can be two types of objections to the proposal. The first type of objection is that delusional beliefs might not involve any harmful malfunctions. For example, delusional beliefs might be playing psychological defence functions. The second type of objection is that involving harmful malfunctions is not sufficient for a mental state to be pathological. For example, false beliefs might involve some malfunctions according to teleosemantics (Dretske 1991; Millikan 1989), but there could be harmful false beliefs that are not pathological. The paper defends the proposal from these objections.

 

REFERENCES

Bortolotti, L. 2010. Delusions and other irrational beliefs. Oxford: Oxford University Press.

Coltheart, M., Menzies, P. and Sutton, J. 2010. Abductive inference and delusional belief. Cognitive Neuropsychiatry 15(1–3), pp. 261–287.

Dretske, F. I. 1991. Explaining behavior: Reasons in a world of causes. Cambridge, MA: The MIT Press.

Huq, S., Garety, P. and Hemsley, D. 1988. Probabilistic judgements in deluded and non-deluded subjects. The Quarterly Journal of Experimental Psychology 40(4), pp. 801–812.

Kahneman, D. and Tversky, A. 1973. On the psychology of prediction. Psychological Review 80(4), pp. 237–251.

Maher, B. A. 1974. Delusional thinking and perceptual disorder. Journal of Individual Psychology 30, pp. 98–113.

McKay, R. 2012. Delusional inference. Mind & Language 27(3), pp. 330–355.

Millikan, R. G. 1989. Biosemantics. The Journal of Philosophy 86, pp. 281–297.

Van der Leer, L., Hartig, B., Goldmanis, M. and McKay, R. 2015. Delusion-proneness and ‘Jumping to Conclusions’: Relative and absolute effects. Psychological Medicine 19(3), pp. 257–67.

Wakefield, J. C. 1992a. The concept of mental disorder: On the boundary between biological facts and social values. American Psychologist 47(3), pp. 373–388.

Wakefield, J. C. 1992b. Disorder as harmful dysfunction: A conceptual critique of DSM-III-R’s definition of mental disorder. Psychological Review 99(2), pp. 232–247.

The Symbolic Mind

Dr Julian Kiverstein — Assistant Professor in Neurophilosophy — Institute for Logic, Language and Computation, University of Amsterdam

In 1976 the computer scientists and founders of cognitive science Allen Newell and Herbert Simon proposed a hypothesis they called “the physical symbol systems hypothesis”. They suggested that a physical symbol system (such as a digital computer, for example) “has the necessary and sufficient means for intelligent action.” A physical symbol system is a machine that carries out operations like writing, copying, combining and deleting on strings of digital symbolic representations. By intelligent action they had in mind the high-level cognitive accomplishments of humans, such as language understanding, or the ability of a computer to make inferences and decisions on its own without supervision from its programmers. Newell and Simon hypothesised that these high-level cognitive processes were the products of computations of the type a digital computer could be programmed to perform.
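
To make the idea concrete, here is a toy physical symbol system sketched in a few lines of Python (the rewrite rules are an arbitrary illustration of mine, not Newell and Simon’s own example). All it does is write, copy, combine and delete symbol strings according to formal rules:

```python
# An arbitrary pair of rewrite rules standing in for a program's instructions.
rules = {"AB": "C", "CC": "D"}

def rewrite(symbols: str) -> str:
    """Repeatedly apply the rules until no rule matches."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules.items():
            if lhs in symbols:
                symbols = symbols.replace(lhs, rhs, 1)  # one rule application
                changed = True
    return symbols

print(rewrite("ABAB"))  # "ABAB" -> "CAB" -> "CC" -> "D"
```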

Newell and Simon’s hypothesis combines two controversial propositions that are worth evaluating separately. The first proposition they assert is a necessity claim along the following lines:

“Any system capable of intelligent action must of necessity be a physical symbol system.”

This is to claim that there is no non-magical way of bringing about intelligent action other than by digital computation. Assuming that humans don’t live in a world in which intelligent action is caused by magic, it follows that the human mind must work in fundamentally the same way as a digital computer.

The second is a sufficiency claim:

“A physical symbol system (equipped with the right software) has all that is required for intelligent action. No additional ingredients are necessary.”

If this proposition is correct, it is just a matter of time before computer scientists succeed in building machines capable of intelligent action. Artificial intelligence is pretty much inevitable. All that stands in the way is the programming ingenuity of software designers. In the age of so-called “neuromorphic” computer chips and “deep learning” algorithms (more on which later), this particular obstacle looks increasingly negotiable.

But is the human mind really a digital computer? Many philosophers of mind influenced by cognitive science have thought so. They have taken the mind to have an abstract pattern of causal organisation that can be mapped one-to-one onto the states a computer goes through in performing a computation. Since Frege, we have known how to represent the formal structure of logical thinking. Computation is a causal process that helps us to understand how mental or psychological processes could be causally sensitive to the logical form of human thinking. It gives us for the first time a concrete theory of how a physical, mechanical system could engage in logical thinking and reasoning.

The thesis that the human mind is a digital computer has however run into a triviality objection. Every physical system has states that can be mapped one-to-one onto the formally specified states of a digital computer. We can use cellular automata, for instance, to model the behaviour of galaxies. It certainly doesn’t follow that galaxies are performing the computations we use to model them. Moreover, to describe the mind as a computer seems vacuous or trivial once we notice that every physical system can be described as a computer. The thesis that the mind is a computer doesn’t seem to tell us anything distinctive about the nature of the human mind.

This triviality objection (first formulated by John Searle in the 1980s) hasn’t gone away, but it is seen by many today as a merely technical problem, in principle solvable once we have the right theory of computation. To put it bluntly: galaxies don’t compute because they are not computers. Minds do compute because they are nothing but computational machines.

There are a number of ways to push back and resist the bold claim that the human mind is (in a metaphysical sense) a digital computer. One could hold, as Jerry Fodor has done since the 1980s, that the human mind is a computer only around its edges. Some aspects of the mind, for example low-level vision or fine-grained motor control, are computational processes through and through. Other aspects of the mind, for example belief update, are most certainly not.

Other philosophers have argued that the human mind is not a digital computer, and have sought a more generic concept of computation. To think of the mind as a digital computer is to abstract away from the details of the biological organisation of the brain that might just prove crucial when it comes to understanding how minds work. Digital computation only gives us a very coarse-grained pattern of causal organisation in which to root the mind. Perhaps, however, the mind has a more fine-grained pattern of causal organisation. This response amounts to tinkering with the concept of computation a little, whilst nevertheless retaining the basic metaphysical picture.

Should we agree that any system that can behave intelligently must have a causal organisation (at some level of abstraction) that can be mapped onto the physical state transitions of a computing machine?

Hubert Dreyfus, a longstanding critic of artificial intelligence, thought not. Dreyfus takes the philosophical ideas behind artificial intelligence to be deeply rooted in the history of philosophy. He lists the following as important stepping stones:

- Hobbes’s idea that reasoning is reckoning or calculation.

- Descartes’s conception of ideas as mental representations.

- Leibniz’s theory of a universal language, an artificial language of symbols standing for concepts or ideas and logical rules for their valid manipulation.

- Kant’s view of concepts as rules.

- Frege’s formalisation of such rules.

- Russell’s postulation of logical atoms as the basic building blocks of reality.

(From Hubert Dreyfus, “Why Heideggerian AI failed.”)

For Dreyfus the computer theory of the mind inherits a number of intractable problems that are the legacy of its philosophical precursors. Artificial intelligence is, and always has been, a degenerating research programme. The problems to which it will never find an adequate solution lie in the significance and relevance humans find in the world. Dreyfus, following in the footsteps of the early twentieth century existential phenomenologists, takes human intelligence to reside in the skills that humans bring effortlessly and instinctively to bear in navigating everyday situations. For a computer to know its way about in the familiar everyday world humans inhabit, it would have to explicitly represent everything that humans take for granted in their dealings with this world. Human commonsense (which Dreyfus calls “background understanding”) doesn’t take the form of a body of facts a computer can be programmed with. It consists of skills and expertise for anticipating and responding correctly to very particular situations. For Dreyfus, what humans know through their acculturation and through the normative disciplining of their bodily skills can never be represented.

Even if we were to somehow find a way around this problem by availing ourselves of the impressive logical systems that linguists and formal semanticists now have at their disposal, still a substantial problem would remain. The would-be AI programme would have to determine which of the representations of facts it has in its extraordinarily large database of knowledge is relevant to the situation in which it is acting. How does a computer determine which facts are relevant? Everything the computer knows might be relevant to its current situation. How does the computer identify which of the possibly relevant facts are actually relevant? This problem, known as the “frame problem”, continues to haunt researchers in AI. At least it ought to, since as Mike Wheeler recently noted, “it is not as if anybody ever actually solved the problem.”

Still, the tools and techniques of AI have advanced tremendously since Dreyfus first launched his critique. Today’s computer scientists and engineers are busy building machines that mimic the learning strategies and techniques of information storage found in the human brain. In 2011 IBM unveiled its “neuromorphic” computer chip, which processes instructions and performs operations in parallel in a similar way to the mammalian brain. It is made up of components that emulate the dynamic spiking behaviour of neurons. The chip contains hundreds of such components, wired up so as to form hundreds of thousands of connections. Programming these connections creates networks that process and react to information in similar ways to neurons. The chip has been used by IBM to control an unmanned aerial vehicle, to recognise and also predict handwritten digits, and to play a video game. These are by no means new achievements for the field of AI, but what is significant is the efficiency with which the IBM chip achieves these tasks. Neuromorphic chips have also been built that can learn through experience. These chips adjust their own connections based on the firing patterns of their components. Recent successes have included a programme that can teach itself to play a video game. It starts off performing terribly, but after a few rounds it begins to get better. It can learn a skill, albeit in the well-circumscribed domain of the video game.
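
To give a flavour of the ‘dynamic spiking behaviour’ such components emulate, here is a minimal leaky integrate-and-fire neuron, a standard idealisation in computational neuroscience. The parameters below are generic textbook-style choices of mine, not a description of IBM’s hardware:

```python
# Generic leaky integrate-and-fire neuron (illustrative parameters).
dt, tau = 1.0, 20.0            # time step (ms) and membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v, spike_times = v_rest, []

for t in range(200):
    current = 0.08 if 20 <= t < 180 else 0.0   # injected input current
    v += (dt / tau) * (v_rest - v) + current   # leak toward rest + integrate
    if v >= v_thresh:                          # threshold crossing: spike
        spike_times.append(t)
        v = v_reset                            # reset after the spike

print("spike times (ms):", spike_times)
```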

Elsewhere in the field of AI, “deep learning” algorithms are all the rage. These algorithms employ the same statistical learning techniques that have been used in neural network research for decades. One important difference is that the networks include many more layers of processing than previous neural networks (hence the “depth” descriptor), and they rely on vast clusters of networked computers to process the data they are fed. The result is software that can learn, from exposure to literally millions of images, to recognise high-level features such as cats despite never having been taught about cats. Deep learning algorithms have achieved notable successes in finding the high-level, abstract features that are important, and the patterns that matter, in the low-level data to which they are exposed. This would seem to be an important aspect of skill acquisition that Dreyfus rightly emphasises as being so important for human intelligence.
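
For a sense of what ‘many layers of processing’ and ‘statistical learning’ amount to in practice, here is a miniature sketch in the same spirit. The toy XOR task, the layer sizes and the learning rate are my own choices; real deep-learning systems scale this recipe up enormously:

```python
import numpy as np

# A miniature multi-layer network trained by gradient descent on a toy
# problem (XOR). Each layer learns features of the previous layer's features.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: not linearly separable

sizes = [2, 8, 8, 1]                              # two hidden layers
Ws = [rng.normal(scale=1.0, size=(m, n)) for m, n in zip(sizes, sizes[1:])]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20000):
    activations = [X]
    for W in Ws:                                  # forward pass
        activations.append(sigmoid(activations[-1] @ W))
    delta = (activations[-1] - y) * activations[-1] * (1 - activations[-1])
    for i in reversed(range(len(Ws))):            # backward pass: propagate error
        grad = activations[i].T @ delta
        delta = (delta @ Ws[i].T) * activations[i] * (1 - activations[i])
        Ws[i] -= 0.5 * grad

print(np.round(activations[-1].ravel(), 2))       # close to [0, 1, 1, 0]
```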

These developments in AI are based on the premise that the brain is a super-efficient computer. AI research can therefore make progress, and get closer to building machines that work more like the human mind, by discovering more about how the brain computes. These advances in AI would seem at first glance to provide little support for the Newell and Simon physical symbol systems hypothesis. The fact that AI researchers needed to build digital computing machines that work more like brains shows that the human mind doesn’t work much like a digital computer after all.

These developments do however raise the ethically and politically troubling possibility that humans might after all be on the brink of engineering artificial intelligence. Wouldn’t such a result indirectly vindicate some version of the physical symbol systems hypothesis? Could we not argue as follows:

- The mind is the brain.

- The brain is a computational machine (albeit not a digital computer).

- Therefore the mind is a computational machine.

This conclusion would imply an important tweak and refinement to the original Newell and Simon hypothesis. It would require us to think very differently about the cognitive architecture of the mind. This matters a great deal for cognitive science. Mental processes should no longer be thought of as sequential and linear rule-like operations carried out on structured symbolic representations. However, the basic metaphysical idea behind the computer theory of mind would still seem to survive unscathed. We can continue to think of the human mind as having an abstract causal organisation that can be mapped onto the state transitions a computer goes through in doing formal symbol manipulation.

So is the human mind essentially a computational machine? In reflecting on this question we should keep in mind the triviality objection. Every physical system has an abstract causal organisation which can be mapped one-to-one onto the states of a computational system. Nothing metaphysically interesting follows about what minds essentially are from this observation. If Dreyfus is right, serious philosophical mistakes are what have led us to the point today where we can think of the human mind as being in essence a computing machine. In particular, we ought to be suspicious of the Cartesian concept of representation on which the computer theory of mind is predicated. It only makes sense to think of the brain as performing computations because it is possible to give a semantic or representational interpretation of brain processes. Notice however that such an interpretation of brain processes in representational terms doesn’t imply that brains really do traffic in mental representations. That we tend to think of the brain in these terms may be due to our not having entirely shaken off the shackles of a highly questionable Cartesian philosophy of mind.

Xphi, Intuitions & the ‘Big Mistake’

Dr James Andow — Lecturer in Moral Philosophy — University of Reading

(This post is based on a section of a longer paper that has since been published. You can access the full article here)

This is a pretty simple post. I want to put the record straight about exper­i­ment­al philosophy.

We experimental philosophers are often painted as the loyal servants of the armchair-bound monarch—going out into the world to see what’s happening and reporting back with useful information to further the master’s projects. Some have accused us of plotting to usurp the monarch and toss the throne into the flames. We truthfully denied this. We’re not attempting an overthrow. But that doesn’t mean we’re completely happy with the situation.

I think of myself as more like an unruly baron—unhappy with the master’s plans, and putting into motion a campaign to diversify public investment.

(Okay, the metaphors got a bit out of hand there.)

*

I realised that the record needed setting straight when thinking about a recent debate. Here’s a commonly made claim about philosophical methods:

“Philosophers use intuitions as evidence”

And here’s a commonly made claim about experimental philosophy:

“Experimental philosophers help by using empirical tools to examine people’s intuitions”

Suppose you thought the first claim was false. Well then you’d surely think experimental philosophy was in a bit of a bind given the truth of the second claim. If philosophers don’t use intuitions, then surely experimental philosophy is premised on a big mistake (if it is all about examining intuitions). That’s the argument Herman Cappelen has recently given (in his 2012 and 2014). Cappelen thinks philosophers don’t use intuitions as evidence—I am not going to question that here—and that consequently experimental philosophy is all a big mistake.

Cappelen (2014) considers a response experimental philosophers might make:

“Okay, so let’s grant that philosophers don’t use intuitions. Here’s the thing: experimental philosophers were never talking about intuitions. Sure, they used the term ‘intuitions’, but let’s not get hung up on that. Experimental philosophers were talking about these other things, BLAHs, and philosophers do use BLAHs as evidence.”

Cappelen then has a response to this, but I don’t want to get into it.

This dialectic involving Cappelen and his opponents just strikes me as odd. Both sides seem to accept that experimental philosophy is premised on the idea that philosophers Φ and experimental philosophy can help them Φ better.

But I don’t see things that way. You perhaps wouldn’t guess it from my published work; I’ve often written as though I thought this was the case too. However, I’m pretty clear deep down. Experimental philosophy is not premised on the idea that philosophers commonly pursue some project which experimental philosophy can further.

The premise of experimental philosophy is not that philosophers Φ and experimental philosophy can improve their Φ-ing, but rather that philosophers don’t ψ but should. Some caveats are appropriate here. Probably not all of them should (certainly not all the time), and it mightn’t be the only thing experimental methods are good for, philosophically speaking. Nonetheless, philosophers should ψ. We don’t want to give the monarch new tools to pursue the same old projects. We want the monarch to pursue some new, different projects.

*

What are these projects which experimental philosophy wants to use empirical tools to further? What is it to ψ? It is to try to make sense of the way we think about philosophically interesting things like morality, free will, etc.—how we think, not simply what we think.

Of course, I don’t deny that we experimental philosophers generally understand survey responses to indicate what our participants think—participants’ ‘intuitions’, if you like that sort of language. However, the reason we are interested in this is largely not because philosophers use intuitions as evidence. The aim is to use careful manipulation to get a better understanding of how participants are thinking—their ways of understanding the world, their ways of coming to think what they think.

Don’t believe me? Read the website (link)!

“…experimental philosophers actually go out and run systematic experiments aimed at understanding how people ordinarily think about the issues at the foundations of philosophical discussions”

Many philosophers will be asking, ‘What then? … When does that contribute towards some philosophical project with which I am familiar?’ And that’s my point. Experimental philosophy isn’t valuable only insofar as it furthers the projects philosophers currently have. It’s trying to do something new … or at least something non-current.

Don’t believe me? Read the manifesto!

In the manifesto, Knobe & Nichols describe a familiar approach according to which what people think about something is considered philosophically relevant only insofar as it sheds light on the thing itself (their example is causation), and continue:

“With the advent of experimental philosophy, this familiar approach is being turned on its head. More and more, philosophers are coming to feel that questions about how people ordinarily think have great philosophical significance in their own right… we do not think that the significance of [intuitions about causation] is exhausted by the evidence they might provide for one or another metaphysical theory. On the contrary, we think that the patterns to be found in people’s intuitions point to important truths about how the mind works, and these truths—truths about people’s minds, not about metaphysics—have great significance for traditional philosophical questions.” (Knobe and Nichols 2008, 11–12)

Our dissatisfaction is not that philosophers use intuitions as evidence but fail to use the best tools. Our dissatisfaction is with a discipline which is largely no longer interested in making sense of the ways that ordinary people think about philosophically interesting things.

Still don’t believe me?! Again, read the manifesto!

“It used to be a commonplace that the discipline of philosophy was deeply concerned with questions about the human condition. Philosophers thought about human beings and how their minds worked… On this traditional conception, it wasn’t particularly important to keep philosophy clearly distinct from psychology …

The new movement of experimental philosophy seeks a return to this traditional vision. Like philosophers of centuries past, we are concerned with questions about how human beings actually happen to be… we think that many of the deepest questions of philosophy can only be properly addressed by immersing oneself in the messy, contingent, highly variable truths about how human beings really are.” (Knobe and Nichols 2008, 3)

And little has changed since the manifesto. Here are Buckwalter & Sytsma in their introduction to the forthcoming Blackwell Companion to Experimental Philosophy:

“Contemporary experimental philosophers return to these ways of doing philosophy. They conduct controlled experiments, and empirical studies more generally, to explore how we think about those phenomena … This work helps us to understand our reality, who we are as people, and the choices we make about important philosophical matters that shape our lives.” (Buckwalter and Sytsma, forthcoming)

Of course, experimental philosophers do use the word ‘intuitions’ a lot, and we do sometimes attempt to justify our methods in precisely the terms that Cappelen accuses us of doing (i.e., our work is relevant because philosophers use intuitions, and we investigate intuitions so, …). My diagnosis of this is that it is the unfortunate result of a misguided sales tactic in trying to peddle experimental philosophy to the mainstream—we’re just not hipster enough.

What does all this mean for the charge that experimental philosophy is based on a big mistake?

Well, if experimental philosophy were based on a mistake, the mistake wouldn’t be what Cappelen thinks it is. Experimental philosophy isn’t trying to help out with the projects philosophers currently have—or at least isn’t only doing that. So the mistake (supposing that there was one) can’t be trying to further a project which philosophers don’t have.

What does all this mean for experimental philosophers?

As should hopefully be clear, I don’t think my conception of experimental philosophy is particularly novel among experimental philosophers. But the message didn’t get to folks like Cappelen for whatever reason. Not everyone will think that is a problem. I do. What’s the solution? Maybe we need to be a bit more hipster (and stop trying to peddle to the mainstream), or be more publicly unruly as barons, or … okay, I’ve lost myself in my metaphors. In any case, we should perhaps redouble our efforts to get that message across. (Watch me blog!)

 

REFERENCES

Buckwalter, W. and Sytsma, J. (Forthcoming). A Companion to Experimental Philosophy. Blackwell.

Cappelen, H. (2012). Philosophy Without Intuitions. OUP.

Cappelen, H. (2014). X-phi without intuitions? In Booth and Rowbottom (eds), Intuitions. OUP.

Knobe, J. and Nichols, S. (2008). An Experimental Philosophy Manifesto. In Knobe and Nichols (eds), Experimental Philosophy (Vol. 1). OUP, pp. 3–14.

 

What Can You See? — Some Questions About the Content of Visual Experience

Dr Tom McClelland – The Architecture of Consciousness Project – University of Manchester

There are some properties you can see and some you cannot. When you look at the picture below, for instance, what do you see? I see colours such as the yellowness of the banana, I see shapes such as the banana’s curve, I see spatial relations such as the banana’s proximity to the man’s head, and I see textures such as the smoothness of the man’s necktie. There are other properties I don’t see. I don’t see the banana’s property of being a source of potassium or its property of costing 28p. And I don’t see the man’s property of being a member of the Labour Party or his property of being an elder brother. On the basis of what I see I might judge that the things I’m looking at have these properties, but that’s not the same as actually seeing those properties. After all, properties like ‘being a source of potassium’ just aren’t the kind of thing that one could see.

[Image: David Miliband holding a banana]

The examples I’ve mentioned shouldn’t be too contentious, but there are many kinds of property that do cause controversy. For instance, can you see what kind of object something is, such as seeing the smaller object as a banana and the larger object as a man? Can you see causal properties such as the banana being supported by the hand, or affordances such as the banana being edible? Can you see aesthetic properties such as the banana’s beauty, or moral properties such as the man’s virtue? Can you see the identity of objects, like seeing the man as David Miliband?

There is a great deal of debate in philosophy about these contentious cases, and the disputants fall into two camps. The first camp are conservatives, and they say that our visual experiences are limited to the basic kinds of property I first listed: colours, shapes, spatial relations and textures (e.g. Prinz 2012; Brogaard 2013). These conservatives shouldn’t be confused with political Conservatives, but like political Conservatives they are big on austerity – they take an austere view of visual experience that excludes all the contentious properties. The second camp are liberals, and this camp adopts a much more inclusive view of perception (e.g. Siegel 2012; Bayne 2009). They hold that at least some of the contentious properties can be visually experienced. Again, this kind of liberal shouldn’t be confused with political Liberals, but like political Liberals they are endlessly arguing among themselves about just how liberal they should be — the property of being a man is surely permitted as a visible property, but might permitting the property of being virtuous be a step too far?

Now, which camp are you in? The questions I’ve been asking are about what it’s like for you to have the visual experience you have when you look at the photo above. Conservatives would offer an austere description of your experience involving only the limited range of properties that they countenance. If you think that such a description fully captures what your visual experience is like, then you’re a conservative (don’t worry — that doesn’t come with any political commitments). If, on the other hand, you think there’s more to your visual experience than is captured by the austere description, then you’re some kind of liberal, and will have to reflect carefully on just how wide the range of properties you can see is.

I’m a liberal, but I’m thinking carefully about just how liberal we should be. Specifically, I’m interested in whether we can see a special category of property called ‘scene categories’. When we open our eyes we don’t just see objects – we also see the wider environments in which those objects are embedded. The philosophy of perception tends to focus on our perception of objects — there is endless discussion of whether we can see an object as a pine tree, for instance, but no real discussion of whether we can see a scene as a forest (e.g. Siegel 2012). I think this is an oversight and that we should ask ourselves whether we can perceive scene categories such as being a forest, being a beach, being a field, being a street, or being a carpark.

[Image: a forest scene]

Consider the image above. Besides seeing the various shapes, colours, spatial relations and textures in this image, do you also see the scene as a forest? Is the scene’s property of being a forest part of your visual experience? Conservatives would say that it is not, and would deny that any such scene category can be perceived. They would accept, of course, that we recognise the scene as a forest — they would just deny that this recognition is perceptual. On their view, we see certain patterns of colour and shape and then judge that the scene is a forest. However, I think that a combination of empirical and philosophical considerations casts doubt on this conservative view. There are good reasons to adopt a liberal view that acknowledges we can see scenes as forests or as beaches in much the same way as we can see objects as green or as tall. Conservatives will need some convincing that we visually experience scene categories, and you might need some convincing too. My case for this has two steps: the first step concerns the ‘visual’ bit of ‘visual experience’ and the second step concerns the ‘experience’ bit.

If conservatives deny that we perceive scene categories, they have to say that we recognise scene categories through some kind of post-perceptual cognitive process, such as making a judgement on the basis of what we see. The empirical data count against such a view in at least four ways. First, judgement is relatively slow, but our recognition of scene categories is incredibly fast. Thorpe et al. (1996), for instance, found that when subjects were shown images in a scene categorisation task, their brains showed Event Related Potentials (ERPs) as early as 150 milliseconds after the image was presented. Second, it is generally thought that only attended areas of the visual field are available to judgement, but our recognition of scene categories often seems to be inattentive (see Li et al. 2002). Third, the speed at which we make discriminative judgements about a stimulus can generally be improved if we’re familiar with the stimulus, or if we form appropriate expectations about the stimulus. However, an early study by Biederman et al. (1983) suggests that familiarity and expectation do not speed up our categorisation of scenes, indicating that scene categorisation is an automatic perceptual process. Fourth, perceptual processes display a phenomenon known as ‘perceptual aftereffects’ (which you can find out more about here). Post-perceptual processes do not display this effect, but a study by Greene & Oliva (2010) indicates that scene categorisation is susceptible to aftereffects.

Interpreting these data is not always straightforward, but it certainly looks like scene categories can be recognised perceptually, not just through post-perceptual judgements. But I’m not home free yet. It’s one thing to perceptually process a property but quite another to perceptually experience it. Since I claim that we perceptually experience scene properties, I have more work to do. This is where some philosophical considerations need to be introduced to supplement the empirical data. Liberals use something called ‘contrast cases’ to show that our visual experience is richer than conservatives think. Contrast cases are pairs of visual experiences that differ from each other in ways that conservatives are unable to account for. Such cases drive the following argument against conservatives:

  1. The two experiences are alike with respect to all conservative-permitted properties, i.e. they represent all the same colours, shapes, spatial relations and textures.
  2. The two experiences are nevertheless different, i.e. what it’s like to undergo the first visual experience is different from what it’s like to undergo the second.
  3. Therefore the two experiences must differ with respect to properties not permitted by conservatives.

Here is a classic example used by liberals:

[Image: a black and white picture with a hidden cow]

To begin, this image looks to most people like a meaningless jumble of black and white patches. But if you look closely you can recognise it as a picture of a cow (the face is on the left and is looking towards you). This revelation changes what your visual experience is like, but the conservative can’t explain this change because there is no difference in the colours, shapes (etc.) that you see. Surely what changes is that you start to see the image as a cow? Conservatives deny that we see this kind of property, but this contrast case suggests they are wrong. Perhaps a similar example can be found in which we come to visually experience a scene category. Consider the following image:

[Image: a black and white waterfall scene]

Again, you might start by seeing meaningless patches of black and white but then come to recognise that this scene is a waterfall. To make sense of this change, it seems we must say that we visually experience the property of being a waterfall. Here’s another kind of example often used by liberals:

[Image: the ambiguous duck-rabbit figure]

You might first recognise this image as a rabbit and then recognise it as a duck. Your visual experience represents the same conservative-permitted properties in both cases, so the change must involve some more contentious property, such as visually experiencing the image first as a rabbit and then as a duck. Again, we might be able to find a counterpart to this example involving scene categories. Consider the following image:

[Image: sand dunes that resemble ocean waves]

These sand dunes look a lot like waves, and you might be able to switch between visually experiencing this scene as a desert and visually experiencing it as a sea. If so, this would again be a case in which we see scene categories.

Although these brief arguments are far from conclusive, they offer a taste of the larger case I hope to make in favour of the visibility of scene categories. Ultimately, though, there’s only one way to decide where you stand on these issues, and that is to ask yourself what you can see!

 

REFERENCES

 

Bayne, T. (2009). Perception and the Reach of Phenomenal Content. Philosophical Quarterly, 59(236), 385–404.

Biederman, I., Teitelbaum, R. C., & Mezzanotte, R. (1983). Scene Perception: A Failure to Find a Benefit from Prior Expectancy or Familiarity. Journal of Experimental Psychology, 9(3), 411–429.

Brogaard, B. (2013). Do we perceive natural kind properties? Philosophical Studies, 162(1), 35–42.

Greene, M. R., & Oliva, A. (2010). High-Level Aftereffects to Global Scene Properties. Journal of Experimental Psychology, 36(6), 1430–1442.

Li, F. F., VanRullen, R., Koch, C., & Perona, P. (2002). Scene categorization in the near absence of attention. Proceedings of the National Academy of Sciences of the United States, 99(14), 9596–9601.

Prinz, J. (2012). The Conscious Brain: How Attention Engenders Experience. Oxford: OUP.

Siegel, S. (2012). The Contents of Visual Experience. Oxford: OUP.

Thorpe, S., Fize, D., & Marlot, C. (1996). Speed of Processing in the Human Visual System. Nature, 381, 520–523.