iCog Blog

The Symbolic Mind

Dr Julian Kiverstein — Assistant Professor in Neurophilosophy — Institute for Logic, Language and Computation, University of Amsterdam

In 1976 the computer scientists and founders of cognitive science Allen Newell and Herbert Simon proposed a hypothesis they called “the physical symbol systems hypothesis”. They suggested that a physical symbol system (such as a digital computer) has “the necessary and sufficient means for intelligent action.” A physical symbol system is a machine that carries out operations like writing, copying, combining and deleting on strings of digital symbolic representations. By intelligent action they had in mind the high-level cognitive accomplishments of humans, such as language understanding, or the ability of a computer to make inferences and decisions on its own without supervision from its programmers. Newell and Simon hypothesised that these high-level cognitive processes were the products of computations of the type a digital computer could be programmed to perform.
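A toy sketch may help make the hypothesis concrete. The following Python fragment is purely illustrative (it is my own gloss, not Newell and Simon’s notation): it implements the four primitive operations on strings of symbols that the post mentions.

```python
# Illustrative sketch of a "physical symbol system": a store of symbolic
# expressions plus primitive operations for writing, copying, combining
# and deleting them. (Hypothetical names; not Newell and Simon's formalism.)

def write(memory, name, expression):
    """Store a new symbolic expression under a name."""
    memory[name] = list(expression)

def copy(memory, source, target):
    """Duplicate an existing expression under a new name."""
    memory[target] = list(memory[source])

def combine(memory, first, second, target):
    """Concatenate two expressions into a new expression."""
    memory[target] = memory[first] + memory[second]

def delete(memory, name):
    """Remove an expression from the store."""
    del memory[name]

mem = {}
write(mem, "a", ["ALL", "MEN", "ARE", "MORTAL"])
write(mem, "b", ["SOCRATES", "IS", "A", "MAN"])
combine(mem, "a", "b", "premises")
delete(mem, "b")
```

The hypothesis is that enough of this kind of symbol shuffling, suitably organised by a program, is all that intelligent action requires.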

Newell and Simon’s hypothesis combines two controversial propositions that are worth evaluating separately. The first is a necessity claim along the following lines:

“Any system capable of intelligent action must of necessity be a physical symbol system.”

This is to claim that there is no non-magical way of bringing about intelligent action other than by digital computation. Assuming that humans don’t live in a world in which intelligent action is caused by magic, it follows that the human mind must work in fundamentally the same way as a digital computer.

The second is a sufficiency claim:

“A physical symbol system (equipped with the right software) has all that is required for intelligent action. No additional ingredients are necessary.”

If this proposition is correct, it is just a matter of time before computer scientists succeed in building machines capable of intelligent action. Artificial intelligence is pretty much inevitable. All that stands in the way is the programming ingenuity of software designers. In the age of so-called “neuromorphic” computer chips and “deep learning” algorithms (more on which later), this particular obstacle looks increasingly negotiable.

But is the human mind really a digital computer? Many philosophers of mind influenced by cognitive science have thought so. They have taken the mind to have an abstract pattern of causal organisation that can be mapped one-to-one onto the states a computer goes through in performing a computation. Since Frege, we have known how to represent the formal structure of logical thinking. Computation is a causal process that helps us to understand how mental or psychological processes could be causally sensitive to the logical form of human thinking. It gives us for the first time a concrete theory of how a physical, mechanical system could engage in logical thinking and reasoning.

The thesis that the human mind is a digital computer has, however, run into a triviality objection. Every physical system has states that can be mapped one-to-one onto the formally specified states of a digital computer. We can use cellular automata, for instance, to model the behaviour of galaxies. It certainly doesn’t follow that galaxies are performing the computations we use to model them. Moreover, to describe the mind as a computer seems vacuous or trivial once we notice that every physical system can be described as a computer. The thesis that the mind is a computer doesn’t seem to tell us anything distinctive about the nature of the human mind.

This triviality objection (first formulated by John Searle in the 1980s) hasn’t gone away, but it is seen by many today as a merely technical problem, in principle solvable once we have the right theory of computation. To put it bluntly: galaxies don’t compute because they are not computers. Minds do compute because they are nothing but computational machines.
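The galaxy example can be made concrete with a toy model. Below is a minimal one-dimensional cellular automaton (elementary rule 184, a standard toy model of traffic flow; the choice of rule and the traffic reading are my own illustration, not from the post):

```python
# Elementary cellular automaton, Wolfram rule numbering: each cell's next
# state is the bit of `rule` indexed by the (left, centre, right) neighbourhood.
# Rule 184 behaves like single-lane traffic: a car (1) moves right when the
# next cell is empty (0).

def step(cells, rule=184):
    """Apply one update of an elementary CA rule on a circular row of cells."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [1, 1, 0, 1, 0, 0, 0, 1]  # 1 = car, 0 = empty road
for _ in range(3):
    row = step(row)
```

The automaton genuinely computes; the traffic it models does not thereby compute anything, which is just the triviality worry in miniature.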

There are a number of ways to push back and resist the bold claim that the human mind is (in a metaphysical sense) a digital computer. One could hold, as Jerry Fodor has done since the 1980s, that the human mind is a computer only around its edges. Some aspects of the mind, for example low-level vision or fine-grained motor control, are computational processes through and through. Other aspects of the mind, for example belief update, are most certainly not.

Other philosophers have argued that the human mind is not a digital computer, and have sought a more generic concept of computation. To think of the mind as a digital computer is to abstract away from details of the biological organisation of the brain that might prove crucial when it comes to understanding how minds work. Digital computation gives us only a very coarse-grained pattern of causal organisation in which to root the mind. Perhaps, however, the mind has a more fine-grained pattern of causal organisation. This response amounts to tinkering with the concept of computation a little, whilst nevertheless retaining the basic metaphysical picture.

Should we agree that any system that can behave intelligently must have a causal organisation (at some level of abstraction) that can be mapped onto the physical state transitions of a computing machine?

Hubert Dreyfus, a longstanding critic of artificial intelligence, thought not. Dreyfus takes the philosophical ideas behind artificial intelligence to be deeply rooted in the history of philosophy. He lists the following as important stepping stones:

- Hobbes’s idea that reasoning is reckoning or calculation.

- Descartes’s conception of ideas as mental representations.

- Leibniz’s theory of a universal language: an artificial language of symbols standing for concepts or ideas, and logical rules for their valid manipulation.

- Kant’s view of concepts as rules.

- Frege’s formalisation of such rules.

- Russell’s postulation of logical atoms as the basic building blocks of reality.

(From Hubert Dreyfus, “Why Heideggerian AI failed.”)

For Dreyfus the computer theory of the mind inherits a number of intractable problems that are the legacy of its philosophical precursors. Artificial intelligence is, and always has been, a degenerating research programme. The problems to which it will never find an adequate solution lie in the significance and relevance humans find in the world. Dreyfus, following in the footsteps of the early twentieth-century existential phenomenologists, takes human intelligence to reside in the skills that humans bring effortlessly and instinctively to bear in navigating everyday situations. For a computer to know its way about in the familiar everyday world humans inhabit, it would have to explicitly represent everything that humans take for granted in their dealings with this world. Human common sense (which Dreyfus calls “background understanding”) doesn’t take the form of a body of facts a computer can be programmed with. It consists of skills and expertise for anticipating and responding correctly to very particular situations. For Dreyfus, what humans know through their acculturation, and through the normative disciplining of their bodily skills, can never be represented.

Even if we were somehow to find a way around this problem by availing ourselves of the impressive logical systems that linguists and formal semanticists now have at their disposal, a substantial problem would still remain. The would-be AI programme would have to determine which of the representations of facts in its extraordinarily large database of knowledge are relevant to the situation in which it is acting. How does a computer determine which facts are relevant? Everything the computer knows might be relevant to its current situation. How does the computer identify which of the possibly relevant facts are actually relevant? This problem, known as the “frame problem”, continues to haunt researchers in AI. At least it ought to, since, as Mike Wheeler recently noted, “it is not as if anybody ever actually solved the problem.”

Still, the tools and techniques of AI have advanced tremendously since Dreyfus first launched his critique. Today’s computer scientists and engineers are busy building machines that mimic the learning strategies and techniques of information storage found in the human brain. In 2011 IBM unveiled its “neuromorphic” computer chip, which processes instructions and performs operations in parallel in a similar way to the mammalian brain. It is made up of components that emulate the dynamic spiking behaviour of neurons. The chip contains hundreds of such components, wired up so as to form hundreds of thousands of connections. Programming these connections creates networks that process and react to information in similar ways to neurons. The chip has been used by IBM to control an unmanned aerial vehicle, to recognise and also predict handwritten digits, and to play a video game. These are by no means new achievements for the field of AI; what is significant is the efficiency with which the IBM chip achieves these tasks. Neuromorphic chips have also been built that can learn through experience. These chips adjust their own connections based on the firing patterns of their components. Recent successes have included a programme that can teach itself to play a video game. It starts off performing terribly, but after a few rounds it begins to get better. It can learn a skill, albeit in the well-circumscribed domain of the video game.
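The “dynamic spiking behaviour” mentioned above can be illustrated with the simplest textbook model of a spiking unit, a leaky integrate-and-fire neuron. This is a generic sketch with made-up parameter values; IBM’s actual chip components are far more elaborate and not public in this form.

```python
# Leaky integrate-and-fire neuron: membrane potential integrates incoming
# current, leaks a fraction of its charge each time step, and the unit
# emits a spike (1) and resets whenever the potential crosses threshold.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return a spike train (list of 0s and 1s) for a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

train = simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9])
```

Weak input must accumulate over several steps before the neuron fires, while strong input makes it fire quickly; learning rules on such chips then adjust connection strengths based on these firing patterns.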

Elsewhere in the field of AI, “deep learning” algorithms are all the rage. These algorithms employ the same statistical learning techniques as have been used in neural network research for decades. One important difference is that the networks include many more layers of processing than previous neural networks (hence the “deep” descriptor), and they rely on vast clusters of networked computers to process the data they are fed. The result is software that can learn, from exposure to literally millions of images, to recognise high-level features such as cats despite never having been taught about cats. Deep learning algorithms have achieved notable successes in finding the high-level, abstract features that are important, and the patterns that matter, in the low-level data to which they are exposed. This would seem to be an important aspect of skill acquisition that Dreyfus rightly emphasises as being central to human intelligence.
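The “many more layers” point can be shown in miniature. The sketch below stacks several fully connected layers, each re-describing the previous layer’s output in terms of new features; the layer sizes and random weights are arbitrary stand-ins, since real deep networks learn their weights from millions of training examples.

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer with a logistic non-linearity."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

def deep_forward(inputs, layers):
    """Pass an input through a stack of layers ("deep" = many such stages)."""
    activation = inputs
    for weights, biases in layers:
        activation = layer(activation, weights, biases)
    return activation

# A 4 -> 8 -> 8 -> 2 stack with random weights, just to show the shape
# of the computation, not a trained model.
sizes = [4, 8, 8, 2]
net = [
    ([[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)], [0.0] * n)
    for m, n in zip(sizes, sizes[1:])
]
output = deep_forward([0.5, -0.2, 0.1, 0.9], net)
```

Training consists of nudging those weights so that the final layer’s output tracks a high-level feature (“cat” / “not cat”); the depth is what lets intermediate layers discover abstract features on their own.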

These developments in AI are based on the premise that the brain is a super-efficient computer. AI research can therefore make progress, and get closer to building machines that work more like the human mind, by discovering more about how the brain computes. At first glance these advances would seem to provide little support for Newell and Simon’s physical symbol systems hypothesis. The fact that AI researchers needed to build digital computing machines that work more like brains suggests that the human mind doesn’t work much like a digital computer after all.

These developments do, however, raise the ethically and politically troubling possibility that humans might after all be on the brink of engineering artificial intelligence. Wouldn’t such a result indirectly vindicate some version of the physical symbol systems hypothesis? Could we not argue as follows:

- The mind is the brain.

- The brain is a computational machine (albeit not a digital computer).

- Therefore the mind is a computational machine.

This conclusion would imply an important tweak and refinement to the original Newell and Simon hypothesis. It would require us to think very differently about the cognitive architecture of the mind. This matters a great deal for cognitive science. Mental processes should no longer be thought of as sequential, linear, rule-like operations carried out on structured symbolic representations. However, the basic metaphysical idea behind the computer theory of mind would still seem to survive unscathed. We can continue to think of the human mind as having an abstract causal organisation that can be mapped onto the state transitions a computer goes through in doing formal symbol manipulation.

So is the human mind essentially a computational machine? In reflecting on this question we should keep in mind the triviality objection. Every physical system has an abstract causal organisation which can be mapped one-to-one onto the states of a computational system. Nothing metaphysically interesting follows from this observation about what minds essentially are. If Dreyfus is right, serious philosophical mistakes are what have led us to the point today where we can think of the human mind as being in essence a computing machine. In particular, we ought to be suspicious of the Cartesian concept of representation on which the computer theory of mind is predicated. It only makes sense to think of the brain as performing computations because it is possible to give a semantic or representational interpretation of brain processes. Notice, however, that such an interpretation of brain processes in representational terms doesn’t imply that brains really do traffic in mental representations. That we tend to think of the brain in these terms may be due to our not having entirely shaken off the shackles of a highly questionable Cartesian philosophy of mind.

Xphi, Intuitions & the ‘Big Mistake’

Dr James Andow — Lecturer in Moral Philosophy — University of Reading

(This post is based on a section of a longer paper that has since been published. You can access the full article here.)

This is a pretty simple post. I want to put the record straight about experimental philosophy.

We experimental philosophers are often painted as the loyal servants of the armchair-bound monarch—going out into the world to see what’s happening and reporting back with useful information to further the master’s projects. Some have accused us of plotting to usurp the monarch and toss the throne into the flames. We have truthfully denied this. We’re not attempting to overthrow anyone. But that doesn’t mean we’re completely happy with the situation.

I think of myself as more like an unruly baron—unhappy with the master’s plans, and putting into motion a campaign to diversify public investment.

(Okay, the metaphors got a bit out of hand there.)


I realised that the record needed setting straight when thinking about a recent debate. Here’s a commonly made claim about philosophical methods:

“Philosophers use intuitions as evidence.”

And here’s a commonly made claim about experimental philosophy:

“Experimental philosophers help by using empirical tools to examine people’s intuitions.”

Suppose you thought the first claim was false. Then you’d surely think experimental philosophy was in a bit of a bind, given the truth of the second claim. If philosophers don’t use intuitions, then surely experimental philosophy is premised on a big mistake (if it is all about examining intuitions). That’s the argument Herman Cappelen has recently given (in his 2012 and 2014). Cappelen thinks philosophers don’t use intuitions as evidence—I am not going to question that here—and that consequently experimental philosophy is all a big mistake.

Cappelen (2014) considers a response experimental philosophers might make:

“Okay, so let’s grant that philosophers don’t use intuitions. Here’s the thing: experimental philosophers were never talking about intuitions. Sure, they used the term ‘intuitions’, but let’s not get hung up on that. Experimental philosophers were talking about these other things, BLAHs, and philosophers do use BLAHs as evidence.”

Cappelen then has a response to this, but I don’t want to get into it.

This dialectic between Cappelen and his opponents just strikes me as odd. Both sides seem to accept that experimental philosophy is premised on the idea that philosophers Φ and that experimental philosophy can help them Φ better.

But I don’t see things that way. You perhaps wouldn’t guess it from my published work; I’ve often written as though I thought this was the case too. However, deep down I’m pretty clear: experimental philosophy is not premised on the idea that philosophers commonly pursue some project which experimental philosophy can further.

The premise of experimental philosophy is not that philosophers Φ and experimental philosophy can improve their Φ-ing, but rather that philosophers don’t ψ but should. Some caveats are appropriate here. Probably not all of them should (certainly not all the time), and ψ-ing mightn’t be the only thing experimental methods are good for, philosophically speaking. Nonetheless, philosophers should ψ. We don’t want to give the monarch new tools to pursue the same old projects. We want the monarch to pursue some new, different projects.


What are these projects which experimental philosophy wants to use empirical tools to further? What is it to ψ? It is to try to make sense of the way we think about philosophically interesting things like morality, free will, etc.—how we think, not simply what we think.

Of course, I don’t deny that we experimental philosophers generally understand survey responses to indicate what our participants think—participants’ ‘intuitions’, if you like that sort of language. However, the reason we are interested in this is largely not that philosophers use intuitions as evidence. The aim is to use careful manipulation to get a better understanding of how participants are thinking—their ways of understanding the world, their ways of coming to think what they think.

Don’t believe me? Read the website (link)!

“…experimental philosophers actually go out and run systematic experiments aimed at understanding how people ordinarily think about the issues at the foundations of philosophical discussions.”

Many philosophers will be asking, ‘What then? When does that contribute towards some philosophical project with which I am familiar?’ And that’s my point. Experimental philosophy isn’t valuable only insofar as it furthers the projects philosophers currently have. It’s trying to do something new… or at least something non-current.

Don’t believe me? Read the manifesto!

In the manifesto, Knobe and Nichols describe a familiar approach according to which what people think about something is considered philosophically relevant only insofar as it sheds light on the thing itself (their example is causation), and continue:

“With the advent of experimental philosophy, this familiar approach is being turned on its head. More and more, philosophers are coming to feel that questions about how people ordinarily think have great philosophical significance in their own right… we do not think that the significance of [intuitions about causation] is exhausted by the evidence they might provide for one or another metaphysical theory. On the contrary, we think that the patterns to be found in people’s intuitions point to important truths about how the mind works, and these truths—truths about people’s minds, not about metaphysics—have great significance for traditional philosophical questions.” (Knobe and Nichols 2008, 11–12)

Our dissatisfaction is not that philosophers use intuitions as evidence but fail to use the best tools. Our dissatisfaction is with a discipline which is largely no longer interested in making sense of the ways that ordinary people think about philosophically interesting things.

Still don’t believe me?! Again, read the manifesto!

“It used to be a commonplace that the discipline of philosophy was deeply concerned with questions about the human condition. Philosophers thought about human beings and how their minds worked… On this traditional conception, it wasn’t particularly important to keep philosophy clearly distinct from psychology…

The new movement of experimental philosophy seeks a return to this traditional vision. Like philosophers of centuries past, we are concerned with questions about how human beings actually happen to be… we think that many of the deepest questions of philosophy can only be properly addressed by immersing oneself in the messy, contingent, highly variable truths about how human beings really are.” (Knobe and Nichols 2008, 3)

And little has changed since the manifesto. Here are Buckwalter and Sytsma in their introduction to the forthcoming Blackwell Companion to Experimental Philosophy:

“Contemporary experimental philosophers return to these ways of doing philosophy. They conduct controlled experiments, and empirical studies more generally, to explore how we think about those phenomena… This work helps us to understand our reality, who we are as people, and the choices we make about important philosophical matters that shape our lives.” (Buckwalter and Sytsma, forthcoming)

Of course, experimental philosophers do use the word ‘intuitions’ a lot, and we do sometimes attempt to justify our methods in precisely the terms Cappelen accuses us of using (i.e., our work is relevant because philosophers use intuitions, and we investigate intuitions, so…). My diagnosis is that this is the unfortunate result of a misguided sales tactic in trying to peddle experimental philosophy to the mainstream—we’re just not hipster enough.

What does all this mean for the charge that experimental philosophy is based on a big mistake?

Well, if experimental philosophy were based on a mistake, the mistake wouldn’t be what Cappelen thinks it is. Experimental philosophy isn’t trying to help out with the projects philosophers currently have—or at least isn’t only doing that. So the mistake (supposing there was one) can’t be trying to further a project which philosophers don’t have.

What does all this mean for experimental philosophers?

As should hopefully be clear, I don’t think my conception of experimental philosophy is particularly novel among experimental philosophers. But the message didn’t get through to folks like Cappelen, for whatever reason. Not everyone will think that is a problem. I do. What’s the solution? Maybe we need to be a bit more hipster (and stop trying to peddle to the mainstream), or be more publicly unruly as barons, or… okay, I’ve lost myself in my metaphors. In any case, we should perhaps redouble our efforts to get that message across. (Watch me blog!)



Buckwalter and Sytsma (forthcoming). A Companion to Experimental Philosophy, Blackwell.

Cappelen (2012). Philosophy Without Intuitions, OUP.

Cappelen (2014). X-phi without intuitions?, in Booth and Rowbottom (eds), Intuitions, OUP.

Knobe and Nichols (2008). An Experimental Philosophy Manifesto, in Knobe and Nichols (eds), Experimental Philosophy (Vol. 1), OUP, pp. 3–14.


What Can You See? — Some Questions About the Content of Visual Experience

Dr Tom McClelland – The Architecture of Consciousness Project – University of Manchester

There are some properties you can see and some you cannot. When you look at the picture below, for instance, what do you see? I see colours such as the yellowness of the banana, I see shapes such as the banana’s curve, I see spatial relations such as the banana’s proximity to the man’s head, and I see textures such as the smoothness of the man’s necktie. There are other properties I don’t see. I don’t see the banana’s property of being a source of potassium or its property of costing 28p. And I don’t see the man’s property of being a member of the Labour Party or his property of being an elder brother. On the basis of what I see I might judge that the things I’m looking at have these properties, but that’s not the same as actually seeing those properties. After all, properties like ‘being a source of potassium’ just aren’t the kind of thing that one could see.


The examples I’ve mentioned shouldn’t be too contentious, but there are many kinds of property that do cause controversy. For instance, can you see what kind of object something is, such as seeing the smaller object as a banana and the larger object as a man? Can you see causal properties such as the banana being supported by the hand, or affordances such as the banana being edible? Can you see aesthetic properties such as the banana’s beauty, or moral properties such as the man’s virtue? Can you see the identity of objects, like seeing the man as David Miliband?

There is a great deal of debate in philosophy about these contentious cases, and the disputants fall into two camps. The first camp are conservatives, and they say that our visual experiences are limited to the basic kinds of property I first listed: colours, shapes, spatial relations and textures (e.g. Prinz 2012; Brogaard 2013). These conservatives shouldn’t be confused with political Conservatives, but like political Conservatives they are big on austerity – they take an austere view of visual experience that excludes all the contentious properties. The second camp are liberals, and this camp adopts a much more inclusive view of perception (e.g. Siegel 2012; Bayne 2009). They hold that at least some of the contentious properties can be visually experienced. Again, this kind of liberal shouldn’t be confused with political Liberals, but like political Liberals they are endlessly arguing among themselves about just how liberal they should be — the property of being a man is surely permitted as a visible property, but might permitting the property of being virtuous be a step too far?

Now, which camp are you in? The questions I’ve been asking are about what it’s like for you to have the visual experience you have when you look at the photo above. Conservatives would offer an austere description of your experience involving only the limited range of properties that they countenance. If you think that such a description fully captures what your visual experience is like, then you’re a conservative (don’t worry — that doesn’t come with any political commitments). If, on the other hand, you think there’s more to your visual experience than is captured by the austere description, then you’re some kind of liberal, and will have to reflect carefully on just how wide the range of properties you can see is.

I’m a liberal, but I’m thinking carefully about just how liberal we should be. Specifically, I’m interested in whether we can see a special category of property called ‘scene categories’. When we open our eyes we don’t just see objects – we also see the wider environments in which those objects are embedded. The philosophy of perception tends to focus on our perception of objects — there is endless discussion of whether we can see an object as a pine tree, for instance, but no real discussion of whether we can see a scene as a forest (e.g. Siegel 2012). I think this is an oversight, and that we should ask ourselves whether we can perceive scene categories such as being a forest, being a beach, being a field, being a street, or being a car park.


Consider the image above. Besides seeing the various shapes, colours, spatial relations and textures in this image, do you also see the scene as a forest? Is the scene’s property of being a forest part of your visual experience? Conservatives would say that it is not, and would deny that any such scene category can be perceived. They would accept, of course, that we recognise the scene as a forest — they would just deny that this recognition is perceptual. On their view, we see certain patterns of colour and shape and then judge that the scene is a forest. However, I think that a combination of empirical and philosophical considerations casts doubt on this conservative view. There are good reasons to adopt a liberal view that acknowledges we can see scenes as forests or as beaches in much the same way as we can see objects as green or as tall. Conservatives will need some convincing that we visually experience scene categories, and you might need some convincing too. My case for this has two steps: the first step concerns the ‘visual’ bit of ‘visual experience’ and the second step concerns the ‘experience’ bit.

If conservatives deny that we perceive scene categories, they have to say that we recognise scene categories through some kind of post-perceptual cognitive process, such as making a judgement on the basis of what we see. The empirical data counts against such a view in at least four ways. First, judgement is relatively slow, but our recognition of scene categories is incredibly fast. Thorpe et al. (1996), for instance, found that when subjects were shown images in a scene categorisation task, their brains showed Event Related Potentials (ERPs) as early as 150 milliseconds after being shown the image. Second, it is generally thought that only attended areas of the visual field are available to judgement, but our recognition of scene categories often seems to be inattentive (see Li et al., 2002). Third, the speed at which we make discriminative judgements about a stimulus can generally be improved if we’re familiar with the stimulus, or if we form appropriate expectations about the stimulus. However, an early study by Biederman et al. (1983) suggests that familiarity and expectation do not speed up our categorisation of scenes, indicating that scene categorisation is an automatic perceptual process. Fourth, perceptual processes display a phenomenon known as ‘perceptual aftereffects’ (which you can find more about here). Post-perceptual processes do not display this effect, but a study by Greene & Oliva (2010) indicates that scene categorisation is susceptible to aftereffects.

Interpreting this data is not always straightforward, but it certainly looks like scene categories can be recognised perceptually, not just through post-perceptual judgements. But I’m not home free yet. It’s one thing to perceptually process a property but quite another to perceptually experience it. Since I claim that we perceptually experience scene properties, I have more work to do. This is where some philosophical considerations need to be introduced to supplement the empirical data. Liberals use something called ‘contrast cases’ to show that our visual experience is richer than conservatives think. Contrast cases are pairs of visual experiences that differ from each other in ways that conservatives are unable to account for. Such cases drive the following argument against conservatives:

  1. The two experiences are alike with respect to all conservative-permitted properties, i.e. they represent all the same colours, shapes, spatial relations and textures.
  2. The two experiences are nevertheless different, i.e. what it’s like to undergo the first visual experience is different to what it’s like to undergo the second.
  3. Therefore the two experiences must differ with respect to properties not permitted by conservatives.

Here is a classic example used by liberals:

[Image: black and white patches concealing a cow]

To begin, this image looks to most people like a meaningless jumble of black and white patches. But if you look closely you can recognise it as a picture of a cow (the face is on the left and is looking towards you). This revelation changes what your visual experience is like, but the conservative can’t explain this change, because there is no difference in the colours, shapes (etc.) that you see. Surely what changes is that you start to see the image as a cow? Conservatives deny that we see this kind of property, but this contrast case suggests they are wrong. Perhaps a similar example can be found in which we come to visually experience a scene category. Consider the following image:


Again, you might start by seeing meaningless patches of black and white but then come to recognise that this scene is a waterfall. To make sense of this change, it seems we must say that we visually experience the property of being a waterfall. Here’s another kind of example often used by liberals:

[Image: the duck–rabbit figure]

You might first recognise this image as a rabbit, then recognise it as a duck. Your visual experience represents the same conservative-permitted properties in both cases, so the change must involve some more contentious property, such as visually experiencing the image first as a rabbit then as a duck. Again, we might be able to find a counterpart to this example involving scene categories. Consider the following image:


These sand dunes look a lot like waves, and you might be able to switch between visually experiencing this scene as a desert and visually experiencing it as a sea. If so, this would again be a case in which we see scene categories.

Although these brief arguments are far from conclusive, they offer a taste of the larger case I hope to make in favour of the visibility of scene categories. Ultimately though, there’s only one way to decide where you stand on these issues, and that is to ask yourself what you can see!




Bayne, T. (2009). Perception and the Reach of Phenomenal Content. Philosophical Quarterly, 59(236), 385–404.

Biederman, I., Teitelbaum, R. C., & Mezzanotte, R. (1983). Scene Perception: A Failure to Find a Benefit From Prior Expectancy or Familiarity. Journal of Experimental Psychology, 9(3), 411–429.

Brogaard, B. (2013). Do we perceive natural kind properties? Philosophical Studies, 162(1), 35–42.

Greene, M. R., & Oliva, A. (2010). High-Level Aftereffects to Global Scene Properties. Journal of Experimental Psychology, 36(6), 1430–1442.

Li, F. F., VanRullen, R., Koch, C., & Perona, P. (2002). Scene categorization in the near absence of attention. Proceedings of the National Academy of Sciences of the United States, 99(14), 9596–9601.

Prinz, J. (2012). The Conscious Brain: How Attention Engenders Experience. Oxford: OUP.

Siegel, S. (2012). The Content of Visual Experience. Oxford: OUP.

Thorpe, S., Fize, D., & Marlot, C. (1996). Speed of Processing in the Human Visual System. Nature, 381, 520–523.