The Symbolic Mind

Dr Julian Kiverstein is Assistant Professor in Neurophilosophy at the Institute for Logic, Language and Computation, University of Amsterdam.

In 1976 the computer scientists and founders of cognitive science Allen Newell and Herbert Simon proposed a hypothesis they called “the physical symbol systems hypothesis”. They suggested that a physical symbol system (a digital computer, for example) has the necessary and sufficient means for intelligent action. A physical symbol system is a machine that carries out operations like writing, copying, combining and deleting on strings of digital symbolic representations. By intelligent action they had in mind the high-level cognitive accomplishments of humans, such as language understanding, or the ability of a computer to make inferences and decisions on its own without supervision from its programmers. Newell and Simon hypothesised that these high-level cognitive processes were the products of computations of the type a digital computer could be programmed to perform.
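To make the proposal concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of device Newell and Simon had in mind: a system whose entire activity consists of writing, copying, combining and deleting symbol structures held in a store. The names and symbols below are invented for the example.

```python
# A toy "physical symbol system": all it can do is write, copy, combine and
# delete symbol structures held in a store. Names and symbols are invented
# purely for illustration.

store = {}

def write(name, symbols):
    """Create a new symbol structure."""
    store[name] = list(symbols)

def copy(src, dst):
    """Duplicate an existing structure under a new name."""
    store[dst] = list(store[src])

def combine(a, b, dst):
    """Build a new structure out of two existing ones."""
    store[dst] = store[a] + store[b]

def delete(name):
    """Remove a structure from the store."""
    store.pop(name, None)

write("rule", ["ALL", "HUMANS", "ARE", "MORTAL"])
write("fact", ["SOCRATES", "IS", "HUMAN"])
combine("rule", "fact", "premises")
print(store["premises"])
# ['ALL', 'HUMANS', 'ARE', 'MORTAL', 'SOCRATES', 'IS', 'HUMAN']
```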

Newell and Simon’s hypothesis combines two controversial propositions that are worth evaluating separately. The first proposition they assert is a necessity claim along the following lines:

“Any system capable of intelligent action must of necessity be a physical symbol system.”

This is to claim that there is no non-magical way of bringing about intelligent action other than by digital computation. Assuming that humans don’t live in a world in which intelligent action is caused by magic, it follows that the human mind must work in fundamentally the same way as a digital computer.

The second is a sufficiency claim:

“A physical symbol system (equipped with the right software) has all that is required for intelligent action. No additional ingredients are necessary.”

If this proposition is correct, it is just a matter of time before computer scientists succeed in building machines capable of intelligent action. Artificial intelligence is pretty much inevitable. All that stands in the way is the programming ingenuity of software designers. In the age of so-called “neuromorphic” computer chips and “deep learning” algorithms (more on which later), this particular obstacle looks increasingly negotiable.

But is the human mind really a digital computer? Many philosophers of mind influenced by cognitive science have thought so. They have taken the mind to have an abstract pattern of causal organisation that can be mapped one-to-one onto the states a computer goes through in performing a computation. Since Frege, we have known how to represent the formal structure of logical thinking. Computation is a causal process that helps us to understand how mental or psychological processes could be causally sensitive to the logical form of human thinking. It gives us for the first time a concrete theory of how a physical, mechanical system could engage in logical thinking and reasoning.
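The point can be made concrete with a toy example: a rule like modus ponens can be applied by inspecting only the shape of the symbols, never what they mean. The encoding below is an invented illustration, not a serious logic engine.

```python
# Modus ponens applied purely syntactically: from "P" and "P -> Q", derive "Q".
# The procedure inspects only the form of the strings, not their meaning.

def modus_ponens(facts):
    derived = set(facts)
    for f in facts:
        if "->" in f:
            antecedent, consequent = [s.strip() for s in f.split("->", 1)]
            if antecedent in facts:
                derived.add(consequent)
    return derived

print(modus_ponens({"it_rains", "it_rains -> street_is_wet"}))
# the derived set now also contains 'street_is_wet'
```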

The thesis that the human mind is a digital computer has however run into a triviality objection. Every physical system has states that can be mapped one-to-one onto the formally specified states of a digital computer. We can use cellular automata, for instance, to model the behaviour of galaxies. It certainly doesn’t follow that galaxies are performing the computations we use to model them. Moreover, to describe the mind as a computer seems vacuous or trivial once we notice that every physical system can be described as a computer. The thesis that the mind is a computer doesn’t seem to tell us anything distinctive about the nature of the human mind.
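To see how cheap such a mapping is, consider a one-dimensional cellular automaton of the general kind used to model physical processes. The sketch below uses Wolfram’s Rule 110, chosen arbitrarily for illustration: that a physical system can be redescribed or simulated in these terms does not by itself make the system a computer, which is exactly the triviality worry.

```python
# A one-dimensional cellular automaton (Rule 110). Each cell's next state is a
# fixed function of its neighbourhood -- the sort of discrete model that can be
# fitted onto many physical processes without those processes "computing".

RULE = 110

def step(cells):
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right
        new.append((RULE >> index) & 1)
    return new

cells = [0] * 20 + [1] + [0] * 20
for _ in range(5):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```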

This triviality objection (first formulated by John Searle in the 1980s) hasn’t gone away, but it is seen by many today as a merely technical problem, in principle solvable once we have the right theory of computation. To put it bluntly: galaxies don’t compute because they are not computers. Minds do compute because they are nothing but computational machines.

There are a number of ways to push back and resist the bold claim that the human mind is (in a metaphysical sense) a digital computer. One could hold, as Jerry Fodor has done since the 1980s, that the human mind is a computer only around its edges. Some aspects of the mind, for example low-level vision or fine-grained motor control, are computational processes through and through. Other aspects of the mind, for example belief update, are most certainly not.

Other philosophers have argued that the human mind is not a digital computer, and have sought a more generic concept of computation. To think of the mind as a digital computer is to abstract away from the details of the biological organisation of the brain that might just prove crucial when it comes to understanding how minds work. Digital computation only gives us a very coarse-grained pattern of causal organisation in which to root the mind. Perhaps, however, the mind has a more fine-grained pattern of causal organisation. This response amounts to tinkering with the concept of computation a little, whilst nevertheless retaining the basic metaphysical picture.

Should we agree that any system that can behave intelligently must have a causal organisation (at some level of abstraction) that can be mapped onto the physical state transitions of a computing machine?

Hubert Dreyfus, a longstanding critic of artificial intelligence, thought not. Dreyfus takes the philosophical ideas behind artificial intelligence to be deeply rooted in the history of philosophy. He lists the following as important stepping stones:

- Hobbes’s idea that reasoning is reckoning or calculation.

- Descartes’s conception of ideas as mental representations.

- Leibniz’s theory of a universal language, an artificial language of symbols standing for concepts or ideas and logical rules for their valid manipulation.

- Kant’s view of concepts as rules.

- Frege’s formalisation of such rules.

- Russell’s postulation of logical atoms as the basic building blocks of reality.

(From Hubert Dreyfus, “Why Heideggerian AI failed.”)

For Dreyfus the computer theory of the mind inherits a number of intractable problems that are the legacy of its philosophical precursors. Artificial intelligence is, and always has been, a degenerating research programme. The problems to which it will never find an adequate solution lie in the significance and relevance humans find in the world. Dreyfus, following in the footsteps of the early twentieth-century existential phenomenologists, takes human intelligence to reside in the skills that humans bring effortlessly and instinctively to bear in navigating everyday situations. For a computer to know its way about in the familiar everyday world humans inhabit, it would have to explicitly represent everything that humans take for granted in their dealings with this world. Human common sense (which Dreyfus calls “background understanding”) doesn’t take the form of a body of facts a computer can be programmed with. It consists of skills and expertise for anticipating and responding correctly to very particular situations. For Dreyfus what humans know through their acculturation, and through the normative disciplining of their bodily skills, can never be represented.

Even if we were somehow to find a way around this problem by availing ourselves of the impressive logical systems that linguists and formal semanticists now have at their disposal, a substantial problem would still remain. The would-be AI programme would have to determine which of the representations of facts in its extraordinarily large database of knowledge are relevant to the situation in which it is acting. How does a computer determine which facts are relevant? Everything the computer knows might be relevant to its current situation. How does the computer identify which of the possibly relevant facts are actually relevant? This problem, known as the “frame problem”, continues to haunt researchers in AI. At least it ought to, since, as Mike Wheeler recently noted, “it is not as if anybody ever actually solved the problem.”
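A crude way to see the difficulty: a naive system has no option but to test every stored fact against the current situation, and the relevance test itself has to be spelled out as yet another rule. The sketch below is an invented illustration; the placeholder relevance() heuristic is exactly the sort of thing the frame problem says cannot be specified in advance.

```python
# Naive relevance determination: scan the entire knowledge base on every step.
# The hard part -- writing a relevance() test that does not already presuppose
# an understanding of the situation -- is what the frame problem names.

knowledge_base = [
    "wet paint marks whatever touches it",
    "ladders can tip over if leant on",
    "the post arrives at noon",
    # ... imagine millions more stored facts
]

def relevance(fact, situation):
    # Placeholder heuristic (shared words). Any such fixed, situation-independent
    # test either lets in too much or misses what actually matters.
    return bool(set(fact.split()) & set(situation.split()))

situation = "the robot must carry a ladder past wet paint"
relevant_facts = [f for f in knowledge_base if relevance(f, situation)]
print(relevant_facts)
# Shared-word matching wrongly admits the fact about the post (it shares "the")
# and misses the fact about ladders (plural "ladders" never matches "ladder").
```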

Still, the tools and techniques of AI have advanced tremendously since Dreyfus first launched his critique. Today’s computer scientists and engineers are busy building machines that mimic the learning strategies and techniques of information storage found in the human brain. In 2011 IBM unveiled its “neuromorphic” computer chip that processes instructions and performs operations in parallel in a similar way to the mammalian brain. It is made up of components that emulate the dynamic spiking behaviour of neurons. The chip contains hundreds of such components, wired up so as to form hundreds of thousands of connections. Programming these connections creates networks that process and react to information in similar ways to neurons. The chip has been used by IBM to control an unmanned aerial vehicle, to recognise and also predict handwritten digits, and to play a video game. These are by no means new achievements for the field of AI, but what is significant is the efficiency with which the IBM chip achieves these tasks. Neuromorphic chips have also been built that can learn through experience. These chips adjust their own connections based on the firing patterns of their components. Recent successes have included a programme that can teach itself to play a video game. It starts off performing terribly, but after a few rounds it begins to get better. It can learn a skill, albeit in the well-circumscribed domain of the video game.
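For a sense of what such components emulate, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook model of spiking behaviour. The parameter values are arbitrary illustration choices and are not taken from IBM’s design.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential leaks back
# towards rest, integrates incoming current, and emits a spike (then resets)
# when it crosses a threshold. All parameter values are arbitrary.

def simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau   # leak plus driving input
        if v >= v_thresh:                        # threshold crossed: spike
            spike_times.append(t)
            v = v_reset
    return spike_times

current = [0.0] * 50 + [1.5] * 200               # step of input after 50 steps
print(simulate(current))                         # regular spiking under constant drive
```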

Elsewhere in the field of AI, “deep learning” algorithms are all the rage. These algorithms employ the same statistical learning techniques as have been used in neural network research for decades. One important difference is that the networks include many more layers of processing than previous neural networks (hence the “depth” descriptor), and they rely on vast clusters of networked computers to process the data they are fed. The result is software that can learn, from exposure to literally millions of images, to recognise high-level features such as cats despite never having been taught about cats. Deep learning algorithms have achieved notable successes in finding the high-level, abstract features that are important, and the patterns that matter, in the low-level data to which they are exposed. This would seem to be an important aspect of skill acquisition that Dreyfus rightly emphasises as being so important for human intelligence.
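The underlying idea can be sketched with nothing more than NumPy: stack several layers of simple units and adjust all the weights by gradient descent on the errors. The layer sizes, learning rate and XOR-style toy task below are invented for illustration, and with these arbitrary settings the network typically learns the toy mapping after a few thousand passes; real systems are vastly larger.

```python
import numpy as np

# A tiny "deep" network trained by gradient descent: several stacked layers of
# simple units whose weights are adjusted from the data. Sizes and task are
# invented for illustration only.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sizes = [2, 8, 8, 1]                       # extra hidden layers = "deeper"
W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: push the inputs through every layer in turn.
    activations = [X]
    for Wi, bi in zip(W, b):
        activations.append(sigmoid(activations[-1] @ Wi + bi))
    # Backward pass: propagate the error and nudge every weight a little.
    delta = (activations[-1] - y) * activations[-1] * (1 - activations[-1])
    for i in reversed(range(len(W))):
        grad_W = activations[i].T @ delta
        grad_b = delta.sum(axis=0, keepdims=True)
        if i > 0:
            delta = (delta @ W[i].T) * activations[i] * (1 - activations[i])
        W[i] -= 0.5 * grad_W
        b[i] -= 0.5 * grad_b

print(np.round(activations[-1], 2))        # should approximate XOR: 0, 1, 1, 0
```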

These developments in AI are based on the premise that the brain is a super-efficient computer. AI research can therefore make progress, and get closer to building machines that work more like the human mind, by discovering more about how the brain computes. These advances in AI would seem at first glance to provide little support for Newell and Simon’s physical symbol systems hypothesis. The fact that AI researchers needed to build digital computing machines that work more like brains shows that the human mind doesn’t work much like a digital computer after all.

These developments do however raise the ethically and politically troubling possibility that humans might after all be on the brink of engineering artificial intelligence. Wouldn’t such a result indirectly vindicate some version of the physical symbol systems hypothesis? Could we not argue as follows:

- The mind is the brain.

- The brain is a computational machine (albeit not a digital computer).

- Therefore the mind is a computational machine.

This conclusion would imply an important tweak and refinement to the original Newell and Simon hypothesis. It would require us to think very differently about the cognitive architecture of the mind. This matters a great deal for cognitive science. Mental processes should no longer be thought of as sequential and linear rule-like operations carried out on structured symbolic representations. However, the basic metaphysical idea behind the computer theory of mind would still seem to survive unscathed. We can continue to think of the human mind as having an abstract causal organisation that can be mapped onto the state transitions a computer goes through in doing formal symbol manipulation.

So is the human mind essentially a computational machine? In reflecting on this question we should keep in mind the triviality objection. Every physical system has an abstract causal organisation which can be mapped one-to-one onto the states of a computational system. Nothing metaphysically interesting follows about what minds essentially are from this observation. If Dreyfus is right, serious philosophical mistakes are what have led us to the point today where we can think of the human mind as being in essence a computing machine. In particular, we ought to be suspicious of the Cartesian concept of representation on which the computer theory of mind is predicated. It only makes sense to think of the brain as performing computations because it is possible to give a semantic or representational interpretation of brain processes. Notice however that such an interpretation of brain processes in representational terms doesn’t imply that brains really do traffic in mental representations. That we tend to think of the brain in these terms may be due to our not having entirely shaken off the shackles of a highly questionable Cartesian philosophy of mind.