Enactivism, Computation, and Autonomy

by Joe Dewhurst, Teaching Assistant at The University of Edinburgh

Enactivism, at least in its more traditional versions, has historically rejected computational characterisations of cognition. This has led to the perception that enactivist approaches must be opposed to more mainstream computationalist approaches, which offer a computational characterisation of cognition. However, the conception of computation which enactivism rejects is in some senses quite old-fashioned, and it is not so clear that enactivism need necessarily be opposed to computation, understood in a more modern sense. Demonstrating that there could be compatibility, or at least not a necessary opposition, between enactivism and computationalism (in some sense) would open the door to a possible reconciliation or cooperation between the two approaches.

In a recently published paper (Villalobos & Dewhurst 2017), my collaborator Mario and I have focused on elucidating some of the reasons why enactivism has rejected computation, and have argued that these do not necessarily apply to more modern accounts of computation. In particular, we have demonstrated that a physically instantiated Turing machine, which we take to be a paradigmatic example of a computational system, can meet the autonomy requirements that enactivism uses to characterise cognitive systems. This demonstration goes some way towards establishing that enactivism need not be opposed to computational characterisations of cognition, although there may be other reasons for this opposition, distinct from the autonomy requirements.

The enactive concept of autonomy first appears in its modern guise in Varela, Thompson, & Rosch's 1991 book The Embodied Mind, although it has important historical precursors in Maturana's autopoietic theory (see his 1970, 1975, 1981; see also Maturana & Varela 1980) and cybernetic work on homeostasis (see e.g. Ashby 1956, 1960). There are three dimensions of autonomy that we consider in our analysis of computation. Self-determination requires that the behaviour of an autonomous system be determined by that system's own structure, and not by external instruction. Operational closure requires that the functional organisation of an autonomous system loop back on itself, such that the system possesses no (non-arbitrary) inputs or outputs. Finally, an autonomous system must be precarious, such that the continued existence of the system depends on its own functional organisation, rather than on external factors outside of its control. In this post I will focus on demonstrating that these criteria can be applied to a physical computing system, rather than addressing why or how enactivism argues for them in the first place.

All three criteria have traditionally been used to disqualify computational systems from being autonomous systems, and hence to deny that cognition (which for enactivists requires autonomy) can be computational (see e.g. Thompson 2007: chapter 3). Here it is important to recognise that the enactivists have a particular account of computation in mind, one that they have inherited from traditional computationalists. According to this 'semantic' account, a physical computer is defined as a system that performs systematic transformations over content-bearing (i.e. representational) states or symbols (see e.g. Sprevak 2010). With such an account in mind, it is easy to see why the autonomy criteria might rule out computational systems. We typically think of such a system as consuming symbolic inputs, which it transforms according to programmed instructions before producing further symbolic outputs. Already this system has failed to meet the self-determination and operational closure criteria. Furthermore, as artefactual computers are typically reliant on their creators for maintenance, etc., they also fail to meet the precariousness criterion. So, given this quite traditional understanding of computation, it is easy to see why enactivists have typically denied that computational systems can be autonomous.

Nonetheless, understood according to more recent, 'mechanistic' accounts of computation, there is no reason to think that the autonomy criteria must necessarily exclude computational systems. Whilst they differ in some details, all of these accounts deny that computation is inherently semantic, and instead define physical computation in terms of mechanistic structures. We will not rehearse these accounts in any detail here, but the basic idea is that physical computation can be understood in terms of mechanisms that perform systematic transformations over states that do not possess any intrinsic semantic content (see e.g. Miłkowski 2013; Fresco 2014; Piccinini 2015). With this rough framework in mind, we can return to the autonomy criteria.

Even under the mechanistic account, computation is usually understood in terms of mappings between inputs and outputs, where there is a clear sense of the beginning and end of the computational operation. A system organised in this way can be described as 'functionally open', meaning that its functional organisation is open to the world. A functionally closed system, on the other hand, is one whose functional organisation loops back through the world, such that the environmental impact of the system's 'outputs' contributes to the 'inputs' that it receives.

A simple example of this distinction can be found by considering two different ways that a thermostat could be used. In the first case the sensor, which detects ambient temperature, is placed in one house, and the effector, which controls a radiator, is placed in another (see figure 1). This system is functionally open, because there is only a one-way connection between the sensor and the effector, allowing us to straightforwardly identify inputs and outputs to the system.

A more conventional way of setting up a thermostat is with both the sensor and the effector in the same house (see figure 2). In this case the apparent 'output' (i.e. control of the radiator) loops back round to the apparent 'input' (i.e. ambient temperature), forming a functionally closed system. The ambient air temperature in the house is effectively part of the system, meaning that we could just as well treat the effector as providing input and the sensor as producing output – there is no non-arbitrary beginning or end to this system.
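
To make the contrast concrete, here is a minimal Python sketch of the two set-ups (the constants, names, and dynamics are my own illustrative inventions, not anything from the paper or from real thermostat design):

```python
# A toy model of the two thermostat arrangements. All values are illustrative.

SETPOINT = 20.0   # target temperature (degrees C)
HEAT_GAIN = 0.5   # warming per time step while the radiator is on
HEAT_LOSS = 0.2   # passive cooling per time step

def step(temp, heater_on):
    """One step of very simple room dynamics."""
    return temp + (HEAT_GAIN if heater_on else 0.0) - HEAT_LOSS

def open_thermostat(temp_a, temp_b, steps=50):
    """Figure 1: sensor in house A, effector in house B (functionally open)."""
    for _ in range(steps):
        heater_on = temp_a < SETPOINT     # 'input' read in house A...
        temp_b = step(temp_b, heater_on)  # ...'output' delivered to house B
        temp_a = step(temp_a, False)      # house A never feels that output
    return temp_a, temp_b

def closed_thermostat(temp, steps=50):
    """Figure 2: sensor and effector in the same house (functionally closed)."""
    for _ in range(steps):
        heater_on = temp < SETPOINT  # the write below loops back into this read
        temp = step(temp, heater_on)
    return temp

print(open_thermostat(15.0, 15.0))  # house A drifts cold; B is driven blindly
print(closed_thermostat(15.0))      # settles and hovers near the setpoint
```

In the open arrangement the sensed temperature evolves independently of anything the effector does, whereas in the closed arrangement the 'output' continually shapes the very quantity that serves as 'input', so any division into beginning and end is arbitrary.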

Whilst it is typical to treat a computing mechanism more like the first thermostat, with a clear input and output, we do not think that this perspective is essential to the mechanistic understanding of computation. There are two possible ways that we could arrange a computing mechanism. The functionally open mechanism (figure 3) reads from one tape and writes onto another, whilst the functionally closed mechanism (figure 4) reads and writes onto the same tape, creating a closed system analogous to the thermostat with its sensor and effector in the same house. As Wells (1998) suggests, a conventional Turing machine is actually arranged in the second way, providing an illustration of a functionally closed computing mechanism. Whether or not this is true of other computational systems is a distinct question, but it is clear that at least some physically implemented computers can exhibit operational closure.
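
To see how little machinery this requires, here is a sketch of a single-tape machine (a toy of my own devising, intended only to illustrate Wells's point, not drawn from his paper):

```python
# A toy single-tape Turing machine. Because the head reads and writes the
# *same* tape, every symbol it outputs can become a later input: the
# functionally closed arrangement of figure 4. Giving the machine a
# read-only input tape and a separate write-only output tape would yield
# the functionally open arrangement of figure 3 instead.

def run_turing_machine(tape, table, state="start", head=0, blank="_"):
    """Run a transition table until the machine enters the 'halt' state.

    `table` maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0 (stay).
    """
    tape = list(tape)
    while state != "halt":
        symbol = tape[head]                               # read the tape...
        state, tape[head], move = table[(state, symbol)]  # ...and write back
        head += move
        if head < 0:                   # grow the tape with blanks as needed
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)
    return "".join(tape)

# Binary increment: scan right to the end marker, then carry back leftward.
INCREMENT = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),  # hit the end marker; turn around
    ("carry", "1"): ("carry", "0", -1),  # 1 plus carry = 0, keep carrying
    ("carry", "0"): ("halt",  "1",  0),  # absorb the carry and stop
}

print(run_turing_machine("0111_", INCREMENT))  # -> "1000_"
```

Here the tape plays the same role as the air in the second house: it is the medium through which the machine's activity loops back on itself.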

The self-determination criterion requires that a system's operations are determined by its own structure, rather than by external instructions. This criterion applies straightforwardly to at least some computing mechanisms. Whilst many computers are programmable, their basic operations are nonetheless determined by their own physical structure, such that the 'instructions' provided by the programmer only make sense in the context of the system itself. To another system, with a distinct physical structure, those 'instructions' would be meaningless. Just as the enactive automaton 'Bittorio' brings meaning to a meaningless sea of 1s and 0s (see Varela 1988; Varela, Thompson, & Rosch 1991: 151–5), so the structure of a computing mechanism brings meaning to the world that it encounters.
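
For a rough feel of how this works, consider the following sketch in the spirit of Bittorio (a toy construction of my own, not Varela's actual automaton): the same stream of perturbations is fed to two rings of cells governed by different update rules, and each ring responds according to its own transition structure.

```python
# A ring of cells updated by an elementary cellular automaton rule,
# perturbed from outside by a stream of bits. The identical stream provokes
# different trajectories in rings with different rules, because each ring's
# response is fixed by its own structure rather than by the stream itself.

def make_rule(number):
    """Elementary CA rule: map (left, centre, right) to a new centre cell."""
    bits = [(number >> i) & 1 for i in range(8)]
    return lambda l, c, r: bits[(l << 2) | (c << 1) | r]

def perturb_and_run(rule, ring, stream):
    """Flip cell 0 for each incoming bit, then update the whole ring."""
    n = len(ring)
    for bit in stream:
        ring[0] ^= bit  # the external 'sea of 1s and 0s' arriving at the ring
        ring = [rule(ring[i - 1], ring[i], ring[(i + 1) % n])
                for i in range(n)]
    return ring

stream = [1, 0, 1, 1, 0, 1]
start = [0, 1, 0, 0, 1, 0, 0, 0]
# The same perturbations 'mean' different things to different structures:
print(perturb_and_run(make_rule(110), list(start), stream))
print(perturb_and_run(make_rule(90), list(start), stream))
```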

Finally, we can turn to the precariousness criterion. Whilst the computational systems that we construct are typically reliant upon us for continued maintenance and a supply of energy, and play no direct role in their own upkeep, this is a pragmatic feature of our design of those systems rather than anything essential to computation. We could easily imagine a computing mechanism designed so that it seeks out its own source of energy and is able to maintain its own physical structure. Such a system would be precarious in just the same sense that enactivism conceives of living systems as being precarious. So there is no in-principle reason why a computing system should not be able to meet the precariousness criterion.

In this post I have very briefly argued that the enactivist autonomy criteria can be applied to (some) physically implemented computing mechanisms. Of course, enactivists may have other reasons for thinking that cognitive systems cannot be computational. Nonetheless, we think this analysis could be interesting for a couple of reasons. Firstly, insofar as computational neuroscience and computational psychology have been successful research programs, enactivists might be interested in adopting some aspects of computational explanation for their own analyses of cognitive systems. Secondly, we think that the enactivist characterisation of autonomous systems might help to elucidate the senses in which a computational system might be cognitive. Now that we have established the basic possibility of autonomous computational systems, we hope to develop future work along both of these lines, and invite others to do so too.

I will leave you with this short and amusing video of the autonomous robotic creations of the British cyberneticist W. Grey Walter, which I hope might serve as a source of inspiration for future cooperation between enactivism and computationalism.


References

  • Ashby, R. (1956). An Introduction to Cybernetics. London: Chapman and Hall.
  • Ashby, R. (1960). Design for a Brain. London: Chapman and Hall.
  • Fresco, N. (2014). Physical Computation and Cognitive Science. Berlin, Heidelberg: Springer-Verlag.
  • Maturana, H. (1970). Biology of Cognition. Biological Computer Laboratory, BCL Report 9, University of Illinois, Urbana.
  • Maturana, H. (1975). The organization of the living: A theory of the living organization. International Journal of Man-Machine Studies, 7, 313–332.
  • Maturana, H. (1981). Autopoiesis. In M. Zeleny (Ed.), Autopoiesis: A Theory of Living Organization (pp. 21–33). New York; Oxford: North Holland.
  • Maturana, H. and Varela, F. (1980). Autopoiesis and Cognition: The Realization of the Living. Dordrecht, Holland: Kluwer Academic Publishers.
  • Miłkowski, M. (2013). Explaining the Computational Mind. Cambridge, MA: MIT Press.
  • Piccinini, G. (2015). Physical Computation. Oxford: Oxford University Press.
  • Sprevak, M. (2010). Computation, Individuation, and the Received View on Representation. Studies in History and Philosophy of Science, 41: 260–70.
  • Thompson, E. (2007). Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Harvard University Press.
  • Varela, F. (1988). Structural Coupling and the Origin of Meaning in a Simple Cellular Automaton. In E. E. Sercarz, F. Celada, N. A. Mitchison, & T. Tada (Eds.), The Semiotics of Cellular Communication in the Immune System (pp. 151–61). New York: Springer-Verlag.
  • Varela, F., Thompson, E., and Rosch, E. (1991). The Embodied Mind. Cambridge, MA: MIT Press.
  • Villalobos, M. & Dewhurst, J. (2017). Enactive autonomy in computational systems. Synthese, doi:10.1007/s11229-017-1386-z
  • Wells, A. J. (1998). Turing's Analysis of Computation and Theories of Cognitive Architecture. Cognitive Science, 22(3), 269–94.