
A Theory of Computational Implementation

Michael Rescorla

Abstract: I articulate and defend a new theory of what it is for a physical system to implement an abstract computational model. According to my descriptivist theory, a physical system implements a computational model just in case the model accurately describes the system. Specifically, the system must reliably transit between computational states in accord with mechanical instructions encoded by the model. I contrast my theory with an influential approach to computational implementation espoused by Chalmers, Putnam, and others. I deploy my theory to illuminate the relation between computation and representation. I also address arguments, propounded by Putnam and Searle, that computational implementation is trivial.

§1. The physical realization relation

Physical computation occupies a pivotal role within contemporary science. Computer scientists design and build machines that compute, while cognitive psychologists postulate that the mental processes of various biological creatures are computational. To describe a physical system’s computational activity, scientists typically offer a computational model, such as a Turing machine or a finite state machine. Computational models are abstract entities. They are not located in space or time, and they do not participate in causal interactions. Under certain circumstances, a physical system realizes or implements an abstract computational model. Which circumstances? What is it for a physical system to realize a computational model? When does a

concrete physical entity --- such as a desktop computer, a robot, or a brain --- implement a given computation? These questions are foundational to any science that studies physical computation. Over the past few decades, philosophers have offered several theories of computational implementation. I seek to provide a new theory that improves upon prior efforts.

My approach differs from most predecessors in a crucial methodological respect. Existing theories typically pursue reductive analysis. They attempt to isolate non-circular necessary and sufficient conditions for a physical system to realize a computation. In contrast, I treat physical realization as a primitive concept. Most philosophically interesting concepts resist non-circular reduction: cause, person, knowledge, and so on. I see no reason to expect that physical realization will prove more cooperative. Luckily, one can illuminate a concept without reductively analyzing it. I seek to illuminate computational implementation without offering a reductive analysis.

In §§2-3, I present my theory. In §4, I contrast my theory with a popular approach espoused by Chalmers (1995, 1996, 2011), Putnam (1988), and many other researchers. I then apply my view to two vexed topics: the relation between computation and representation (§5); and Putnam-Searle triviality arguments (§6). I conclude by defending my failure to pursue a reductive analysis (§7).

§2. Descriptivism about computational implementation

The basic idea behind my position runs as follows. A computational model is an abstract description of a system that reliably conforms to a finite set of mechanical instructions. The instructions dictate how to transit between states. A physical system implements the computational model when the system reliably conforms to the instructions. Thus,

Physical system P realizes/implements computational model M

I use the phrase “computational model” to include both types of formalism. Whatever the formalism, one specifies a computation by specifying a program. The program encodes instructions, to which the computation must conform. For instance, Feferman describes a Turing machine as “an idealized computational device following a finite table of instructions (in essence, a program) in discrete effective steps without limitation on time or space that might be needed for a computation” (2006, p. 203). Similarly, Abelson et al. state that a register machine “sequentially executes instructions that manipulate the contents of a fixed set of storage elements called registers” (1996, p. 490). One can codify a program through a programming language, a machine table, a set-theoretic transition function, or various other means.

Within computer science, a computational model serves primarily as a blueprint. We use the model to build and manipulate physical systems. Within cognitive psychology, a computational model serves primarily as a scientific hypothesis. We postulate that the model accurately describes some biological creature’s mental activity. Despite this contrast between computer science and cognitive psychology, computational models play an essentially descriptive role within both disciplines. We seek to ensure that our computational model accurately describes some physical system, so that we can successfully manipulate or explain the system’s activity. Depending on our pragmatic or explanatory goals, we may enforce descriptive accuracy either by adjusting the model or else by adjusting the physical system. A computational model advances our pragmatic or explanatory goals because it describes certain physical systems accurately, or with approximate accuracy modulo certain idealizations. By building an artificial system that our computational model accurately describes, we ensure that the system does what we want it to do.
By discovering a computational model that accurately describes a biological creature, we explain aspects of the creature’s mental activity.

A physical system usually has many notable properties besides those encoded by our computational model: colors, shapes, sizes, and so on. A physical system may even have notable computational properties besides those encoded by our computational model. For example, when we describe a desktop computer through a high-level programming language, we leave open the particular digital circuitry implemented by the computer. Thus, an accurate computational model need not be complete. A similar situation prevails within science more generally. A scientific model of a physical system (e.g. a macrophysical model) usually leaves open many important properties (e.g. the system’s particular microphysical constitution). A computer scientist or cognitive scientist may intentionally employ an inaccurate computational model. For example, she may describe a desktop computer through an idealized model that postulates infinite discrete memory capacity, even though the computer only has finite memory. In similar fashion, physicists frequently employ idealized models that postulate frictionless surfaces or massless strings. An idealized or oversimplified model may be useful for certain purposes. Strictly speaking, though, a physical system implements a computational model only if the model accurately describes the system. Even when an inaccurate computational model of a physical system serves our present explanatory or pragmatic ends, the system does not literally realize the model.

§3. Descriptivism clarified

I now formulate descriptivism more carefully. I focus exclusively on deterministic models. Extending my treatment to stochastic models, while straightforward, would clutter the exposition. My approach is general enough to accommodate diverse formalisms: Turing machines, register machines, finite state machines, higher-level programming languages, and so on.

(4) Many computational models have a privileged initial state. In such a case, s0 ∈ S is the privileged initial state. If the model has no privileged initial state, then the parameter s0 is irrelevant to us, so we let it be some fixed entity not belonging to S.

State space description <S, I, δ, s0> accurately describes physical system P just in case:

(1) For each s ∈ S, s is a possible state of P.

(2) For each i ∈ I, i is a possible input to P.

(3) P reliably conforms to the transition function δ. More precisely: if P were to enter into state s and to receive input i, then P would transit to state δ(s, i) at the next stage of computation.

(4) Absent external interference or internal malfunction, P always begins computation in state s0 (assuming that s0 ∈ S).

If computational model M has determinate descriptive content, then M dictates how any physical realizer transits through a determinate space of possible computational states. Thus, M corresponds to a unique state space description <S, I, δ, s0>. I say that M induces <S, I, δ, s0>.
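Clauses (1)-(4) can be rendered as a short, runnable sketch. Two simplifying assumptions here are mine, not the paper's: the "physical system" is itself simulated as a step function, and the counterfactual in clause (3) is approximated by exhaustively testing every (state, input) pair.

```python
# A canonical state space description <S, I, delta, s0> and a check of
# accuracy clauses (1)-(4) against a simulated physical system.

from dataclasses import dataclass

@dataclass
class StateSpaceDescription:
    S: set       # possible computational states
    I: set       # possible inputs
    delta: dict  # transition function: (state, input) -> state
    s0: object   # privileged initial state (or a dummy entity not in S)

def accurately_describes(desc, sys_states, sys_inputs, sys_step, sys_start):
    """Check clauses (1)-(4) against a (simulated) physical system."""
    if not desc.S <= sys_states:        # (1) each s in S is a state of P
        return False
    if not desc.I <= sys_inputs:        # (2) each i in I is an input to P
        return False
    for s in desc.S:                    # (3) P reliably conforms to delta
        for i in desc.I:
            if sys_step(s, i) != desc.delta[(s, i)]:
                return False
    if desc.s0 in desc.S and sys_start != desc.s0:
        return False                    # (4) P begins computation in s0
    return True

# Toy model: a lamp toggled by a button press.
desc = StateSpaceDescription(
    S={"off", "on"},
    I={"press"},
    delta={("off", "press"): "on", ("on", "press"): "off"},
    s0="off",
)

def lamp_step(state, inp):  # the simulated physical system
    return "on" if state == "off" else "off"

print(accurately_describes(desc, {"off", "on"}, {"press"}, lamp_step, "off"))  # True
```

Substituting a genuinely physical system for `lamp_step` is exactly where the counterfactual in clause (3) does real work; no finite test battery can replace it.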

M accurately describes P just in case the induced state space description <S, I, δ, s0> accurately describes P. Under precisely those same circumstances, P realizes or implements M. The notion of “canonical state space description” is far more general than the notion of computation. Many canonical state space descriptions are not computational in any natural sense. This is no problem for my account. What matters is that we can convert an extremely wide range of computational models into canonical state space descriptions, thereby delineating implementation conditions for those models.

Clause (3) employs the counterfactual conditional. Whether a physical system implements a computation depends upon how the system would behave under various

circumstances (Chalmers, 1995), (Copeland, 1996). How should we interpret the relevant counterfactuals? What are their truth-conditions? There is a large philosophical literature on counterfactual conditionals in general (Lewis, 1973), in the specific context of scientific modeling (Woodward, 2000), and in the more specific context of computational modeling (Chalmers, 1995). Descriptivists can freely deploy the resources offered by this literature. For present purposes, I remain neutral regarding how exactly one should elucidate clause (3).

Descriptivism addresses the conditions a physical system must satisfy to implement a computational model. In any given case, those conditions may or may not be physically satisfiable. For instance, the Turing machine formalism postulates infinite discrete memory capacity, yet it may be that no possible physical machine has infinite discrete memory capacity. Descriptivism elucidates what it is to implement a computational model, without offering any guarantee that a given model is implementable.

§3.1 Converting computational models into state space descriptions

To apply my descriptivist theory to a computational model M, we must convert M into a canonical state space description <S, I, δ, s0>. This is easier in some cases than others, because computational models vary considerably in precision, detail, and explicitness. Deterministic finite state machines (FSMs) allow a relatively straightforward application of my theory. An FSM has finitely many “machine states” and finitely many inputs. Current machine state and current input determine the next machine state. A common textbook example is an elevator controller, such as the elevator FSM described by Mozgovoy (2010, p. 92). The FSM, which I will call “ELEV”, has four buttons labeled “O” (for open), “C” (for close), “D” (for down), and “U” (for up). These four inputs comprise the machine’s input set. The FSM has
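An elevator FSM of this kind can be sketched in a few lines. Since Mozgovoy's machine table itself is not reproduced here, the states and transitions below are my plausible reconstruction from the description above, not a quotation of his table.

```python
# A reconstruction of an ELEV-style elevator FSM. States pair a floor with a
# door position; the inputs are the four buttons O, C, U, D.

FLOORS = (1, 2)
DOORS = ("open", "closed")
STATES = {(f, d) for f in FLOORS for d in DOORS}
INPUTS = {"O", "C", "U", "D"}

def elev_delta(state, button):
    floor, door = state
    if button == "O":
        return (floor, "open")
    if button == "C":
        return (floor, "closed")
    if button == "U" and door == "closed":
        return (2, "closed")      # travel only with the door closed
    if button == "D" and door == "closed":
        return (1, "closed")
    return state                  # otherwise the press has no effect

s0 = (1, "open")                  # first floor, door open

# A short run: close the door, go up, open the door.
state = s0
for button in ["C", "U", "O"]:
    state = elev_delta(state, button)
print(state)  # (2, 'open')
```

Together, `STATES`, `INPUTS`, `elev_delta`, and `s0` form exactly a canonical state space description <S, I, δ, s0> in the sense of §3.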

(4) Absent external interference or internal malfunction, P begins on the first floor with the door open.

This implementation condition seems intuitively correct. Most computational models are not so easy to convert into canonical state space descriptions. For example, a complete model of a desktop computer would mention numerous internal components. The corresponding canonical state space description enumerates possible states of these components. In practice, researchers rarely provide anything as detailed or explicit as a canonical state space description. Instead, they informally indicate a state space description through a mixture of English, computer code, diagrams, and so on. What matters for us is that a canonical state space description is possible in principle whenever a computational model has a determinate implementation condition. Thus, philosophical theories of computational implementation can legitimately cite canonical state space descriptions.

In straightforward cases such as ELEV, the computational model induces a unique state space description <S, I, δ, s0>. In less straightforward cases, there may not be a unique induced <S, I, δ, s0>. To illustrate, consider the distinction between physical and virtual memory. Many computational formalisms incorporate some notion of “memory location” (e.g. the cells on a Turing machine tape, or the registers in a register machine). One might correlate each memory location with a physical location in the implementing system. Alternatively, as Chalmers (1996) notes, one might correlate each memory location with an addressable “virtual location” whose physical location changes. These two contrasting approaches yield contrasting state space descriptions. Which approach is correct? I believe that the answer is indeterminate for most computational formalisms, including the Turing machine and the register machine.
For example, nothing about the descriptive use of register machines within contemporary scientific practice

dictates whether we should interpret talk about “memory registers” in physical or virtual terms. Either interpretation may be appropriate, depending upon our pragmatic or explanatory ends.^2 This indeterminacy is harmless. If our current use of a computational model does not associate the model with a determinate canonical state space description, then we can stipulate away the indeterminacy as our pragmatic or explanatory needs dictate. For example, a computer designer can stipulate that she intends talk about “memory locations” to be interpreted in physical rather than virtual fashion (or vice versa).

When computational model M does not induce a unique state space description, we have several theoretical options regarding M’s implementation condition. We might say that P implements M just in case some canonical state space description induced by M accurately describes P. Or we might say that M’s implementation condition is indeterminate, to become determinate only once we stipulate a unique state space description induced by M. I suspect that the second option more faithfully captures actual scientific practice, although the first option may be more appropriate in certain cases. For present purposes, I leave the matter unresolved.

What is it for computational model M to induce state space description <S, I, δ, s0>? I will not attempt to answer this question. Instead, I treat the “inducement relation” as primitive. I exploit our pre-philosophical ability to convert computational models into corresponding state space descriptions. To the extent that one associates a computational model with a determinate implementation condition, this conversion is always possible in principle. One might worry that my methodology simply shifts the explanatory burden from the “realization” relation between model and physical system to the “inducement” relation between model and state space description.
How can a good theory presuppose a primitive inducement

(^2) High-level programming languages furnish additional examples in the same vein. As Chalmers (2012, p. 222) notes, there is usually slack between a program couched in a high-level programming language and a more explicit description in terms of states and state-transitions.

Suppose that the state space Q for an FSM contains states q0, q1, …, qr. Suppose that δ(qn, i) = qm, where i is some member of the input set. Following standard practice, we could interpret this FSM as encoding an instruction to transit from state qn and input i to state qm. Or we could employ a deviant interpretation on which the FSM encodes an instruction to transit from state qn and input i to state q(m+1) mod r. Or we could employ a deviant interpretation on which the FSM encodes an instruction to transit from state qn and input i to state δ(δ(qn, i), i). Nothing about the FSM itself, qua set-theoretic object, favors one interpretation over the others. Of course, the standard interpretation impresses us as most natural. But that impression does not reflect any intrinsic features of the set-theoretic object. Rather, it reflects how we use the set-theoretic object to describe physical systems.

Strictly speaking, then, a physical system realizes a computational model only relative to a descriptive practice that confers an implementation condition upon the model. Rather than say

Computational model M is implemented by physical system P

it would be more appropriate to say

Computational model M as used within some particular descriptive practice is implemented by physical system P,

thereby making explicit the relativity to descriptive practice. In practice, context usually makes salient one particular descriptive practice, so that we can safely suppress the relativity. In certain cases, such as computational models that are ambiguous between physical and virtual memory, current descriptive practice may constrain implementation conditions without determining a single unique implementation condition.

We can rephrase these points in terms of canonical state space descriptions. Assume that we hold fixed the implementation condition associated with each state space description <S, I, δ, s0>. (Relaxing that assumption would only further accentuate the relativity to descriptive practice.) Then a machine model qua set-theoretic object does not even begin to determine an induced state space description <S, I, δ, s0>. Only the set-theoretic object plus descriptive practice can determine <S, I, δ, s0>. In certain cases, descriptive practice constrains <S, I, δ, s0>

without determining < S , I , , s 0 >. My heavy appeal to descriptive practice will repulse some readers. How can a rigorous foundation for computational science cite anything as “squishy” as descriptive practice? Shouldn’t we study implementation tout court , without relativization to our descriptive activity? I reply that it is a fool’s errand to study implementation tout court , detached from descriptive practice. An abstract computational model, viewed in detachment from any descriptive use we make of it, cannot magically select certain physical systems as its realizations. For example, an FSM viewed in isolation from descriptive practice does not even begin to determine an implementation condition. That my account assigns a central role to descriptive practice is an advantage, not a disadvantage. Most previous accounts either downplay or altogether ignore this crucial aspect of computational implementation. By relativizing computational implementation to descriptive practice, I do not render the physical realization relation subjective. I do not relativize it to the observer’s whim. Computational implementation remains as objective as one could reasonably desire. By analogy, consider recipe implementation. A physical process implements a recipe just in case the recipe accurately describes the process. Words in a recipe describe a physical process only by virtue of a descriptive practice, which endows the words with meaning and thereby makes them instructions for manipulating ingredients. In that sense, recipe implementation is relative to a descriptive practice. Nevertheless, it is a perfectly objective matter whether one implements a

To implement a computation is just to have a set of components that interact causally according to a certain pattern. The nature of the components does not matter, and nor does the way that the causal links between components are implemented; all that matters is the pattern of causal organization of the system.

He suggests that we codify this intuitive idea along the following lines (1995, p. 392):

A physical system implements a given computation when there exists a grouping of physical states of the system into state-types and a one-to-one mapping from formal states of the computation to physical state-types, such that formal states related by an abstract state-transition relation are mapped onto physical state-types related by a corresponding causal state-transition function.

Roughly speaking, then, physical system P implements computational model M just in case:

(∃F)(F is an isomorphism from M’s formal structure to P’s causal structure).

A physical system realizes a computational model when the system instantiates a “causal structure isomorphism type” dictated by the model’s formal structure.^6 Proponents of structuralism, or similar doctrines, include Copeland (1996), Dresner (2010), Godfrey-Smith (2009), Klein (2008), Putnam (1988), and Scheutz (2001). Precise formulations vary considerably. For example, Putnam elucidates the “isomorphism” between formal structure and causal structure by employing the material conditional. He demands that the mapping F from formal states to physical states satisfy the following constraint:

(^6) There are affinities between structuralism about computational implementation and broadly “structuralist” views within philosophy of science more generally. The intuitive idea behind such views is that scientific theories represent only “structural” features of the world (Bueno and French, 2011), (van Fraassen, 2008). However, structuralists within general philosophy of science need not endorse structuralism about computational implementation. Their position concerns the representational import of scientific theories, not the implementation relation between computational models and physical systems. To derive a structuralist view of computational implementation, one requires an additional premise linking representational import and implementation conditions, a premise that structuralist philosophers of science might well reject.

If the model’s transition function carries formal state s1 to formal state s2, and if the physical system instantiates the physical state to which F maps s1, then the physical system transits to the physical state to which F maps s2.

Chalmers (1995, 1996) and Copeland (1996) instead demand that F satisfy counterfactual conditionals along the following lines:

If the model’s transition function carries formal state s1 to formal state s2, and if the physical system were to instantiate the physical state to which F maps s1, then the physical system would transit to the physical state to which F maps s2.

These counterfactual conditionals are much stronger than Putnam’s material conditionals.

Descriptivism is compatible with structuralism. Indeed, structuralists sometimes express descriptivist sentiments. For example, Putnam characterizes physical realization of Turing machines as follows: “A ‘machine table’ describes a machine if the machine has internal states corresponding to the columns of the table, and it ‘obeys’ the instruction in the table… Any machine that is described by a machine table of the sort just exemplified is a Turing machine” (1975, p. 365). Even more explicitly, Chalmers writes: “Implementation is the relation that holds between an abstract computational object (a computation for short) and a physical system, such that we can say that in some sense the system ‘realizes’ the computation, and that the computation ‘describes’ the system” (1995, p. 391). Alternatively, one can endorse descriptivism while rejecting structuralism. I favor this combination of views. I agree that a mirroring relation between M’s formal structure and P’s causal structure is necessary for P to implement M. But I will now argue that a mirroring relation does not generally suffice for P to implement M. A physical system can instantiate the pattern of causal organization dictated by a computational model without implementing the model.
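The structuralist criterion can be sketched as a search for a mapping F under which formal transitions are mirrored by causal ones. This is a deliberately stripped-down toy of mine, not Chalmers's full definition: inputs are suppressed, the "physical" system is simulated, the counterfactual is approximated by direct testing, and the search brute-forces over injective mappings.

```python
# Brute-force search for a structure-preserving mapping F from formal
# states to (simulated) physical state-types.

from itertools import permutations

def structural_isomorphism(formal_states, formal_delta, phys_states, phys_step):
    """Return a one-to-one F with phys_step(F(s)) == F(formal_delta(s)), if any."""
    formal = sorted(formal_states)
    for image in permutations(sorted(phys_states), len(formal)):
        F = dict(zip(formal, image))
        if all(phys_step(F[s]) == F[formal_delta[s]] for s in formal):
            return F
    return None

# Two-state formal model that alternates: a -> b -> a
formal_delta = {"a": "b", "b": "a"}

# A "physical" system whose two state-types also alternate
def phys_step(p):
    return "hot" if p == "cold" else "cold"

print(structural_isomorphism({"a", "b"}, formal_delta, {"hot", "cold"}, phys_step))
# {'a': 'cold', 'b': 'hot'}
```

The sketch also makes the descriptivist complaint vivid: such a mapping exists whenever the causal pattern matches, regardless of whether the mapped states are elevator floors, door positions, or anything else the model's instructions actually mention.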

ELEV does not just specify a pattern of causal organization. It describes a system’s possible states, and it specifies instructions governing how to transit between those states. Implementing the model requires an ability to instantiate states specified by the model. For instance, a physical system implements ELEV only if it can travel between the first and second floors of some building. Only then can it obey an instruction such as:

If the door is closed and the system is on the first floor, and if button U is pressed, then transit to having the door closed and to being located on the second floor.

The machine from the parable cannot travel between floors, so it does not implement ELEV. It implements a distinct FSM, which I will call “ELEV*”, whose machine table is given by Figure 1. But it does not implement ELEV, even though it instantiates the causal pattern dictated by ELEV. Thus, ELEV is a counterexample to structuralism.

ELEV is an extremely simplistic FSM. Contemporary science offers numerous more realistic counterexamples that illustrate the same point. For example, the far more sophisticated elevator FSM discussed by Vahid and Givargis (2002, p. 211) encodes instructions that begin as follows: “Move the elevator either up or down to reach the target floor. Once at the target floor, open the door for at least 10 seconds, and keep it open until the target floor change.” Similarly, Patterson and Hennessy (2005, p. C-69) introduce a traffic light FSM that includes states such as the traffic light is green in the north-south direction and the traffic light is green in the east-west direction. Any textbook on embedded systems design discusses additional FSMs with non-structuralist implementation conditions: alarm clocks, seatbelt detection systems, aviation

controllers, and so on. Robotics is also a rich source of counterexamples. The robot car Junior (Montemerlo, et al., 2008, pp. 114-115) implements an FSM whose states include:

LOCATE_VEHICLE: “the robot estimates its initial position... and starts road driving or parking lot navigation, whichever is appropriate”

CROSS_INTERSECTION: “the robot waits if it is safe to cross an intersection (e.g., during merging), or until the intersection is clear (if it is an all-way stop intersection)”

UTURN_STOP: “the robot is stopping in preparation for a U-turn”

and so on. Implementing Junior’s FSM requires an ability to drive, to wait at intersections, and so on. Murphy (2000, pp. 174-184) offers several FSMs that illustrate the same point: a robot that moves through an obstacle course; a robot that seeks, retrieves, and relocates trash; and so on. Each FSM encodes instructions, such as move towards trash and grab trash, that outstrip any relevant pattern of causal organization. Each FSM is a counterexample to structuralism.

A core idea underlying structuralism is that machine states are individuated functionally, i.e. by their roles in a pattern of causal organization. Structuralism makes this idea precise by invoking isomorphisms between formal structure and causal structure. A machine state is individuated by its place in a “causal structure isomorphism type” induced by the formal model. The foregoing FSMs undermine this approach. Each FSM includes at least one machine state whose nature outstrips its place in the “causal structure isomorphism type” induced by the formal model. States such as being located on the first floor and waiting at an intersection have non-functional natures that go beyond any relevant pattern of causal organization. My descriptivist theory accommodates each of these FSMs by postulating a state space description <S, I, δ, s0> whose state space S contains the desired non-functional states.