In **Introduction to Metamathematics**, S.C. Kleene begins his chapter on Turing machines with the following informal characterization [1]:

Suppose that a person is to compute the value of a function for a given set of arguments by following preassigned effective instructions. In performing the computation he will use a *finite* number of distinct symbols or tokens of some sort. He can have only a *finite* number of occurrences of symbols under observation at one time. He can also remember others previously observed, but again only a *finite* number. The preassigned instructions must also be *finite*. Applying the instructions to the *finite* number of observed and remembered symbols or tokens, he can perform an act that changes the situation in a *finite* way, e.g., he adds or erases some occurrences of symbols, shifts his observation to others, registers in his memory those just observed. A succession of such acts must lead him from a symbolic expression representing the arguments to another symbolic expression representing the function value.

I highlighted the explicit uses of ‘finite’ in the quote to emphasize that the property of *computability* does not allow infinite objects as part of the computation. Kleene’s remarks in turn harken back to Alan Turing’s 1936 paper, “On Computable Numbers, with an Application to the Entscheidungsproblem” [2]. Therein, Turing remarks, with respect to symbols and states:

Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child’s arithmetic book. In elementary arithmetic the two-dimensional character of the paper is sometimes used. But such a use is always avoidable, and I think that it will be agreed that the two-dimensional character of paper is no essential of computation. I assume then that the computation is carried out on one-dimensional paper, i.e. on a tape divided into squares.

I shall also suppose that the number of symbols which may be printed is finite. If we were to allow an infinity of symbols, then there would be symbols differing to an arbitrarily small extent. …

The behaviour of the computer at any moment is determined by the symbols which he is observing, and his “state of mind” at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations.

We will also suppose that the number of states of mind which need be taken into account is finite. The reasons for this are of the same character as those which restrict the number of symbols. If we admitted an infinity of states of mind, some of them will be “arbitrarily close” and will be confused.

It must be made clear that a restriction to finite symbols, symbol occurrences, memory, and states is a restriction which places *no specific fixed bound*. That is, the requirement is simply that of *finiteness*, regardless of practical limitations of a given computing hardware or similar considerations. For example, as to the length of computation, Rogers states [3]:

We thus require only that a computation terminate after some *finite* number of steps; we do not insist on an *a priori* ability to estimate this number.

Much of this is common sense to us today: we readily understand that *computability* means a program which must be finitely large (and thus include only a finite number of occurrences of a finitely large symbol set), working with a memory space which must be only finitely large, and which must halt after some finite number of steps and produce the result of the computation.
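These finiteness requirements can be made concrete in a few lines of code. The sketch below (my own illustration, not drawn from any of the cited texts) is a minimal one-tape Turing machine simulator in which each requirement is visible: a finite alphabet and state set implicit in a finite transition table, a finite portion of tape in use at any moment, and a halt after finitely many steps.

```python
# Minimal Turing machine simulator (illustrative sketch).
# The finiteness requirements are explicit: a finite transition table
# (hence finite alphabet and states), a finitely inscribed tape, and a
# computation that must halt in finitely many steps to yield a result.

def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, write, move),
                 where move is -1 (left) or +1 (right); a missing key halts.
    Returns the final tape contents as a string, or None if the step
    budget is exhausted (the machine did not visibly halt).
    """
    tape = dict(enumerate(tape))   # sparse tape over the integers
    head = 0
    for _ in range(max_steps):
        key = (state, tape.get(head, blank))
        if key not in transitions:          # no applicable rule: halt
            cells = [tape[i] for i in sorted(tape)]
            return "".join(cells).strip(blank)
        state, write, move = transitions[key]
        tape[head] = write
        head += move
    return None                             # never halted within the bound

# Example: append one stroke to a unary numeral (the successor function).
succ = {
    ("q0", "1"): ("q0", "1", +1),    # scan right over the strokes
    ("q0", "_"): ("halt", "1", +1),  # write one more stroke, then halt
}
print(run_tm(succ, "111"))  # → 1111
```

Note that the simulator returns a value only when the machine halts; a machine that never reaches a halting configuration yields nothing, no matter how large the (finite) step budget.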

**Rosennean Complexity and Computability**

Robert Rosen utilizes these standard characteristics of *computability* in order to classify the formal (i.e., mathematical) models of systems as either computable or noncomputable based upon whether such mathematical models are simulable (i.e., evaluable by a Turing machine) or not. From **Life Itself** [4]:

“if *f* is simulable, then there is a Turing machine *T* such that, for any word *w* in the domain of *f*, suitably inscribed on an input tape to *T*, and for a suitably chosen initial state of *T*, the machine will halt after a finite number of steps, with *f*(*w*) in its output tape.”

One of the characteristics of Rosennean complex systems is that such systems possess noncomputable models. That is, given the previously noted constraints, such systems possess formal models which do not meet the requirements for computability. Upon reflection, it should not be too striking that this is the case, since even in the mathematical world not all functions are computable; we should therefore expect no less from the physical world around us.
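The mathematical point can be seen by a counting argument: programs are finite strings over a finite alphabet, so there are only countably many of them, while there are uncountably many functions on the naturals. Cantor's diagonal construction makes this concrete, and the sketch below (my own illustration; the three enumerated functions are merely a finite stand-in for an enumeration of computable functions) shows the mechanism.

```python
# Diagonalization sketch: given any enumeration f_0, f_1, f_2, ... of
# total functions on the naturals, the diagonal function
#     d(n) = f_n(n) + 1
# differs from every f_i at the argument i.  Since the computable
# functions can be enumerated (programs are finite strings), some
# function escapes every such list, i.e. is not computable.

fs = [
    lambda n: n,          # identity
    lambda n: n * n,      # squaring
    lambda n: 2 ** n,     # exponentiation
]

def d(n):
    # By construction, d disagrees with fs[n] at the argument n.
    return fs[n](n) + 1

for i in range(len(fs)):
    assert d(i) != fs[i](i)
print([d(i) for i in range(3)])  # → [1, 2, 5]
```

The same diagonal move applied to an enumeration of all Turing machines yields the classic noncomputable functions, such as the halting function.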

**Critiques of Noncomputability of Models**

Some critics have argued that Rosen is in error and that all formal models of physical systems are indeed computable. One such attempt was the paper “In Defense of Mechanism” by Wells [5], which I have already addressed in a previous post. Recently, another paper, “Computability of Closure to Efficient Causation” [6], made a similar claim.

In both cases, however, the criticism sidesteps the crucial property of finiteness, a property which has been integral to the standard definition of computability since Turing, and to which Rosen adheres. Instead, both papers implicitly adopt a definition of *computability* which does not require that the computation halt in a finite number of steps; they treat computations requiring an infinite number of steps (and which therefore never halt) as acceptable, despite the fact that an infinitely lengthy computation never produces a result.
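The operational consequence is easy to exhibit. An observer can only ever run a computation for some finite number of steps; if the computation never halts, the observer comes away with no value at any finite stage. A small sketch (the modeling of a computation as a Python generator is my own illustrative choice, not drawn from either paper):

```python
# Sketch: stepping a computation under a finite budget.  A computation
# is modeled as a generator that yields while working and returns its
# value when (and only when) it halts.  A non-halting computation never
# delivers a value, however large the finite budget.

def run_bounded(gen, budget):
    """Advance gen at most `budget` steps; return its value if it halts."""
    try:
        for _ in range(budget):
            next(gen)
    except StopIteration as done:
        return done.value        # the computation halted with a result
    return None                  # budget exhausted: no result obtained

def halting_sum(k):
    total = 0
    for i in range(k):
        total += i
        yield                     # one "step" of work
    return total

def diverging():
    while True:
        yield                     # works forever, never returns

print(run_bounded(halting_sum(10), budget=100))   # → 45
print(run_bounded(diverging(), budget=1_000_000)) # → None
```

Raising the budget rescues only computations that halt at some finite step; for `diverging()` every budget, however generous, returns empty-handed, which is precisely why the standard definition insists on halting.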

While the authors of these papers are of course free to define ‘computability’ however they choose, by adopting such non-standard definitions the substance of their papers is no longer relevant to Rosen’s work, since by the standard definition (which Rosen uses) computability requires that “the machine will halt after a finite number of steps, with *f*(*w*) in its output tape.” Further, such non-standard definitions require justification as to how they are either superior to, or a suitable substitute for, the standard definition of ‘computability’.

**References**

[1] Kleene, S.C. 1952. Introduction to Metamathematics. North-Holland.

[2] Turing, A. 1936. “On Computable Numbers, with an Application to the Entscheidungsproblem”. Proc. London Math. Soc., 2(42):230-265.

[3] Rogers, H. 1967. Theory of Recursive Functions and Effective Computability. McGraw-Hill.

[4] Rosen, R. 1991. Life Itself. Columbia Univ. Press.

[5] Wells, A.J. 2006. “In Defense of Mechanism”. Ecological Psychology, 18(1):39-65.

[6] Mossio, M., Longo, G., and Stewart, J. 2008. “Computability of Closure to Efficient Causation”. Draft (to appear).

Excellent post, Tim! Now I wish I could persuade you to publish some of your thoughts. (However, I understand the reasons why you’re reluctant to do that.)