In 1936, two important papers in the field of mathematics and logic were published. One was “An Unsolvable Problem of Elementary Number Theory” by Alonzo Church.  The other was “On computable numbers, with an application to the Entscheidungsproblem” by Alan Turing.  These papers were aimed at providing formal and rigorous definitions of the informal notion of “effective calculability”.
However, over time, some of the ideas in these papers have been misappropriated for areas outside of their original intent. Some of the resulting claims, which purport to be “based on” Church’s and/or Turing’s results, are simply unfounded. Below, I briefly review the intent of the original papers, and pursue some of the ramifications of their results. I then re-examine the notion of “effectiveness”. Effectiveness and Rosennean complexity are then discussed, along with some implications for science.
I consider the contents of this paper to be entirely consistent with the ideas of Robert Rosen, and I draw heavily on Rosen’s book, Essays on Life Itself, as the central source of ideas and inspiration. – T. Gwinn
In 1900, the mathematician David Hilbert gave a now-famous address to the International Congress of Mathematicians in Paris. Part of the aim of the address was to look toward the future of mathematics, and for Hilbert, the way to that future was by the solving of problems:
“The deep significance of certain problems for the advance of mathematical science in general and the important role which they play in the work of the individual investigator are not to be denied. As long as a branch of science offers an abundance of problems, so long is it alive; a lack of problems foreshadows extinction or the cessation of independent development. Just as every human undertaking pursues certain objects, so also mathematical research requires its problems. It is by the solution of problems that the investigator tests the temper of his steel; he finds new methods and new outlooks, and gains a wider and freer horizon.” 
The latter portion of the address laid out twenty-three unsolved problems in mathematics. The idea was that these problems exemplified both the vast scope of mathematics at the time and, perhaps more importantly, the fecundity still latent in mathematics.
But not all mathematical problems that are posed can be solved. Indeed, sometimes it is just as important and profound to prove that a given problem, or class of problems, is unsolvable. In 1928, Hilbert and Ackermann offered up yet another challenging problem, known as the Entscheidungsproblem, or “decision problem”.  This problem essentially asks: does there exist a procedure to follow for a finite number of steps in order to determine the validity of a given first-order logic statement? For Hilbert, this constituted “the central problem of mathematical logic”. 
The problem, while significant in itself, offered an even more consequential problem: what is meant by “procedure”? Intuitively, we take this to mean some kind of algorithm or program. However, in order to solve the Entscheidungsproblem in any kind of definitive fashion, it was necessary to create a formal notion of “procedure”, which could then be used in a proof to provide either a positive solution (i.e., show that an algorithm does exist) or a negative solution (i.e., show that no possible algorithm exists).
Here is where Alonzo Church and Alan Turing enter the picture. Church, together with Stephen Kleene, had developed a formal system called the lambda calculus. The lambda calculus provides a way of representing and manipulating mathematical functions, and numbers, in an abstract manner. The nature of the lambda calculus allowed Church, in his 1936 paper, to assert that “effective calculability” could be identified with “lambda-definability”. In other words, any function which was effectively calculable could be expressed in Church’s lambda calculus. This provided a rigorous formal representation of the intuitive notion of “effective calculability”, and in effect, a formal representation of “procedure”.
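To give a concrete flavor of lambda-definability, the standard encoding of numbers and arithmetic as pure functions (Church numerals) can be sketched in a modern language. The rendering below uses Python lambdas as a stand-in for Church’s notation; the helper to_int is a convenience for inspection and is not part of the lambda calculus itself.

```python
# Church numerals: the number n is represented as the function that
# applies a given function f exactly n times. Everything is built from
# nothing but function abstraction and application.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition and multiplication, likewise defined purely by application.
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MUL = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)

print(to_int(ADD(TWO)(THREE)))  # 5
print(to_int(MUL(TWO)(THREE)))  # 6
```

The point of the exercise is that "2 + 3" is here computed without any built-in notion of number at all: an effectively calculable function is expressed entirely as manipulations of abstract functions.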
In that same year, Turing published his paper, referring to the same problem, from a different angle. For Turing, the approach was essentially to take the concept of a rote or “mechanical” process literally, and to define it more precisely and formally. Turing begins his paper: “The “computable” numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject of this paper is ostensibly the computable numbers, it is almost equally easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates, and so forth.” He ends that first paragraph: “According to my definition, a number is computable if its decimal can be written down by a machine.”
“Machine” here refers, of course, to the now-famous Turing machine. It is the specificity of the term “machine” that is important to understand. Turing provides detailed formal criteria for what constitutes this idealized “machine”, but the gist of it can be conveyed as follows: it consists of a device with finitely-describable state configurations, an infinitely long “tape” divided into a sequence of “squares”, a read/write head, and the mechanical ability to move the tape back and forth. The operation of the machine proceeds in accordance with only six basic instruction types: 1) read a square, 2) write a square, 3) move tape left one square, 4) move tape right one square, 5) change state configuration, and 6) halt. As Turing writes, “It is my contention that these operations include all those which are used in the computation of a number.”
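The six instruction types above are enough to sketch a working machine. The following is a minimal illustration, not Turing’s own formalism: the simulator and its example transition table are hypothetical constructions, though the table’s behavior (writing the alternating sequence 0 1 0 1 …) is in the spirit of the first example machine in Turing’s paper.

```python
from collections import defaultdict

def run(table, state, steps):
    """Run a machine table mapping (state, symbol) -> (write, move, next_state)."""
    tape = defaultdict(lambda: None)     # the "infinite" tape of blank squares
    head = 0
    for _ in range(steps):
        if state == "halt":              # instruction type 6: halt
            break
        # instruction type 1: read the scanned square
        write, move, state = table[(state, tape[head])]
        tape[head] = write               # instruction type 2: write a square
        head += {"L": -1, "R": 1}[move]  # types 3 and 4: move along the tape
        # type 5 (change state configuration) happened in the table lookup
    return [tape[i] for i in sorted(tape)]

# Hypothetical example table: alternate between two state configurations,
# printing 0 in one and 1 in the other, always moving right.
table = {
    ("a", None): ("0", "R", "b"),
    ("b", None): ("1", "R", "a"),
}
print(run(table, "a", 6))  # ['0', '1', '0', '1', '0', '1']
```

Everything the machine does reduces to these rote table lookups; there is no step at which anything other than a fixed, purely syntactic rule is consulted.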
From this description, the Turing machine appears to be a remarkably primitive kind of machine. And indeed, Turing did not intend it as any kind of blueprint for physically building such a machine. In fact, as Turing describes in the paper, the “machine” may, in principle, be instantiated by a human being, who restricts himself to methodically following the instructional steps accurately, reading and writing by hand onto paper, in place of a read/write head and a tape.
As primitive as this machine might seem, particularly in light of today’s computer technology, Turing showed that his machine was capable of computing any of the computable numbers. (He also implied the same capability for computable functions, as noted in the opening remarks of his paper, quoted above.) In essence, his results equated “effective calculability” with “computability” by a Turing machine, and the definition of the Turing machine was the formal embodiment of “procedure”. [5a]
In an appendix to the paper, Turing goes on to show that the definition of “computability” by a Turing machine is equivalent to Church’s previous definition of “lambda-definability”. In other words, if a function is lambda-definable, then it is Turing-computable, and vice versa. (Additionally, these definitions also are equivalent to a definition of “general recursive” functions given by Kurt Gödel in 1934.)
Not surprisingly, then, both Church and Turing came to the same conclusion regarding the Entscheidungsproblem: it was unsolvable. However, Hilbert’s view of problems as leading to new horizons had great wisdom. By taking on the task of formally defining what it meant to be “calculable” by some “procedure”, Church and Turing had formally defined a boundary to what it is possible to calculate or compute algorithmically, but they did not necessarily define an ultimate boundary, as we shall see further below.
Limits of the Church and Turing results
Turing made it clear in his paper that the “computable numbers”, for which the Turing machine would work, constituted only a fraction of the real numbers. As he says, “Although the class of computable numbers is so great, and in many ways similar to the class of real numbers, it is nevertheless enumerable.” In other words, whereas the set of real numbers is so large as to be uncountably infinite, the set of computable numbers is countably (enumerably) infinite. So, from their initial conception, Turing machines have had acknowledged, notable limits to what they can compute.
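To make “calculable by finite means” concrete: a real number is computable in Turing’s sense when some single finite program can produce its n-th digit on demand. The sketch below (a hypothetical example, using Python’s exact integer square root) does this for the square root of 2. Since every such program is a finite string of symbols, the programs can be listed one after another, which is why the computable numbers are enumerable even though the reals as a whole are not.

```python
from math import isqrt

def sqrt2_digit(n):
    """Return the n-th decimal digit of sqrt(2), with n = 0 giving the '1'."""
    # isqrt(2 * 10**(2n)) equals floor(sqrt(2) * 10**n), computed exactly
    # with integer arithmetic; its last digit is the digit we want.
    return isqrt(2 * 10 ** (2 * n)) % 10

# sqrt(2) = 1.4142135..., one finite rule yielding unboundedly many digits.
print([sqrt2_digit(n) for n in range(8)])  # [1, 4, 1, 4, 2, 1, 3, 5]
```

A number like sqrt(2) is thus computable despite being irrational: what matters is the existence of the finite rule, not the finiteness of the decimal expansion. Almost all reals admit no such rule.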
Placing the majority of real numbers beyond the realm of Turing-computability may seem like a devastating blow to a commonsense notion of calculability, but the reality is that there is a priori no good reason to suppose that there should exist a finite recursive function for calculating any arbitrary real number. In other words, Turing’s forthright statement above can be restated thus: it is nongeneric (rare) for a real number to be Turing-computable.
This nongenericity (rarity) is not restricted to the computing of real numbers. We can ask about the ability of Turing machines to be used in other kinds of algorithmic work in mathematics and logic. For example, a Turing machine can be programmed to algorithmically generate theorems from a set of axioms and a set of production rules. In fact, setting up mathematical systems so that their theorems could be generated this way, as pure symbol manipulations, was a major goal of Hilbert’s formalist program in the early 1900s.  Hilbert’s program aimed to rid mathematics of anything outside of pure symbol manipulation by a fixed set of rules, that is, to formalize mathematical systems. It was felt by Hilbert and others that the source of some of the problems and paradoxes in mathematics arose because it contained elements beyond just syntactic symbols manipulated by rote production rules. Accomplishing this meant stripping systems in mathematics of any semantic or intuitive notions, and replacing them with purely syntactic symbols and rote procedures. (Another way to say this is that “rigor” was equated with syntactic symbols and rote procedures.) The end result would be mathematical systems consisting essentially of program (production rules) and data (axioms and symbols); such systems would be algorithmic, and therefore computable by a Turing machine.
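The formalist ideal just described, theorems generated as pure symbol manipulations from axioms and production rules, can itself be sketched directly. The toy system below borrows two rewrite rules from Hofstadter’s well-known MIU puzzle purely for illustration; the generator is nothing but program (the rules) applied to data (the axioms), with no semantics anywhere.

```python
def rule1(s):
    if s.endswith("I"):      # rewrite rule: xI -> xIU
        yield s + "U"

def rule2(s):
    yield s + s[1:]          # rewrite rule: Mx -> Mxx (strings start with M)

def generate_theorems(axioms, rules, limit):
    """Breadth-first, purely rote enumeration of a system's theorems."""
    seen = set(axioms)
    frontier = list(axioms)
    theorems = []
    while frontier and len(theorems) < limit:
        s = frontier.pop(0)
        theorems.append(s)
        for rule in rules:
            for t in rule(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return theorems

print(generate_theorems(["MI"], [rule1, rule2], 6))
# ['MI', 'MIU', 'MII', 'MIUIU', 'MIIU', 'MIIII']
```

Such a generator is exactly the kind of process a Turing machine can carry out; the question Hilbert’s program raised, and Gödel answered, is whether all of mathematics can be squeezed into this form.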
Hilbert’s program never got very far, though. In 1931, Kurt Gödel published his devastating Incompleteness Theorems.  To put it briefly, Gödel showed that the mathematical system of Number Theory – which is essentially at the heart of mathematics as a whole – could not be formalized, as Hilbert had hoped. Gödel proved that there would exist certain true theorems of Number Theory that could not be proven within Number Theory: they were undecidable. Even worse, he proved that it was fruitless to add additional axioms in hopes of inducing decidability into all the theorems of Number Theory.
The impact of this ran deeper than just Hilbert’s program. The results in Number Theory were simply the most prominent example. It turned out that most of mathematics was not as innately algorithmic as many had thought. [7a] It was not possible, as Frege had tried before, and then as Russell and Whitehead had attempted in Principia Mathematica, to build an axiomatic foundation for mathematics, such that all of mathematics rested on one pristine formal, syntactic underlayment, free of undesirable semantics. Gödel’s results implied that most useful mathematical systems were instead intimately and nonfractionably tied to nonformalizable semantic elements.
The results also impacted the scope of application of the Turing machine. As should now be clear, an inability to formalize a mathematical system also means that such a system is not a purely rote system, and therefore, such a system is not fully computable by a Turing machine.  As noted above, though, nonformalizability pervades most of mathematics. As a result, we can say that it is nongeneric for a mathematical system to be Turing-computable.
To summarize, we see that the notion of Turing-computability (and its equivalent, Church’s lambda-definability) yields the following results:
It is nongeneric for a real number to be Turing-computable.
It is nongeneric for a mathematical system to be Turing-computable.
Note that we have only touched upon two, relatively central, aspects of mathematics: real numbers and Number Theory; we have not ventured into any obscure branches of mathematics in order to find these results. A deeper survey would only uncover more of these situations. What is interesting is that these results confound the everyday notion that “mathematics” and “algorithm” are largely synonymous. (Indeed, just such a notion is likely to have provided some of the impetus behind the Hilbert movement.) Instead, we find that computability – in the precise Turing sense – is a rare quality even in the ostensibly benevolent world of mathematics.
If “effective calculability”, defined either as lambda-definability or Turing-computability, is inadequate to handle something as central as Number Theory, then what is occurring when we as humans do mathematics? The consternation this kind of question raises has led some to resort to positing mystical or extraphysical forces or capacities of some kind. However, the situation is not so dire as it might first appear.
We must remember that lambda-definability and Turing-computability were both attempts to provide formal representations of a particular informal concept of “effective”. “Effective”, in this role, referred to an intuitive notion of a realizable algorithm, rote process, or “mechanical”  manipulation. Therefore, it should be no surprise that Church’s and Turing’s work was specifically aimed at providing precise formulations of those intuitive notions. Another way to indicate this would be to rename “Turing-computability” to the less impressive “Turing-mechanicality”. With a name such as “Turing-mechanicality”, there is not nearly the same tendency to assume that it ought to be a property of all, or even many kinds of systems. In particular, as we saw, any mathematical system that was not formalizable could not be fully computable in a “mechanical” or rote manner. There is nothing mystical here: such systems simply do not – by definition – fit within the confines of the intended notion of “effective” which resided at the root of Church’s and Turing’s efforts.
The belief that there is some kind of apparent conflict between Turing-computability and the ability of humans to do nonformalizable mathematics, requiring the invocation of mystical or extraphysical capacities, ultimately rests in a tacit belief that the entire world is in some way restricted to Turing-computability. In other words, it is a belief that effectiveness, as rote algorithmicity, applies not merely to specific niches (within certain parts of mathematics, for example), but universally to all of physical reality. However, the nonformalizability of most of mathematics is itself prima facie evidence that such a belief is baseless.
The question then arises: is there perhaps some other, or larger, notion of “effectiveness” – one that would encompass nonformalizable systems? The problem is that it is unclear just how large or different such a notion might have to be. Indeed, if we start allowing nonformalizable systems to include more and more semantic elements, and to move further and further from the ideal of a rigid set of fixed axioms and fixed production rules, then it would appear that the prospect for constructing an adequate notion of “effectiveness” is not at all promising.
Systems that are nonformalizable and thus have nonfractionable semantic elements are examples of complex systems – systems that were the focus of study for much of the career of the theoretical biologist Robert Rosen. A definition of complex systems is stated by Rosen: “A system is called complex if it has a nonsimulable [Turing-incomputable] model.”  Such systems are difficult to study, since by their very nature they are outside the realm of complete Turing-computability, and all the niceties that such computability carries with it. However difficult they may be to investigate, they are no less real than a Turing-computable system, and so deserve at least an equal amount of attention.
Once we step outside the strictures of purely syntactic, rote algorithmicity, we find that we are not well-equipped to create a new definition of effectiveness. There is, in a sense, no way to codify a definition of effectiveness in terms of some finite set of criteria or qualities, since complex systems constitute the open-ended realm of systems outside the constrained realm of Turing-computability.
We can, at the very least, retain the criterion that, in order for a process to be considered effective in any sense, it must be realizable. In this way, we can with some confidence specify that physical reality is at least effective enough to realize all the nonformalizable mathematics that is actually in use. [10a] (With a little reflection, we can also readily note that almost everything we do, such as using natural language, is also nonformalizable.) In this way, we find nothing mysterious about the abilities of human beings to do mathematics, or most anything else we do, for that matter. It only requires that we acknowledge that what we can actually do is symptomatic of what actually constitutes effectiveness in the physical world. Stated in this way, it is rather obvious that it is unsound to argue the other way around: that one first formulates some definition of effectiveness and then argues backwards that the physical world ought to conform to it. Rosen states this point quite clearly:
“Indeed, we shall take the view that material object-systems, as things in the external world, and whose behaviors are governed by webs of causal entailments, constitute the true province of effective processes. That is, the notion of effectiveness has to get imported into language via modeling relations arising in material nature, through encodings/decodings from these. Accordingly, any attempt to characterize effectiveness independent of these, from within language (e.g., mathematics) alone, and most particularly, in terms of syntax alone, is perilous. But that is exactly the substance of propositions such as Church’s Thesis – namely, that causal entailments in the external world must conform to some notion of effectiveness drawn from inside the language itself. However, it is the other way around.” 
Effective Processes and Science
The aim of science can be stated, using the language of this paper, as being essentially the methodical investigation and understanding of effective processes of the physical world. This is hardly a contentious statement. However, based on the preceding discussion, effective processes of the physical world are not restricted to those that meet a notion of “effective” as being equated with rote algorithmicity as embodied in Turing-computability.
The result is that there are effective processes of physical reality that will not be fully describable by Turing-computable models alone. These systems exhibit Rosennean complexity, and as such, require additional noncomputable models in order to describe them. (Additionally, complex systems have many characteristics that have important implications for science – physics and biology, in particular; but that is beyond the scope of this paper: see Rosen’s books, Life Itself and Essays on Life Itself, for detailed examinations of complex systems.) This may go against an ingrained penchant for entirely computable models in science, but as we saw in the discussion of Hilbert’s program, the desire to equate “rigor” with mere syntactic symbols and rote procedures alone is imprudent, and failed even within the confines of mathematics. (Instead, I suggest, just as we saw with the notion of effectiveness, “rigor” in science must be defined not by preconceived strictures, but rather in terms of thoroughness and scrupulousness in relation to the actual realm of physical processes. – T. G.)
Stepping beyond a paradigm of computability into a realm of complex systems, we are presented with a multitude of difficulties and problems in comprehending, investigating, and describing such systems. However, recall the words of Hilbert quoted at the beginning of this paper: “It is by the solution of problems that the investigator tests the temper of his steel; he finds new methods and new outlooks, and gains a wider and freer horizon.”
References & Footnotes
AS: Rosen, R. 1985. Anticipatory Systems. Pergamon Press
EL: Rosen, R. 1998. Essays on Life Itself. Columbia University Press
FM: Rosen, R. 1978. Fundamentals of Measurement and Representation of Natural Systems. Elsevier Science
LI: Rosen, R. 1991. Life Itself. Columbia University Press
Davis, M. 1982. Computability and Unsolvability. Dover.
Davis, M. 2004. The Undecidable. Dover.
Eves, H. 1997. Foundations and Fundamental Concepts of Mathematics. Dover.
Church, A. “An Unsolvable Problem of Elementary Number Theory”, American Journal of Mathematics, 58 (1936), pp. 345–363.
Turing, A. “On computable numbers, with an application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, ser. 2, vol. 42 (1936–7), pp. 230–265.
Also located online at: http://www.abelard.org/turpap2/tp2-ie.asp
See: http://aleph0.clarku.edu/~djoyce/hilbert/problems.html
Hilbert, D. & Ackermann, W. 1928. Grundzüge der theoretischen Logik. Springer-Verlag, Berlin.
Davis, M. 1982. p. 134
[5a] LI p. 189: “What I have called algorithm is essentially a Turing machine.”
Eves, H. 1997. pp. 270–271
Gödel, K. “On Formally Undecidable Propositions of Principia Mathematica and Related Systems”, in Davis, M. 2004. (Originally: 1931. “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I”, Mh. Math. Phys. 38)
[7a] EL p. 93: “In a certain sense, then, formalizable mathematical systems (i.e., those without impredicativities) are indeed infinitely feeble in terms of entailment. As such, they are excessively nongeneric, infinitely atypical of mathematical (inferential) systems at large, let alone “informal” things like natural languages.”
EL p. 78
I intentionally put “mechanical” in quotes to indicate that the word is being used in the metaphorical sense of “deliberate fixed routine”, rather than as a reference to physicality.
EL p. 306
[10a] EL p. 93
EL p. 160