We have seen in great detail that more and more nested recursion yields more and more functions, ad infinitum. And lest one think that there is a concept of "mega-recursion" that captures effectiveness altogether, we have seen how such functions could be effectively indexed (add a new clause for mega-recursion) and then effectively diagonalized to yield an intuitively computable function that is not mega-recursive. The only solution, we saw, is not to leave "targets" everywhere in the Cantorian table of function values for the impending diagonalization to strike. (Think of the children's game Battleship: it's easy to win if every ship of your opponent has to cross the diagonal of the grid!)
The moral is that capturing the total computable functions requires that we think about partial functions that are undefined on some arguments. In computational terms, these undefined values correspond to the program or description of the function going into an "infinite loop" or running off the end of an unbounded search. Infinite loops and partial functions are the price we pay for a computer language capable of computing all intuitively computable total functions. To catch all the flounder, we have to put up with catching some old tires. The "net" of an effective definitional language is too coarse to do the sorting for us.
The requisite new operator is minimalization:

(μx)[f(x, y) ≃ 0],

which returns the least x such that f(x, y) ≃ 0 and f(z, y) is defined and nonzero for all z < x. Note that whenever an "infinite loop" is hit among the successive computations f(0, y), f(1, y), f(2, y), ..., the whole minimalization crashes. Thus, we think of minimalization as a stupid, "serial" search that goes off the deep end if any successive computation fails to halt.
As before, let

(μx)R(x, y) ≃ (μx)[s̄g(Char_R(x, y)) ≃ 0].
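To see the "stupid, serial" search as a program, here is a minimal Python sketch (mu_fn and mu_rel are illustrative names, not official notation): mu_fn scans x = 0, 1, 2, ... for the least zero and inherits any divergence it meets along the way, while mu_rel reduces search over a relation to search for a zero of its characteristic function, as in the display above.

def mu_fn(f, *y):
    """(mu x)[f(x, y) ~ 0]: scan x = 0, 1, 2, ... for the least zero of f.
    If some earlier f(z, y) diverges, the whole search diverges with it."""
    x = 0
    while f(x, *y) != 0:
        x += 1
    return x

def mu_rel(R, *y):
    """(mu x)R(x, y): search over a relation, via the complement-of-sign
    trick applied to R's characteristic function."""
    return mu_fn(lambda x, *rest: 0 if R(x, *rest) else 1, *y)

# mu_rel(lambda x, y: x * x >= y, 10) == 4, the least x whose square reaches 10;
# mu_fn(lambda x: x + 1) never returns, since x + 1 is never 0.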
Suppose we want to recover from z the components of a pair satisfying R. The naive idea is to search for one component at a time:

a(z) ≃ (μx)[(μy)R(x, y, z)];
b(z) ≃ (μy)[(μx)R(x, y, z)].

This is dumb, for suppose that R(1, 1, z) holds but R(x, 0, z) fails for all x. Then b(z) is undefined, for the inner search is handed y = 0 by the outer search and goes off the deep end waiting to find a matching x. Zen philosophy in medieval Japan was familiar with this problem. There is a koan about a thousand-armed god who has a tool in each hand. If the god fixates on one task at a time, only one arm is engaged and the god can accomplish no more than an ordinary mortal. This kind of fixation is called a "suki" in Japanese kendo (fencing) and is to be avoided at all costs, lest one lose one's head to a second opponent. The Zen solution is to see one's situation as an organic whole, dealing with all problems at once and never allowing the unruly mind to analyze it into conceptual parts. We will use our n-ary encodings to achieve the same result. Instead of searching for the first component and then for the second, we search for the code number of a pair whose components satisfy the relation. Then we return the desired component of the number we find:
a(z) ≃ d((μw)R(d(w, 0), d(w, 1), z), 0);
b(z) ≃ d((μw)R(d(w, 0), d(w, 1), z), 1).

Searching in parallel by minimalization over a coded tuple is called dovetailing, because infinite searches are interleaved to avoid "suki". This is the basic programming technique in recursive function theory and is involved in almost every interesting construction. We will employ it shortly.
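Here is a Python sketch of the contrast, with Cantor pairing as a concrete stand-in for the official coding (pair, d, b_naive, and a_dovetailed are illustrative names, and b_naive is a loose rendering of the nested search):

from math import isqrt

def pair(a, b):
    """Cantor pairing N x N -> N, standing in for the text's coding."""
    return (a + b) * (a + b + 1) // 2 + b

def d(w, j):
    """d(w, j): the j-th component (j = 0 or 1) of the pair coded by w."""
    r = (isqrt(8 * w + 1) - 1) // 2
    b = w - r * (r + 1) // 2
    return (r - b, b)[j]

def b_naive(R, z):
    """b(z) by nested search: the outer search hands y = 0 to the inner
    search, which spins forever when no x satisfies R(x, 0, z) -- "suki"."""
    y = 0
    while True:
        x = 0
        while True:
            if R(x, y, z):
                return y
            x += 1
        y += 1  # never reached while the inner loop spins

def a_dovetailed(R, z):
    """a(z) = d((mu w)R(d(w, 0), d(w, 1), z), 0): one serial search over
    pair codes w eventually visits every combination (x, y)."""
    w = 0
    while True:
        if R(d(w, 0), d(w, 1), z):
            return d(w, 0)
        w += 1

# With R(x, y, z) := (x == 1 and y == 1), b_naive(R, 0) hangs forever, while
# a_dovetailed(R, 0) returns 1 as soon as w reaches pair(1, 1) == 4.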
Sadly, the standard way of talking is screwed up, because the partial recursive functions include total functions, which are then the total partial recursive functions. This silly decision is softened by calling the total partial recursive functions the total recursive functions, or simply the recursive functions. Let Tot denote the total recursive functions. It would have been better to call the partial recursive functions the recursive functions and the total ones the total recursive functions, but the damage of putting the qualifier on the broader class is already done. Where is Aristotle when you need him?
The following is immediate (why?) and allows us to retain all our earlier work!

Proposition 1: Prim ⊆ Tot ⊆ Part.
To index Part, we simply add a clause for minimalization to our earlier indexing of Prim. The arity of the function minimalized should be one plus the arity k of the function we want to obtain. To reflect the fact that the indexed functions may end up being partial, we write φ[n, k] instead of f[n, k].
k = 0 ⇒ φ[n, k] = n;
k > 0 ⇒
    lh(n) = 0 ⇒ φ[n, k] = C(o, p_{1,k});
    lh(n) = 1 ⇒ φ[n, k] = C(s, p_{1,k});
    lh(n) = 2 ⇒ φ[n, k] = p_{min(d(n, 0), k), k};
    lh(n) = 3 ⇒ φ[n, k] = C(φ[d(n, 0), lh(d(n, 1))], φ[d(d(n, 1), 0), k], ..., φ[d(d(n, 1), lh(d(n, 1)) ∸ 1), k]);
    lh(n) = 4 ⇒ φ[n, k] = R(φ[d(n, 0), k ∸ 1], φ[d(n, 1), k + 1]);
    lh(n) = 5 ⇒ φ[n, k] = M(φ[d(n, 0), k + 1]);
    lh(n) > 5 ⇒ φ[n, k] = C(o, p_{1,k}).
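The clauses read directly as a program. Below is a Python sketch of an interpreter for the indexing, under stated assumptions: Cantor pairing stands in for the official tuple coding; projections are 1-based, with junk projection indices clamped to the first argument (our own convention); and the recursion clause feeds its step function the arguments in the order (recursion variable, previous value, parameters). The fuel parameter is a step-budget hook that pays off below, when we model the Kleene predicate.

from math import isqrt

def pair(a, b):
    """Cantor pairing, a bijection N x N -> N."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(n):
    """The inverse of pair."""
    r = (isqrt(8 * n + 1) - 1) // 2
    b = n - r * (r + 1) // 2
    return r - b, b

def code(xs):
    """<x_0, ..., x_{k-1}>: a tuple code that carries its own length."""
    payload = 0
    for x in reversed(xs):
        payload = pair(x, payload)
    return pair(len(xs), payload)

def lh(n):
    """lh(n): the length of the coded tuple n."""
    return unpair(n)[0]

def d(n, j):
    """d(n, j): the j-th component (0-based) of the coded tuple n."""
    payload = unpair(n)[1]
    for _ in range(j):
        payload = unpair(payload)[1]
    return unpair(payload)[0]

class OutOfFuel(Exception):
    """Raised when a step budget runs out."""

def phi(n, k, xs, fuel=None):
    """phi[n, k](xs): run index n on the k arguments xs.  May loop forever --
    that is the price of Part.  fuel is a one-element list serving as a
    mutable step budget (None = unbounded); every call burns one unit."""
    if fuel is not None:
        if fuel[0] == 0:
            raise OutOfFuel
        fuel[0] -= 1
    if k == 0:
        return n                                  # k = 0: the constant n
    L = lh(n)
    if L == 0 or L > 5:
        return 0                                  # C(o, p_{1,k}): the zero function
    if L == 1:
        return xs[0] + 1                          # C(s, p_{1,k}): successor
    if L == 2:
        return xs[max(1, min(d(n, 0), k)) - 1]    # p_{min(d(n,0),k),k}
    if L == 3:                                    # composition
        m = lh(d(n, 1))
        vals = [phi(d(d(n, 1), j), k, xs, fuel) for j in range(m)]
        return phi(d(n, 0), m, vals, fuel)
    if L == 4:                                    # primitive recursion
        acc = phi(d(n, 0), k - 1, list(xs[1:]), fuel)
        for i in range(xs[0]):
            acc = phi(d(n, 1), k + 1, [i, acc] + list(xs[1:]), fuel)
        return acc
    if L == 5:                                    # minimalization: serial search
        x = 0
        while phi(d(n, 0), k + 1, [x] + list(xs), fuel) != 0:
            x += 1
        return x

# Sanity check: lh(1) == 1, so 1 indexes a successor program: phi(1, 1, [41]) == 42.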
Recall that the universal function g(n, x) = f[n, k](x) for our indexing of Prim is not primitive recursive. So it would be of no small interest if in this case there were to exist a universal function ψ in Part such that

ψ(n, <x>) ≃ φ[n, k](x) if φ[n, k](x) is defined;
ψ(n, <x>) is undefined if φ[n, k](x) is undefined.

Notice that the k parameter is dropped, because the code number of the argument list effectively "tells" the program how many inputs to expect. It will turn out that the existence of such a function is of broad significance for the theory of computability. It is much more useful for later work to first define a primitive recursive relation
U(n, t, y, i)

such that for each n, y, and k-ary x,

∃t U(n, t, y, <x>) ⇔ φ[n, k](x) ≃ y.

Think of t as a "resource bound" on computation so that, intuitively,

U(n, t, y, <x>) ⇔ n halts with output y on the k inputs x after consuming no more than quantity t of computational resources.

The universal relation is often called the Kleene predicate. The existential quantifier is now intuitive: a halting computation halts under some finite bound on resources, whereas computations that never halt use more and more resources. (This intuitive gloss works better for more "computery" formalisms like Turing and Urm machines than it does for partial recursion.)
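Continuing the interpreter sketch above (it assumes phi, code, lh, d, and OutOfFuel from there), here is a stand-in for the Kleene predicate. One loud assumption: the official definition below bounds code numbers of intermediate tuples, whereas this sketch bounds the number of recursive calls, a more "computery" resource in the spirit of the parenthetical gloss.

def U(n, t, y, i):
    """U(n, t, y, <x>): does index n halt with output y on the arguments
    coded by i within t steps?  The bounded run always halts, so this is
    decidable; the official version is moreover primitive recursive."""
    k = lh(i)
    xs = [d(i, j) for j in range(k)]
    try:
        return phi(n, k, xs, fuel=[t]) == y
    except OutOfFuel:
        return False

# U(1, 1, 42, code([41])) is True, and stays True for every larger budget t:
# halting is monotone in the resource bound, which is what makes the
# existential quantifier over t sensible.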
Then we can recover the desired universal function ψ by dovetailing the search for the output y with the search for a suitable runtime bound t and then returning the component that represents the output:
ψ(n, <x>) ≃ d((μz)[U(n, d(z, 0), d(z, 1), <x>)], 1).

Since a runtime bound and output exist just in case the computation halts with that output, and since the predicate will be shown to be primitive recursive, the minimalization finds such a pair exactly when φ[n, k](x) is defined and then returns the correct output.
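Continuing the sketch (it assumes U and the coding helpers above), the dovetailed search transcribes directly. With this naive coding the search is astronomically slow, so read it as a definability construction, not as an algorithm one would actually run.

def psi(n, i):
    """psi(n, <x>) ~ d((mu z)[U(n, d(z, 0), d(z, 1), <x>)], 1): dovetail the
    search for a budget t = d(z, 0) with the search for an output y = d(z, 1)."""
    z = 0
    while True:
        t, y = d(z, 0), d(z, 1)
        if U(n, t, y, i):
            return y          # defined exactly when phi[n, lh(i)] halts on x
        z += 1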
It remains only to exhibit a primitive recursive decision procedure for U(n, t, y, <x>). In our definition, the "resource bound" will concern the sizes of code numbers of tuples of outputs of intermediate computations. The tuples will be "horizontal" (synchronic) in the case of composition and "vertical" (diachronic) in the case of recursion and minimalization. The resource bound will allow us to bound quantifiers in our primitive recursive derivation. We define the parametrized family by simultaneous recursion and course-of-values recursion. You might be worried about stuffing variable numbers of arguments into a relation of fixed arity, but recursive calls to U take care of it! Watch closely how that works. (I guess you will have to when you do the exercise.) Strictly speaking, this is a course-of-values recursion (on which variable?), so aren't you happy we already know that course-of-values recursion is a primitive recursive operator?
U(n, t, y, i) = … (the definition by cases is yours to supply in the exercise).
Observe that the definition of this relation is elementary. Thus, we have shown:
Proposition 2 (Kleene Normal Form): There exists an elementary relation U such that for each n, k, and k-ary x,

φ[n, k](x) ≃ d((μz)[U(n, d(z, 0), d(z, 1), <x>)], 1).

This is remarkable. It says that all the power of primitive recursion beyond Grzegorczyk class E3 is superfluous for arriving at the partial recursive functions. In fact, E2 suffices; that refinement is bound up with the solution to Hilbert's 10th problem.
The following is a less informative corollary.
Proposition 3 (Universal Machine Theorem): There exists a u such that for each n, k, and k-ary x,

φ[n, k](x) ≃ φ[u, 2](n, <x>).

Proof: just set φ[u, 2](n, <x>) ≃ d((μz)[U(n, d(z, 0), d(z, 1), <x>)], 1), which is justified by the fact that the right-hand side has a partial recursive derivation.
S-m-n Theorem: There exists a primitive recursive function s[m, n](i, x), where x is m-ary, such that for each n-ary y,

φ[s[m, n](i, x), n](y) ≃ φ[i, m + n](x, y).

How to prove it? Well, it's pretty obvious that we want s[m, n](i, x) to return an index of the ψ such that

ψ(y) ≃ φ[i, m + n](c_{x[1]}(p_{1,n}(y)), ..., c_{x[m]}(p_{1,n}(y)), p_{1,n}(y), ..., p_{n,n}(y)).

(Remember, I use square brackets for double subscripts.) This involves crossing from index i in the φ[_, m + n] indexing to some index j in the φ[_, n] indexing.
First, we have to define a primitive recursive function proj such that for all n-ary x,

φ[proj(i, n), n](x) ≃ x_i.

But that's easy, since our coding doesn't care about arity:

proj(i, n) = proj(i) = <i, 0>.

(Check it against the lh(n) = 2 clause: lh(<i, 0>) = 2 and d(<i, 0>, 0) = i, so φ[<i, 0>, n] = p_{min(i, n), n} = p_{i, n} whenever i ≤ n.) Next, we need to be able to compute the index of a constant function from the constant. Easy, but annoying.
Exercise 2: Define a primitive recursive function const such that for all x,

φ[const(n), 1](x) ≃ n.

Hint: you could use the answer to exercise 3.
And we have to write a primitive recursive program that knows how to return the index of a composition from the indices of the functions involved.
Exercise 3: Define a primitive recursive function comp such that for all j and m-ary i,

φ[comp(j, i), k] = C(φ[j, m], φ[i_1, k], ..., φ[i_m, k]).

Exercise 4: Now it's easy to use the above to define s[m, n](i, x); one possible assembly is sketched below.
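Here is one possible shape of the answers, sketched in Python against the stand-in coding of the earlier interpreter sketches (it assumes pair, code, and phi from there; the exercises ask for the same against the official coding). Look away if you want to solve them fresh.

def proj(i):
    """<i, 0> has lh = 2 and d(., 0) = i, so it indexes the projection p_{i,k}."""
    return code([i, 0])

def comp(j, indices):
    """An index for C(phi[j, m], phi[i_1, k], ..., phi[i_m, k]): a triple, so
    that lh = 3; the dummy third component just pads the length."""
    return code([j, code(indices), 0])

def const(c):
    """An index e with phi[e, 1](x) = c: compose successor with the zero
    function c times (index 0 has lh = 0, index 1 has lh = 1)."""
    e = 0
    for _ in range(c):
        e = comp(1, [e])
    return e

def s_m_n(i, xs, n):
    """s[m, n](i, x): an index j with phi[j, n](y) = phi[i, m + n](x, y).
    Freeze each x_a as a constant fed the dummy argument p_{1,n}, and pass
    the y's through as projections, exactly as in the displayed target."""
    frozen = [comp(const(x), [proj(1)]) for x in xs]
    passed = [proj(a) for a in range(1, n + 1)]
    return comp(i, frozen + passed)

# e.g. phi(s_m_n(proj(1), [5], 1), 1, [7]) == 5 == phi(proj(1), 2, [5, 7]).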