Now to the other end of the subject, computational complexity. The two ends aren’t going to meet in the middle, but the Grzegorczyk Hierarchy is so simple it should be at the start by natural justice, and it’s good to have in mind as a pleasant ending point while getting lost in pregroup theory.

If you want to ask yourself whether a problem is computable or not, you first have to define what you mean by ‘computable’ – which is basically left as an exercise for the reader. The popular thinking, known as the Church–Turing thesis, is that the effectively computable functions and the recursive functions are the same thing – that is, every computable function should be recursive.

Furthermore, a slightly smaller class of functions, the primitive recursive functions, contains pretty much everything anyone’s interested in, and leaves out horror shows like the Ackermann function.

The idea is that you can start with a few really simple initial functions – zero, successor and projection – and the operations of composition and recursion, and end up with pretty much every function on the natural numbers that anyone’s ever thought up. (( or rather, this was claimed first, and then people tried to think up functions that weren’t included in the class anyway ))

A primitive recursive function is one which can be defined by finitely many compositions and recursions on the initial functions. A general recursive function can be defined using unbounded minimisation as well, but remember that we’re deliberately ignoring those.

Basic building bits

These are the functions we’ll begin with:

  • $o(x) = 0$, the zero function
  • $s(x) = x+1$, the successor function
  • $p_n^i(x_1,\dots,x_n) = x_i$, the projection function; $p_n^i$ is an $n$-ary function which gives you the $i$th parameter passed in to it.
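
The initial functions are simple enough to transcribe directly. Here’s a minimal sketch in Python (the names `o`, `s` and `p` follow this post’s notation; they’re not from any library):

```python
def o(x):
    """The zero function: o(x) = 0."""
    return 0

def s(x):
    """The successor function: s(x) = x + 1."""
    return x + 1

def p(n, i):
    """Returns the projection p_n^i: an n-ary function giving its i-th argument (1-indexed)."""
    def proj(*args):
        assert len(args) == n, f"p_{n}^{i} expects {n} arguments"
        return args[i - 1]
    return proj

print(o(7), s(7), p(3, 2)(10, 20, 30))  # → 0 8 20
```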

The projection function just gets round the problem of functions having different arities – different numbers of arguments. Real Red-Blooded Americans like Haskell Curry did everything with only unary functions in $\lambda$-calculus, but we’re just in this for the money, so we can fudge things. From now on I’ll leave it up to you to decide how many parameters a function should take, and complaints of “that function doesn’t take that many parameters!” will fall on deaf ears.

Ok, so you’ve got your initial functions. Now I’ll let you define new functions by composing two you’ve already got together, so something like $h(x) = f(g(x))$, and by using recursion. We can write these as operators so that they fit in nicely with the others when written down:

  • $h = C(f,g_1,\dots,g_m)$ means that $h(x_1,\dots,x_n)=f(g_1(x_1,\dots,x_n),\dots,g_m(x_1,\dots,x_n))$.
  • $h = R(f,g)$ means that $h(x,0) = f(x)$, and $h(x,n+1) = g(x,n+1,h(x,n))$ (( Note that the definition gives the value at $n+1$ in terms of the value at $n$, instead of at $n$ in terms of $n-1$. Also notice that we haven’t used any subtraction at all yet – it’ll appear eventually as a weird byproduct of addition. Finally, adjust this definition yourself for $n$-ary functions $f$ ))

The idea with recursion is that you add an extra parameter $n$ to your function, and define it entirely at $n=0$ using some other function you’ve got. You then define your function at any further $n+1 > 0$ by sending all your parameters and the value computed at the previous $n$. It’s a bit like induction – any particular value of the function exists because all the previous ones do, and you can get it by starting at $n=0$ and computing upwards.
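
The two operators translate into code quite directly – a sketch, with `C` and `R` named after the operators above (the example defining addition is my own):

```python
def C(f, *gs):
    """Composition: C(f, g1, ..., gm)(x...) = f(g1(x...), ..., gm(x...))."""
    def h(*args):
        return f(*(g(*args) for g in gs))
    return h

def R(f, g):
    """Primitive recursion: h(x, 0) = f(x), h(x, n+1) = g(x, n+1, h(x, n)).
    Computed bottom-up, exactly as the 'start at n=0 and compute upwards' idea says."""
    def h(x, n):
        acc = f(x)               # h(x, 0)
        for k in range(n):
            acc = g(x, k + 1, acc)  # h(x, k+1) from h(x, k)
        return acc
    return h

# Addition, defined by recursion on the successor:
# add(x, 0) = x, add(x, n+1) = add(x, n) + 1.
add = R(lambda x: x, lambda x, n, prev: prev + 1)
print(add(3, 4))  # → 7
```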

Complexity

Now we ask ourselves about complexity. Some functions are definitely harder to compute than others, so how do we define that in a rigorous way? Let’s look at the way functions are made – by composition or by recursion. Composing two functions together can only take as long as working one out then the other, so the time of computation is still on the same order of magnitude as the composed functions. Recursion, however, could entail an arbitrary number of computations of the basic functions, so this is the thing I think we need to pay attention to.

Simply counting the number of recursions involved in the definition of a function would be the obvious way to start, but how about $h=R(f,g)$, where $f$ uses two recursions and $g$ uses three? How many recursions should you say $h$ uses? There’s no single sensible answer, which is a hint that counting recursions isn’t the right measure.

Anyway, a different angle of attack is needed, and a clever Polish mathematician thought of one.

The Grzegorczyk Hierarchy

The start

We’ll start with the initial functions, and try to create the class of all functions that are about as complicated as they are.

This class will be closed under composition, because we’ve decided composition doesn’t count for much. By composing the successor function with itself, we can get $f_a(x) = x+a$, for any constant $a$. Throwing the zero or projection functions on top of these wouldn’t get us anywhere further, so the add-a-constant functions are the only ones we can get so far. Projection’s particularly useless because we’ve only got unary functions at the moment, so no way to sensibly define anything on multiple parameters.
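
So everything in this starting class is just iterated successor – a sketch, where `add_const` is my own name for $f_a$:

```python
s = lambda x: x + 1  # the successor function

def add_const(a):
    """f_a(x) = x + a, built by composing the successor function with itself a times."""
    def f(x):
        for _ in range(a):
            x = s(x)
        return x
    return f

print(add_const(3)(10))  # → 13
```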

Clearly allowing any recursion would get you the whole class of primitive recursive functions in one sweep, so I’ll say that you can define new functions by recursion, but only if they are bounded everywhere by some other function you’ve already got. So, a function $g(x)$ is allowed only if you’ve already got some other function $f(x)$ such that $g(x) \leq f(x), \forall x$.

The class we’re creating is closed under bounded recursion. This doesn’t actually get us all that much. For example, we can’t have the addition function, because there is no constant $a$ such that $x+y \leq x+a, \forall y$. In fact, bounded recursion doesn’t get us anything new at all here – it only becomes useful later on. I’ve just included it so you can’t complain about my classes not all looking the same at the end.
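
One way to picture bounded recursion is as ordinary recursion plus a check against an already-admitted bound. This is only a sketch – the operator name `BR` and the runtime check are my own invention, standing in for what is really a side condition on the definition:

```python
def BR(f, g, b):
    """Bounded recursion: as R(f, g), but every value h(x, n) must satisfy
    h(x, n) <= b(x, n) for a bound b already in the class."""
    def h(x, n):
        acc = f(x)                    # h(x, 0)
        for k in range(n):
            acc = g(x, k + 1, acc)    # h(x, k+1) from h(x, k)
        if acc > b(x, n):
            raise ValueError(f"h({x},{n}) = {acc} escapes its bound {b(x, n)}")
        return acc
    return h

# Addition again, this time with x + n itself supplied as the bound
# (hypothetically assuming it's already been admitted to the class):
add = BR(lambda x: x, lambda x, n, prev: prev + 1, lambda x, n: x + n)
print(add(3, 4))  # → 7
```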

Boring starting class defined, let’s call it $E_0$

The hierarchy

What was holding us back in $E_0$ was the want of a bigger function, so let’s define one and see what else that gets us. The choice of this function is entirely arbitrary, so let’s go with addition, but give it a funny name:

$e_0(x,y) = x+y$.

What can we get now? Well, we can add arbitrarily many numbers together to start with, since $x+y+z = (x+y)+z$, or, in our terrifically laboured notation using the initial functions and composition, $add_3(x,y,z) = C(e_0,C(e_0,p_3^1,p_3^2),p_3^3)$.
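
We can sanity-check that laboured expression by running it – a sketch, with `C`, `e0` and the projections redefined inline so the snippet stands alone:

```python
def C(f, *gs):
    """Composition: C(f, g1, ..., gm)(x...) = f(g1(x...), ..., gm(x...))."""
    def h(*args):
        return f(*(g(*args) for g in gs))
    return h

e0 = lambda x, y: x + y          # e_0(x, y) = x + y
p31 = lambda x, y, z: x          # p_3^1
p32 = lambda x, y, z: y          # p_3^2
p33 = lambda x, y, z: z          # p_3^3

# add_3(x, y, z) = e_0(e_0(x, y), z) = (x + y) + z
add3 = C(e0, C(e0, p31, p32), p33)
print(add3(1, 2, 3))  # → 6
```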

So taking $E_0$ and this new function $e_0$ and closing the whole lot under composition and bounded recursion gives us a new class of functions which is a superset of the previous one, and contains things which definitely look more complicated than what we had before. An idea is born!

If we define a suitable $e_n$ for each $n$, we can create a hierarchy of classes of increasing complexity. Here’s my choice of the $e_n$:

\[ \begin{eqnarray}
e_1(x) &=& x^2+2 \\
e_{n+2}(0) &=& 2 \\
e_{n+2}(x+1) &=& e_{n+1}(e_{n+2}(x))
\end{eqnarray} \]

Finally, we say that $E_{n+1}$ consists of all of $E_n$, plus $e_n$, closed under composition and bounded recursion.
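
To get a feel for how fast the $e_n$ grow, here’s a direct transcription of the definitions above (a sketch; the function name `e` is mine):

```python
def e(n):
    """The bounding functions e_n of the hierarchy, as defined above."""
    if n == 0:
        return lambda x, y: x + y      # e_0(x, y) = x + y (the only binary one)
    if n == 1:
        return lambda x: x * x + 2     # e_1(x) = x^2 + 2
    def en(x):
        # e_{n+2}(0) = 2, e_{n+2}(x+1) = e_{n+1}(e_{n+2}(x))
        acc = 2
        for _ in range(x):
            acc = e(n - 1)(acc)
        return acc
    return en

print([e(1)(x) for x in range(4)])  # → [2, 3, 6, 11]
print([e(2)(x) for x in range(4)])  # → [2, 6, 38, 1446]
# e(3)(2) = e_2(38) is already astronomically large – don't try it.
```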

Tedious souls can show that all of the elementary functions reside in $E_3$, and that a whole panoply of mathematical operations you know and love fit in thereabouts. In particular, bounded minimisation makes a welcome appearance.

It turns out that the union of all the $E_n$ is the class of primitive recursive functions, so we’ve done what we wanted to do – every computable function we could reasonably want to look at lies on some finite level of this hierarchy, representing an impression of how computationally difficult it is. Super!

References

Kevin Kelly’s lecture notes

Computability and Unsolvability, Martin Davis