Re: Turing vs math

From: <hal.domain.name.hidden>
Date: Sun, 24 Oct 1999 23:17:23 -0700

Russell Standish, <R.Standish.domain.name.hidden>, writes, regarding the problem
that different Universal Turing Machines agree on minimal program size
only up to an additive constant:

> I must admit I had hoped that the "scale factor" issue had been taken
> into account. I had hoped to chase this up when I got a chance to
> borrow Li and Vitanyi from the library (requiring the conjunction of
> the book actually being returned to the shelf, and me taking some
> holidays). However, my naive approach to the problem is to assert that
> every UTM can be specified as a program in a "most basic" UTM, which
> in itself is simply a transition table in the most basic
> architecture.

It may be that you are right and that it is possible to define a "most
basic" UTM. However then you are very dependent upon the specific notion
of a TM as a way to formalize computation. You have a read/write head,
you have a tape, you have symbols and states. It is really a very
arbitrary definition of a computer.
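
To see what that definition amounts to, here is a minimal sketch (in
Python, purely illustrative; the machine, its states and its symbols are
all invented for the example) of a Turing machine as nothing more than a
transition table plus the head-and-tape conventions:

    # A Turing machine reduced to a transition table:
    # (state, symbol) -> (new state, symbol to write, head move).
    # This toy machine flips the bits of its input and halts.
    table = {
        ('scan', '0'): ('scan', '1', +1),
        ('scan', '1'): ('scan', '0', +1),
        ('scan', ' '): ('halt', ' ', 0),   # blank cell: stop
    }

    def run(tape, state='scan', pos=0):
        tape = list(tape)
        while state != 'halt':
            if pos == len(tape):
                tape.append(' ')           # extend tape on demand
            state, write, move = table[(state, tape[pos])]
            tape[pos] = write
            pos += move
        return ''.join(tape).rstrip()

    print(run('0110'))                     # prints 1001

Every piece of that (one tape or several, a binary or richer alphabet,
how the head is allowed to move) is a convention that could have been
chosen otherwise.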

There are other ways to formalize computing: stack machines, register
machines, cellular automata, even idealized billiard balls bouncing
around in a giant pinball machine. Each one agrees with the others on
descriptive complexity, the minimal size of a program producing a given
output, but again only up to a "scale factor" in the form of an additive
constant.

In other words, for any two of these universal computing models you can
define a constant C such that the minimal sizes of programs to output a
given string agree to within C. The problem is that in theory you can
find universal computing models which differ by arbitrarily large C.
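
The standard argument, for what it's worth: if model U can run an
interpreter for model V, and that interpreter takes C_UV bits to write
down, then any V-program p becomes the U-program "interpreter followed
by p". Writing K_U(x) for the length of the shortest U-program that
outputs x, this gives

    K_U(x) <= K_V(x) + C_UV    for every string x,

and symmetrically with U and V exchanged. Nothing in the argument bounds
C_UV itself, though; a sufficiently baroque "universal" model pushes it
as high as you like.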

Now, it may be that all computational models which seem intuitively
simple to us will in fact agree with each other closely, with only a
small C between any pair. The question is whether that means there
really is an objective definition of the "most basic" model of
computation, or whether this is something of an artifact of our way of
looking at the universe.
We are limited in our perceptions and ideas, and so maybe it is not
surprising that all the simple computing models we can conceive are
ultimately quite similar to each other.

I suspect that it is going to be difficult to identify a subset of all
computing models which are the "most basic" ones and which can be shown
to agree in their complexity measures to within some specific, fixed
bound. If we could do that, it would give a more objective foundation
to the concept of Kolmogorov complexity (rather than having it be
relative to an arbitrarily chosen UTM).
There would still be some inherent fuzziness, based on the size of
the bound C, but that would be better than having unlimited fuzziness,
as I believe is the case in the current theory.
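
As a loose illustration of the machine-dependence (and only an analogy:
real compressors standing in for reference UTMs, with compressed length
as a crude upper bound on complexity), consider:

    import bz2
    import os
    import zlib

    # Two off-the-shelf compressors play the role of two different
    # reference machines; the compressed length is an upper bound on
    # the string's complexity relative to each "machine".
    for label, s in [('regular', b'01' * 5000),
                     ('random', os.urandom(10000))]:
        print(label,
              'zlib:', len(zlib.compress(s, 9)),
              'bz2:', len(bz2.compress(s, 9)))

The two columns never agree exactly, but they track each other: the
regular string is tiny under both, the random one incompressible under
both. The open question above is whether anything pins down how far
apart such columns are allowed to drift.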

Hal
Received on Sun Oct 24 1999 - 23:20:13 PDT
