Re: interpretation of TM (Turing mechanics)
Jacques M. Mallah <jqm1584.domain.name.hidden> writes:
> I see two types of possibility. First (and I hope this is the one
> that works!), a scheme such as I have been trying to develop might
> work, based on an objective formulation of algorithmic complexity
> (which, as I've discussed before, I have some ideas on how one might
> find, but which has not yet been formulated; I'm talking about, e.g.,
> a uniquely self-consistent way to average over all Kolmogorov
> complexity measures).
I am skeptical that any objective measure of complexity will work.
One might as well ask for absolute complexity as for absolute position
or absolute velocity: it all depends on the frame of reference. This
is just an intuition, though.
> Second (and this works better if, instead of just a Turing machine,
> there is a high-dimensional computer): let certain particular
> computations give rise to consciousness, and *don't* allow
> implementations within it! In other words, for each 'run' or
> simulation of an entire multiverse history, there is an output of one
> 'brain state' for ONE person. (Almost like Wei Dai's idea, but also
> requiring an initial 'brain state' AND the right causal relations.)
> My arguments about the problems with the measure distribution
> produced, as told to Wei Dai, still stand.
As I have interpreted Wei Dai's idea, the interpretation problem is
solved as follows. The measure of a universe decreases with the length
of the program that generates it (in the usual formulation, a program
of length l contributes measure proportional to 2^-l). There is then
an "interpretation measure" (my term), given by the length of the
program mapping from a logical computation to the physical elements of
the universe. You add these two program lengths (equivalently,
multiply the two measures) to get the contribution made by this
program+interpretation pair to instantiations of the logical
computation.
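To make the arithmetic concrete, here is a toy sketch of my own in
Python (the names are mine, not Wei Dai's):

    # Contribution of one program+interpretation pair to the measure
    # of a logical computation, taking measure = 2^-(length in bits).
    def contribution(l_universe, l_interp):
        # Adding the lengths is the same as multiplying the measures:
        # 2^-(l1 + l2) == 2^-l1 * 2^-l2.
        return 2.0 ** -(l_universe + l_interp)

    # E.g. a 1000-bit universe program plus a 50-bit interpretation
    # program together contribute 2^-1050.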
If you then integrate over all universes and all interpretations,
you find the total measure of the instantiations of any given logical
computation. Map consciousness to logical computations (an exercise in
neuroscience, not philosophy), and you find the total likelihood that
a given consciousness is instantiated. Hopefully it will turn out that
consciousnesses like ours are much more likely than ones living in
universes with dragons or flying pigs or other such lawless features,
and we will therefore have explained why the universe is lawful.
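Schematically, the total is a double sum over universe programs p and
interpretation programs q. A minimal sketch, assuming a hypothetical
oracle instantiates(p, q, c) that decides whether the pair implements
the logical computation c (in reality no such computable oracle
exists, so the enumeration is truncated purely for illustration):

    from itertools import product

    def bitstrings(max_len):
        # All bitstrings up to max_len bits, standing in for programs.
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):
                yield "".join(bits)

    def total_measure(c, instantiates, max_len=12):
        # Truncated stand-in for the sum over ALL (p, q) pairs; the
        # quantity discussed above is the limit as max_len -> infinity.
        total = 0.0
        for p in bitstrings(max_len):
            for q in bitstrings(max_len):
                if instantiates(p, q, c):
                    total += 2.0 ** -(len(p) + len(q))
        return total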
This escapes the problem of non-objective algorithmic complexity,
because we know (by the invariance theorem) that all algorithmic
complexity measures agree to within an additive constant. As we
integrate over all possible programs, including programs of unbounded
length, any fixed constant shrinks to insignificance. Hence you will
get essentially the same answer for the total measure of any logical
consciousness, no matter which algorithmic complexity measure you use.
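To spell out the reasoning: if machine B can emulate machine A with at
most c extra bits, every program length rises by at most c, so every
total measure changes by at most the same bounded factor of 2^-c, and
relative measures of different computations are therefore unaffected.
A toy worst-case check (the lengths are invented for illustration):

    def total_measure_from_lengths(lengths):
        # Total measure from a list of program lengths, in bits.
        return sum(2.0 ** -l for l in lengths)

    c = 5                                  # invariance constant
    lengths_A = [10, 12, 20, 35, 100]      # lengths on machine A
    lengths_B = [l + c for l in lengths_A] # worst case on machine B

    mA = total_measure_from_lengths(lengths_A)
    mB = total_measure_from_lengths(lengths_B)
    print(mB / mA)  # exactly 2**-c = 0.03125: a uniform factor, so
                    # comparisons between consciousnesses come out alike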
Hal
Received on Sat Nov 27 1999 - 20:29:48 PST