Bruno writes:
> I am not sure that I understand what you do with that measure on programs.
> I prefer to look at infinite coin generations (that is infinitely
> reiterated self-duplications)
> and put measure on infinite sets of alternatives. Those infinite sets of
> relative alternatives *are* themselves generated by simple programs (like
> the UD).
Here is how I approach it, based on Schmidhuber. Suppose we pick a model
of computation based on a particular Universal Turing Machine (UTM).
Imagine this model being given all possible input tapes. There are
uncountably many such tapes, but on any given tape only a finite
portion will actually be read (i.e. the probability of using an
infinite number of bits of the tape is zero). This means that any
program which runs is of only finite size, yet occurs on an infinite
number of tapes. For binary tapes, the fraction of tapes that hold a
given program is proportional to 1 over 2^(program length).
This is considered the measure of the given program.
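To make this concrete, here is a small Python sketch (my own illustration, not something from the post): it counts, among all length-n binary tapes, the fraction that begin with a given program string, and that fraction comes out to 1/2^(program length) no matter what n is.

```python
# Sketch: the fraction of length-n binary tapes beginning with a given
# program prefix is 2^(n - len(prefix)) / 2^n = 2^-len(prefix),
# independent of n. (Illustrative only; "program" here is just a bit string.)
from itertools import product

def fraction_with_prefix(prefix: str, n: int) -> float:
    """Fraction of all length-n binary tapes that start with `prefix`."""
    tapes = ["".join(bits) for bits in product("01", repeat=n)]
    hits = sum(1 for t in tapes if t.startswith(prefix))
    return hits / len(tapes)

for program in ["1", "101", "11010"]:
    # Each measure equals 2^-len(program), whatever tape length we pick.
    print(program, fraction_with_prefix(program, 10), 2 ** -len(program))
```

The key point the numbers show: the tape length n cancels out, leaving only the program's own length in the measure.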
An equivalent way to think of it is to imagine the UTM being fed with
a tape created by coin flips. Now the probability that it will run a
given program is its measure, and again it will be proportional to 1
over 2^(program length). I don't know whether this is what you mean
by "infinite coin generations" but it sounds similar.
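The coin-flip picture is easy to check numerically. This sketch (my own, with an arbitrary seed and trial count) estimates the probability that a random coin-flip tape begins with a given program:

```python
import random

def empirical_measure(program: str, trials: int = 200_000, seed: int = 1) -> float:
    """Estimate the probability that a coin-flip tape begins with `program`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Flip one fair coin per program bit to build the start of the tape.
        tape = "".join(rng.choice("01") for _ in range(len(program)))
        hits += tape == program
    return hits / trials

# Should come out close to 2^-3 = 0.125.
print(empirical_measure("101"))
```

With enough trials the estimate converges on 1/2^(program length), matching the tape-fraction picture above.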
I believe you can get the same concept of measure by using the Universal
Dovetailer (UD) but I don't see it as necessary or particularly helpful
to invoke this step. To me it seems simpler just to imagine all possible
programs being run, without having to also imagine the operating system
which runs them all on a time-sharing computer, which is what the UD
amounts to.
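For what it's worth, the time-sharing idea itself fits in a few lines of Python (my own toy, not Bruno's UD): each round admits one new program and advances every active program by one step, so every program eventually receives unboundedly many steps.

```python
# Toy dovetailer: interleave one step of every "program", admitting a new
# program each round. A real UD would run actual machines; here each
# program is just a generator yielding (program index, step number).
from itertools import count

def program(i):
    """Stand-in for program i: an endless stream of its execution steps."""
    for step in count():
        yield (i, step)

def dovetail(rounds):
    active = []
    trace = []
    for i in range(rounds):
        active.append(program(i))   # admit program i this round
        for p in active:            # give each active program one step
            trace.append(next(p))
    return trace

# First three rounds of interleaved execution:
print(dovetail(3))  # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

The trace shows the characteristic staircase: older programs are further along, but no program is ever starved.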
> Now we cannot know in which computational history we belong,
> or more exactly we "belong" to an infinity of computational histories
> (indistinguishable up to now). (It could be all the repetition of your
> simple program)
And by "repetition of your simple program" I think you mean the fact
that there are an infinite number of tapes which have the same prefix
(the same starting bits) and which all therefore run the same program,
if it fits in that prefix. This is the basic reason why shorter programs
have greater measure than longer ones: a larger fraction of the tapes
begins with a given short prefix than with a long one.
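In tape-counting terms (my own toy numbers): among length-8 tapes, any given 2-bit prefix appears on 2^6 = 64 of them, while any given 6-bit prefix appears on only 2^2 = 4, so the shorter program gets 16 times the measure.

```python
def tapes_with_prefix(prefix_len: int, n: int) -> int:
    """Number of length-n binary tapes that begin with one fixed prefix."""
    assert prefix_len <= n
    return 2 ** (n - prefix_len)

short_count = tapes_with_prefix(2, 8)  # 64 tapes share a given 2-bit prefix
long_count = tapes_with_prefix(6, 8)   # 4 tapes share a given 6-bit prefix
print(short_count // long_count)       # measure ratio: 2^(6-2) = 16
```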
It's also possible, as you imply, that your consciousness is instantiated
in multiple completely different programs. For example, we live in a
program which pretty straightforwardly implements the universe we see;
but we also live in a program which implements a very different universe,
in which aliens exist who run artificial life experiments, and we are
one of those experiments. We also live in programs which just happen to
simulate moments of our consciousness, purely through random chance.
However, my guess is that the great majority of our measure will lie
in just one program. I suspect that that program will be quite simple,
and that all the other programs (such as the one with the aliens running
alife experiments) will be considerably more complex. The simplest case
is just what we see, and that is where most of our measure comes from.
> But to make an infinitely correct prediction we should average on all
> computational histories going through our states.
Yes, I agree, although as I say my guess is that we will be "close enough"
just by taking things as we see them, and in fact it may well be that
the corrections from considering "bizarre" computational histories will
be so tiny as to be unmeasurable in practice.
> Your measure could explain why simple and short subroutines persist
> everywhere, but what we must do is to extract the "actual measure", the one
> apparently given by QM, from an internal measure on all relatively
> consistent continuations of our (unknown!) probable computational state.
> This is independent of the fact that some short programs could play the role of
> some initiator of something persisting. Perhaps a quantum dovetailer? But to
> proceed by taking comp seriously, this too should be justified from within.
> Searching a measure on the computational histories instead of the
> programs can not only be justified by thought experiments, but can
> be defined neatly mathematically. Also a "modern" way of talking about the
> Many Worlds is in terms of relative consistent histories.
> But the histories emerge from within. This too must be taken into account.
> It can change the logic. (And actually changes it according to the lobian
> machine).
I'm losing you here.
Hal Finney
Received on Tue Feb 01 2005 - 13:06:50 PST