On 24 xxx -1, Marchal wrote:
> Now let us consider again the thought experiment from the
> renormalisation thread. I am in Brussels preparing myself for
> a multiplication experiment. After annihilation in Brussels I
> will be reconstituted in ten *virtual environments*:
>
> - one simulating perfectly Washington,
> - the others simulating perfectly Moscow.
>
> I consider here virtual environments so that, by comp 3-determinism,
> I can ensure that the nine experiences of being in Moscow are
> completely identical, and thus first-person indistinguishable.
>
> Thus, if we take first-person indistinguishability seriously,
> we should consider the 1:9 multiplication experiment described
> here equivalent to any 1:n multiplication experiment.
> In that case P(M) = P(W) = 1/2.
> In that case, with CUD (the assumption that there is a concrete
> running UD), we should put the same weight on all
> "compiler-equivalent" computational states.
> (Note that this equivalence is not so easy to define, but clearly
> it entails that we must put the same weight on all one-step
> computational continuations of my brain state. I assume NEURO for
> the sake of simplicity.) But remember that the UD dovetails on the
> reals (or on initial segments of the reals, which is the same thing
> for the 1-person). So if my brain has n entries (binary, also for
> simplicity), there will be 2^n such continuations, and so on: that
> means that comp would entail a white-noise expectation for *any*
> experience in *any* experiment.
> That is not the case, so something is wrong with such an equivalence.
> So either comp is false, or we must throw away this equivalence.
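To see the arithmetic of that worry: uniform weight on all one-step
continuations of an n-bit state makes every next input bit a fair
coin, i.e. white noise. A minimal sketch in Python (the uniform
weighting is exactly the assumption under scrutiny here):

    from itertools import product

    # With equal weight on all 2**n one-step continuations of an
    # n-bit state, each next input bit is 1 in exactly half of them.
    n = 4
    continuations = list(product([0, 1], repeat=n))  # all 2**n inputs
    for i in range(n):
        p_one = sum(c[i] for c in continuations) / len(continuations)
        print(f"bit {i}: P(1) = {p_one}")            # 0.5 every time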
>
> As appears in Mallah's reply, the idea is that we will take
> more steps of the comp continuation into account: the idea
> is to put weight not on computational states, but on
> computational histories.
That is not quite correct. I don't accept what you call
"equivalence" if by that you mean the measure is independent of the number
of copies. In fact, I take the measure to be proportional to the number
of copies.
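To make the difference concrete, here is a toy sketch of the two
weightings on the 1:9 experiment (the function names and the uniform
per-copy weight are just for illustration):

    # Two rival measures for the 1:9 duplication experiment.

    def copy_proportional(copies):
        # Measure proportional to the number of copies.
        total = sum(copies.values())
        return {place: k / total for place, k in copies.items()}

    def copy_independent(copies):
        # The "equivalence" view: identical experiences collapse,
        # so each distinct experience gets equal weight.
        return {place: 1 / len(copies) for place in copies}

    experiment = {"Washington": 1, "Moscow": 9}
    print(copy_proportional(experiment))  # Washington 0.1, Moscow 0.9
    print(copy_independent(experiment))   # Washington 0.5, Moscow 0.5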
However, I do invoke computation (as opposed to just structure) to
explain why we should not expect just a random state. Even if all
computations occur, most strings would just be random (e.g. junk code
after a program ends), but those involved in computations would not.
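A rough way to see that contrast, using compressed size as a crude
stand-in for randomness (the choice of zlib is just a convenience,
not part of the argument):

    import os
    import zlib

    # A random string barely compresses; the output of a computation
    # (here, a trivially regular string) compresses heavily.
    random_bits = os.urandom(1024)                # incompressible junk
    computed = bytes(i % 7 for i in range(1024))  # output of a short program

    print(len(zlib.compress(random_bits)))  # close to (or above) 1024
    print(len(zlib.compress(computed)))     # far below 1024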
> This move will lead us quickly toward comp-immortality
> (contra Mallah, ironically enough!).
No it won't.
> But how many steps make a computational history?
One.
> Let us go back to the question "how many steps make a computational
> history?". The easiest answer is "let us take all the steps". So
> a computational history (modulo the compiler-equivalence) is just
> the whole computation.
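A minimal sketch of that reading, with a history taken to be the
entire state trace of a run rather than any single state (the names
here are mine, purely illustrative):

    # A "computational history" as the whole trace of states.
    def trace(step, state, max_steps=100):
        history = [state]
        for _ in range(max_steps):
            nxt = step(state)
            if nxt == state:      # reached a fixed point: halted
                break
            state = nxt
            history.append(state)
        return history

    print(trace(lambda k: k // 2, 37))  # [37, 18, 9, 4, 2, 1, 0]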
>
> Now, a platonist mathematician (unlike an intuitionist) will
> easily accept that there are two sorts of computations:
>
> - those which stop,
> - those which never stop.
>
> So, relative to a computational state X (my Brussels state,
> for example), there are computational continuations going through
> X which stop, and others which do not stop.
> The stopping ones can only be enumerable (countable). The
> non-stopping ones are at least as numerous as the reals.
> So the stopping ones can be eliminated from the probability
> calculus. This is immortality with a vengeance: we are immortal
> because we have 2^aleph_0 infinite futures and at most aleph_0
> finite futures.
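Restating the cardinality claim in symbols (a paraphrase for
reference, not an endorsement): halting continuations are finite
objects, hence countable, while non-halting ones are identified with
infinite branches of a binary tree:

    % Continuations through a fixed state X, as counted above:
    |\{\sigma : \sigma \text{ halts, passes through } X\}| \le \aleph_0,
    \qquad
    |\{\sigma : \sigma \text{ never halts, passes through } X\}| = 2^{\aleph_0}.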
Bullshit. You are clearly relying on infinitely long programs as
that is the only way you could get such numbers with your "equivalence"
assumption. So you will have lots of white rabbits.
By contrast, in my view, the shorter programs dominate because
they have a number of copies that goes as 1/2^l, where l is the
program length; shorter programs leave more room for junk code.
This eliminates the white rabbits.
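That counting can be made concrete with a toy program space (a sketch
under a junk-tail convention where everything after the program is
ignored; not a full formalism):

    # Copies of a program among all bit strings of total length L:
    # a program of length l followed by L - l junk bits occurs
    # 2**(L - l) times, i.e. with relative frequency 1 / 2**l.
    L = 20

    def copies(l):
        return 2 ** (L - l)

    for l in (5, 10, 15):
        print(l, copies(l), copies(l) / 2 ** L)  # frequency = 2**-l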
- - - - - - -
Jacques Mallah (jqm1584.domain.name.hidden)
Physicist / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
My URL:
http://pages.nyu.edu/~jqm1584/
Received on Thu Feb 03 2000 - 16:05:49 PST