Re: on simply being an SAS (and UDA)

From: Marchal <marchal.domain.name.hidden>
Date: Mon Jan 24 08:33:16 2000

Hi Russell,

>> BM: Exercise: why should we search for a measure on the computational
>> continuations and not just on the computational states? Hint: with
>> the computational states only, COMP predicts white noise for
>> all experiences. (OK Chris?) With the continuations, a priori
>> we must just hunt away the 'white rabbit' continuations.
>> You can also show that Schmidhuber's 'universal prior' solution
>> works only in the case where the level of substitution
>> is so low that my generalised brain is the entire multiverse.
>>
>
>RS: Again, I do not know what you mean by this last comment.

This is far from being an easy exercise. It is an ``exercise", not
because I think it is easy homework, but because I do not need its
solution in the UDA (the proof that COMP -> REVERSAL).

Note that IF QM is correct, THEN we get (non-constructively)
COMP -> QM.

The UDA shows ``only" that we *must* extract the ``physical laws" from
the computationalist quantification (quantitative analysis) of the
comp first-person indeterminism. But it does not tell us what the
quantification's domain really is, nor how to compute it.

And I believe this question is so difficult that I have chosen to
approach it formally, substituting the provability logics for folk
psychology and searching for an arithmetical interpretation of the
notion of probability or credibility. The verifiable ``certainty" of p
is modelled in that setting by []p & <>p, and if p is DU-accessible
we get a sort of quantum logic, which I think is promising.

But it is also interesting to try to get an intuitive understanding
of the "probability" calculus, if only to make clear the relation
between Schmidhuber's approach and mine.

In the course of doing this we will also discover a kind of apparent
objective weakness in my UDA reasoning. I have never tried to hide
that weakness, but I have realised that it is also unpedagogical to
insist on it too early. This weakness is not fatal for the UD
Argument, but it is quasi-fatal for the hope of finding the
probabilities intuitively. Here again, that is what motivated my
modal (more abstract) approach.

Indeed. Remember the ``fundamental result": the way of quantifying
the first-person (1) indeterminism is independent of the place, the
time, and the virtual/real nature of the reconstitution. The reason
invoked is first-person indistinguishability.

Now let us consider again the thought experiment from the
renormalisation thread. I am in Brussels preparing myself for
a multiplication experiment. After annihilation in Brussels I
will be reconstituted in ten *virtual environments*:

   - one simulating Washington perfectly,
   - the nine others simulating Moscow perfectly.

I consider virtual environments here so that, by comp third-person
(3) determinism, I can ensure that the 9 experiences of being in
Moscow are completely identical, and thus first-person
indistinguishable.

Thus, if we take first-person indistinguishability seriously,
we should consider the 1:9 multiplication experiment described here
equivalent to any 1:n multiplication experiment.
In that case P(M) = P(W) = 1/2.
In that case, with CUD (there is a concrete running UD), we should
put the same weight on all ``compiler-equivalent" computational states.
(Note that this equivalence is not so easy to define, but clearly
it entails that we must put the same weight on all 1-step
computational continuations of my brain state; I assume NEURO for
the sake of simplicity.) But remember that the UD dovetails on the
reals (or on the initial segments of the reals, which is the same
for the first person). So if my brain has n entries (binary, also
for simplicity), there will be 2^n such continuations, and so on:
this means that comp would entail a white-noise expectation for
*any* experience in *any* experiment.
That is not the case, so something is wrong with such an equivalence.
So either comp is false or we must throw away this equivalence.
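
To make the two counting claims above concrete, here is a toy sketch
(my own illustration only; the ``experiences" are just labels, and the
equal-weight assumption is precisely the one under discussion):

   from collections import Counter
   from itertools import product

   def outcome_probabilities(continuations, identify_indistinguishable):
       """Probability of each outcome when every continuation gets the
       same weight.  If first-person indistinguishable continuations
       are identified, duplicates are collapsed before counting."""
       if identify_indistinguishable:
           continuations = set(continuations)
       counts = Counter(continuations)
       total = sum(counts.values())
       return {c: n / total for c, n in counts.items()}

   # The 1:9 experiment: one Washington, nine *identical* Moscow experiences.
   runs = ['W'] + ['M'] * 9
   print(outcome_probabilities(runs, identify_indistinguishable=True))   # W and M both 0.5
   print(outcome_probabilities(runs, identify_indistinguishable=False))  # W: 0.1, M: 0.9

   # Equal weight on all 1-step continuations of an n-bit input: every
   # input string is one continuation, so the first bit is 0 or 1 with
   # probability 1/2 whatever the state -- white noise.
   n = 8
   one_step = list(product('01', repeat=n))
   first_bit = Counter(bits[0] for bits in one_step)
   print({b: c / len(one_step) for b, c in first_bit.items()})           # 0 and 1 both 0.5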

As appears in Mallah's reply, the idea is that we will take
more steps of the comp continuation into account. The idea
is to put weight not on computational states but on
computational histories.
This move will lead us quickly toward comp-immortality
(contra Mallah, ironically enough!).
But how many steps make a computational history? And should we
identify the equivalent ones? Surely we should, if we keep
the first-person indistinguishability principle. But in that
case we will meet a new problem: with the possibility of first-person
amnesia, the computational equivalence will make possible (cf GSLevy)
the merging (fusing) of computational histories, and this
(although good news for our hope of finding the comp
origin of the quantum laws) kills our hope of tackling the
probabilities by pure intuition. But let us at least continue
our attempt.

Let us go back to the question ``how many steps make a computational
history?". The easiest answer is ``let us take all the steps". So
a computational history (modulo the compiler-equivalence) is just the
whole computation.

Now, a platonist mathematician (unlike an intuitionist) will
easily accept that there are two sorts of computations:

   - those which stop,
   - those which never stop.

So, relative to a computational state X (my Brussels state,
for example), there are computational continuations going through
X which stop, and others which do not stop.
The stopping ones are at most enumerable. The non-stopping ones are
at least as numerous as the reals.
So the stopping ones can be eliminated from the probability
calculus. This is immortality with a vengeance: we are immortal
because we have 2^aleph_0 infinite futures and at most aleph_0
finite futures.
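
Spelled out (my gloss, and note the extra assumption that the measure
gives no weight to any individual continuation, i.e. is atomless):

   card(stopping continuations through X)      <=  aleph_0
   card(non-stopping continuations through X)   =  2^aleph_0

so for any atomless probability measure on the continuations, the
stopping ones form a set of measure zero and drop out of the calculus.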

But this is not enough. We should take the nearness of computational
histories more seriously into account, and this could depend on the
Schmidhuber/Wei Dai Universal Prior (UP) of the roots (Wei Dai's
little programs) of the computations going through X.

In that case our probability formula becomes something like

   P(W) = P(W in y / conditionalised by X :: UP(little program is an origin of X)),

where ``::" is still not defined, and y is one possible consistent
infinite computation going through the (actual) state X.
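
To make the role the UP would play concrete, here is a toy sketch
(entirely illustrative: the ``programs" are bare bit strings, the
interpreter `run' is a stand-in, and ``::" is naively replaced by
multiplying each root's UP weight into the continuation count, which
is precisely the step that remains undefined above):

   from itertools import product

   def run(program, n_steps):
       """Stand-in interpreter: a program is just a bit tuple and its
       `computation' is that tuple repeated cyclically."""
       return tuple(program[i % len(program)] for i in range(n_steps))

   def universal_prior(program):
       # Schmidhuber-style weight 2^-length (unnormalised).
       return 2.0 ** (-len(program))

   def p_continuation(state, target_bit, max_len=10, horizon=12):
       """Weight of continuations through `state' whose next bit is
       `target_bit', each root weighted by its universal prior."""
       hit, total = 0.0, 0.0
       for length in range(1, max_len + 1):
           for prog in product((0, 1), repeat=length):
               trace = run(prog, horizon)
               for t in range(horizon - len(state)):
                   if trace[t:t + len(state)] == state:   # this root goes through X
                       w = universal_prior(prog)
                       total += w
                       if trace[t + len(state)] == target_bit:
                           hit += w
       return hit / total if total else None

   print(p_continuation(state=(1, 0, 1), target_bit=1))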

The possible merging of the histories makes me feel that an
intuitive search for ``::" is senseless, and personally
I have never been able to define it, so I have decided
to interview the SRC UTM (and its guardian angels) itself. This
is possible thanks to the work of Boolos, Solovay, Goldblatt, etc.

Only if my brain is the entire universe is my history directly
defined by the UP of the little programs (Schmidhuber's solution).

I see almost all of this discussion list as a search to define
the unknown relation ``::" (oversimplifying a little bit).
I see it more and more as a search for a good way to use
both the ASSA (based on the UP) and the RSSA (taking the actual
state into account).

Note also that there is something importantly true in what
Higgo and Griffith say (vague though it is).
Indeed it seems that an observer moment (the 1-person object on
which the quantification of indeterminacy is done) is really
(with comp) a computational state *including* all the computational
histories going through it. It seems there is some kind of
duality between an ``observer moment" and the sheaf of histories
(branching-bifurcating sequences) going through the observer moments.
How can we use that?

With the modal logics, the observer moments are the canonical
maximal consistent sets of formulas for the logic Z1* (the
logic of []p & <>p, with p DU-accessible (or Sigma_1)).
That is very nice, because formally it gives a kind of
quantum logic. And here the duality between ``observer moment"
and the sheaf of histories is akin to the Galois connection
between theories and models well known in logic.
But I am still searching for a semantics for Z1* that would make
that duality and that Galois connection genuinely useful.
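
For the record, the modality used here can be spelled out in the
standard arithmetical (Goedel/Loeb) reading of [], which is the one I
use; this is just a restatement of the notation above:

   []p  =  Bew('p')    (the arithmetical provability predicate applied
                        to the code of p)
   <>p  =  ~[]~p       (consistency of p)

   verifiable certainty of p  =  []p & <>p,
                                 with p a Sigma_1 (DU-accessible) sentence.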

Bruno