Re: I'm an empiricist!

From: 1Z <peterdjones.domain.name.hidden>
Date: Wed, 09 Aug 2006 08:07:39 -0700

Bruno Marchal wrote:
> Le 09-août-06, à 14:06, 1Z a écrit :
>
> > What the non-existence of HP(*) universes falsifies is Platonism,
> > not computationalism. It is entirely possible that in a single
> > material universe, cognition is computation.
>
>
> This is coherent with the first seven steps of the UDA, but can no
> longer be maintained with the whole argument (8 steps).


I am not sure what you mean; the version I have [*] is 15 steps long.


> Even with the first seven steps, the "single material universe" needs
> to be "little", so as not to be able to run too big a portion of the
> universal dovetailing (which would generate HP universes).

The non-existence of HP universes still doesn't
disprove comp. It shows we don't live in a big universe,
whether a big physical universe or a big Platonia.

[ * ]

For ease of further reference, I put here the Universal
Dovetailer Argument (UDA) I sent to Russell (and to the list) some
weeks ago. I add the solution/comment to one of the exercises.
I have made some minor corrections.

                                ***

             The Universal Dovetailer Argument. (UDA)

UDA is a proof that COMP entails the REVERSAL physics/psychology.

The reversal will be epistemological: the branch "physics"
will become a branch of the machine's psychology; and ontological: matter
will emerge from consciousness, in a sense that will hopefully be clearer
after reading the proof.

Indeed, such a reversal will change the meaning of terms
like "psychology" and "physics", and that meaning is ultimately given
by the proof itself. By 'proof' here I mean an argument which either
should convince you, which means you are the only judge (no
argument from authority), or in which you should find an error, a
weakness... Of course, if you believe that the REVERSAL
is an absurdity, you are free to interpret the proof as a
refutation of COMP (like Gilles Henri).

To make my reasoning independent of the debate between internalists
and externalists in the philosophy of mind, I introduce the concept of
the generalised brain. By definition, someone's generalised brain is
the portion of the universe (if that exists) which is necessary
to emulate his or her consciousness. Put another way: if the environment
plays a direct role in consciousness (as the externalist philosophers
of mind argue), put the needed part of that environment in the
brain.

COMP is the hypothesis that there is a level such that I
survive a digital functional substitution of my generalised body/brain
(see above) made at that level, plus the Church Thesis (CT: digital =
Turing), plus Arithmetical Platonism (AR: the belief that arithmetical
propositions obey classical logic, and this independently of my own
cognitive abilities).

To sum up: COMP = \exists n SURV-SUBST(n) + CT + AR

Note also that I'm assuming a minimal amount of folk psychology (FOLK),
without which such an enterprise would be meaningless. It is the
minimal amount of psychology needed to understand that you or someone
else could, in some situation, accept an artificial digital brain graft,
and to understand the intuitive difference between the first and third
person. (See below.)

(The modal 'chapter 5' of my thesis can be interpreted as an
attempt (at least) to eliminate FOLK by substituting for it the
Gödelian provability logics and their Theaetetical variants.)
But the real goal of chapter 5 is to make the derivation of
physics real and concrete.

Note also that it is the AR part of COMP which makes COMP an
everything-type theory (explicitly so with the UD). This makes
'my' COMP assumption equivalent to Schmidhuber's.

To make the reasoning easier I introduce some supplementary hypotheses.
I will eliminate these hypotheses in due course.

a) NEURO: the neurophysiologist hypothesis. This is the supposition that
the level of substitution is high, i.e. that my generalised brain is
my biological brain (the one in my skull), relevantly described at (let
us say) the molecular level.

b) CU: there is a Concrete Universe, whatever it is. This is needed
   for the decor.

c) CUD: there is a Concrete running of a UD in the concrete universe.

d) 3-locality: computations are locally implementable in the
   concrete universe. That is, it is possible to separate the
   implementations of two computations in such a way that the result
   of one of these computations does not interfere with the result
   of the other. Computations can be independent.
   More generally, the result of a computation is independent of
   any event occurring far away (outside the light cone) from that
   computation.

e) Conceptual OCCAM razor. I will not insist on it; that should be easy
   for a many-worlder. The movie-graph argument in my thesis is really
   an elimination of the Occam razor. See also Maudlin's paper.
   We have talked about that on the discussion list (keywords: Maudlin,
   graph, movie, crackpot).

The proof. (in 15 steps).

1) By COMP and NEURO you survive with an artificial digital (Turing
emulable, by CT) brain. OK? (CU is used implicitly.)

2) By COMP and NEURO you survive classical teleportation. This
follows from 1), where the building (reconstitution) of the brain is
done a long way from the 'reading device' and the annihilation of the
original body. (CU is used implicitly.)

3) By COMP and NEURO (and implicitly CU, which I will not mention again)
you survive teleportation with a delay. After the annihilation, your
body and brain description is kept intact for one year, and then you
are reconstituted. An important point is that you (from your
first-person point of view) will not see any difference from the simple
teleportation case (case 2). But an external observer (third person)
will see the difference: for him the delayed teleportation
lasts one year.

4) You are teleported from the center of the galaxy to its border.
At the opposite border a star explodes. This changes nothing: you still
survive. This follows easily from 3-locality.

5) You are teleported from the center of the galaxy to its border.
At the opposite border you are also reconstituted. (For example, the
scanned information has been sent in opposite directions from the
center of the galaxy, and reconstituting machines have been put at the
edges of the galaxy.) You still survive, by COMP and 3-locality.

6) You are duplicable. (Direct consequence of 5.) More precisely:
you are 3-duplicable. And the first person doesn't *feel the split*.

7) Although your survival does not depend on faraway events,
from the first-person perspective the event "I survive at the
left edge (let us say) of the galaxy" could depend on the faraway
other reconstitution. Duplicability entails first-person
indeterminism, although everything is determinate for a third
person. (It is really the computationalist 3-determinateness
which entails the computationalist 1-indeterminateness.)

(Exercise: show that duplicability entails the unprovability
of COMP. Hint: consider teleportation without annihilation of the
original, with a delay, applied to a non-computationalist.)
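
A small aside of mine, not part of the argument: the point of step 7
can be illustrated with a toy simulation. The third-person description
of k successive W/M duplications is a fully deterministic list of 2^k
copies, yet each individual copy only holds one particular W/M record,
and for most copies that record looks like a run of fair coin tosses.
Here is a sketch in Python; the 'near-balanced' threshold is an
arbitrary choice of mine.

from itertools import product

def iterated_duplication(k):
    """Third-person view of k successive W/M duplications: a
    deterministic, exhaustive list of the 2^k copies, each labelled
    by its full history."""
    return [''.join(h) for h in product('WM', repeat=k)]

k = 10
copies = iterated_duplication(k)   # all 2^10 = 1024 copies, no randomness anywhere

# First-person view: each copy only possesses its own history string.
# Count how many copies see a near-balanced W/M record, i.e. a personal
# diary that looks like fair coin tosses rather than a deterministic law.
balanced = sum(1 for h in copies if abs(h.count('W') - k / 2) <= 1)
print(len(copies), 'copies in total;', balanced, 'see a near-balanced W/M record')

Nothing indeterministic happens in the third-person description; the
'randomness' appears only in what each copy can say about its own past,
which is the 3-determinateness/1-indeterminateness of step 7.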

8) You are 'read' and annihilated in Brussels, and the information
is sent to Washington and Moscow. You are reconstituted at Washington,
and the information is kept intact at Moscow for one year. Then
you are reconstituted at Moscow. (Duplication with asymmetric
delay.) The point is the following: whatever way you choose to
quantify the 1-indeterminism in the symmetric duplication, you
must quantify the asymmetric duplication in the same manner.
This follows from COMP and step 3: the first person cannot be aware of
the delays.

9) There is also a form of 1-non-locality. Although your survival
does not depend on faraway events, your expectation of personal
experience does depend on faraway events. Here also, it is the
strict 3-locality which entails the 1-non-locality.

10) Here is an old argument you can find in all the idealist schools
of thought (Hindu, Buddhist, Plato, Descartes, Berkeley, etc.).
It is based on the notion of dream, but today it is easier
(especially with COMP)
to convey it with the notion of virtual reality. The point is:
for any neighborhood and any time interval, you can build a computing
machine simulating that "space-time" at such a level that a first
person will not be able to see any difference.
(The computing machine preserves the relevant counterfactuals.)
Roughly speaking, a first person cannot
distinguish a 'real' neighborhood from a virtual (digitally simulated)
neighborhood (for all levels 'below' its own substitution level).

11) To sum up: the way you quantify the indeterminism is independent
of the time, the place and the nature (real/virtual) of the
reconstitution.

Note: the indeterminism is pure 1-indeterminism. Nevertheless, by
duplicating entire populations, the indeterminism can be made
third-person 'verifiable' inside each multiplied population. This
leads to what I call first-person-plural indeterminism.
(I would like to know a better English expression for that!)

12) A Universal Dovetailer exists. (An extraordinary consequence of the
Church Thesis and Arithmetical Realism.) The UD simulates all
possible digital devices in a quasi-parallel manner.

(Add a line to the code of any UD and you get a quasi-
computation of its Chaitin \Omega number.)
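
For what it is worth, here is a minimal sketch of a dovetailer in
Python (my own toy, not the UD itself): instead of enumerating all the
programs of a universal machine, it takes an explicit list of Python
generators as stand-in 'programs' and advances them in the
quasi-parallel fashion just described, so that no non-halting program
ever prevents the others from being executed.

from itertools import count

def dovetail(programs, stages=50):
    """Quasi-parallel execution: at stage n, start program n (if any is
    left) and advance every already-started program by one step, so a
    program that never halts cannot block the others."""
    waiting = iter(programs)
    running = []
    for n in count():
        if n >= stages:                       # cut-off so the demo terminates
            break
        try:
            running.append(next(waiting)())   # start one more program
        except StopIteration:
            pass                              # no program left to start
        for g in list(running):
            try:
                state = next(g)               # one more computational step
                print('stage', n, '->', state)
            except StopIteration:
                running.remove(g)             # this program halted

# Toy 'programs' (zero-argument generator functions): some halt, some don't.
def halting(k):
    def prog():
        for i in range(k):
            yield ('halting', k, 'step', i)
    return prog

def looping(k):
    def prog():
        i = 0
        while True:
            yield ('looping', k, 'step', i)
            i += 1
    return prog

dovetail([halting(3), looping(1), halting(5)], stages=6)

If the stand-in programs were replaced by the self-delimiting binary
programs of a universal prefix machine, then adding 2^-(length of p)
each time a program p is seen to halt would give the kind of monotone
approximation from below of the Chaitin \Omega number alluded to in
the parenthesis above.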

13) So let us assume CU and CUD, that is, let us assume explicitly that
there is a concrete universe and a concrete running of a UD in it.
This needs a sort of steady-state universe, or an infinitely expanding
universe, to run the complete infinite UD.
Suppose you let a pen fall and you want to predict what will happen.
Let us suppose your brain is in state S at the beginning of the
experiment. The concrete UD will go through that state infinitely often
and compute all sorts of computational continuations. This is
equivalent to reconstitutions. It follows from 11 that your
expectations are undetermined, and that the domain of the indeterminism
is given by the (infinite) set of reconstitutions. To predict,
with COMP, what will happen, you must take into account all
possible histories going through the state S of your brain.
And here, clearly, the NEURO hypothesis is not used. Even if your
real brain state is the state of the actual concrete universe,
with COMP that state will be generated (infinitely often) by the
UD. The same reasoning applies if your brain state is the quantum state
of the universe, so the argument works even if the brain is a
non-local quantum object (if that exists). So physics is
determined by the collection of your computational continuations
relative to your actual first-person state.

14) If 'that' physics is different from traditional empirical
physics, then you refute COMP. But with COMP you will not refute
COMP, will you? So with COMP you will derive the laws of physics,
i.e. the invariants and similarities in the 'average' continuations of
yourself (by defining the measure on the computational continuations).

Exercise: why should we search for a measure on the computational
continuations and not just on the computational states? Hint: with
the computational states only, COMP predicts white noise for
all experiences. (OK Chris?) With the continuations, a priori
we must just hunt away the 'white rabbit' continuations.
You can also show that Schmidhuber's 'universal prior' solution
works only in the case where the level of substitution
is so low that my generalised brain is the entire multiverse.
(See below.)

15) Once you explain why arithmetical machines are statistically right
to believe in physical laws without any real universe, such a real
universe is redundant.
By Arithmetical Realism and the OCCAM razor, there is no need
to run the concrete UD, nor is there any need for a real concrete
Universe.
(Or you can use the movie-graph argument to show that a first
person is not able to distinguish the real, virtual and *arithmetical*
nature of his own implementations, and this eliminates the need for
OCCAM.)

                                                QED

>> BM: Exercise: why should we search for a measure on the computational
>> continuations and not just on the computational states? Hint: with
>> the computational states only, COMP predicts white noise for
>> all experiences. (OK Chris?) With the continuations, a priori
>> we must just hunt away the 'white rabbit' continuations.
>> You can also show that Schmidhuber's 'universal prior' solution
>> works only in the case where the level of substitution
>> is so low that my generalised brain is the entire multiverse.

>RS: Again, I do not know what you mean by this last comment.

This is far from being an easy exercise. I call it an ``exercise", not
because I think it is easy homework, but because I do not need its
solution in the UDA (the proof that COMP -> REVERSAL).

Note that IF QM is correct, THEN we get (non-constructively)
COMP -> QM.

The UDA shows ``only" that we *must* extract the ``physical laws" from
the computationalist quantification (quantitative analysis) of the
comp-1-indeterminism. But it does not tell us what the quantification's
domain really is, nor how to compute it.

And I believe it is such a difficult question that I have chosen to
approach it formally, by substituting the provability logics for
folk psychology and searching for an arithmetical interpretation
of the notion of probability or credibility. The verifiable
``certainty" of p is modelled in that setting by []p & <>p, and if p
is DU-accessible we get a sort of quantum logic, which, I think, is
promising.

But it is also interesting to try to get an intuitive understanding
of the "probability" calculus, if only to make clear the relation
between Schmidhuber's approach and mine.

In the course of doing this we will also discover a kind of apparent
objective weakness in my UDA reasoning. I have never tried to hide that
weakness, but I have realized that it is also unpedagogical to insist
on it too early. This weakness is not fatal for the UD Argument, but
it is quasi-fatal for the hope of finding the
probabilities intuitively. Here again, that has motivated the modal
(more abstract) approach.

Indeed. Remember the ``fundamental result": the way of quantifying
the 1-indeterminism is independent of the place, the time and
the virtual/real nature of the reconstitution. The reason that has
been invoked for this is first-person indistinguishability.

Now let us consider again the thought experiment from the
renormalisation thread. I am in Brussels, preparing myself for
a multiplication experiment. After annihilation in Brussels I
will be reconstituted in ten *virtual environments*:

   - one simulating Washington perfectly,
   - the nine others simulating Moscow perfectly.

I consider virtual environments here so that, by comp 3-determinism,
I can ensure that the nine experiences of being in Moscow are
completely identical, and thus first-person indistinguishable.

Thus, if we take first-person indistinguishability seriously,
we should consider the 1:9 multiplication experiment
described here as equivalent to any 1:n multiplication experiment.
In that case P(M) = P(W) = 1/2.
In that case, with CUD (there is a concrete running UD), we should
put the same weight on all ``compiler-equivalent" computational states.
(Note that this equivalence is not so easy to define, but clearly
it entails that we must put the same weight on all 1-step
computational continuations of my brain state; I assume NEURO for
the sake of simplicity.) But remember that the UD dovetails on the
reals (or on the initial segments of the reals, which is the same
thing for the 1-person). So if my brain has n (binary, also for
simplicity) inputs, there will be 2^n such continuations, and so on:
that means that comp would entail a white-noise expectation for *any*
experience in *any* experiment.
That is not the case, so something is wrong with such an equivalence.
So either comp is false or we must throw away this equivalence.
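
To see the white-noise point numerically, here is a toy sketch of mine
(under the simplifying assumption made above that every 1-step binary
continuation gets the same weight): whatever the current state is, the
predicted distribution over the next n-bit input is exactly uniform,
i.e. maximal-entropy noise.

import math
from itertools import product

def next_input_distribution(n):
    """Equal weight on every 1-step continuation of a brain state with n
    binary inputs: each of the 2^n possible next inputs gets 1/2^n."""
    inputs = list(product((0, 1), repeat=n))
    return {bits: 1 / len(inputs) for bits in inputs}

n = 8
dist = next_input_distribution(n)
entropy = -sum(p * math.log2(p) for p in dist.values())
print(len(dist), 'continuations, entropy =', entropy, 'bits (maximum is', n, 'bits)')
# Maximal entropy for every state and every experiment: pure white noise,
# which is not what we observe -- hence the need for a measure on histories.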

As appears in Mallah's reply, the idea is that we will take
more steps of the comp continuation into account. The idea
is to put weight not on computational states but on
computational histories.
This move will lead us quickly toward comp-immortality
(contra Mallah, ironically enough!).
But how many steps make a computational history? And should we
distinguish the equivalent ones? Surely not, if we keep
the first-person indistinguishability principle. But in that
case we will meet a new problem: with possible first-person
amnesia, the computational equivalence will make possible (cf. GSLevy)
the merging (fusing) of computational histories, and this
(although good news for our hope of finding the comp
origin of the quantum laws) kills our hope of tackling the
probabilities by pure intuition. But let us at least continue
our attempt.

Let us go back to the question ``how many steps make a computational
history?". The easiest answer is ``let us take all the steps". So
a computational history (modulo compiler-equivalence) is just the
whole computation.

Now, a Platonist mathematician (unlike an intuitionist) will
easily accept that there are two sorts of computation:

   - those which stop,
   - those which never stop.

So, relative to a computational state X (my Brussels state,
for example), there are computational continuations going through
X which stop, and others which do not stop.
The stopping ones can only be enumerable. The non-stopping ones are
at least as numerous as the reals.
So the stopping ones can be eliminated from the probability
calculus. This is immortality with a vengeance: we are immortal
because we have 2^aleph_0 infinite futures and at most aleph_0
finite futures.
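
One way of spelling out the cardinality step (my own filling-in, under
the extra assumption, not stated above, that the measure mu on the
continuations of X is countably additive and non-atomic, i.e. gives
weight zero to any single continuation):

\[
  |\{\text{halting continuations through } X\}| \le \aleph_0 ,
  \qquad
  |\{\text{non-halting continuations through } X\}| = 2^{\aleph_0} .
\]
\[
  \mu \text{ countably additive and non-atomic}
  \;\Longrightarrow\;
  \mu(\{\text{halting continuations}\})
  = \sum_{\text{halting } y} \mu(\{y\}) = 0 .
\]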

But this is not enough. We should take the nearness of computational
histories more seriously into account, and this could depend
on the Schmidhuber/Wei Dai Universal Prior (UP) of the roots (Wei Dai's
little programs) of the computations going through X.

In that case our probability formula becomes something like

P(W) = P(W in y / conditionalised on X :: UP(little
program is an origin of X)),

where ``::" is still not defined, and y is one possible
consistent infinite computation going
through the (actual) state X.

The possible merging of the histories makes me feel that an
intuitive search for ``::" is senseless, and personally
I have never been able to define it, so I have decided
to interview the SRC UTM (and its guardian angels) itself. This
is possible thanks to the work of Boolos, Solovay, Goldblatt, etc.

Only if my brain is the entire universe is my history directly
defined by the UP of the little programs (Schmidhuber's solution).

I see almost all of this discussion list as a search to define
the unknown relation ``::" (oversimplifying a little bit).
I see it more and more as a search for a good way of using
both the ASSA (based on the UP) and the RSSA (taking the actual state
into account).

Note also that there is something importantly true in the
saying of Higgo and Griffith (vague though it is).
Indeed, it seems that an observer moment (the 1-person object on
which the quantification of indeterminacy is done) is really
(with comp) a computational state *including* all the computational
histories going through it. It seems there is some kind of
duality between the ``observer moment" and the sheaf of histories
(branching-bifurcating sequences) going through the observer moments.
How to use that?

With the modal logics, the observer moment could be modelled by
the canonical maximal consistent sets of formulas for the logic
Z1* (the logic of []p & <>p, with p DU-accessible (or Sigma_1)).
That is very nice, because formally it gives a kind of
quantum logic. And here the duality between the ``observer moment"
and the sheaf of histories is akin to the ``Galois connection"
between theories and models well known in logic.
But I'm still searching for a semantics for Z1* that would make that
duality and that Galois connection genuinely useful.

Bruno

