
From: Wei Dai <weidai.domain.name.hidden>

Date: Fri, 12 Jul 2002 20:06:36 -0700

On Fri, Jul 12, 2002 at 06:47:46PM +0200, Bruno Marchal wrote:

> OK I will try to read Joyce's book asap. In general I am quite skeptical
> about the use of the notion of "causality". I also have no understanding
> of your posts in which you argue about a relationship between the search
> for a TOE and decision theory.

See my recent reply to Hal. Basically they are related through the concept

of probabilities (if the TOE makes use of probabilities).

> > I'm aware of *a* mind-body problem. I'm not sure if it's the same one you
> > have in mind. The one I have in mind is this: how do I derive a
> > probability distribution for the (absolute) SSA from a third-person
> > description of the multiverse?
>
> The mind-body problem I am talking about is the one formulated by Descartes
> (but also by Hindu philosophers long before him). It is really the problem
> of linking private (first person) sensations to third person communicable
> phenomena. How a grey brain produces the sensation of color, as someone put it.

Could you state the problem more formally? Also, you asked me whether I

was aware of the mind-body problem. What did my answer tell you?

> Those I have encapsulated in the label "comp". Precisely, it consists in
>
> 1) accepting a minimal amount of arithmetical realism, i.e. the truth of
> elementary statements of arithmetic does not depend on me or us ...

I agree with (1).

> 2) the Church Thesis (also called the Church-Turing Thesis, or the
> Post Law, etc.), i.e. all universal machines are equivalent with respect
> to their simulation abilities (abstracting away the duration of those
> simulations).

I don't think that is settled yet. We may be able to build machines that

are more powerful than Turing machines. I don't think we should rule it

out at this point.

> 3) the existence of a level of description of my body (whatever it is)
> such that my first person experience remains invariant through a
> functional substitution made at that level.

Can you state this more formally? Specifically, how do you define "functional
substitution"?

> (Note that the Arithmetical UDA makes it possible to eliminate "3)" above.)

I guess I'll have to wait for your English paper to understand how.

> I was referring to the second incompleteness theorem of Godel: a consistent
> machine cannot prove its own consistency. This means that if you add the
> inconsistency as a new axiom, the machine will not derive a contradiction
> (because if the machine derived a contradiction from her inconsistency, she
> would prove her own consistency by reductio ad absurdum). So a consistent
> machine will not be inconsistent when she asserts her own inconsistency.

But in second order logic, if you add a new axiom to a consistent theory

stating that it's inconsistent, the theory is no longer satisfiable (i.e.,

it no longer has a model, even though it's still consistent), right? In

first order logic, the theory would still be satisfiable but that just

indicates that the semantics of first order logic is flawed. BTW remind me

what's the relevance of this again?
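For reference, the theorem under discussion can be stated as follows (a standard gloss; this summary is mine, not part of the original exchange):

```latex
% Godel's second incompleteness theorem, for a consistent, recursively
% axiomatized theory T extending basic arithmetic:
%
%   T \nvdash \mathrm{Con}(T)
%
% Consequently T' = T + \neg\mathrm{Con}(T) is consistent: if T' proved
% a contradiction, T would prove \mathrm{Con}(T) by reductio, contradicting
% the theorem.  By the completeness theorem T' then has a first-order model
% (one containing nonstandard "proofs" of 0=1), though under full
% second-order semantics it indeed has no model.
```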

> Of course I restrict us and the machines I interview to sound logics. Why
> should I interview unsound machines? It would be like a historian working
> on a biography of Napoleon interviewing a madman in an asylum who pretends
> to be Napoleon. I limit my interviews to sound machines for the same reason
> I would stop reading papers by someone if I realized he was systematically
> using an unsound theory. Except for clinical cases, I have never found
> anyone using unsound basic theories.
*

Ok, I guess I thought your restriction to sound machines was a substantial
one, but perhaps it's not. However, I'm still not sure what point you're
making by making this restriction explicit.

> > I don't have to explain how I "keep being in the same computation" because
> > I don't know or claim that. I'm not sure that's even a meaningful
> > sentence.
>
> It seems to me you claim it in your next sentence, here:
>
> > All I do claim is that for any given computation, if I am in
> > that computation, I care about the future version of me in that
> > computation, and I can causally affect its future (and only its future).
> > In other words, the causal influence of my actions stays in the same
> > computation.
>
> The whole point of the UDA thought experiment consists in showing that
> expressions like "I am in that computation" are not well defined. The UDA
> also shows that we have a lot of futures ("future", btw, is a first person
> construct: there is no notion of future in any "block-reality" approach).
*

To me, future is a concept linked with causality, because causes always

occur before effects. In any "block-reality" approach that takes causality

into account, it would have to be littered with arrows indicating causal

relationships, and those arrows would differentiate between past and

future.

Certainly in a computation there is a natural concept of past and future

that is not a first person construct, and the causal relationship between

one state of a computation and the next one should be quite clear.
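This point about a computation's internal order can be made concrete with a toy sketch (my illustration, not from the thread): a deterministic transition function fixes exactly one successor for every state, so "past" and "future" states are ordered without any appeal to a first person.

```python
# Toy deterministic computation: each state causally determines the next,
# giving a third-person ordering of states.  The machine and its states
# are hypothetical, chosen only for illustration.

def step(state):
    """One transition of a toy counter machine."""
    counter, total = state
    return (counter + 1, total + counter)

def run(initial, n_steps):
    """Return the trajectory: state 0 precedes state 1, and so on."""
    trajectory = [initial]
    for _ in range(n_steps):
        trajectory.append(step(trajectory[-1]))
    return trajectory

traj = run((0, 0), 5)
# Rerunning from any earlier state reproduces exactly the same future
# states -- the "arrow" from one state to the next is fixed by step().
assert run(traj[2], 3) == traj[2:]
```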

> The fact that "I can causally affect its future" is not clear at all,
> and any clearer version should be justified.

Read Joyce's book; it should clarify and justify it for you. If not, come
back and we'll talk about it.

> Let me give you a simple example. Suppose you decide to drink a cup of
> coffee. You will prepare that cup of coffee hoping this will causally
> affect "its" (your!) future, in such a way that you have the first person
> experience of drinking that cup of coffee.

Ok.

> But the UD, because he is shallow, will generate an infinite number of
> computations in which you will experience drinking a cup of tea (if
> not a white rabbit), and this although you have the same experience
> of the past (which includes your preparing that cup of coffee).
*

You can just ignore those universes because their algorithmic

complexities are very high (and therefore their measures are very low).
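A toy sketch of that weighting (the continuation names and bit-lengths are invented for illustration; this is not Bruno's or my formal apparatus): if each continuation's measure is proportional to 2^-K, where K is the length in bits of its shortest program, then high-complexity continuations carry negligible weight.

```python
# Hypothetical shortest-program lengths (in bits) for each continuation
# of the coffee-preparing history; the numbers are made up.
lengths = {"coffee": 10, "tea": 40, "white rabbit": 120}

# Weight each continuation by 2**-K and normalize to get a measure.
weights = {name: 2.0 ** (-k) for name, k in lengths.items()}
total = sum(weights.values())
measure = {name: w / total for name, w in weights.items()}

# The low-complexity continuation dominates; the abnormal ones are
# exponentially suppressed.
assert measure["coffee"] > 0.999999
```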

> The "invariance lemma" prevents "easy" use of the (Kolmogorov or Chaitin)
> complexity notion for dismissing those abnormal stories.
*

Why? I just did it. Are you saying each copy of you in any universe counts

equally regardless of how small the measure of the universe is? If that is

what you mean by "invariance lemma" then I certainly don't agree with you.

> The comp indeterminacy hints at transforming that problem into a search
> for a measure, and at showing that relatively abnormal consistent
> extensions/stories are rare. This is not unlike Feynman's integration
> over paths in quantum mechanics.

I do not see the necessity of it.

Received on Fri Jul 12 2002 - 20:07:16 PDT


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:07 PST