Huge Look Up Tables: the relativistic point of view

From: <GSLevy.domain.name.hidden>
Date: Sun, 15 Aug 1999 02:05:42 EDT

I am back from my vacation and find 142 emails, most of them from the
list. And some of them are LONG!

Just a quick comment from the relativistic point of view.

One question that arose was whether Huge Look Up Tables (HLUT) are conscious.
I would like to make two points:

1) The relative "strength" of the Turing test and the HLUT has not been
discussed. If the HLUT is "stronger" than the test then, FROM A
POSITIVIST APPROACH, the HLUT will appear to be conscious. If the HLUT is
"weaker" than the test then it will not appear to be conscious. The
Turing test can be interpreted as the frame of reference from which the
evaluation of consciousness is made.

2) The person who programmed the HLUT has been completely ignored. We can
call this person the HLUT demon. The Turing test is actually testing a
RECORDED version of the HLUT demon. This reminds me of Kasparov battling
IBM's Deep Blue at chess. Was Deep Blue conscious? From one point of view,
Kasparov was playing against a plain stupid computer. From another point
of view, he was sparring against the wisdom of a team of programmers fused
with a huge look-up table of strategies worked out by generations of Grand
Masters. It all depends on where you draw the line.
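
For concreteness, here is a minimal sketch (purely illustrative, in
Python, with made-up entries) of what an HLUT amounts to: a finite table,
filled in ahead of time by the HLUT demon, mapping each conversation-so-far
to a canned reply. The machine that reads the table does no reasoning of
its own; whatever the Turing test probes was put there by the demon.

    # Purely illustrative sketch of an HLUT: a finite table, pre-recorded by
    # its programmer (the "HLUT demon"), keyed by the conversation so far.
    hlut = {
        (): "Hello. Ask me anything.",
        ("Hello. Ask me anything.", "Are you conscious?"): "Of course I am.",
        # ...one entry for every possible history up to some finite length
    }

    def hlut_reply(history):
        """Look up the demon's pre-recorded answer for this exact history."""
        return hlut.get(tuple(history), "I have no entry for that.")

    print(hlut_reply([]))
    print(hlut_reply(["Hello. Ask me anything.", "Are you conscious?"]))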

When a Turing test is applied to a human, does it test the human and only
the human? A totally uneducated human with no prior contact with the
environment would be only marginally smarter than a chimp, or possibly
worse.

Or does the Turing test evaluate the human plus its educators, including
parents, teachers, classmates, friends, and more generally its whole
environment since birth... and even before birth... and possibly the
whole universe that gave this human its life?

It all depends on where you draw the line around the system you are
evaluating, and on what your frame of reference is.

George

 

attached mail follows:



On Fri, 13 Aug 1999, Christopher Maloney wrote:
> Grudging kudos to Jacques for seeing where this question came from,
> and its connection to the Quantum Suicide experiment. I haven't posted
> any follow-ups for a while because I still find the whole thing quite
> perplexing.
>
> I agree that the concept "that one's measure is somehow distributed
> among the so called computational continuations of one's brain activity"
> leads inevitably to the concept of near-zombies. The description of
> making a million copies of one person is a good illustration. Each of
> those copies has only a one millionth chance of "being" the original
> person, so we should not be as concerned when one of those dies as
> when someone else, who has never been copied, dies. But is this a
> refutation of the concept, by reductio-ad-absurdum? I don't think so.

        It is absurd to me and hopefully will be to the others. I think
you are not being objective since you usually find zombies absurd.

> I want to clarify one thing, though, in Jacques' post:
>
> "Jacques M. Mallah" wrote:
> >
> > On Fri, 13 Aug 1999, Russell Standish wrote:
> > > > referring to
> > > > > t0         |
> > > > >            |
> > > > > t1    T   / \   H
> > > > >          /   \
> > > > > t2      /    / \
> > > > >        |    |   \
> > > > > t3     Y    R    B
> > > >
> > > > Assume that all three branches occur (two copying events).
>
> If there are two copying events, then there is no place for a coin
> toss to enter into the experiment, so the 'T' and 'H' should be
> erased from the diagram. The point is still made that, at time t0,
> Jane would figure:
> P(left branch, t1) = 1/2, P(right branch, t1) = 1/2
> P(Y, t3) = 1/2, P(R, t3) = 1/4, P(B, t3) = 1/4
>
> Which would imply that the two copies of her that saw red and blue
> would be less likely to be the same Jane at t0, so in some sense,
> they would be less human, you might say.

        In the QS claim, that is. T and H can still label the branches.
        The two experiments are actually rather different, and I should
have made the distinction clearer in my post. I assume you refer below to
the case with two copying events.
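
For concreteness, a small sketch (purely illustrative) of the arithmetic
behind the numbers quoted above, under the "measure flows and splits"
reading of the diagram: each copying event divides a branch's measure
equally among its children, which gives 1/2 for Y and 1/4 each for R
and B.

    # Illustrative sketch of the "measure splits at each copying event" view.
    # Tree from the diagram: the first copying event yields the Y branch and
    # the H branch; the H branch is then copied again into R and B.
    tree = {"start": ["Y", "H"], "H": ["R", "B"]}

    def split_measure(node, measure, out):
        children = tree.get(node, [])
        if not children:                  # a leaf: one of the Janes at t3
            out[node] = measure
        else:
            for child in children:        # divide the measure equally
                split_measure(child, measure / len(children), out)

    leaves = {}
    split_measure("start", 1.0, leaves)
    print(leaves)    # {'Y': 0.5, 'R': 0.25, 'B': 0.25}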

> 3. Subjective probabilities can be computed, but the assumption
> that consciousness can "flow" to a continuation independent of
> time or space is flawed.
>
> This, I think, is Jacques' point of
> view. Though he didn't state it, I would guess that he would
> say that
> P(H, t1) = P(H, t2) = P(H, t3) = 1/2,
> and that the original Jane would necessarily feel herself to
> continue along with her original body. That is, if, in the
> above diagram, at the copying event after Jane sees Heads, we
> assume that the original Jane is the one who is shown the Red
> card, then Jane at t1 would say
> P(R) = 1, P(B) = 0
> The copy of Jane who sees the blue card is a new person, who
> was just "born" at the instant the copy was made, even though
> she has all the same memories as the original.

        That is NOT my position, though of course I think 'consciousness
flowing to and being distributed among continuations' is nonsense. I make
no distinction between a copy and the original; 'identity' is not a
fundamental concept. Each has the same amount of measure. For practical
purposes the distinction is useful, however. It's just a matter of
terminology in the practical use.

> 4. Subjective probabilities can be computed on the basis of the
> Strong SSA, and we get
> P(H, t1) = 1/2
> P(H, t2) = P(H, t3) = 2/3
> If this is the case, then I think we have to throw Tegmark's
> scheme using Bayesian statistics out the window. This option
> has severe metaphysical problems, though, in my opinion.

        I don't know what you mean by the above paragraph, but the
effective probabilities are correct if there are two copying events. The
SSA is the right way to do Bayesian calculations.
        If the T-H split represented a non-MWI coin toss and was a one-time
event, then P(H,t3) = 1/2. In practice those conditions would be
impossible to achieve even without the MWI of QM (e.g. in an infinite
universe), and P(H,t3) = 2/3.
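
And a matching sketch (again illustrative only) of the copy-counting
arithmetic: with two copying events, three copies exist at t3 with equal
measure, two of them on the H branch, so the effective probability comes
out to 2/3 by counting copies rather than by splitting a conserved unit
of measure.

    # Illustrative sketch of the SSA / copy-counting arithmetic: each copy
    # alive at t3 carries equal measure, so effective probabilities are just
    # fractions of a head count.
    copies_at_t3 = {"Y": "T", "R": "H", "B": "H"}

    def p_branch(branch):
        hits = sum(1 for b in copies_at_t3.values() if b == branch)
        return hits / len(copies_at_t3)

    print(p_branch("H"))    # 2/3, i.e. P(H, t3) = 2/3
    print(p_branch("T"))    # 1/3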

> 6. Subjective probabilities can be computed, and we should expect
> the nonsensical results
> P(H, t1) = 2/3
> P(H, t2) = P(H, t3) = 2/3
>
> This is what I believe is probably true. I think that there
> must be a sort of "reverse causality" at work, which would
> increase the measure of the right branch of Jane at time t1
> (the branch that sees heads, but before the copy is made).

        Nonsense.

> This still has Jacques' problem of allowing pseudo-zombies.
> If we switch to Jacques' example and assume two copying events,
> then the Jane on the left branch, at time t1, would have less
> measure than the Jane on the right (note the contrast between
> this result and the previous, where the Janes that were the
> product of the second copying operation were accorded less
> measure).
>
> But I don't see this as a problem. What I'm suggesting is that
> each human alive today has a varying amount of "measure". It's
> incorrect to assume that each person, when they are born, is
> given a single "measure unit". By my scheme, a person with a
> terminal illness with only a few days to live would have a
> very small measure of existence, relative to others.

        Huh? This seems inconsistent with QS and the specifics aren't
there.

> I can't help wondering, often, why I find myself to be the
> particular human I am. Do you others wonder this?

        You are arrogant. I am not a typical human but see no reason to
suspect I could not be a randomly selected human.

> One thought
> I've had (please don't laugh at me too badly) is that the fact
> that I have a pretty poor memory might be significant. If I
> had a better memory, then my measure would be less, because
> fewer universes could have given rise to me. Of course, this
> reasoning probably won't work for you, but that doesn't make it
> any less valid from my perspective, which is the only one I
> have.

        Well, I hate (giggle?) to say it, but that sounds stupid. If you
remember something non-random, that shouldn't cut your measure. If you
remember a random bit, it cuts the total measure of each type of you in
half, but now there are twice as many types. By total measure I mean, as
always, the number, so this is consistent with the SSA and leads to no
zombies.
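
A toy version of that bookkeeping (illustrative only, taking "total
measure" to be a simple count): remembering n random bits splits one type
of observer into 2**n types, each with 1/2**n of the measure, and the
total summed over all types is unchanged.

    # Toy bookkeeping: remembering n random bits splits one observer type
    # into 2**n types, each carrying 1/2**n of the measure; the total over
    # all types is conserved.
    def measures_after_bits(total, n_bits):
        n_types = 2 ** n_bits
        return [total / n_types] * n_types

    for n in range(4):
        ms = measures_after_bits(1.0, n)
        print(n, len(ms), sum(ms))    # more types, less each, same total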

> I came to believe in this "reverse causality" while pondering
> the QS project I wrote about before. I started to expect that
> things would crop up in my way to prevent my being able to
> complete the project, before it came to fruition. It didn't
> (and it still doesn't) make sense to me that the measure of all
> my branches should be unaffected until the very instant that I
> carry out the experiment. Because if the assumption that I'll
> be alive after the experiment date is correct, then I can expect
> to have memories at that time of somehow having escaped. And
> I should, in general, expect to have a memory of "the most
> likely" escape route, or of one of the most likely ones, if there
> are several that are near-equally likely.
>
> But how can one reconcile that with the concept of continuity of
> consciousness from moment to moment? Only if there is a reverse
> causality at work.
>
> This theory has significant and testable implications. Viz: we
> should expect to find ourselves in a universe that will allow us
> to live forever. I.e. this leads directly to the requirement
> that the FAP is true. Just consider if time t1 and t2 are
> separated by a larger and larger time span. Consider also that
> those branches in which we cease to exist also tend to decrease
> the measure of all the observer-moments in previous subjective
> time.
>
> Basically, the measure of our observer-moments at the next
> instant in subjective time is weighted by the number of continuous
> paths from that observer-moment to the "Omega-point". This is
> my crackpot theory. Though it's certainly hard to justify on the
> basis of the SSA on a moment-by-moment basis (the Strong SSA), I
> haven't yet found anything that contradicts it. I know that's
> not good enough, but anyway I find it the most satisfying
> result of the above thought experiment. All the other possibilities
> are problematic.
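
To pin down what that weighting would amount to (a purely illustrative
sketch, not an endorsement of the proposal), in a branching graph of
observer-moments each moment's weight would be the number of continuous
paths from it to the terminal "Omega" node, so a branch with no
continuation contributes nothing.

    # Illustrative sketch of the proposed path-counting weight: count the
    # continuous paths from an observer-moment to the terminal "Omega" node
    # of a hypothetical branching graph.
    graph = {"t1_dies": [], "t1_survives": ["t2a", "t2b"],
             "t2a": ["Omega"], "t2b": ["Omega"], "Omega": []}

    def paths_to_omega(node):
        if node == "Omega":
            return 1
        return sum(paths_to_omega(child) for child in graph[node])

    print(paths_to_omega("t1_survives"))   # weight 2
    print(paths_to_omega("t1_dies"))       # weight 0: no path to Omega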

        The Omega Point CRAP is disproven because the universe is open.
(CRAP=causally retroactive anthropic principle)

                         - - - - - - -
              Jacques Mallah (jqm1584.domain.name.hidden)
       Graduate Student / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
            My URL: http://pages.nyu.edu/~jqm1584/
Received on Sat Aug 14 1999 - 23:09:43 PDT
