Re: Questions on Russell's "Why Occam" paper

From: Patrick Leahy <jpl.domain.name.hidden>
Date: Fri, 10 Jun 2005 13:59:16 +0100 (BST)

On Thu, 9 Jun 2005, Russell Standish wrote:

> Yes, if you think there is a concrete reality in which everything exists
> (your question of where does the observer live?), then the AP is a
> tautology.

What I meant by "where does the observer live", in more formal language,
is "how do you account for the (apparent) sense data we have?". I also
have a strong preference for an account where the description of at least
our "world" doesn't privilege one particular observer. In particular, this
is hard to square with your insistence that "the observer provides the
interpretation" of each of your bitstring universes.

>
> However, if you are prepared to allow for the possibility that observers
> exist "nowhere", then things are not quite so simple. One can always
> imagine being the brain-in-the-vat observer in a reality which does not
> contain a body, or a brain, in a vat or anywhere else. Usually in this
> scenario, the observer will conclude that there must be a body somewhere
> else, and so concludes that it is inhabiting some kind of virtual
> reality. However, this implicitly assumes there has to be a brain
> somewhere, and so implies a reality somewhere else for the brain to
> inhabit. But what if the brain is not required?
>
> Obviously, the last conclusion is full blown solipsism, but that is
> hardly a knock down argument.

As both Hal and I keep trying to emphasise, we are interested in how, or
whether, your theory can account for our own existence and the reality (or
appearance, if you prefer) that we see around us. So the case of
disembodied intelligences is a total red herring. I don't really care
whether these feature in your theory or not, but I do care whether you can
account for (apparently) embodied intelligences.

>
> Instead, one can take the Anthropic Principle as an assertion of the
> reality we inhabit...

Again, you are using a private language... the AP is not regarded as any
such assertion by anyone else I've ever heard of. Most people regard their
existence as proved by their own subjective experience, not some invented
principle. If you don't agree I think we are just arguing about the
meaning of "exist"... e.g. if we happen to be living in a computer
simulation, or are just features of the solution of some set of equations,
I would still say that we (really) exist.


> ... and experimentally test it. In all such cases it has been shown to
> be true, sometimes spectacularly.

If we know "experimentally" "the reality we inhabit" (?!), which I guess
I've just claimed that we do, why do we need a principle to assert it?

Likely you mean something completely different, in which case please
explain (with examples of said experiments!).

<snip> Quoting me:
>>
>> Then you are implying that the observer can, in a finite time, read and
>> attach meaning to a full (space-time) description of itself, including
>> the act of reading this description and so on recursively.
>>
>
> Not at all. Consistency is the only requirement. If the observer goes
> looking for erself, then e will find erself in the description. It
> doesn't imply the observer is doing this all the time.

I think here we have run into the same inconsistency that you admitted in
your discussion with Hal. In your first reply to Hal you assert that the
observer O(x) attaches a unique meaning to the description string, which
would imply processing all bits of the string up to the start of the
"don't care" region. A later reply suggests that we should in different
contexts assume (a) this and (b) what your paper actually says, i.e. the
meanings are updated as further bits are read. Now you have changed this
again, and the observer is not (modelled by) a simple mapping but is a
free agent who can choose to apply mappings to different "regions" of the
bitstring at will.

And even that doesn't actually answer my problem: let's assume the
observer *does* "go looking for erself". You claim he will find himself,
but if the description is *complete* my original problem remains: he will
never finish reading his own description. Consequently the description
will remain uninterpreted. In particular, he will never get to the part
which would be interpreted as "himself now". So is there any sense in
which *himself now* exists?
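The regress being described can be made concrete with a toy model (my own illustration, with hypothetical names, not anything from Standish's paper): if a "complete" description at each level must also describe the observer reading the level below it, the description grows strictly at every level, so no finite string can serve as the fixed point the observer would have to finish reading.

```python
def self_description(depth):
    """Toy 'complete' description of an observer, nested to a given
    depth: the observer's state plus a description of the observer
    reading the next level down. Each added level makes the string
    strictly longer, so a truly complete description (depth without
    bound) could never be read to the end."""
    if depth == 0:
        return "observer"
    return "observer reading <" + self_description(depth - 1) + ">"

# Each level is strictly longer than the last, so there is no finite
# fixed point: the reader never reaches the part meaning "himself now".
lengths = [len(self_description(d)) for d in range(5)]
print(lengths)
```

This is only a length argument, of course, but it captures why "he will find erself in the description" and "the description is complete" sit uneasily together.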

This assumes that the string description contained a complete definition
of the observer, which is a natural interpretation of your phrase:

"Since some of these descriptions describe self aware substructures,..."

But maybe you just meant that the string contains references to (tokens
of) the observer. This would be consistent with your comment a couple of
posts ago that both "observers" and "descriptions" are primary. This also
seems to be consistent with your other recent post in response to Hal, in
which the bitstrings are treated not so much as universes in the usual
sense but as either the stream of sense data entering the observer's
consciousness, or as a continuously-updated description of that
consciousness itself.

In which case your paper should have specified the set of observers which
are deemed to exist, e.g. is it all maps from prefix strings to natural
numbers? Or maybe you only need one observer if its target domain is the
full set of natural numbers, because you can then reach any of them from
one or another of your bitstrings.
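To illustrate the kind of specification being asked for (a sketch of my own, with a hypothetical name, not a definition taken from the paper): on this reading an observer is just a map from finite bitstring prefixes to natural-number "meanings", and a single observer whose range is all of the naturals already reaches every meaning from some prefix.

```python
def observer(prefix):
    """Toy observer map: interprets a finite bitstring prefix as a
    natural number 'meaning' -- here simply the prefix read as a
    binary numeral. Since every natural number has a binary
    expansion, this one map's range is all of the naturals."""
    return int(prefix, 2) if prefix else 0

# Any natural number n is the meaning of some prefix, namely n's own
# binary expansion, so one such observer suffices to reach them all.
print(observer("101"))            # the prefix "101" means 5
print(observer(format(42, "b")))  # 42, reached from its binary expansion
```

Whether one observer with full range, or the whole set of such maps, is intended is exactly what the paper leaves unspecified.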

>
>> You also said:
>>
>>> I'm not entirely sure I distinguish your difference between "external
>>> world" and "internal representation". We're talking about observations
>>> here, not models.
>>
>> I'm sure you can distinguish *my* mental representation of the world
>> from your own. Hence if we share a world, and you can't distinguish
>> between that world and your internal representation, then you are not
>> granting equal status to other observers such as me.
>>
>
> I'm not sure that is the case. I have a theory of your mind. I get it
> most economically by observing my own mind, hence I'm self-aware. My
> theory of the mind says that you are doing the same thing. Isn't this
> symmetric?

Sure. But a theory of other people's minds that's any use allows them to
be factually mistaken or ignorant (otherwise people's behaviour is
inexplicable). Which implies there is a matter of fact about things that
may be different from their internal representation. And anyone but a
solipsist should apply your symmetry principle to conclude that the same
applies to them, i.e. that there is a difference between the "external
world" and their internal representation of it.


>> You also said (quoting me):
>>
>>>> My problem is that you are trying to make your observers work at two
>>>> different levels: as structures within the universes generated
>>>> (somehow!) by your bitstrings, but also as an interpretive principle
>>>> for producing meaning by operating *on* the bitstrings. It's a bit
>>>> like claiming that PCs are built by "The Sims".
>>>
>>> Yes it is a bit like that. Obviously, the Anthropic Principle (or its
>>> equivalent) does not work with "The Sims".
>>
>> Actually I don't see why not. The existence of The Sims implies a
>> universe compatible with the existence of Sims. But granting this is
>> not so for the sake of the argument, presumably the AP *will* apply to
>> the Sims Mark VII which will be fully self-aware artificial
>> intelligences.
>
> If the AP applies to the Sims Mark VII, then their reality will be a
> description containing a "body" corresponding to their intelligences.
> They will not be aware of the PC that their description is being
> generated on. We, who inhabit the world with the PC will not be aware of
> the countless other PCs, Macs, Xboxes, Eniacs, Turing machines, pebbles
> in Zen monasteries etc running Sims Mark VII. So the PC itself is
> actually irrelevant from the internal perspective of the Sims.

Well at least we agree on that. No strange loops in this picture, so
it is unlike the picture you outline in your paper.

>> But it will still be absurd to claim that the Sims are responsible for
>> construction of PCs (assuming they are not connected to robot arms etc,
>> for which no analogs exist in your theory). Let alone for them to
>> construct the actual PC on which they are running, as apparently
>> implied by your last message... even robot arms wouldn't help there.
>
> No, it is called stretching an analogy too far!

I haven't stretched it at all from the analogy you originally accepted (see
above). I've just removed your get-out about your AP not applying.

Paddy Leahy
Received on Fri Jun 10 2005 - 09:04:14 PDT