Re: Simplicity, the infinite and the everything (42x)

From: Quentin Anciaux <allcolor.domain.name.hidden>
Date: Wed, 13 Aug 2008 22:05:52 +0200

2008/8/13 1Z <peterdjones.domain.name.hidden>:
>> Sure, why one then ?
>
> It would be the smallest number that fits the facts.

Which facts ?

>> >> >> > it is not simpler on the "entity" version of O's R, and it does not
>> >> >> > fit the evidence because of the WR problem.
>>
>> >> >> Yes but I see 'real switch' problem as equally problematic in front of
>> >> >> the WR problem.
>>
>> >> > I don't see that. You need to explain. Single-worlders can "switch
>> >> > off" WR's.
>>
>> >> Yes, by saying it's not a problem... I can say MW can "switch off" WRs
>> >> just as easily. But then we're just stepping back and forth.
>>
>> > That is no explanation. Single-worlders -- and physical many-worlders --
>> > get rid of WR universes by saying they do not exist at all.
>>
>> > Now: don't tell me *that* mathematical many-worlders can do
>> > the same, tell me how.
>>
>> By saying they exist, but you're not in the class of observers capable
>> of experiencing them (or of experiencing them for a long stable period
>> of time...),
>
>
>> and, as an RSSA proponent, next-state probabilities are
>> relative to the current state... I could also say that you experience
>> only one world/history (from your point of view, of course); then the
>> reason you're not in these particular WR universes/histories is simply
>> that you're not (it sounds like your 'do not exist at all', no? :)
>
> A lot of complicated hypotheses have been put forward on the
> MMW side. What does that buy you? Ontological complexity combined
> with
> theoretical complexity.

I see real complexity in asserting a single universe.

>> >> >> >> No you devise this in 2 parts, I think only the abstract world is
>> >> >> >> ontologically primary.
>>
>> >> >> > That is your conclusion. You cannot assume it in order to
>> >> >> > argue for it.
>>
>> >> >> I do not assume it.
>>
>> >> > Then you need some other way of getting your multiple instantiations.
>>
>> >> Well, I believe (note the word) that we (the mind) are a computation,
>> >> and as such I believe in strong AI, such that we will build conscious
>> >> digital entities... Either these entities will be truly conscious (and
>> >> it is possible for them to be conscious, as we have assumed that
>> >> consciousness is a computational process) or they won't be; if they
>> >> won't be and never will be conscious, that is only possible if,
>> >> contrary to the assumption, consciousness is not (only) a computational
>> >> process. Now, if consciousness is a computational process and we build
>> >> an AI (I don't see how we couldn't if consciousness is a computation;
>> >> what could prevent it?), then here you are with multiple implementations.
>>
>> > And if we don't build an AI, here you are without them. (And with
>> > computationalism still true, and without any subjective
>> > indeterminacy).
>>
>> If it is a computation, explain with a logical argument why we
>> wouldn't... If the world is not destroyed tomorrow and consciousness
>> is a computational process, then we'll build AI....
>
>
> There is no reason to build an AI duplicate of everybody,
> and there is no reason to single out me. So this is another
> appeal to coincidence.

I've never said that, and that's not the point. This AI could be
duplicated and run in multiple instances. When does she die? When you
pull the last plug of the last computer running it? By pulling out
all devices capable of running it? By destroying the whole of
everything?

>> you must suppose
>> either
>> 1) the end of the world before we do it,
>> 2) or never ever an AI, for unknown reasons, even though it is
>> possible because the mind is a computational process,
>> 3) or the mind is not a computational process (or only in part, but
>> dependent on a non-computational/non-emulable process like an oracle
>> or your substance, for example)...
>
> Or 4) we build an AI and it isn't me. Why should it be? The odds
> are billions to one.

I've never said that, and it's not the point whether it is you, me,
George Bush or Popeye... it's about consciousness.

>> >> Either you say that even if consciousness is a computation we will
>> >> never ever be able to replicate this phenomenon (create a digital
>> >> consciousness), and you have to explain why, or you should accept
>> >> 1st person indeterminacy...
>>
>> > I don't have to do any of those things. I just have to point out that
>> > it isn't particularly likely. I could be living in a fantastically
>> > elaborate Truman-style replica of a *physical* environment... but why
>> > should I believe that?
>>
>> I do not believe that; you talk of the multiverse as if it were
>> something built to deceive us... that's nonsensical paranoia :)
>
> But you are basing your whole argument on the future construction of
> an AI. And you are trying to persuade me that that means *I* am
> affected by indeterminacy. So the AI must be an AI of me. How is that
> any less solipsistic than the Truman Show?

We're all affected -- every consciousness, if consciousness is
computation -- but the point is not about you... Solipsism is a
negation of everything... I do not see MW or 1st-person indeterminacy
as solipsistic but the contrary: it asserts many minds, many
consciousnesses, many computations.

> Also, the fact that you reject other sceptical hypotheses is
> irrelevant unless you
> have a rational reason for doing so.
>
>> > There are many sceptical hypotheses; they are all equally
>> > likely, i.e. "not certainly false". Rationally they should be treated
>> > equally, and, since they cannot all be true, they must be treated as
>> > equally implausible.
>>
>> Yes, and 'many' is rationally simpler than unicity.
>
> You don't get your 'many' without the assumption of computational
> indeterminacy,
> which is a sceptical hypothesis, and one that has been singled out
> from its
> rivals for no good reason.

Simplicity is a good reason, but we don't agree on what it means :)

Regards,
Quentin Anciaux
-- 
All those moments will be lost in time, like tears in rain.
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list.domain.name.hidden
To unsubscribe from this group, send email to everything-list+unsubscribe.domain.name.hidden
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
Received on Wed Aug 13 2008 - 16:06:00 PDT

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:15 PST