marc geddes wrote:
>
>
>On Sep 27, 2:15 pm, "Wei Dai" <wei....domain.name.hidden> wrote:
>
> >
> > Yes. So my point is, even though the subjective probability computed by
>ASSA
> > is intuitively appealing, we end up ignoring it, so why bother? We can
> > always make the right choices by thinking directly about measures of
> > outcomes and ignoring subjective probabilities.
> >
>
>OK, new thought experiment. ;)
>
>Barring a global disaster which wipes out all of humanity or its
>descendants, there will exist massively more observers in the future
>than exist currently.
>But you (as an observer) find yourself born amongst the earliest humans.
>Since barring global disaster there will be massively more observers
>in the future, why did you find yourself born so early? Surely your
>probability of being born in the future (where there are far more
>observers) was much, much higher than your chances of being born so
>early among a far smaller pool of observers?
>The conclusion appears to be that there is an overwhelming probability
>that we are on the brink of some global disaster which will wipe out
>all humanity, since that would explain why we don't find ourselves
>among the pool of future observers (because there are none).
>Is the conclusion correct?
>
>
This is the standard "Doomsday argument" (see
http://www.anthropic-principle.com/primer1.html and
http://www.anthropic-principle.com/faq.html from Nick Bostrom's site), but
there's a loophole--it's possible that something like Nick Bostrom's
"simulation argument" (which has its own site at
http://www.simulation-argument.com/ ) is correct, and that we are *already*
living in some vast transhuman galaxy-spanning civilization, but we just
don't know it because a significant fraction of the observer-moments in this
civilization are part of "ancestor simulations" which simulate the distant
past (or alternate versions of their own 'actual' past) in great detail. In
this case the self-sampling assumption which tells us our own present
observer-moment is "typical" could be correct, and yet we'd have incorrect
information about the total number of humanlike observers that had gone
before us; if we knew the actual number, it might tell us that doomsday
was far off.
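
Just to make the probabilistic reasoning concrete, here's a toy Bayesian
version of the argument in Python (the numbers are invented for
illustration, they're not from Bostrom's papers):

N_soon = 2e11            # total humans ever, if doom comes relatively soon
N_late = 2e14            # total humans ever, if civilization spreads far
prior_soon = prior_late = 0.5   # agnostic prior over the two hypotheses

my_rank = 7e10           # rough order of magnitude of humans born before us

# Self-sampling assumption: given N total observers, your birth rank is
# equally likely to be any of the N slots, so its likelihood is 1/N.
like_soon = 1.0 / N_soon if my_rank <= N_soon else 0.0
like_late = 1.0 / N_late if my_rank <= N_late else 0.0

# Bayes' rule
evidence = prior_soon * like_soon + prior_late * like_late
post_soon = prior_soon * like_soon / evidence
print("P(doom soon | my early birth rank) = %.3f" % post_soon)   # ~0.999

The ancestor-simulation loophole amounts to saying, roughly, that the
"doom late" likelihood above is miscounted, because a large share of the
late observers would themselves be having observer-moments that look
exactly like early birth ranks.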
Thinking about these ideas, I also came up with a somewhat different thought
about how our current observer-moment might be "typical" without that
implying that civilization is likely on the verge of ending. It's
pretty science-fictioney and fanciful, but maybe not *too* much more so than
the ancestor simulation idea. The basic idea came out of thinking about
whether it would ever be possible to "merge" distinct minds into a single
one, especially in a hypothetical future when mind uploading is possible and
the minds that want to merge exist as programs running on advanced
computers. This idea of mind-merging appears a lot in science fiction--think
of the Borg on Star Trek--but it seemed to me that it would actually be
quite difficult, because neural networks are so idiosyncratic in the details
of their connections, and because memories and knowledge are stored in such
a distributed way; they aren't like ordinary computer programs designed by
humans, where you have neat, easy-to-follow decision trees and units of
information stored in distinct, clearly-marked locations. Figuring out how to
map one neural network's concept of a "cat" (for example) to another's in
such a way that the combined network behaved in a nice unified manner
wouldn't be straightforward at all, and each person's concept of a cat
probably involves links to huge numbers of implicit memories which have a
basis in that person's unique life history.
So, is there any remotely plausible way it could be done? My idea was that
if mind B wanted to integrate mind A into itself, perhaps the only way to do
it would be to hook the two neural networks up with a lot of initially
fairly random connections (just as the connections between different areas
in the brain of a newborn are initially fairly random), and then *replay*
the entire history of A's neural network from when it was first formed up to
the moment it agreed to merge with B, with B's neural network adapting to it
in real time and forming meaningful connections between A's network and its
own, in much the same way that our left hemisphere has been hooked up to our
right hemisphere since our brain first formed and the two are in a constant
process of adapting to changes in one another so they can function as a
unified whole. For this to work, perhaps B would have to be put into a very
passive, receptive state, so that it was experiencing the things happening
in A's brain throughout the "replay" of A's life as if they were happening
to itself, with no explicit conscious memories or awareness of itself as an
individual distinct from A with its own separate life history. In this case,
the experience of B's neural network undergoing this replay of A's
life might be subjectively indistinguishable from A's original
life--there'd be no way of telling whether a particular observer-moment
was actually that of A or that of B passively experiencing it as part of
such an extended replay.
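
In case it helps to see what I mean by "replay with adapting
cross-connections", here is a cartoonishly simplified sketch in Python.
Everything in it--the network sizes, the Hebbian update, the idea that a
life history could be reduced to a stream of input vectors--is invented
just to show the shape of the procedure, not a claim about how real
uploads would work:

import numpy as np

rng = np.random.default_rng(0)

N_A, N_B = 50, 80        # sizes of the two toy "networks"
W_A = rng.normal(size=(N_A, N_A)) / np.sqrt(N_A)   # A's own fixed recurrent weights
W_B = rng.normal(size=(N_B, N_B)) / np.sqrt(N_B)   # B's own fixed recurrent weights
W_cross = rng.normal(size=(N_B, N_A)) * 0.01       # initially near-random A->B links

history = rng.normal(size=(10000, N_A))            # stand-in for A's recorded life history

a = np.zeros(N_A)
b = np.zeros(N_B)
eta = 1e-3                                         # Hebbian learning rate

for x in history:                        # replay A's history, step by step
    a = np.tanh(W_A @ a + x)             # A's network re-lives its inputs
    b = np.tanh(W_B @ b + W_cross @ a)   # B passively follows along
    W_cross += eta * np.outer(b, a)      # Hebbian: strengthen co-active links
    W_cross *= 0.999                     # mild decay keeps the weights bounded

# After the replay, W_cross holds whatever structure B's network managed
# to pick up from A's unfolding history--in this cartoon, that stands in
# for the "meaningful connections" between the two networks.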
And suppose that later, after B had emerged from this life-replay with A now
assimilated into itself, and B had gotten back to its normal life in this
transhuman future, another mind C wanted to assimilate B into itself in the
same way. If it used the same procedure, it would have to experience a
replay of the entire history of B's neural network, including the period
where B was experiencing the replay of A's neural network (even if C had
already experienced a replay of A's history on its own)--and these
experiences, too, could be indistinguishable from the original experiences
of A! So if I'm having experiences which subjectively seem like those of A,
I'd have no way of being sure that they weren't actually those of some
transhuman intellect which wanted to integrate my mind into itself, or some
other transhuman intellect which wanted to integrate that first transhuman's
mind into itself, etc. If we imagine that a significant number of future
transhuman minds will be "descendants" of the earliest uploads (perhaps
because those will be the ones that have had the most time to make multiple
diverging copies of themselves, so they form the largest 'clades'), then as
different future minds keep wanting to merge with one another, they might
keep having to re-experience the same original life histories of the
earliest uploads, over and over again. Thus experiences of the lives of
beings who were born within decades of the development of uploading
technology (or just close enough to the development of the technology so
that, even if they didn't naturally live to see it, they'd be able to use
cryonics to get their brains preserved so that they could be scanned once it
became possible) could form a significant fraction of the set of all
observer-moments in some extremely vast galaxy-spanning transhuman
civilization.
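
Here's a toy way of counting observer-moments under this scenario, again
in Python, with a model and numbers invented purely for illustration.
The point it's meant to bring out is that, with nested replays, the
fraction of observer-moment years that subjectively look like an
ordinary pre-upload life levels off at a constant instead of shrinking
toward zero as transhuman history piles up:

PRE_UPLOAD_YEARS = 80     # each original upload's ordinary human life
POST_UPLOAD_YEARS = 1000  # ordinary transhuman living between merges
N_ROUNDS = 10             # rounds of pairwise merging
N_UPLOADS = 1024          # initial population of uploads

# Per-mind record: (total subjective years lived so far,
#                   years of that which felt like an early human life)
minds = [(PRE_UPLOAD_YEARS, PRE_UPLOAD_YEARS)] * N_UPLOADS

# Global pool of observer-moment years experienced by anyone, ever
total_years = N_UPLOADS * PRE_UPLOAD_YEARS
early_years = N_UPLOADS * PRE_UPLOAD_YEARS

for _ in range(N_ROUNDS):
    # everyone lives some ordinary (non-replay) transhuman years
    minds = [(t + POST_UPLOAD_YEARS, e) for t, e in minds]
    total_years += len(minds) * POST_UPLOAD_YEARS

    # pair off; each survivor replays its partner's entire history,
    # including all the replays nested inside it
    survivors = []
    for (t1, e1), (t2, e2) in zip(minds[0::2], minds[1::2]):
        survivors.append((t1 + t2, e1 + e2))
        total_years += t2     # the replay is newly experienced time
        early_years += e2     # including the partner's early-life years
    minds = survivors

print("fraction of observer-moment years that feel like an early life:",
      round(early_years / total_years, 3))

# Without the replay terms (the t2 and e2 additions) the early-life
# fraction would keep shrinking as transhuman years pile up; with them
# it settles near a constant, however long the future runs.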
Perhaps it's pretty far-fetched, but my motive in thinking along these
lines is not just that I want to see the doomsday argument proven wrong
when applied to the lifetime of our civilization--it's that the doomsday
argument can also
be applied to our *individual* lifetimes, so that if my current
observer-moment is "typical" the number of years I have left to live is
unlikely to be too many times larger than the number of years I've lived
already, and yet in this form it seems to be clearly incompatible with the
argument for quantum immortality, which I also find pretty plausible (see my
argument in favor of QI at
http://groups.google.com/group/everything-list/msg/c88e55c668ac4f65 ). The
only way for the two arguments to be compatible is if we have beings with
infinitely extended lifespans that nevertheless are continually having
experiences where they *believe* themselves to have only lived for a few
decades and have no conscious memories of a longer life prior to
that--something along the lines of "reincarnation", except that I want a
version that doesn't require believing in a supernatural non-computable
"soul". This is my attempt to come up with a rationale for why transhuman
minds might have a lot of observer-moments that seem to be those of
short-lived beings like ourselves, one which hopefully would not seem
completely crazy to someone who's already prepared to swallow notions like
mind uploading and giant post-singularity transhuman civilizations (and at
least some on this list would probably fall into that category).
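
A quick way to see the force of the individual version, under the same
assumption that my current moment is sampled uniformly from my whole
lifespan (this framing is my own, just for illustration): remaining
lifetime exceeds k times past lifetime exactly when the current fraction
of life lived is below 1/(k+1), so

for k in (1, 3, 9, 99):
    print("P(remaining lifetime > %d x past lifetime) = %.2f" % (k, 1.0 / (k + 1)))

which gives 0.50, 0.25, 0.10 and 0.01--the odds of outliving your past
by a large factor fall off quickly, whatever your current age.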
Jesse