Re: Alternate deductive route to the existence of all universes

From: Russell Standish <R.Standish.domain.name.hidden>
Date: Thu, 8 Jul 1999 09:23:44 +1000 (EST)

>
> Just joined the list, having come to its central position by my
> own means, from the artificial-intelligence position that simulated
> minds can be conscious of their own existence.
>
> Chapter 7 of my recent "Robot" <http://www.frc.ri.cmu.edu/~hpm/book97>
> derives the idea that all universes are equally real from the position
> that robots can be conscious. It notes that the totality of all
> universes (like the library of all books) requires no information to
> construct, that, although any of us (however we define ourselves)
> exists in an infinity of bizarre universes, we're most likely to find
> ourselves in ones that require the least amount of initial information
> to explain us, and that we are stuck on a subjective path that needs
> the fewest coincidences to keep our consciousness going (and that turns
> out to be the boring physical universe that made us by Darwinian
> selection, all working under a very simple TOE that resulted in the
> visible universe as a side effect). It also notes that, no matter
> what happens to us, among all universes there are some in which our
> consciousness continues, and we will always find ourselves in those
> (and never in ones where our consciousness does not continue!). For
> some things that happen (like our brain rotting) the simplest
> continuation of our consciousness may no longer involve the exact
> continuation of the old physical laws.
>
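The "no information" point can be made concrete with a toy sketch (my own
illustration in Python, not anything from the book, and the encoding of
universes as finite bitstrings is an assumption of the sketch): a program
that enumerates every finite bitstring, and hence every candidate universe
description under that encoding, is only a few lines long. The ensemble as
a whole costs essentially nothing to specify, even though singling out any
one member may take arbitrarily many bits.

    # Toy sketch: the ensemble of all finite bitstrings is generated by a
    # tiny program, even though individual members are arbitrarily long to
    # specify on their own.
    from itertools import count, product

    def all_bitstrings():
        """Yield '', '0', '1', '00', '01', '10', '11', ... in length order."""
        yield ''
        for n in count(1):
            for bits in product('01', repeat=n):
                yield ''.join(bits)

    gen = all_bitstrings()
    print([next(gen) for _ in range(7)])   # ['', '0', '1', '00', '01', '10', '11']

The analogous point holds for the library of all books: the catalogue of
everything is short; the address of any particular item is not.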
> Here's a compactified presentation of the core train of thought:
>
> Start with the premise (A) that properly designed minds implemented in
> computers can have conscious experiences just like minds implemented
> in flesh. Also assume (B) that experiences of rich virtual worlds can
> be as vivid as experiences of the physical world. Immersive video
> games make the second premise non-controversial. Materialistic
> accounts of the evolution of life and intelligence, providing a rough
> roadmap for the evolution of machine intelligence, make the first
> premise compelling to AI guys like me. (Also, Occamesque, it demands
> no mysterious special new ingredients to make consciousness.)
>
> Let AI = Artificial Intelligence and VR = Virtual Reality.
> Combine the two halves of both premises into four cases:
> 1) a flesh human in the physical world.
> 2) a conscious AI controlling a physical-world robot.
> 3) a human immersed in a VR, maybe by neural interface.
> 4) a conscious AI linked to a VR, all inside one computer.
>
> Case 4 is a handle on the subjective/objective problem that was not
> available to past philosophers. Unlike flesh, dreams, stories,
> sensation-controlling demons or divine ideas, it is nearly free of
> slippery unstated assumptions about human minds or physical
> reality. On the outside, we have a simple objective device stepping
> through states. Yet, on the inside, there is a subjective mind
> experiencing its own existence.
>
> What connects the internal experience to the external mechanism? As in
> any simulation, it is an interpretation. Storage locations can be
> viewed as representing bit patterns, numbers, text, pressures,
> temperatures, sensations, moods, beliefs, feelings or more abstract
> relationships. In general, different observers will have different
> interpretations. Someone looking at the simulation trying to improve
> memory management in the operating system will likely put a different
> interpretation on the memory contents than someone wanting to view
> life in the simulation, or to talk with its inhabitant.
>
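A small concrete illustration of the interpretation point (again my own
sketch, in Python, with arbitrarily chosen bytes): the very same eight bytes
in memory read as a large integer to an observer expecting an integer, as a
floating-point number to one expecting a double, and as text to one expecting
ASCII characters. None of these readings is privileged by the hardware.

    # One block of raw bytes, three different observers' interpretations.
    import struct

    raw = b'simulate'                        # eight arbitrary bytes "in memory"

    as_int   = struct.unpack('<q', raw)[0]   # read as a little-endian 64-bit integer
    as_float = struct.unpack('<d', raw)[0]   # read as an IEEE-754 double
    as_text  = raw.decode('ascii')           # read as ASCII text

    print(as_int)     # a large integer
    print(as_float)   # an unremarkable floating-point number
    print(as_text)    # the string 'simulate'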
> But does the AI cease to exist if there is no one outside who happens
> to have the correct interpretation to see it? Suppose an experimenter

We have already discussed the idea that consciousness is a relative concept.
Of your cases above, 1-3 would indeed be conscious relative to our own
consciousness, but case 4 would not (if it is an entirely deterministic
system with no free will), and case 3 is arguable, I suppose.

However, relative to itself, the AI system is conscious.

> sets up an AI/VR, and builds a translating box allowing him to plug in
> and talk with the AI. But on the way home, the experimenter is killed
> and the translating box destroyed. The computer continues to run, but
> no one suspects it holds a living, feeling being. Does the AI cease to
> be? Suppose one day enough of the experimenter's notes are found and a
> new translating box is built and attached. The rediscovered AI then
> tells a long story about its life in the interval when it was
> unobserved.
>
> My take on this is that there is an observer of the AI even when it
> goes unobserved from the outside, namely the AI itself. By
> interpreting some process inside the box as a conscious observer, we
> grant that process the power of making observations about itself. That
> self-interpretation exists in its own right whether or not someone
> outside ever appreciates it. But once you allow externally
> undiscovered interpretations of AIs that exist only in their own
> eyes, you open the door to all possible interpretations which contain
> self-aware observers. Which is fine by me. I think this universe is
> just such a self-interpretation, one self-defining subjective thread
> in an infinity of alternatives that are just as real to their
> inhabitants.
>
>

Otherwise, this line of argument seems fine.

----------------------------------------------------------------------------
Dr. Russell Standish                      Director
High Performance Computing Support Unit,
University of NSW                         Phone 9385 6967
Sydney 2052                               Fax   9385 7123
Australia                                 R.Standish.domain.name.hidden
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks
----------------------------------------------------------------------------
Received on Wed Jul 07 1999 - 16:24:39 PDT