>Gilles HENRI <Gilles.Henri.domain.name.hidden> writes:
>> My claim is that a conscious computation is so tightly (and non-linearly)
>> linked to the environment that it will diverge from another computation
>> very quickly, as soon as this interaction really takes place.
>
>What if you are uploaded into a simulation which includes an environment?
>The movie The Matrix, recently released in the U.S., has a whole society
>living in such a world. (They are using brains but the same principle
>would apply if their minds were executing on computers.)
>
>If my brain can be simulated on a computer, it can provide me with a
>simulated environment as well. Does that count as "the environment"
>for your purposes? Or do you need to go outside the computer for that?
>
>Hal
I think I already answered this question. A given machine can only simulate
a world much simpler than itself - obviously it cannot simulate itself.
With your brain you are not able to think of every person in detail: you
can only hold very crude approximations of them. So you cannot emulate a
"whole society" (obviously computer games are much simpler than real
life!)
A computer could compute a simple approximation of the real environment,
but not an environment that includes the computer itself. So the world it
would generate would be a (very) simplified version of our world, in other
words another world, lacking in particular the computer itself! And the
"you" it would try to simulate would also be an oversimplified version of
you, one that you would probably refuse to consider a "perfect copy" of
yourself.
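[Editorial note: the "cannot simulate itself" point above can be illustrated with a toy counting argument. This sketch is not from the original post; the function and its parameters are hypothetical, chosen only to show that an exact self-inclusive simulation needs state that grows without bound.]

```python
def state_size(world_bits: int, simulator_bits: int, depth: int) -> int:
    """Bits of state needed if the simulated world must contain a full
    copy of the simulator, whose state in turn contains the next level
    of simulation, down to `depth` nested levels.

    (Toy model: the numbers are arbitrary; only the unbounded growth
    with depth matters.)
    """
    if depth == 0:
        return world_bits
    # Each level must hold the environment plus a complete copy of the
    # simulator, and the simulator's own state includes the next level.
    return world_bits + simulator_bits + state_size(
        world_bits, simulator_bits, depth - 1
    )

# The requirement grows without bound as the regress deepens, so an
# *exact* simulation (infinite regress) can never fit in finite memory:
for d in range(4):
    print(d, state_size(1000, 100, d))
# prints:
# 0 1000
# 1 2100
# 2 3200
# 3 4300
```

Any finite machine must therefore truncate the regress, which is exactly the "simplified version of our world, lacking the computer itself" described above.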
Of course I agree that you could imagine the opposite situation, where our
world would be the result of a computation in a much larger world. That
doesn't contradict what I say, because our consciousness would still be
unique in the world (real or simulated) where we are actually living. As I
said, you may imagine two different implementations of the same
consciousness in different, non-interacting worlds, such as in the MWI.
However, if the worlds are different, you have to be careful in defining
what you call the "same" consciousness in this case. In the MWI you may
rely upon the time-ordering of the macroscopic components of the wave
function to determine "who" becomes "who".
Of course I assume here that consciousness is the result of preexisting
physical laws, which put constraints on its evolution. Even if you think we
are the result of a huge program, I assume that this program respects the
physical laws: it calculates the physical evolution of the Universe, whose
consequence is the appearance of conscious beings (apparently what we
observe!). In games, you would be free to program conscious beings
independently of each other and duplicate them ad libitum. The problem is
that we are not living in a game (or at least we do not control it, and
none of our machines can control it either). So you cannot construct a
machine that is both conscious of our environment and able to control or
simulate it...
I do not really have a theory of this, but I think that "consciousness" may
not be an objective property, but rather a property relative to the world
in which the system is actually living. It may be interesting to think of a
definition of it based on the idea that a conscious being should have a
representation of its environment complex enough to ensure its uniqueness,
which corresponds to what we actually experience. I think we would start to
consider computers conscious when it became obvious that they know "who"
they are...
Gilles
Received on Mon May 10 1999 - 09:07:31 PDT