>On Sun, 25 Apr 1999, Gilles HENRI wrote:
>> But "COMP" is (if I understood it correctly)
>> a stronger hypothesis: it is that at some finite level, you could reproduce
>> or duplicate EXACTLY your conscious state, or at least you could simulate
>> it "to an arbitrary degree of accuracy" (which is already somewhat
>> different!) (James):
>
> I will not speak about "COMP" - it is not my term - but as far as
>duplicating your conscious state, you can. The idea is that by
>duplicating some computation, your conscious state would be duplicated,
>since consciousness is just an aspect of certain computations.
My argument is that this computation is so tightly linked to physical
reality that you cannot, in practice, make another implementation that is
perfectly equivalent. This is analogous to the perturbations associated
with a quantum measurement. Classical computers can be exactly duplicated
because their computations are not affected by the existence of other
identical computers. This could be compared to classical measurements,
which can be repeated as often as you want because they do not perturb
the object. But a conscious being cannot exist without being perturbed
by its environment (in fact we are conscious BECAUSE we are constantly
interacting with our environment). So if something else is implementing
a conscious computation, it must be different, because its environment is
different. I make this point more precise below.
>> If you think that you have built a neural network almost identical to
>> yourself, you know that YOU have built A MACHINE. But what does YOUR
>> MACHINE know?
>
> If computationalism is true, it knows and feels pretty much what
>you know and feel.
>
So I see a logical problem here. If you have built a machine that you
think is identical to you, you cannot, of course, predict its state one
hour later, because that would be equivalent to being able to predict
your own state - which is clearly impossible. BUT if the machine is
exactly like you, it must also think that it has built a machine (a
second one), identical to it, whose state is not predictable. But where
is this second machine, and how could it be simulated? It is clearly
impossible for your machine to implement a computation of yourself plus
this very machine. So the first machine, which you thought identical to
you, is in fact in a different environment: it has no other machine to
be confronted with. For example, if you start to play with your machine
and you see that at the beginning it is indeed doing exactly what you
are doing, you could be delighted to see that you have succeeded. Alas,
as soon as you realize that you are delighted, you realize also that
your machine is not delighted at all, because it has no machine to play
with. The interaction with the environment has very quickly led you to
separate from your machine, just as in QM...
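To show the regress explicitly, here is a deliberately naive toy of my
own (assuming only that a state can be measured in bits; the figure for
"your" size is made up): if the copy must, like you, model a person of
your size PLUS a further machine that models that person plus a further
machine, and so on, the state it would need grows without bound.

    def bits_needed(level, your_bits=10**15):
        # Bits a machine needs if it must encode "you" plus a machine
        # one level further down the regress.  The cut-off at level 0
        # is artificial: the honest version has no base case at all.
        if level == 0:
            return your_bits
        return your_bits + bits_needed(level - 1)

    for level in range(4):
        print(level, bits_needed(level))
    # The requirement grows with every level and never closes, which is
    # why no finite machine can implement "yourself + this very machine".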
>
>> And if you succeed in building an intelligent
>> machine (which IS possible in my opinion), this machine SHOULD know that it
>> is a machine, and hence that it is not you, exactly for the same reason why
>> you know that you are you, and not anybody else.
>
> That just depends on the view it has of the external world. You,
>Gilles, *could* be an artificial digital intelligence in a simulated
>environment in a supercomputer, and you'd never know it if it's done well
>enough.
This point must also be considered carefully. If you think about the
previous point, you are inevitably led to conclude one of two things:
either your machine interacts with the real physical environment
(through sensors, photocells and so on), in which case it will be aware
that it is a different being; or you must simulate its environment
entirely, but since you cannot put into this artificial environment a
second machine identical to the first one, it will again be different.
I could indeed be an artificial intelligence simulated by a
supercomputer, but in that case the supercomputer works in a world that
I cannot access. And within this simulation, I must be unique. There may
be another identical computation with another me, but it too corresponds
to another world that I cannot access. So my point is that a conscious
being cannot be duplicated exactly IN THE WORLD WHERE HE IS LIVING; in
other words, he cannot INTERACT with an exact copy of himself. Parallel
computations and the MWI escape this constraint, because in both cases
the duplication takes place in ANOTHER OBSERVABLE WORLD.
Don't you think it sounds coherent?
Gilles
Received on Fri Apr 30 1999 - 01:04:01 PDT