Re: A new idea to play with

From: Gilles HENRI <Gilles.Henri.domain.name.hidden>
Date: Fri, 3 Sep 1999 15:34:33 +0200

>Gilles Henri wrote:
>
>>I REFUSED to see this movie! (OK, I may be wrong, but let's say I refuse to
>>pay to see this movie; I will wait until it comes on TV.)
>
>I very much like your sense of humour, Gilles.

Thanks, it tends to be more prominent after a few weeks of vacation!

>>Again, all this stuff relies on what I called comp2, i.e. the hypothesis that
>>a digital simulation can be, at some level, completely equivalent to an
>>analog physical system.
>
>Being digitalisable ourselves (with comp), we don't
>need to be emulated at *any* analogical level (if that exists at all).
>Comp = there is a level at which we are psychologically equivalent, no more.

Of course, that is exactly the questionable point. I know we have had this
discussion many times, and that we have not succeeded in convincing each
other!

To say it again, I think we are "too" analogical to be
"psychologically equivalent" to anything other than ourselves, because our
psychology implies a representation of our environment and of ourselves.
But it is impossible to build a machine that would have a correct
representation of its environment but not of itself (for instance, one
believing it is a human).
You escaped the question of the environment in your thesis by considering a
dreaming machine. But if the machine is not able to interact with the
environment, it is very improbable that it actually dreams, even if it
perfectly pictures a dreaming brain, because you cannot dream of something
you have never experienced in reality. It is another form of
counterfactuality: you can dream of something only if you would be able to be
sensitive to it (i.e. while awake).

One practical problem is: how can you check that your system is really
thinking?

I have recently come to think that the Turing test may not be as good as I used
to believe. I found Turing's definition of a thinking machine very
smart, because it avoided giving an objective definition of consciousness,
which is impossible to do. However, I now think there is a fundamental flaw
in it: a thinking machine should not behave like a human if it is
really thinking. It should behave like a machine that knows it is a machine,
which is very different. I think the Turing test may have led people to try
to imagine artificial systems emulating human beings, and that may have
been misleading.

For example, if I am sitting in front of a screen connected to a computer
that writes "Hi Gilles, my name is Bruno, I live in Belgium, etc..." I would
immediately conclude that the machine does not know anything about the real
world and that it imitates human behaviour without feeling anything - in
fact it would be a zombie; that is, if you don't believe in zombies, it
would not be a thinking machine.
BUT if I am sitting in front of a screen that says
"Hi Gilles; I'm a new-generation machine that can think like you, apart from
some differences due to my electrical nature. For example, I can feel very
well the 40 Hz electromagnetic emission from your brain..." I would think
"Wow, that COULD really be a thinking machine!" - although it would not
pass the Turing test, nor be a digital analogue of any human
brain...

>Do you really think that anyone believes a digital machine can emulate
>in extenso (completely) an analogical machine (if such a thing exists)?
>I don't believe that such an idea ever emerged from computer science.
>If that idea is what you mean by comp2, I think this
>crackpot idea emerges in your own mind.
>

I think many contributors have said that, including James in his last mail!

Gilles
Received on Fri Sep 03 1999 - 06:34:32 PDT
