Re: A new idea to play with
>Gilles Henri wrote:
>
>>To tell it again, I think we are "too much" analogical to be
>>"psychologically equivalent" to anything else than ourselves.
>
>I can understand the psychological appeal for self-unicity.
>But I am not sure there is evidence for our analogicalness, nor
>am I sure I understand the word.
For me, digital systems are systems some of whose characteristics (the description and evolution of their "state") are EXACTLY equivalent to a TM. All our computers are obviously of this type. Any system that is not known to be digital must be considered analogical (in fact they are all analogical at another level of description). The burden of proof lies with demonstrating the "digital character", the default being analogicalness.
I do not know of any description of our brain that is obviously exactly equivalent to some TM. I would be very happy if someone in this group could give me one.
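The digital/analogical distinction above can be made concrete with a toy sketch (my own illustration, not from the thread): a system with a discrete state and a computable update rule can be reproduced EXACTLY by a program, while an idealized real-valued system can only ever be approximated.

```python
# Hedged illustration of the digital/analogical distinction discussed above.
# A "digital" system here means one with a discrete state and an exactly
# computable update rule, so a program reproduces its evolution with NO error.

def digital_step(state: int) -> int:
    """Toy discrete dynamics: a 3-bit counter, exactly TM-computable."""
    return (state + 1) % 8

state = 0
for _ in range(20):
    state = digital_step(state)
print(state)  # 4, i.e. 20 % 8 -- exact, no approximation involved

# An idealized "analogical" system, by contrast, has a real-valued state;
# a computer can only approximate it in finite precision.
x = 0.1
for _ in range(10):
    x += 0.1
print(x == 1.1)  # False: floating-point accumulation is an approximation
```

The counter is "digital" in Gilles's sense because its state description and evolution are exactly those of a finite machine; the floating-point loop shows how even a simple continuous quantity is only simulated, never matched exactly.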
>
>>You escaped the question of the environment in your thesis by considering a
>>dreaming machine. But if the machine is not able to interact with the
>>environment, it is very improbable that it actually dreams, even if it
>>pictures perfectly a dreaming brain, because you cannot dream of something
>>you never experience in reality.
>
>I gather, from some other mail you sent, that you have understood
>my proof that if we are digital machines there is no environment
>at all, in the sense you are using here.
>If you think that is a sufficient reason to abandon the
>mechanist hypothesis, why not. But nobody has ever given a proof
>of the existence of a 'substantial' neighborhood, so it is a
>matter of religious belief or axiomatic hypothesis.
Bruno, I really think that you are playing with words here! When you say "if we are digital machines", the simple use of "we" implies the existence of something else, which I define to be the environment. Of course this is not strictly axiomatic, but we are at a level of description (speaking of "we", "computers", "Turing machines") that implies some categorization of the apparent world into objects, whatever the real world may be. Why did you take the example of a dreaming brain?
>
>> One practical problem is how can you check that your system is really
>>thinking?
>
>How can you check that 'anything' is thinking? There is no possible
>check.
>To *attribute* thinking is a kind of social betting strategy.
>(Independently of the fact that 'God' knows the truth).
I agree; that's why I said "check" and not "demonstrate". A check is never 100% certain. All I said was about the "betting strategy" regarding possible thinking machines.
>The Turing test can help you make up your mind, but it is neither
>necessary (cf. paralysed people)
Independently of the practical possibility, I see no reason why duplicated people would fail the Turing test.
>nor sufficient (some people
>HAVE attributed consciousness to Weizenbaum's ELIZA).
or to Teddy bears!
ELIZA is not conscious because it uses words that do not correspond to any experience. We give meaning to language because it is associated with physical ("analogical") sensations, not because of its formal structure. I think that the Turing test (or better, as I explained, a "modified" Turing test) is suitable for testing this characteristic. For example, if ELIZA speaks of "parents", what would its answer be to "Please tell me about your own parents?"
>So I am stuck in an abyss of perplexity when you define comp2 by
>"a digital machine" can emulate (from a third person perspective) an
>analogical machine.
>I don't think James or any other many-worlder has ever said something
>like that. It would be like simulating 2$ with 1$.
>Only dishonest people can do that, for a short time.
I agree completely with you: comp2 is actually the hypothesis that the computation of the physical properties of an analogical machine (e.g. our brain) can emulate this machine from the first-person point of view.
Gilles
Received on Mon Sep 06 1999 - 02:01:11 PDT
This archive was generated by hypermail 2.3.0: Fri Feb 16 2018 - 13:20:06 PST