Re: The implementation problem
On Thu, 21 Jan 1999, Gilles HENRI wrote:
> I am personally very reluctant to interpret consciousness in terms of
> "intrinsic" information (that could be objectively defined and measured).
> Take a typical state of the brain, or a succession of states realizing an
> implementation of some computation, for example corresponding to you seeing
> an elephant and saying "Oh, an elephant!". Now imagine you carry out a
> random permutation of the neural cells, which keep their electrical and
> chemical states but are placed in different cortical areas. Most probably,
> in this new state you won't feel or say anything sensible, although the new
> implementation is logically perfectly equivalent to the first one. The
> meaning of your thoughts must come, in the final analysis, from the way
> your neural cells are actually connected to your I/O devices (eyes, ears,
> muscles..), not only from their formal arrangement and state.
If you move the neurons around but keep their connections in place
(stretching them somehow), and take steps so that the time it takes a
signal to get from A to B stays the same, then it should make no
difference to the brain's function. The brain will continue to operate
as usual. It will process inputs and produce outputs in the same way.
The person will not report any differences in his sensations, in his
consciousness, or in any aspect of his brain's functioning.
Now, you can argue that he is simply mistaken, that his consciousness is
completely different but he is unable to report it for some reason, but
this is a very difficult position to support.
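Here is a little sketch of what I mean, in Python. It is only a toy, of
course: the tanh units, the weight matrix W, the input wiring win and the
output wiring wout are all made up for illustration, not a claim about
real neurons. "Moving the neurons while keeping their connections in
place" becomes relabelling the units with a permutation and permuting the
weights and wiring to match; the relabelled network gives the same
outputs for the same inputs.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 12
    W    = 2.0 * rng.normal(size=(N, N)) / np.sqrt(N)  # unit-to-unit connections
    win  = rng.normal(size=N)                          # wiring from the inputs ("eyes, ears")
    wout = rng.normal(size=N)                          # wiring to the outputs ("muscles")

    def run(W, win, wout, inputs):
        # Step the network over a sequence of inputs and collect its outputs.
        s = np.zeros(len(win))
        outputs = []
        for x in inputs:
            s = np.tanh(W @ s + win * x)   # each unit's new activity level
            outputs.append(wout @ s)
        return np.array(outputs)

    inputs = rng.normal(size=20)
    p = rng.permutation(N)   # move the units to new positions...

    # ...but carry their connections and wiring along with them.
    moved = run(W[np.ix_(p, p)], win[p], wout[p], inputs)

    print(np.allclose(run(W, win, wout, inputs), moved))   # True: same outputs

The rearranged network is the same computation under a different
labelling, which is all the rearrangement amounts to as long as the
wiring and timing are preserved.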
If, on the other hand, you are talking about breaking the neural
connections when you move the neurons around, then the neurons will no
longer fire with the patterns they would have. Neurons with broken
connections won't fire at all, or will fire haphazardly; they will not
produce the firing patterns they would have produced if they were hooked
up correctly. So there will be no simple mapping between the old firing
pattern and the new one.
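Continuing the same toy sketch (same W, win, inputs and permutation p as
above): this time the units carry their activities to new positions in
mid-run, but the wiring is left where it was, so every signal now arrives
at the wrong unit. The continued activity pattern no longer matches the
intact run, and no fixed relabelling of the units maps one onto the other.

    def run_states(W, win, inputs, s0=None):
        # Like run() above, but return the whole activity pattern over time.
        s = np.zeros(len(win)) if s0 is None else s0.copy()
        states = []
        for x in inputs:
            s = np.tanh(W @ s + win * x)
            states.append(s.copy())
        return np.array(states)

    # Run the intact network halfway, then scramble which unit holds which
    # activity while leaving all of the connections where they were.
    first_half = run_states(W, win, inputs[:10])
    intact     = run_states(W, win, inputs[10:], s0=first_half[-1])
    scrambled  = run_states(W, win, inputs[10:], s0=first_half[-1][p])

    # Almost surely False for a non-trivial permutation: the runs diverge.
    print(np.allclose(intact, scrambled))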
If instead you are talking about using some artificial means to force
each neuron to fire with the pattern it would have used if it were still
connected (say, a little mechanical device that gives the neuron a
"kick" when it is supposed to fire), then how could the device know when
to fire the neuron? You would have to be doing some kind of "replay" of
an already-recorded consciousness, I think. But as I have been arguing,
such "replays" are not perceptible to the consciousness involved.
I can't tell right now whether I am being instantiated just once or an
infinite number of times.
Hal
Received on Sat Jan 23 1999 - 22:06:07 PST