Re: against computationalism

From: Marchal <marchal.domain.name.hidden>
Date: Mon Aug 2 09:12:31 1999

Brief comments on Gilles Henri's "against computationalism".
Original message below.


Comp, as I conceive it, is analogous to 'comp1 OR comp2', with a
non-constructive OR. It means that I am open to a high level of substitution
and to a low level of substitution (I survive a functional substitution at
that level)
... but I will NEVER know that level.
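
If it helps to see the shape of the claim (the formalisation and the names
Survive and Subst_n are only my own shorthand for this post), comp can be
written as a bare existential statement:

    comp  ==  \exists n . Survive(Subst_n)

where Subst_n denotes a functional substitution performed at level n. comp1
and comp2 are just the two cases "n is high" and "n is low"; since I can
never exhibit the true n, the OR between them is non-constructive.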

Note that when you say:
>comp2 : all the physical world is EXACTLY equivalent to some computation of
>its analogic properties at a finite level.

It is still a "physicalist" description. It is less misleading
to replace "physical world" by "psychological histories".


>However, I want to stress here that comp2 is indeed a huge jump: in fact it
>does not COMPLEMENT but rather CONTRADICTS comp1; it is the source of all
>paradoxes and, worse, it is completely useless for understanding the origin
>of consciousness.
>
>comp2 contradicts comp1 because the essence of comp1 is the independence of
>the results with respect to the material implementation, whereas comp2
>requires a precise definition of this implementation (of course the
>simulation made in comp2 could be run on any TM, but the object of the
>simulation must be one precise physical system).

Only your "physicalist" conception makes you confuse "comp2" with Jacques
Mallah's physical computationalism. The difference between comp1 and comp2
is a matter of degree: it is the difference between comp at a high level
and comp at a low level. My reasoning is independent of the level.

> This would
>ensure the proper handling of counterfactuals, which is in fact nothing else
>than the self-construction of consciousness.

I like that very much. Note that the concept of computation gives (with CT,
Church's Thesis) an absolute way of defining counterfactuals.
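
To illustrate (a minimal sketch in Python; the toy function is mine and not
anything from the thread): a program, taken as a computable function, fixes
what it WOULD output on every input, not only on the input actually fed to
it, so its counterfactuals belong to the mathematical object itself and not
to any particular physical run.

# A program, taken as a computable function, fixes its behaviour on ALL
# inputs, including those never actually presented to it: the
# counterfactuals are determined by the code, not by a particular run.

def parity(n: int) -> str:
    """A toy 'computation': classify an integer as even or odd."""
    return "even" if n % 2 == 0 else "odd"

# One actual execution...
actual_input = 4
print(parity(actual_input))        # prints: even

# ...but the counterfactual answers are already defined by 'parity' itself.
# (To display them we must of course run them; the point is that they were
# determined before, and independently of, any particular run.)
counterfactuals = {n: parity(n) for n in range(10) if n != actual_input}
print(counterfactuals)

With CT, "computable function" is an absolute notion, the same for every
universal formalism, which is why I say the counterfactuals get an absolute
definition.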

>This is NOT to be taken as a
>formal definition of consciousness but only as a possible restriction to get
>an actually conscious device. It would not be pure computationalism because
>it would put restrictions on the kind of implementations it requires.

So you get a Mallah sort of implementation problem.

>It makes it still worse because abandoning the idea of physical
>reality (Marchal) also means the impossibility of linking a computation
>with a physical state, which is the starting point of comp2 and of the
>Tegmark-Schmidhuber and co. theory!!!

It is the contrary. It is the very impossibility of linking 'conscious
computation' with a physical activity which makes it possible for me
(without Occam's razor) to abandon the need for a 'physical universe' or
'physical reality'.
This follows *also* from the PE-OMEGA thought experiment + Occam's Razor.
The only difference between TEGMARK-SCHMIDHUBER and me is (I deduce it
from comp, BTW) that it is not possible to use a particular execution of a
program to individualise (single out) a 'particular' consciousness, so that
consciousness and all possible subjective experiences (always
indexicalised: all the 'here, now and me') are always defined by the
whole set of computational histories. In fact I reduce the 'consciousness'
problem to the problem of the origin of the physical sensations and laws.
Tegmark can be made coherent by abandoning computationalism.
Schmidhuber, IMO, should realise that with comp he MUST reduce the
physical laws to the machine's law of dreaming (let us say).

>So
>how to found Bruno's "computational psychology"?

Generally, in computer/information/communicability/provability sciences.
Precisely, in my chapter 5 :-)

Bruno




--- original message by Gilles ---

>I'd like to present some arguments against computationalism - mainly
>elaborated through the various discussions on this list, many thanks to all
>contributors!
>I think we should distinguish between two forms of computationalism, which
>are unfortunately often confused in many discussions:
>
>comp1 : the brain is an EXACT implementation of some digital computation
>equivalent to a TM. The running of this computation on any equivalent TM
>would produce exactly the same output.
>In other words, we are just running a very complicated digital program
>(much like Word but still more complicated!) that could run on any platform
>(complicated enough). Like our ordinary computers, the implementation must
>be independent of the material structure of the platform, and thus the
>information must lie at a (much) higher level than the molecular one, most
>probably at the level of a neuron state, just like the bits of a computer.
>
>I think we will all agree that the experimental facts from neurobiology are
>enough to conclude that comp1 is FALSE, because a neuron state cannot be
>defined in a discrete, deterministic way. The firing pattern also depends
>on many analogic quantities (temperature, concentration of various
>neurotransmitters, possible drugs) and has a strong probabilistic
>(pseudo-random) behaviour.
>
>Most of you seem to conclude: "ok, no problem. We know that all these
>analogic parameters can be modelled by deterministic equations; it's
>enough to implement the solution of these equations to produce the same
>output as the brain, and THUS consciousness, and that's it."
>Most (if not all) of the devices discussed as "artificial conscious
>machines" (including Bruno's "crackpot" dreaming machine) are based on this
>assumption. As the brain is in fact interacting with the outer world (a
>point that would deserve more discussion: I think that even dreams or
>hallucinations would not be possible if the brain had NEVER interacted with
>its environment), you soon realize that the equations must take into
>account the whole Universe, a la Schmidhuber. To recall Tegmark's last
>quote:
>
>> Let us imagine a hypothetical Universe much larger than our own,
>> which contains a computer so powerful that it can simulate the time-
>> evolution of our entire Universe. BY HYPOTHESIS (*I emphasize*), the
>>humans in this
>> simulated world would perceive their world as being as real as we
>> perceive ours, so by definition, the simulated universe would have
>> PE [physical existence].
>
> So you are led to another hypothesis, that I will call comp2, which is in
>fact Bruno's "comp" postulate.
>
>comp2 : all the physical world is EXACTLY equivalent to some computation of
>its analogic properties at a finite level.
>
>As Bruno shows magnificently, comp2 leads inevitably to the actual
>disappearance of the physical world, replaced by a world of computations.
>
>However, I want to stress here that comp2 is indeed a huge jump: in fact it
>does not COMPLEMENT but rather CONTRADICTS comp1; it is the source of all
>paradoxes and, worse, it is completely useless for understanding the origin
>of consciousness.
>
>comp2 contradicts comp1 because the essence of comp1 is the independence of
>the results with respect to the material implementation, whereas comp2
>requires a precise definition of this implementation (of course the
>simulation made in comp2 could be run on any TM, but the object of the
>simulation must be one precise physical system).
>
>One of the problems is to find the actual level at which this simulation
>should be made. As I already stressed, the physical laws we are using do
>not describe the REALITY, but only our REPRESENTATION of it. Although it is
>plausible that there is SOMETHING objective, we have no idea what this
>could be ("das Ding an sich" following Kant). So which representation to
>choose? The chemical description? The QM state? String theory? (with
>the supplementary difficulty that beyond the QM level, no measurement of
>the Q-state is possible). You could try to accept any level giving an
>acceptable output, but then isn't Hans right when he assumes that even
>Teddy bears and movie characters have "acceptable enough" outputs to be
>considered as conscious?
>Note that this first difficulty did not exist with comp1, but we are facing
>it inevitably with comp2.
>
>Assume you have solved this difficulty, either by finding a known (e.g.
>electro/chemical) level at which the brain evolution IS predictable, or by
>finding a (subquantum) TOE reproducing exactly all known features of the
>Universe. You may hope to describe your brain at this level and calculate
>its evolution. You may think (with comp2) that a TM calculating the state
>of your brain would actually be conscious LIKE YOU.
>I will not recall all the paradoxes associated with this hypothesis (for
>example the Olympia/Karas paradox). But taking again just the Chinese room
>example, let us think of the case where your brain would actually be
>simulated not by a machine but BY SOMEBODY ELSE. Who or what would feel your
>consciousness? You could imagine a situation where the entire state of
>your brain at some instant is stored in a huge library, and somebody (for
>example me) is put in charge of calculating its evolution (for example
>applying i d\psi/dt = H \psi to a quantum state). What do I have to do for
>this device to actually think (like you, not me)? Must I write the output
>of my calculation somewhere? With a pen? On a magnetic tape? What if I read
>a stored file of a previous calculation? And if it is read by somebody
>else who does not know what it represents? Please tell me! I am paid for
>this job and I don't want to be fired!
>Of course I just point out again the contradictions between comp2 and
>physical supervenience, but abandoning Phys-sup does not solve the first
>point. It makes it still worse, because abandoning the idea of physical
>reality (Marchal) also means the impossibility of linking a computation
>with a physical state, which is the starting point of comp2 and of the
>Tegmark-Schmidhuber and co. theory!!!
>
>A third and last difficulty is that comp2 does not solve the problem of
>consciousness. For if everything is assimilable to computations, what makes
>some computations or parts of computations conscious or not (see Wei)? So
>how to found Bruno's "computational psychology"? What is the dream of a
>string? Complexity is not enough, because for example the chemical
>evolution of a thinking brain is not more complex than that of a dead brain
>leading to putrefaction. At the analogic level of comp2, you have lost the
>information level of comp1! I agree that the problem is the same with
>materialism - I just point out that it is not easier with comp2.
>
>So I think that pure computationalism, either comp1 or comp2, is very hard
>to maintain. Another comp3 proposition?
>
>One remark is that all "thinking" devices based on digital simulations of
>the analogic state of the brain handle in fact much more (and too much)
>information than the brain itself, which is totally unaware of its own
>material structure. A very important fact is that they ALL require an
>external structure able to store the relevant information and program them
>adequately, which is NOT the case for actual brains (and, I guess, for
>possible future thinking machines).
>
> So my guess is that consciousness requires not only a proper handling of
>information, but that this handling must be a natural consequence of
>physical evolution without ANY interaction with an external storage of
>information about its own structure, even for its construction. This would
>ensure the proper handling of counterfactuals, which is in fact nothing else
>than the self-construction of consciousness. This is NOT to be taken as a
>formal definition of consciousness but only as a possible restriction to get
>an actually conscious device. It would not be pure computationalism because
>it would put restrictions on the kind of implementations it requires.
>
>Comments?
>
>Gilles







 Bruno MARCHAL                      Phone : +32 (0)2 6502711
 Universite Libre de Bruxelles      Fax   : +32 (0)2 6502715
 IRIDIA, CP 194/6                   Prive : +32 (0)2 3439666
 Avenue F.D. Roosevelt, 50          Email : marchal.domain.name.hidden
 B-1050 BRUSSELS, Belgium           URL   : http://iridia.ulb.ac.be/~marchal