Re: Implementation

From: Marchal <>
Date: Tue Jul 27 11:24:51 1999

I will make some comments on the last posts. But we are entering very
deep waters, so I would first like to make some general remarks.

I have said it before, but I want to repeat it here. One of my main
goals is to understand what "the physical" is and where it comes from.
Like Wheeler, I don't think this can be explained by physical laws.

With Occam's razor there is no need for the crackpot/Maudlin argument;
the UD argument (PE-omega) is almost enough to convince oneself
that the 'universal part' of physics must be extracted from computer
science.
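
Since the UD argument leans on the Universal Dovetailer, here is a
minimal toy sketch of dovetailing in Python (the names `dovetail`,
`make_program`, and `counter` are my own illustrative choices, not
Marchal's UD itself): it interleaves the execution of programs 0, 1,
2, ... one step at a time, so every step of every program is eventually
reached even though there are infinitely many programs.

```python
def dovetail(make_program, rounds):
    """Toy universal dovetailer: at round r, admit program r and then
    advance every admitted program by one step, so that each
    (program, step) pair is eventually reached.  Returns the
    interleaved trace of (program_index, yielded_value) pairs."""
    trace, active = [], []
    for r in range(rounds):
        active.append((r, make_program(r)))  # admit one more program
        for i, prog in active:
            step = next(prog, None)          # one step of each program
            if step is not None:
                trace.append((i, step))
    return trace

def counter(i):
    """Stand-in 'program' number i: an endless computation 0, 1, 2, ..."""
    n = 0
    while True:
        yield n
        n += 1

# Three rounds interleave the first three programs:
print(dovetail(counter, 3))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

The real UD would of course enumerate all programs of a universal
machine and never halt; the finite `rounds` cutoff is only there to
make the sketch runnable.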

Note also that the 'reversal' is in both Tegmark and Schmidhuber (it
seems to me), but they haven't seen the measure problem (have they?),
and they haven't pushed their methodology to its logical extreme.

Another general remark: in your post, I sometimes agree with
what you are saying until you jump to a conclusion which I don't share.

George Levy wrote:

>Similarly, the insertion of the piece of wood in the computer must be done
>by someone. Let's call that someone Maudlin's demon. Deciding what the right
>place and the right time is to make the wood irrelevant to the thinking
>process, in order to satisfy the counterfactual role that the wood must play,
>requires Maudlin's demon to think. Maudlin's demon then becomes part of the
>computer's consciousness, just like the subject in the Chinese room
>becomes a cog in the Chinese room's ability to speak Chinese.

An interesting similar (although computationalist) move has been made
by Eric Barnes in "The causal history of computational activity: Maudlin
and Olympia" (The Journal of Philosophy, 1991, pp. 304-316).
What is interesting for me is that Barnes' move forces him to pretend to
have the ability to distinguish being awake from dreaming (which I
doubt), and it also contradicts my Theaetetic self-referential theory of
knowledge, where p is known when p is justified by the machine and true
(or just consistent).

George Levy wrote:

>In the end, I am a strong sceptic of both computationalism and physical
>supervenience. I believe that consciousness exists only in the eyes of the
>beholder, and is a relativistic property, based on the relativity of
>information as defined by Claude Shannon.

I am definitely open to the idea that consciousness exists only in the
eyes of the beholder, and that it is a relativistic property based on
the relativity of mutual information as defined by Claude Shannon and
Kolmogorov, and as used by Everett (but see also the paper by Adami and
Cerf).
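
For concreteness, Shannon's mutual information, which this relativistic
reading relies on, can be computed directly from a joint distribution.
A minimal sketch (the function name and dict encoding are my own
illustrative choices):

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x) p(y))),
    with `joint` a dict mapping (x, y) pairs to probabilities."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p   # marginal of X
        py[y] = py.get(y, 0.0) + p   # marginal of Y
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two perfectly correlated bits share 1 bit; independent bits share 0.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_information(correlated))   # 1.0
print(mutual_information(independent))  # 0.0
```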

Hans Moravec wrote:

>So, deterministic machines can have just as much free will as you or
>I. The key is that they don't know everything that's going on,
>outside themselves or in, so often don't know what will happen next,
>or how they will respond to it. Many-worlds may provide an
>interesting additional "source" of ignorance, but limitations on what
>a finite process can model already provide sufficient ignorance for
>free will even in a fully deterministic framework.

I agree. What MW or self-duplication adds is genuinely random
uncertainty. This is borne out by quantum computers.
I don't think determinism is an "effective" problem for free will,
nor do I think randomization can help in making free will possible.
I think free will is related to the boundary of self-knowledge.
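
The "boundary of self-knowledge" point can be made concrete with a
diagonal toy (entirely my own illustration, not an argument from the
thread): a deterministic agent that consults any predictor of itself
and then does the opposite, so no predictor operating in the same world
can be right about it.

```python
def contrarian(predict):
    """Deterministic agent: ask the predictor what I will do
    ("A" or "B"), then do the other thing.  Fully deterministic,
    yet systematically unpredictable from the inside."""
    guess = predict(contrarian)
    return "B" if guess == "A" else "A"

# Whatever the predictor answers, it is wrong:
print(contrarian(lambda agent: "A"))  # B
print(contrarian(lambda agent: "B"))  # A
```

This is the same diagonal move behind Moravec's remark that a finite
process cannot model everything going on inside itself.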

Hal Finney wrote:

>But more than one computation can be conscious, obviously. It is
>conceivable that the new computation, although different, is conscious
>as well. This is a possible escape from Maudlin's argument.

> [...]

Are the contents of consciousness different? Or the intensities? I'm
not sure I understand.

>So it seems to me we need a new argument for why the computer sans
>counterfactuals should not be considered conscious.

> [...]

It seems to me that a computer without counterfactuals is like
a doll, a teddy bear, or a sculpture.
Unlike Hans, I don't understand what it would mean to ascribe
consciousness to them.

>One of the things which is attractive about Wei's approach, as I understand
>it, is that it does not try to answer the question of whether a given
>system is conscious, at least not in yes-or-no terms. Rather, it tries
>to give a probability that a given system is conscious, and specifically
>that it instantiates a particular consciousness, such as my own.
>This allows you to have such things as systems which are "probably"
>conscious, or, in a sense, "partially" conscious (in the sense that
>we can treat them as having a 10% chance of being conscious, say).
>This interpretation makes most sense in the context of the Strong
>Self-Selection Assumption (that we can consider our moments of experience
>as randomly chosen from among all observer-moments). The probabilities
>assigned to consciousness serve as a weighting factor for how much they
>contribute to the ensemble of all observer-moments.

I agree, and I very much appreciate this way of seeing things. It is
linked (to my humble understanding) to James Higgo's anthropic
principle/Occam's razor. This 'interpretation', along with comp, leads
to a total reversal... but in the relative way (I will not insist on it
here). I do link consciousness with machines' inferences about their
own possible consistent extensions.

David Seaman wrote:

>This seems an excellent viewpoint, consciousness requires the freedom to
>react to a reasonably wide range of circumstances in a way which is not
>predictable to other observers. So a single execution can never confirm or
>deny consciousness however many times it is replayed. But I'm not so sure
>that a Turing machine cannot have free will. I'd guess that the appearance
>of free will can emerge from a sufficiently complex TM provided that the TM
>exists in a suitably complex environment. If a person built a machine
>containing a TM it would be part of our MWI universe and the requirements
>could be satisfied. This would not be an isolated TM since it would be
>simulated by and react to its environment, and any 'randomness' requirement
>could actually involve a sensitivity to gravitons, photons, or quantum [...]
>I tend to agree that a completely isolated TM is unlikely to have free will
>or be conscious (and in any case it would be impossible to test it). Of
>course the program executed by an isolated TM may well be able to generate
>a universe containing conscious subjects. In the special case of an
>isolated TM generating a universe which contains exactly one conscious
>subject in a suitable environment it could loosely be said that the TM's
>program is that conscious subject. But this is different to saying that
>the TM itself is conscious, and it would not be apparent from looking at
>the TM that it was generating consciousness.

I agree in part. It is easy to build a version of a dreaming Olympia.
A dreaming machine would be, at least here and now, an isolated
(but not necessarily awake) machine.
I think an isolated 'conscious' machine cannot stay isolated forever,
for purely computational reasons.

Jacques Mallah wrote:

> Bruno, I think it is now abundently clear that Maudlin's paper
>does not rule out physical computationalism, and other people on the list
>have seen that as well.

Clear would be enough. Abundantly clear is a little too much.

I don't really understand what 'physical' means in 'physical
computationalism'. It is clear that we do not have the same primitive
elements. I believe in numbers and numbers' dreams. Some dreams are
deep and partially sharable among UTMs; those are their relative
realities.

I appreciate the everythingers' work on these questions, and I guess it
is not easy to abandon the physical supervenience thesis.

Russell Standish wrote:

> > Chris, this is a well-thought-out response, and it persuades me that
> > the difference between consciousness and nonconsciousness could be as
> > little as the "inert block of wood", precisely because it is a
> > physically different system. It actually reminds me of the quantum
> > two-slit experiment. An interference pattern is seen or not seen
> > according to whether a detector placed at one of the slits is switched
> > on or not.

> [...]

1) The great programmer dovetails also on the quantum Turing machines...
2) I think so. There is a deeper analogy between the computationalist's
counterfactuals and the quantum. This is linked to a paper by Hardegree
showing a formal similarity between a very natural definition of
'quantum implication' and Stalnaker's logic of counterfactuals, and to
my own definition of 'observation' (ref. in my thesis) and the
resulting arithmetical "quantum logic".

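For readers without the thesis at hand: the 'quantum implication'
Hardegree studies is, if I recall correctly, the Sasaki hook (a hedged
gloss on my part, not a quotation from his paper):

```latex
% Sasaki hook on the lattice of closed subspaces of a Hilbert space,
% with a^{\perp} the orthocomplement of a:
a \rightarrow_S b \;=\; a^{\perp} \lor (a \land b)
% Hardegree's observation: on orthomodular lattices this conditional
% obeys a Stalnaker-style "minimal change" semantics, which is what
% makes the analogy with counterfactuals formally precise.
```
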
>Thinking about this some more, I realise this is exactly what is going
>on. Consider Olympia from a MWI point of view. For the vast majority
>of worlds containing Olympia, Karas (I believe that is what the
>correcting machinery is called) is active, handling the
>counterfactuals. Only on one world line (of measure zero!) is Karas
>inactive, and Olympia is simply replaying the previously recorded
>computation.
>Now consider what happens when Karas is turned off, or prevented from
>operating. Then, in all world lines is Olympia simply a replay
>device. From the MWI point of view, the simple inert piece of wood is
>not so innocuous. It changes the system's dynamics completely.


>Now this has bearing on a supposition I have argued earlier - that
>consciousness requires free will, and the only way to have free will is
>via the MWI picture.

Not OK. See above.

>In this context, a Turing machine can never be
>conscious, because it follows a preprogrammed path, without free
>will. Note this is not the same as saying comp is false, unless you
>strictly define computers to be Turing machines.

I do. It is my working hypothesis.

>My suspicion is that
>adding a genuine random number generator to the machine may be
>sufficient to endow the architecture with free will, however, of
>course the question is unresolved.
>What does this all mean for your thesis Bruno? Alas I didn't follow
>your argument (not because it was written in French - which I have no
>problem with, rather because I was not familiar with the modal logic
>you employed, and haven't raised enough enthusiasm to follow up the
>references). Could it be implying that you have too restrictive
>definitions of both comp and sup-phys?

Church's Thesis is a vaccine against any restrictive interpretation of
comp. Comp makes the unknown much bigger than we ever thought, even if,
from the Archimedean point of view, there are only numbers.

>Quote from Bruno follows:
>> This seems rather magical to me. If only because, for a
>> computationalist, the only role of the inert block (during the
>> particular execution) is to explain why the machine WOULD have given
>> a correct answer in case the inputs WOULD have been different.
>> This means that you don't associate consciousness with a particular
>> physical computation but with the entire set of possible computations.
>> But that is exactly what I do..., and what I mean by the abandonment
>> of physical supervenience.
>> A singular brain's activity becomes an invention of the mind.
>Could it mean that you are defining sup-phys to be supervenience on
>the one-track classical physics, rather than on the MWI-style quantum
>physics?

Most people in cognitive science do, but I do not care about the level
of description.
I think that ANY sufficiently patient, self-referentially correct
machine, either by introspection or by observation (or a mixture of
both), will infer an MWI-like physics (once it observes below its level
of duplication, for example).
