Re: The Game of Life

From: <GSLevy.domain.name.hidden>
Date: Tue, 21 Dec 1999 03:06:08 EST

Sorry, I read Jerry's post too quickly and was not explicit enough. The
paradox I was talking about was the inconsistency that Jerry pointed out
in Hal Finney's post. I guess Hal was supporting comp.

Let me try to be more explicit about my position with regard to comp.

I believe that a Turing machine (or the equivalent) is a necessary condition
for consciousness. However, it is not a sufficient condition. For example, a
Turing machine can simulate fluid flow, the weather, the stock market,
astronomical events and even the Game of Life. This is not to say that any of
those phenomena, no matter how complicated, can lead to consciousness. AN
ADDITIONAL CONDITION IS REQUIRED.

I would like to follow in the footsteps of (I think) von Neumann and try to
describe consciousness the way he described life. Essentially he said that a
living form must include the following:
    1) A plan of the living creature itself (here we have
self-referentiality)
    2) An execution device (such as a Turing machine) which reads the plan,
executes its instructions (including collecting parts and assembling them
into a new living form), and finally starts the execution device in the newly
formed living form.
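The simplest illustration of this pairing of "plan" and "execution device" is
a quine: a program that carries a description of itself and reproduces itself
from that description. A minimal sketch in Python (my own toy illustration,
not von Neumann's actual construction):

```python
# The string s is the program's "plan" of itself; the Python
# interpreter is the "execution device" that rebuilds the whole
# program (plan included) from that plan.
s = 's = %r\nprint(s %% s)'
print(s % s)  # prints the program's own two lines of source
```

Running it prints its own source text exactly, so the output can be fed back
to the interpreter to "reproduce" again, which is the self-referential trick
von Neumann's constructor uses at a much grander scale.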

    I think that consciousness requires:
    1) A model of the self (this condition opens the Pandora's box of
recursion and self-referentiality)
    2) A Turing machine or the equivalent, which would attempt to simulate
the self using the available model.

These two conditions lead to an explanation of free will and of the feeling
of the "I", "le Moi" in French. (The English expression "to reflect",
meaning to think, is a clue that we are looking at the mental model of
ourselves in certain thinking situations.)

Let's create a living form with such a "program" and ask it to do the exact
opposite of what it would do in a given situation. (I am leading here to
Newcomb's paradox: a superbeing gives you the opportunity to make a million
dollars if you make the choice opposite to the one you would have made.) To
answer the question, the creature would simulate itself with its on-board
model, but, since the question involves self-reference, it would end up in
an infinite self-referential loop. The result would be total INDETERMINACY:
what we call free will. Note: where there is determinacy, there is no free
will.
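The loop can be sketched directly. Assume a toy agent whose only rule is
"simulate yourself with the self-model, then do the opposite"; the
simulation never bottoms out, and no determinate answer ever comes back:

```python
def decide():
    # To act, the creature first consults its on-board self-model...
    prediction = decide()   # ...which must consult *its* self-model, and so on
    return not prediction   # ...and then does the opposite of the prediction

try:
    decide()
except RecursionError:
    # The self-referential simulation never halts: the choice is
    # indeterminate from inside the creature's own frame.
    print("indeterminate")
```

Python merely cuts the regress off with a RecursionError at a finite depth;
the point is that the regress itself has no bottom.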

As I mentioned a few months ago, free will is also RELATIVE TO THE OBSERVER.
A sophisticated observer may predict in advance a person's every single move.
In this case the person can be considered, as far as that observer is
concerned, a robot or a zombie. Yet this sophisticated observer may himself
be observed by an even more intelligent creature, and he would be just a
zombie in the eyes of that super-intelligent creature.

The "I", le "Moi", is the decision-process BLACK HOLE that we perceive in
these situations. FREE WILL is the INDETERMINACY that comes out of this black
hole.

IN SUMMARY: CONSCIOUSNESS IS NOT THE TRIUMPH OF COMP BUT ITS FAILURE!!!!
Consciousness exists at the boundary between what is computable and what is
not.

And of course the definition of computability depends on the set of axioms,
rules, or laws that govern your mind, fully in a Gödelian sense.
Computability is relativistic, and the frame of reference is precisely that
mental set of axioms, rules, or laws.

In a perfect state of indecision, that is, in a perfect state of
indeterminacy, when all the neuronal outputs could equally go one way or the
other, it is probable that quantum effects would become dominant. Hence free
will is also a quantum phenomenon, AND COMPUTATIONAL INDETERMINACY IS LINKED
TO QUANTUM INDETERMINACY. Every time we make a difficult decision, we
straddle the quantum branches and end up in all possible worlds. When we make
an easy decision we end up in one world.

George Levy

In a message dated 12/20/1999 7:46:37 AM Pacific Standard Time,
marchal.domain.name.hidden writes:

> GSLevy wrote:
>
> >Jerry makes the mistake of shifting his coordinate system twice, and this
> >is why the paradox that he describes arises.
>
> I agree with Jerry here. What paradox? What error?
>
> >
> >The first shift in coordinate has to do with 1st and 3rd person.
> >Jerry does not see that awareness is a 1st person phenomenon.
>
> I'm not sure that Jerry does not see that awareness is a 1st person
> phenomenon.
> When Jerry says:
>
> > Are you arguing that a program has to be run before SAS's embodied
> > in that program experience consciousness? I totally disagree with
> > this approach, which some people call 'computationalism'
> > (confusingly), don't they?
>
> I guess that he is talking about Jacques Mallah's type of physical
> computationalism, where there is a need for a "real physical running"
> of the machine for consciousness to appear, and so there is a need
> for a "real universe", whatever that means.
>
> GSLevy:
> >So those creatures in the universe of Life believe themselves to be
> >conscious (in the first person)
>
> OK.
>
> >but from the Great Programmer perspective (3rd person) or from
> >any sophisticated external observer they are not.
>
> Why do you want to make these poor little creatures wrong? And wrong
> about their own feelings.
>
> How could someone believe himself to be conscious without being conscious?
> Consciousness is a "pure" first person phenomenon.
>
>
> You *do* believe in
> zombies, don't you? I mean you think that those creatures are
> exactly like us, but that they are unconscious?
>
> Schmidhuber's universes (not to mention Tegmark's), or the simple little
> big everything: all that would be full of creatures wrongly believing
> they feel?
>
> And how could you know you are not among those creatures?
>
> (Of course in the UDA it is an *hypothesis* that those machines are
> (relatively) correct.)
>
> \begin{for the modalist}
> But even without that hypothesis you can also just model the correctness
> of the machine by []p -> p, or just by consistency <>T.
> (Jacques Baihache suggested that I use [] for the modal box and <> for
> the diamond.) Remember that [] and <> are interdefinable: <>p can be seen
> as an abbreviation of -[]-p, and []p can be seen as an abbreviation of
> -<>-p. Remember that in classical propositional logic -p is equivalent to
> p -> F, so that []F -> F is equivalent to -[]F, which is equivalent to <>T.
>
> And let us interview the SRC machine through G and its Guardian Angel G*.
> (SRC = self-referentially-correct)
> Well, the machine seems to remain silent on <>T. The Angel tells us the
> machine is correct (G* proves []p -> p) and consistent (G* proves <>T),
> and he tells us that the machine cannot know it, nor justify it
> (G* proves -[]([]p -> p) and G* proves -[]<>T).
> Quite the reverse of you, it seems. The little SRC creature seems a little
> bit wiser about what she knows.
> \end{for the modalist}
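> To spell out that last chain of equivalences in the same notation
> (a routine expansion of the abbreviations just given):
>
>     []F -> F
>     == -[]F v F     (p -> q abbreviates -p v q)
>     == -[]F         (the disjunct F can be dropped)
>     == <>-F         (<>p abbreviates -[]-p, and --F is F)
>     == <>T          (-F is T)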
>
> This will not convince you, nor is it intended to convince you. It is
> just to tell you my opinion, and the SRC machine's opinion, and its
> Guardian Angel's opinion, which is Jerry's opinion too if I
> understand and interpret him correctly.
>
>
> >The second shift has to do with the action of running the program. Before
> >the computer is started, these creatures in the Life Universe do not exist
> >in our *time*. Their time is frozen - compared to ours - so from our point
> >of view they are not conscious. They are just a bunch of inert bits. When
> >the computer runs, their time becomes like ours and now they appear to be
> >conscious.
>
> "Running" a machine is a modality which makes sense only relatively
> to you. That relative running makes it possible for the machine to
> manifest its consciousness relatively to you. It makes it possible
> to entangle and share computational histories. But consciousness per se
> is not linked to the dynamical physical activity itself.
>
> >Resolution of this paradox illustrates the relativistic issues in the
> >observation process and in particular the relativistic quality of the
> >1st/3rd
> >person point of view.
>
> I don't understand.
>
>
> >The relativity of information in terms of mutual
> >information as defined by Claude Shannon has deep consequences in physics
> >that, I feel, should be explored. In this context, Hawking has made a
> >major breakthrough in the understanding of black holes by relating
> >entropy to their size.
>
> I agree although I'm not sure to see the relevance here.
>
> Perhaps you are correct on all the points, in which case comp should
> be wrong (or my reasoning!).
>
> Bruno
>
>
Received on Tue Dec 21 1999 - 00:07:59 PST

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:06 PST