Re: The Game of Life

From: Marchal <marchal.domain.name.hidden>
Date: Wed Dec 29 03:05:18 1999

Fred Chen wrote:

>The fundamental question here was whether a universe that is
>essentially a 2D infinite grid where the only operating laws are the
>Game of Life rules would give rise to intelligent self-aware (or
>self-conscious) artificial life forms, given the appropriate starting
>configuration.
>
>This rapidly became a question of whether the Turing-machine-computational
>aspect of this universe would be sufficient to generate the self-aware
>substructures (SAS's). Through the most recent postings, we have heard
>supporters of a whole spectrum of responses to this question.
>Conservatively, I don't think we will know enough (in 3rd person
>perspective) to capture all aspects of human consciousness in a single
>machine, and if we did somehow, accidentally, I don't think we'd be
>able to realize it. This is just Godel's result.

Exactly. With comp it is even provable that we cannot provably capture
ourselves. It is not Godel's result, but it is a consequence of Godel's
result. (And it is provable in G, of course, where G is the modal
system which completely axiomatizes the propositional consequences of
Lob's theorem; this means it is not only a theorem in the psychology of
machines, but a theorem in the machine's own psychology of machines.)
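To make that concrete, here is a minimal sketch in standard
provability-logic notation (my rendering; the post itself does not
display the formulas). Read \Box p as "the machine proves p":

    \Box(\Box p \to p) \to \Box p            (Lob's axiom, the heart of G)
    \neg\Box\bot \to \neg\Box\neg\Box\bot    (the instance with p = \bot)

The instance with p = \bot is Godel's second incompleteness theorem:
if the machine is consistent, it cannot prove its own consistency --
one precise sense in which a machine cannot provably capture itself.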

>The evolutionary aspect of life as we know it is probably a pretty
>crucial part of developing the self-awareness that we ourselves
>experience. Evolution is based on survival, so this would tend to
>nurture the emergence of a self-awareness geared toward preservation
>of self.

OK.

>I am not sure this is a
>proof that even the lower evolved life forms have consciousness, since they
>could be operating by involuntary reflex-type behavior.

Mmh... This is indeed certainly not a proof that the lower evolved life
forms have consciousness. But *that* is not a disproof either. Think
about Hal Finney's contemplative spectator... I am open to the idea
that consciousness appears with involuntary reflex-type behavior;
probably the more voluntary, less reflex-type behavior involves a more
elaborate "self-consciousness". I think Hal Finney's distinction is
relevant here (independently of the fact that we disagree on the fabric
of the spectacle).

>It is difficult to imagine whether a straightforward Turing approach
>would be able to capture the evolutionary process, which would seem to
>require random environmental inputs.
> ...

Grrr... Firstly, and this is linked to Chaitin's theorem, a machine
having locally some complexity C1 cannot distinguish randomness from
the output of a machine with complexity C2, where C2 > C1. This gives
some Turing-type approaches to relative randomness.
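The theorem behind this first point can be stated in one line (textbook
form, my notation, not spelled out in the original post): write K(s)
for the Kolmogorov-Chaitin complexity of a string s; then for any
consistent axiomatisable theory T there is a constant c_T such that

    T \nvdash K(s) > c_T    (for every string s)

even though all but finitely many strings do satisfy K(s) > c_T. So a
machine of bounded complexity can certify randomness only up to its own
bound, never beyond it.
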
Secondly, with self-multiplication (cf. the SSA, UDA), computationalism
(the belief that you can survive with a digital brain) forces an
Everett-like phenomenology of randomness, which can be shared among
"entangled" computational histories.
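One way to picture this (a self-duplication sketch in my own words, not
spelled out in this post): suppose you are duplicated n times, each
duplication producing a branch labelled 0 or 1. There are 2^n
first-person histories, and since fewer than 2^{n-k} binary strings of
length n can be compressed by k bits, almost every history records a
label-sequence s with K(s) close to n -- algorithmically random from
the inside, although the global process is perfectly deterministic.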

The Turing realm is closed under the diagonalisation operator; that is
why it escapes all possible axiomatisable theories. There are ten
thousand sorts of randomness available.
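A one-line sketch of that closure (standard recursion-theory notation,
my choice): let \varphi_0, \varphi_1, \ldots be an effective
enumeration of the partial computable functions, and define

    d(n) = \varphi_n(n) + 1

Then d is itself partial computable (where \varphi_n(n) is undefined,
d(n) is simply undefined too), so diagonalisation never leads outside
the Turing realm. By contrast, the provably total functions of any
axiomatisable theory can be effectively enumerated, and the same
diagonal construction then yields a total computable function escaping
the theory -- the sense in which the Turing realm escapes them all.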

(The truth is that with comp (but a fortiori with Tegmark and all the
everything approaches) there is *too much* randomness: we must explain
away the white rabbits and the white noise, and justify the apparent
determinism.)

Bruno