RE: consciousness based on information or computation?

From: Higgo James <james.higgo.domain.name.hidden>
Date: Mon, 1 Feb 1999 11:32:43 -0000

Schmidhuber suggests that there is just one program, as simple as possible,
and that this simplest program generates all possible universes, each with
its complement of consciousness or lack thereof. So he and I agree that the
fact that there is an awful lot of 'noise' does not matter.

> -----Original Message-----
> From: hal.domain.name.hidden [SMTP:hal.domain.name.hidden.org]
> Sent: 30 January 1999 22:47
> To: everything-list.domain.name.hidden
> Subject: Re: consciousness based on information or computation?
>
> Several people have noticed that a very simple program can create a
> pattern or even, by some definitions, a process which imitates our minds.
> A trivial counting program eventually creates all possible patterns,
> and you can map any string into a four-dimensional space-time diagram
> which is isomorphic to an entire universe. It seems, then, that even
> a trivial program like this ought to be able to create structures which
> are identical to our minds down to the smallest detail, and therefore,
> perhaps, we should conclude that we are likely to be the product of such
> a trivial program.
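>
> To make this concrete, here is a toy Python sketch of such a trivial
> counting program (purely illustrative): it enumerates every finite bit
> string, so any finite pattern, including one which under the mapping
> above describes an entire universe, eventually appears in its output.
>
>     import itertools
>
>     # Enumerate every finite bit string, shortest first.  Any finite
>     # pattern whatsoever eventually appears as one of the outputs.
>     def all_bit_strings():
>         for n in itertools.count(1):
>             for bits in itertools.product("01", repeat=n):
>                 yield "".join(bits)
>
>     # Example: the pattern "1011" turns up after finitely many steps.
>     for s in all_bit_strings():
>         if s == "1011":
>             break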
>
> If I understand him, Wei suggests that a way around this is to notice that
> the valid mind or valid universe produced in this way is buried amidst
> an unimaginably large amount of garbage. To estimate the contribution a
> particular program makes to instantiating a mind we should look not only
> at the size of the program, but also at the difficulty of identifying
> or locating the mind amongst its output. Here we have a very simple
> program, but if we then wanted to point to where in its output we had
> a mind-creating universe being produced, we would have to specify a
> huge amount of information. Combining these two we find that the total
> information needed to both *create* and *locate* the mind in the output
> of the program is very large, making the contribution of that program
> very small.
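>
> As a toy illustration of the combined measure (the 2^-k weighting for
> a k-bit description is the usual algorithmic-probability convention;
> the particular sizes below are invented):
>
>     def total_cost_bits(create_bits, locate_bits):
>         # A program's weight is 2 ** -(create + locate); comparing
>         # costs in bits avoids floating-point underflow at these sizes.
>         return create_bits + locate_bits
>
>     # A tiny counting program (~100 bits) with an astronomical
>     # locating index (~10**9 bits), versus a larger physics program
>     # (~10**6 bits) whose mind is easy to locate (~10**3 bits):
>     trivial = total_cost_bits(100, 10**9)
>     physics = total_cost_bits(10**6, 10**3)
>     assert physics < trivial   # lower cost means larger contribution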
>
> This approach also offers a solution to the problem that even in our
> own universe, you can select subsets of, say, water molecule collisions
> which produce patterns identical to our own consciousness. A vat of water
> as big as my brain has billions of water molecules in each region the
> size of one of my neurons, and in the time one of my neurons takes to
> fire, billions
> of water molecule collisions will occur in each such region. By paying
> attention to only a tiny subset of these water molecule collisions, we
> can find a pattern of collisions which is identical to the pattern of
> firings of my neurons when I am thinking. Of course, to do this we have
> to ignore the billions of other collisions we aren't paying attention to,
> but it might seem that adding more collisions cannot change the existence
> of the patterns that are sufficient for consciousness.
>
> However, Wei's approach calls attention to the size of the program needed
> to select those molecules to pay attention to. In practice we would seem
> to need a full-sized neural simulation to do that. This would be a very
> large program, so the actual contribution of such random collisions to
> my consciousness would be vanishingly small.
>
> This approach seems to move away from the question, is this system
> conscious? That is no longer seen as having a yes/no answer. Instead, we
> try to say, how much contribution does this system make to a given mind?
> If a short program can define the laws of physics which govern the
> system, and another short program can identify mind states with states
> of the system, then the system makes a relatively large contribution.
> But if either of these two programs must be large, the contribution
> is small.
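>
> Put as a toy formula, the measure of a given mind would be a sum over
> all the ways of producing it (the sizes here are invented):
>
>     # Each pair is (bits for the physics program, bits for the program
>     # identifying mind states with system states).  The mind's measure
>     # sums 2 ** -(p + m) over every pair that produces it.
>     pairs = [(20, 5), (15, 12), (40, 2)]
>     measure = sum(2.0 ** -(p + m) for p, m in pairs)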
>
> It is somewhat counter-intuitive to downgrade the mental contribution
> of a pattern just because it is lost among a large amount of noise.
> From the external perspective, maybe it makes sense; information that
> is buried is less useful than information which is easily exposed.
> But mental activity seems like an "inner" quality; the string doesn't
> "know" whether it is surrounded by noise or is out in the open; it should
> be equally conscious either way.
>
> However, if we accept the initial idea that short programs make more of
> a contribution, I don't see a problem in saying that short *output* also
> makes more of a contribution. Why, after all, should minds produced by
> short programs be considered to make a larger contribution than those
> produced by long ones? We have to make some implicit assumptions in order to
> make this work. Sometimes we imagine an actual computer cranking out
> programs and running them, with short programs getting more run time
> than long ones.
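>
> One toy version of such a computer is the usual dovetailing trick: in
> phase n, every program of length k <= n is granted 2 ** (n - k) steps,
> so over time a program's share of run time is roughly proportional to
> 2 ** -(its length).  A sketch, purely illustrative:
>
>     from collections import defaultdict
>
>     def dovetail_steps(num_phases, max_len):
>         # steps[(k, i)] = steps granted so far to the i-th k-bit program
>         steps = defaultdict(int)
>         for n in range(1, num_phases + 1):
>             for k in range(1, min(n, max_len) + 1):
>                 for i in range(2 ** k):      # every program of length k
>                     steps[(k, i)] += 2 ** (n - k)
>         return steps
>
>     steps = dovetail_steps(num_phases=10, max_len=4)
>     # Each 1-bit program now has about twice the run time of each 2-bit
>     # program, four times that of each 3-bit program, and so on.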
>
> With this kind of mechanistic picture, we could also imagine that
> programs which take more time to produce their output make less of a
> contribution. Programs which produce larger output could be penalized
> in the same way, or simply by virtue of their greater resource
> requirements.
>
> There does not seem to be a unique measuring system for estimating
> the contribution of any given program to the overall reality.
> Penalties for program size, run time, and output size all seem
> somewhat plausible. With this kind of model it does make sense that the
> contribution a given output pattern makes would depend on details of how
> it was produced by the program, including how much other output there was.
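>
> All three penalties can be folded into a single weight; the particular
> combination below is only a guess at how such a measure might look:
>
>     import math
>
>     def weight(prog_bits, run_steps, out_bits,
>                charge_time=False, charge_output=False):
>         # Base penalty: program size, as in the plain 2 ** -k weighting.
>         cost = prog_bits
>         if charge_time:              # penalize slow programs
>             cost += math.log2(run_steps)
>         if charge_output:            # penalize voluminous output
>             cost += out_bits
>         return 2.0 ** -cost
>
>     w = weight(prog_bits=100, run_steps=10**6, out_bits=500,
>                charge_time=True)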
>
> Hal
Received on Mon Feb 01 1999 - 03:38:04 PST
