Re: consciousness based on information or computation?

From: <hal.domain.name.hidden>
Date: Thu, 21 Jan 1999 10:24:37 -0800

Wei Dai, <weidai.domain.name.hidden>, writes:
> On Fri, Jan 15, 1999 at 05:04:09PM -0800, hal.domain.name.hidden wrote:
> > Two computers, running in lock-step, performing exactly the
> > same calculation at every instant, seem to me to be effectively the same
> > as a single computer. Make a computer with extra wide processing elements
> > and data paths, then divide them all down the middle by an insulator.
> > You have turned one computer into two. I don't see how that can be a
> > subjective change. Make the insulator a variable resistor, and we can
> > vary smoothly between one and two computers. But it does not seem that
> > we should be able to vary smoothly between one and two conscious entities.
>
> Another solution to this puzzle is the idea that the measure of a
> conscious experience is related to the measure of the state information
> that produces that experience. Given this assumption, when you double the
> computer's processing element widths, you double the system's contribution
> to the universal a priori distribution and therefore double the measures
> of the conscious experiences related to the system. And the same thing
> happens when you double the number of computers instead.

That's an interesting possibility, which I will have to give more
thought to. The idea that bigger computers actually produce a larger
measure for the conscious systems they implement is counterintuitive,
but not completely impossible.
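
To see the bookkeeping behind that, here is a toy Python sketch of the
idea as I understand it (my own construction, not Wei's: I am assuming
each instantiating program p contributes weight 2^-len(p) to the
universal a priori distribution, and that the measure of an experience
is the sum of those weights):

    # Toy model (an assumption, not Wei's formulation): each program p
    # that instantiates an experience contributes 2**-len(p) to the
    # universal a priori distribution, and the measure of the
    # experience is the sum of those contributions.
    def weight(program_bits):
        return 2.0 ** -len(program_bits)

    one_copy   = ["10110"]            # one instantiation
    two_copies = ["10110", "01101"]   # two instantiations of equal length

    measure_one = sum(weight(p) for p in one_copy)
    measure_two = sum(weight(p) for p in two_copies)

    print(measure_two / measure_one)  # -> 2.0

Two equal-length instantiations carry twice the summed weight of one,
which is the doubling Wei describes; presumably the double-width
machine would be modeled the same way.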

I see a few difficulties with the proposal, though. We already face
the difficulty of identifying when a system instantiates a computation.
Now we would have the additional difficulty of identifying what aspects
of the system contribute to the measure. Presumably non-functional
elements would not increase the measure (?). Is it enough to make the
wires thicker, or do the processing elements themselves also have to
be larger? Would expanding the size of the computer without increasing
the size of the parts be enough? How about moving just one part a long
way away?

It would seem that in general the solution to these problems is to find
the shortest description of the computer, and then make the measure
proportional to the length of that description. As long as you can
determine whether a given computer instantiates the intelligent program,
you can check whether a description of some sub-part of the computer
would also instantiate that program, and in that way eliminate redundant
elements. So if I tried to define "my computer" as my PC plus the entire
earth, giving it a huge measure for the AI program it is running, this
method would catch that by finding that the same program is instantiated
even if we exclude the earth and count only the PC. But we can't exclude
more than that, or we no longer find the program instantiated. In this
case we are able to draw a boundary around the computer and eliminate
the unnecessary elements.
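
A minimal Python sketch of that elimination step (the predicate
instantiates is hypothetical, and deciding it is of course the very
problem mentioned above):

    # Greedily drop any element whose removal still leaves the program
    # instantiated; whatever survives is a non-redundant core, and only
    # that core would count toward the measure.
    def minimal_instantiating_subset(elements, instantiates):
        core = set(elements)
        for e in list(core):
            trial = core - {e}
            if instantiates(trial):    # e turned out to be redundant
                core = trial
        return core

    # "My computer" = my PC plus the entire earth: the earth is
    # eliminated, since the PC alone still instantiates the program.
    system = {"PC", "earth"}
    print(minimal_instantiating_subset(system, lambda s: "PC" in s))
    # -> {'PC'}

(The order in which elements are tried can matter when several
different subsets would each suffice; a genuine shortest-description
criterion would have to break such ties.)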

However, it seems that for the double-width computer you could describe
a subset of the computer which is only half as wide and which would also
instantiate the computation. The additional width would then make no
contribution to the measure, which is not the desired result. So I'm not
sure this method is going to work in general.

I wonder if this proposal would also produce the effect that natural
selection "seems" to favor larger and more complicated brains. If those
systems have larger "measure", and we are perceiving the world from such
systems, then big brains would get an apparent boost in probability,
independent of any survival advantage they offer. Does that seem right?
Could this actually be an explanation for the evolution of intelligence?
(Or is it just a restatement of the anthropic principle, that
intelligence must have evolved for us to be asking these questions?)

Hal
Received on Thu Jan 21 1999 - 10:40:01 PST
