Re: consciousness based on information or computation?

From: Wei Dai <weidai.domain.name.hidden>
Date: Thu, 21 Jan 1999 16:07:11 -0800

On Thu, Jan 21, 1999 at 10:24:37AM -0800, hal.domain.name.hidden wrote:
> That's an interesting possibility, which I will have to give more
> thought to. The idea that bigger computers would actually produce
> a larger measure for the conscious systems they implement is
> counterintuitive but not completely impossible.
>
> I see a few difficulties with the proposal, though. We already face
> the difficulty of identifying when a system instantiates a computation.
> Now we would have the additional difficulty of identifying what aspects
> of the system contribute to the measure. Presumably non-functional
> elements would not increase the measure (?). Is it enough to make the
> wires thicker, or do the processing elements themselves also have to
> be larger? Would expanding the size of the computer without increasing
> the size of the parts be enough? How about moving just one part a long
> ways away?

Let me be more specific and precise about my proposal. I propose that the
measure of a conscious experience is related to the measure of the
associated state information, where I take this measure to be given by
the universal a priori distribution.
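
In symbols (notation is mine, just restating the above): writing m for
the universal a priori distribution and s for the state information
associated with an experience E, the proposal is roughly

    measure(E)  ~  m(s)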

The universal a priori probability of a string is inversely related to the
length of the shortest program that outputs that string (the distribution
actually takes into account all programs, but the shortest ones contribute
most to the distribution). Now take an AI running on some computer, and
consider its state at some given time. The shortest program (P1) that
produces this state as output probably consists of two parts. The first
part of the program simulates the physical universe (let's say a Newtonian
universe) that contains the computer running the AI. The second
part of the program extracts the AI's state from this simulation.
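
To make the role of program length explicit (this is just the standard
algorithmic-information definition, with l(p) the length of program p and
U the reference universal machine):

    m(x)  =  sum over all p with U(p) = x of 2^(-l(p))  ~=  2^(-l(P1))

where l(P1) is roughly l(simulation part) + l(extraction part); the
approximation reflects the fact that the shortest program dominates the
sum.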

Now if the *memory* elements containing the AI's state were doubled in
size, that should allow the second part of the program to be shorter,
since it would take less information to "find" the AI's state in the
simulated universe. The smaller program size implies a larger measure of
the state.
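
As a rough numerical illustration (the numbers are invented for the sake
of example): if the enlarged memory lets the extraction part be specified
with k fewer bits, then l(P1) drops by k and the contribution 2^(-l(P1))
grows by a factor of 2^k, so even a single saved bit doubles the measure
of the state.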

If the AI were simultaneously running on two computers, there would be two
such shortest programs that produce the state as output; they would be
identical in the simulation part but slightly different in the extraction
part. Together these two programs would contribute roughly twice as much
to the universal a priori distribution as P1 alone, so again the measure
of the state would be increased.
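
In the same notation as above (again my own illustration): if P1 and P2
are the two programs and both have length close to some l, then

    m(state)  ~=  2^(-l(P1)) + 2^(-l(P2))  ~=  2 * 2^(-l)

so the second instantiation adds roughly one bit's worth of measure.
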
Received on Thu Jan 21 1999 - 16:12:21 PST
