Re: consciousness based on information or computation?

From: Jacques M Mallah <jqm1584.domain.name.hidden>
Date: Sat, 30 Jan 1999 14:50:48 -0500

On Fri, 29 Jan 1999, Wei Dai wrote:
> On Fri, Jan 29, 1999 at 03:10:43PM -0500, Jacques M Mallah wrote:
> > It applies both ways. I do agree that it's not the sort of
> > thing we can expect to resolve. Like reductionism vs. dualism, there are
> > always going to be disagreements, I think.
>
> decision to draw the line at binary strings is based on the fact that
> there is good reason why nothing simpler can give rise to consciousness
> (nothing simpler can contain information), while there doesn't seem to be
> any reason why binary strings can't give rise to consciousness.

        You say there doesn't seem to be a reason strings can't do it, but
to me, there does seem to be one. Like "A=B and B=C implies A=C," you
either believe it or you don't.
        I am a big proponent of Occam's razor. But in using the criterion
of simplicity, one must be sure not to make an explanation too simple to
explain the facts. In this case, the fact is that we have observations
and seem to be able to make decisions. I don't see how just plain
information could give rise to that. Can you explain to me how it
happens that information has such a property? Of course not; no one can
really explain consciousness. That's a fact.
        But computations seem to me to have enough structure - barely
enough - to do the trick. After all, what could we add to computations to
give them more structure? There's really nothing left. (Noncomputable
structures are qualitatively the same as computations in my book, unlike
laws vs. no laws.)

> > There are very likely telltale signs that a neuron has recently
> > been firing, which is just as good in your proposal. But suppose it's an
> > AI with non-volatile memory. It would still have experiences when turned
> > off, or just stored on a CD, in your proposal.
>
> I agree in the AI case, the CD containing the AI's state will contribute
> to its measure. However I do not see this as more counterintuitive than
> your own proposal, where something similar happens if you program a
> computer to repeatedly load the AI's state from the CD and then run the AI
> algorithm for a few clock cycles.

        Why is that a problem? If you build a human brain, let it run a
little, then destroy it and rebuild one in the original state, it's the
same thing. Not so counterintuitive. But the CD alone is different - it
just sits there. Or it could be a book, with the pattern of memory states
just printed out on the pages. Or a sand sketch. Wet that sand and you
get 20 years to life in prison.

> > You obviously don't need to replace proteins and DNA, just the
> > neural net.
>
> I meant what are you going to use as the evolutionary mechanism?

        There have been such simulations; I don't know the details.
Without limitations on time and computing power, it shouldn't be that hard.
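
        For concreteness, here is a toy sketch of what I mean (my own
illustration in Python, not any actual published simulation): evolve the
weights of a tiny neural net by selection and mutation until it computes
XOR. All the names and parameters here are made up for the example.

```python
import math
import random

# Toy evolutionary simulation (my own sketch): evolve the 9 weights of a
# 2-2-1 tanh network so that it approximates XOR.

def net(w, x1, x2):
    # Two hidden units, one output unit, all tanh.
    h1 = math.tanh(w[0]*x1 + w[1]*x2 + w[2])
    h2 = math.tanh(w[3]*x1 + w[4]*x2 + w[5])
    return math.tanh(w[6]*h1 + w[7]*h2 + w[8])

CASES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def fitness(w):
    # Negative squared error over the four XOR cases (higher is better).
    return -sum((net(w, a, b) - t) ** 2 for a, b, t in CASES)

def evolve(pop_size=50, gens=200):
    random.seed(0)
    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # keep the fitter half unchanged
        pop = survivors + [
            [g + random.gauss(0, 0.2) for g in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]                                        # refill with mutated copies
    return max(pop, key=fitness)

best = evolve()
```

Because the fitter half survives unchanged each generation, the best
fitness never decreases; given enough generations and population, that is
the whole mechanism.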

> > So where are we going to get that authoritative estimate? I don't
> > really know any computer scientists, though NYU has them of course. It
> > would have to be someone interested, but not so interested as to take sides.
>
> It's going to be difficult to find an appropriate authority. This person
> would have to be an expert in computer science, physics, chemistry,
> biology, and neurology.

        Or he could be assisted by one expert from each discipline. But
how could we assemble such a panel? I want to get those estimates. This
is a rare case in which it is actually possible to settle a philosophical
question quantitatively. That alone makes it important.

> > Again, I'm not seeing it the same way as you. You still have not
> > answered my question: are you proposing that the program must print out
> > the conscious part of the string at the beginning of the tape, then erase
> > the rest of the program? Otherwise it is still a substring.
>
> No, I'm adopting the standard definition of a prefix machine (see Li and
> Vitanyi) which has separate input and output tapes.

        I'll have to look at it now that I have L&V, but your proposal
seems stranger all the time to me. Now not any string can be conscious,
just one on an official output tape? Do you think that other types of
machines exist as well (perhaps all types), but don't give rise to
consciousness?
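
        To make sure I have the idea straight, here is my own toy
illustration of a prefix (self-delimiting) machine, not the L&V
formalism itself: the program on the input tape encodes its own length
up front, so no valid program is a proper prefix of another, and the
result goes to a separate output tape.

```python
# Toy prefix machine (my own simplified encoding, not Li & Vitanyi's):
# a program is N ones, then a zero, then N payload bits; the machine
# copies the payload to a separate output tape. Encoding the length up
# front makes the set of valid programs prefix-free.

def run_prefix_machine(bits):
    """bits: a string of '0'/'1'. Returns (output, bits_consumed)."""
    it = iter(bits)
    n = 0
    consumed = 0
    for b in it:                 # read unary length: count 1s until a 0
        consumed += 1
        if b == '1':
            n += 1
        else:
            break
    output = []
    for _ in range(n):           # copy n payload bits to the output tape
        output.append(next(it))
        consumed += 1
    return ''.join(output), consumed

# '110' + '01' encodes the 2-bit output '01'; the machine halts after
# 5 bits and never reads the trailing '111'.
out, used = run_prefix_machine('11001' + '111')  # out == '01', used == 5
```

The point being: the machine decides for itself where its input ends,
and the output string lives on its own tape rather than being a
substring of anything.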

> > What about
> > the fact that it is surrounded by other bits in the orthogonal direction
> > from the other Turing machines?
>
> But the concept of surround doesn't make sense if the spatial dimension in
> the orthogonal direction is continuous. You can't even give a
> definition for the values of the neighboring bits of a given bit. It's
> like asking what are the two real numbers surrounding pi.

        But you said that you were *not* using a continuous array of
machines! You can't have it both ways. What happened to your discrete
machine cloning scheme, with the array of machines doubling in size at
each time step? Have you abandoned it?
        The time direction is discrete, right? What about the bits
surrounding it in the time direction, which is just another dimension?
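
        Just so we're arguing about the same scheme, here is my
reconstruction of the discrete version as I understood it from the
earlier exchange (the exact rules may well differ): at each time step
every machine in the array is duplicated, so the array doubles.

```python
# My reconstruction of the discrete cloning scheme (assumed, not quoted
# from the earlier message): start with one machine; at each time step
# every machine splits into two, so the array doubles in size.

def cloning_array(steps):
    machines = ["m0"]                  # a single machine at t = 0
    history = [len(machines)]
    for _ in range(steps):
        # Each machine spawns two successors, tagged 'a' and 'b'.
        machines = [m + c for m in machines for c in ("a", "b")]
        history.append(len(machines))
    return history

# After t steps there are 2**t machines.
```

In that picture every machine has a definite set of neighbors at each
step, which is exactly why I don't see how the continuous version can
replace it.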

                         - - - - - - -
              Jacques Mallah (jqm1584.domain.name.hidden)
       Graduate Student / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
            My URL: http://pages.nyu.edu/~jqm1584/
Received on Sat Jan 30 1999 - 11:52:41 PST
