RE: The Time Deniers

From: Lee Corbin <lcorbin.domain.name.hidden>
Date: Fri, 8 Jul 2005 15:42:49 -0700

Stathis writes

> Lee Corbin writes:
>
> > But it is *precisely* that I cannot imagine how this stack of
> > Life gels could possibly be thinking or be conscious that forces
> > me to admit that something like time must play a role.
> >
> > Here is why: let's suppose that your stack of Life boards does
> > represent each generation of Conway's Life as it emulates a
> > person.... If a stack of gels like this amounts to the conscious
> > experience of an entity, then it certainly wouldn't hurt to move
> > them farther apart... Next, we alter the orientations of the gels...
> >
> > So, for me, since it is absurd to think that either vibrating
> > bits of matter (an example Hal Finney quotes) or random patches
> > of dust (Greg Egan's theory of Dust) can actually give runtime
> > to entities, then I have to draw the line somewhere. Where I
> > have always chosen to draw it is this: if states, no matter how
> > represented, are not causally connected with each other,
> > consciousness does not obtain.
>
> If you remember Egan's "dust" theory in Permutation City, you probably also
> remember that he did the same manipulations of a computation running in time
> as you suggest doing with the Life board stacks in space. Do you not think a
> computation would work if chopped up in this way?

If you are speaking of the earlier part of the Greg Egan novel
(which I claim to entirely understand) then no, he did not isolate
a person's experiences down to *instants*. He would run a minute's
worth now, a minute's worth then, and mix up the order.

But! The only causal discontinuities were *between* the successive
sessions (each session at least a minute long---but I'd be happy
with sessions a millisecond long).
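
As a toy illustration only (my own sketch, not anything from the
novel or from Stathis), that kind of chopping-up can be modeled
like this: run a deterministic computation in sessions, with each
state inside a session computed causally from the one before it,
then replay the whole sessions in a scrambled order. The only
causal discontinuities fall at the session boundaries.

   import random

   def step(state):
       # Any deterministic rule will do; here, a trivial counter.
       return state + 1

   def run_sessions(initial, session_len, n_sessions):
       sessions, state = [], initial
       for _ in range(n_sessions):
           session = []
           for _ in range(session_len):
               state = step(state)   # causally connected within a session
               session.append(state)
           sessions.append(session)
       return sessions

   sessions = run_sessions(0, session_len=5, n_sessions=4)
   random.shuffle(sessions)          # mix up the order of whole sessions
   for s in sessions:
       print(s)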

> The idea that any computation can be implemented by any random process,
> given an appropriate programming language (which might be a giant lookup
> table, mapping [anything] -> [line of code]) is generally taken as being
> self-evidently absurd.

Not sure I understand. Since you are talking about a *process*,
then for my money we're already half-way there! (I.e., the
Time Deniers have not struck.) Suppose that we have a
trillion-by-trillion Life board and the program randomly assigns
pixels for each generation. Then, yes, I guess I agree with you:
we have achieved nothing. The random states are admittedly
connected by causal processes (your machine is an ordinary causal
process operating in *time*), but nothing intelligent is being
implemented. It's not even implementing a wild rain-storm.

(Of course, the Time Deniers, as I understand them, would be
perfectly happy to let this machine run for 10^10^200 years,
and then identify (pick out) a sequence of apparently related
states, in fact, a sequence that seemed to be you or me having
a conscious experience. They'd be quite happy (many of them
at least) to say that once again Stathis or Lee had been
implemented in the universe and had had some conscious
experience (i.e. OMs).)
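
To put the contrast in concrete terms (a minimal sketch of my own,
scaled far down from a trillion-by-trillion board): the first update
rule below computes each generation from the previous one under
Conway's rules, so successive board states are causally connected;
the second assigns each new "generation" at random, so the machine
is still an ordinary process running in time, but no board state
depends on the one before it.

   import random

   def life_step(board):
       # Conway's rules: the next generation is computed from the
       # current one, so successive states are causally connected.
       rows, cols = len(board), len(board[0])
       nxt = [[0] * cols for _ in range(rows)]
       for r in range(rows):
           for c in range(cols):
               live = sum(board[(r + dr) % rows][(c + dc) % cols]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0))
               nxt[r][c] = 1 if live == 3 or (live == 2 and board[r][c]) else 0
       return nxt

   def random_step(board):
       # The random machine: still an ordinary process running in
       # time, but no board state depends on the one before it.
       return [[random.randint(0, 1) for _ in row] for row in board]

   # A tiny 8x8 board stands in for the trillion-by-trillion one.
   board = [[random.randint(0, 1) for _ in range(8)] for _ in range(8)]
   for _ in range(10):
       board = life_step(board)      # causally connected succession
       # board = random_step(board)  # causally disconnected succession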

> The argument goes that the information content
> of the "programming language" must contain all the information the random
> system is supposed to be producing, so this system is actually superfluous.
> This means we have won no computational benefit by setting up this odd
> machine.

I'm following so far.

> However, the programming language is only there so that the machine
> can interact with the environment. If there is no programming language
> and no I/O, the machine can be a complete solipsist.

You've lost me, sorry. Could you explain what you mean and
where you are going here?

> This might occur also if
> some future archaeologist finds an ancient computer running an AI, but there
> is no manual, no terminal, no keyboard, and nobody knows how it is
> programmed any more. If the archaeologist could figure out how to power up
> this computer, wouldn't the AI be implemented as per usual?

In the first sentence here, the archaeologist finds the machine
running. Now, for me, if it's truly implementing an AI, then the
AI may still be having a great time working on the Riemann
Hypothesis, and I don't see why it's important if it's a
solipsist or a hermit.

In the second sentence, I infer that the machine is not powered up.
Yes, then, if the archaeologist finds the right AC input voltage
and gets it going, then we have the first case (i.e. sentence one),
and the AI would be implemented as usual.

> You might say that in the last example the states were "causally connected",
> while in the first they were not. But why should that make any difference,
> especially to a solipsist?

By "matters to a solipsist", you are referring to the AI himself
or to an outsider? As for me, the states of a running process
are by definition causally connected (this is what "process")
means to me, but then, yes, the states reached could be a sort
of random hash as you were speaking of earlier. In that case,
then it might as well be a succession of frozen states, or dust
between the galaxies, or whatever, in terms of (not) being able
to emulate a conscious entity.

I'm driven by this continuum I see: at one extreme are people
that I care about (me, you, etc.), and at the other extreme
are vibrations of a crystal or patterns of dust between the
galaxies that I don't care about. I draw the line between them
as follows:

   A necessary condition for the states to be evidence of a
   conscious computation is that they be causally connected.

   A sufficient condition is (almost always) that they pass
   the Turing test, or that they belong to a process that would
   have passed the test during the interval in question, had
   someone conducted it.

Lee