Re: 3 possible views of "consciousness"

From: <hpm.domain.name.hidden>
Date: Tue, 30 Jan 2001 22:34:47 EST

"Jesse Mazer" <lasermazer.domain.name.hidden>:

> So then we get the same problem, that it seems sort of arbitrary to
> say that a computer is a "good" implementation but a rock is not.

I think a rock is a perfectly good implementation of any
self-contained computation, say of a universe and its inhabitants.

But if you want interaction, the external connection of the
implementation matters: you probably don't have the means to translate
to and from the thermal rock motions that represent Shakespeare's
mannerisms and speech in an interactive Bard interpretation. I like
to imagine a very high-dimensional "interpretation space". The rock
Shakespeare is an astronomical distance away from you in
interpretation space (just as remote as if a flesh Shakespeare
existed, but on a planet a billion light years away). Someday it
might be possible to bridge the distance, perhaps with a powerful
translating computation (but the latency might be too long, as with
a powerful radio link to the distant planet) or by reimplementing
Shakespeare (or yourself) in a more accessible encoding (like bringing
the planetary Shakespeare (or yourself) nearer by starship).
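
A toy sketch of the point in Python (every name here is mine, purely
for illustration): any sequence of rock states can be declared to
implement a computation, but only through an interpretation map, and
building that map already requires performing the computation, so
the decoding box, not the rock, ends up doing the work.

    import random

    def target_computation(step):
        # The computation the rock is claimed to implement.
        return step * step

    # "Thermal motions": an arbitrary, structureless state sequence.
    rock_states = [random.random() for _ in range(10)]

    # The interpretation map from rock states to computational states.
    # Constructing it forces us to run target_computation ourselves.
    interpretation = {s: target_computation(i)
                      for i, s in enumerate(rock_states)}

    for s in rock_states:
        print(interpretation[s])  # 0, 1, 4, 9, ...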


> It's true that under the right mapping, the ticks of a clock can be seen as
> doing any "computation" you please, including a simulation of an intelligent
> A.I. But if you wanted to actually implement this mapping in the physical
> world so you could interact with the A.I., you'd end up reproducing the
> causal structure of the A.I. on the computer (or brain) responsible for the
> mapping. That's my intuition anyway--since I don't have a precise
> definition of "causal structure" I can't be sure.

A clock isn't in fact a possible implementation of something that's
supposed to remain interactive with its old world, because there's no
way to give it input. A state machine with input, but whose states
are labelled simply 1, 2, 3 ... by some kind of lattice enumeration,
could be an implementation. But to actually make the state machine
interact in the old way, you'd have to bridge the interpretative
distance with a translation box. The translation box could take the
form of a humongous lookup table that says what output each entered
state corresponds to, and which state transition each possible input
produces.
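
A toy sketch of such a translation box in Python (the states, inputs
and replies are all invented for illustration): the states are bare
integers with no intelligible structure, one table supplies the
output for each entered state, and another supplies the state
transition for each possible input.

    # Output for each entered state.
    outputs = {
        1: "Good morrow!",
        2: "Shall I compare thee to a summer's day?",
        3: "Parting is such sweet sorrow.",
    }

    # State transition for each (state, input) pair.
    transitions = {
        (1, "a poem?"): 2,
        (2, "farewell"): 3,
        (3, "hello"): 1,
    }

    state = 1
    for line in ["a poem?", "farewell"]:
        state = transitions[(state, line)]
        print(outputs[state])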

But if the interaction is maintained, why fret about the internal
implementation? That should be a private matter between an entity and
its maker, including its code optimizer, which might take a
straightforward, Chalmers-blessed causal AI formulation and turn it
into a humongous lookup table purely for the sake of efficiency.
Chalmers might say that level 20 optimization (partial tables) leaves
the system conscious, but level 21 (full table) turns it into a
zombie. I say he's blowing smoke: there's no justification for such a
distinction.
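
A toy sketch of that optimization ladder in Python (the update rule
and the level labels are mine, purely for illustration): the same
small causal rule computed directly, then partially tabulated as
results are needed, then fully tabulated in advance, with all three
externally indistinguishable.

    from functools import lru_cache
    from itertools import product

    def causal(state, inp):
        # The straightforward, "causal" formulation.
        return (state + inp) % 8

    @lru_cache(maxsize=None)
    def partial_table(state, inp):
        # "Level 20": table entries filled in only as they are used.
        return causal(state, inp)

    # "Level 21": every entry computed up front; at run time nothing
    # remains but retrieval.
    full_table = {(s, i): causal(s, i)
                  for s, i in product(range(8), range(8))}

    assert all(causal(s, i) == partial_table(s, i) == full_table[(s, i)]
               for s, i in product(range(8), range(8)))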

> [inhabitants] would be free to say that what they mean by the term
> "literary goodness" is completely different from the quantity in the
> axioms.

Of course I assume the goodness axiom operates on the universe's
primitives in conjunction with the other axioms, producing its own
unique theorems, which would affect experiments, evolution and brain
operation.


> "mind fire" (one of my favorite parts of the book, incidentally)

Thanks very much for the comment. Some of the reviewers thought
I'd simply gone loony there.