Re: implementations

From: <hal.domain.name.hidden>
Date: Fri, 2 Jul 1999 10:46:38 -0700

David Chalmers attempts to address the question of when a computation
is instantiated at:

http://ling.ucsc.edu/~chalmers/papers/rock.html

Jacques Mallah points out some problems with Chalmers and offers his own
ideas at:

http://pages.nyu.edu/~jqm1584/cwia.htm#II3
http://pages.nyu.edu/~jqm1584/newideas.htm

I agree with Jacques that Chalmers' biggest problem is his simplistic
assumption that physical subsystems must occupy specific regions of space.
This would rule out people as conscious entities, since we move around.
You could patch this up, but even a patched version would still be far
too limited.

However, Chalmers makes another mistake. I don't think it is crucial to
his argument, but he spends a lot of time on it: the assumption that a
conscious state machine must have input, i.e. that an inputless system
cannot be conscious.

I think this is clearly false, and any explanation of consciousness which
requires the existence of input is going to be wrong. The reason is that
you can take an input-needing state machine and embed it in a larger
system which supplies the input. The system as a whole instantiates a
conscious entity but has no input.
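
To make the embedding point concrete, here is a minimal sketch in
Python. All the class names and the toy transition rules are mine,
purely for illustration; the only point that matters is that the
composite system's step() takes no argument at all.

    class BrainMachine:
        """A state machine that demands an input symbol on every step."""
        def __init__(self):
            self.state = 0

        def step(self, input_symbol):
            self.state = (self.state + input_symbol) % 17  # toy transition rule
            return self.state                              # behavior fed back out

    class Environment:
        """A deterministic simulation that manufactures the brain's input."""
        def __init__(self):
            self.state = 1

        def step(self, brain_output):
            self.state = (3 * self.state + brain_output) % 31  # toy dynamics
            return self.state % 2                              # next input symbol

    class ClosedSystem:
        """Brain plus environment; note that step() takes no input."""
        def __init__(self):
            self.brain = BrainMachine()
            self.env = Environment()
            self.signal = 0

        def step(self):
            out = self.brain.step(self.signal)
            self.signal = self.env.step(out)

    system = ClosedSystem()
    for _ in range(10):
        system.step()  # the composite evolves with no external input

If the inner machine was conscious when driven from outside, nothing
about its operation changes when the driver is folded into the system.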

We have seen examples of this in several recent sci-fi dramas, including
The Matrix. People live in a simulated world with all their input
provided by the computer simulation. (In The Matrix, people still had
organic brains, but in other fiction they are simulated as well.) It
seems very plausible that if you can simulate a brain on a computer, you
can simulate an environment for it to interact with as well. To oppose
this possibility you must give specific reasons why it cannot happen,
and Chalmers does not address the issue.

Hence it is sufficient to consider an inputless state machine as posing
the problem of instantiation, and specifically of instantiation of
consciousness. Chalmers' attempt to require input is not valid.

However, I think that Chalmers' basic strategy of looking at the
substructure of the mathematical process being instantiated, and trying
to map it onto physical substates of the system claimed to instantiate
it, still works even with inputless systems. So his whole discussion of
the need for input is in the end a red herring, but it does not
ultimately detract from his basic argument.
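
To put this strategy in concrete terms (this is my own gloss, not
Chalmers' formal criterion), the question is whether there exists a
mapping from physical states to computational substates under which
the physical evolution respects the machine's transition table:

    def implements(physical_trace, mapping, transition):
        """physical_trace: successive physical states (hashable tokens);
        mapping: dict from physical state -> computational substate;
        transition: dict from computational substate -> next substate."""
        for p, p_next in zip(physical_trace, physical_trace[1:]):
            if mapping[p_next] != transition[mapping[p]]:
                return False
        return True

    # Toy example: a two-state clock implemented by a four-state system.
    trace = ["a", "b", "c", "d", "a", "b"]
    f = {"a": 0, "b": 1, "c": 0, "d": 1}
    delta = {0: 1, 1: 0}
    print(implements(trace, f, delta))  # True

Note that there is no input anywhere in this check, which is why the
strategy survives the move to inputless systems.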

Turning then to the substate argument, Chalmers makes a good start but
founders on the difficulty of objectively and unambiguously defining
what counts as a substate of a physical system. Physical position is
the best criterion he can offer.

Here is where Jacques attempts to improve the argument. Rather than
a fixed physical region, he proposes that there is a program which
identifies, for each substate of the algorithm being instantiated, the
physical points within the system that correspond to it. This is called
the SMA, or Simplest Mapping Algorithm.

This approach includes Chalmers' as a subcase, where the SMA would
simply choose a fixed location for each substate. However, the SMA is
more general, as it can accommodate motion, dynamics, and change in the
physical system. If it turns out that neurons grow and move around so
that they eventually don't overlap with their original locations,
Jacques' method could allow for this.
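
The difference is easy to illustrate. A Chalmers-style mapping assigns
each substate a fixed region once and for all, while an SMA-style
mapping may relocate the region over time. The Python fragment below is
my own toy contrast, not Mallah's actual formulation:

    # Chalmers-style: each substate owns a fixed spatial interval forever.
    def fixed_region_mapping(substate, time):
        regions = {0: (0.0, 1.0), 1: (1.0, 2.0)}
        return regions[substate]

    # SMA-style: the region may track the system's dynamics, e.g. a
    # neuron that slowly drifts away from its original location.
    def tracking_mapping(substate, time):
        lo, hi = {0: (0.0, 1.0), 1: (1.0, 2.0)}[substate]
        drift = 0.01 * time  # toy model of slow migration
        return (lo + drift, hi + drift)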

The main problem that I see with this approach is that Jacques proposes
to use Kolmogorov complexity to judge the simplicity of the SMA. What
is needed is a measure of simplicity over algorithms, so that we can
take the simplest algorithm which satisfies the mapping constraints.
We also want the SMA to be uniquely defined, and in fact he has to add
a few ad hoc rules to deal with some possible ambiguities in order to
get uniqueness.
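
In idealized form the recipe would look like the following. This is a
conceptual sketch only; enumerate_programs and satisfies are
hypothetical placeholders, and the search is of course wildly
infeasible:

    def simplest_mapping(constraints, enumerate_programs, satisfies):
        """Return the shortest mapping program meeting the constraints."""
        length = 1
        while True:
            for program in enumerate_programs(length):  # all programs of this length
                if satisfies(program, constraints):
                    return program  # first hit is minimal by construction
            length += 1

Even this idealization needs extra rules when several programs of the
same minimal length satisfy the constraints, which is exactly where the
ad hoc tie-breaking comes in.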

The problem is that Kolmogorov complexity (basically, shortest program
length) is not a good tool for this, and in fact there may not be any
good tool for this: simplicity may not be a uniquely definable concept.
It may be that in trying to pin down the question of instantiation, we
have merely transformed it into the question of algorithmic complexity,
which is equally unresolvable.

There are two problems with Kolmogorov complexity. The first is that
it is uncomputable; see Chaitin's extensive work for discussion. For
anything other than very short strings, you can never be sure what the
K. complexity of a string or program actually is.

Now, you can bound K. complexity from above: you can show that a
program has no more than a certain amount of K. complexity. But it may
always have less. This means that any attempt to identify the SMA must
fail, because there could always be another mapping algorithm which
turns out to be simpler.
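
The asymmetry is easy to demonstrate: any compressor gives an upper
bound, since the compressed form plus a fixed decompressor regenerates
the string, so K(x) <= len(compressed(x)) + O(1). But no such trick
yields a lower bound. A quick Python illustration:

    import zlib

    def kc_upper_bound(data: bytes) -> int:
        """Length of one particular compressed encoding of data; the
        true K. complexity can only be this or smaller, never certified
        larger."""
        return len(zlib.compress(data, 9))

    s = b"ab" * 1000
    print(len(s), kc_upper_bound(s))  # 2000 vs. a few dozen bytes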

This is not actually a fatal blow to Jacques' idea. It is possible
that, although K. complexity is uncomputable and hence we can never
know the SMA, it nevertheless exists objectively: there is an objective
SMA. We can never learn what it is, but its objective existence could
be enough to ground the instantiation question and therefore the
question of which systems are conscious. This would not be a completely
comfortable result, with the objective consciousness of systems forever
unknowable, but it would at least be consistent.

However, I think there is a worse problem: K. complexity is not
uniquely defined. It is defined only relative to some specific
universal Turing machine (UTM), and two different UTMs will give
different answers for the K. complexity of a string or program. (The
usual invariance theorem only says the answers agree up to an additive
constant that depends on the pair of machines, which is no help when
comparing two particular mapping algorithms.) In fact, given any
particular string, you can construct a UTM which assigns it an
arbitrarily large or small K. complexity as measured by that machine.
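
Here is a caricature of that machine-relativity in Python. The
"machine" below is not a real UTM, and reference_machine is a
hypothetical stand-in for some fixed interpreter, but the trick it
plays is the legitimate one: hard-wire a favored string into the
machine's shortest program.

    def reference_machine(program: bytes) -> bytes:
        # Stand-in for some fixed reference interpreter; the details
        # don't matter, only that it is fixed in advance.
        return program

    TARGET = b"any string we want to privilege"

    def rigged_machine(program: bytes) -> bytes:
        # Rigged so the one-byte program b"\x00" expands to TARGET;
        # everything else defers to the reference interpreter.
        if program == b"\x00":
            return TARGET
        return reference_machine(program[1:])

    print(len(TARGET), "bytes, but rigged description length:", 1)

On the rigged machine TARGET has complexity 1; a machine rigged against
it can be made to require an arbitrarily long program for the same
string.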

I think this objection is probably fatal to Jacques' idea. We need the
SMA to be uniquely defined. But this cannot be true if there exist UTMs
which disagree about which mapping algorithm is simplest. Within the
mathematical framework of K. complexity, all UTMs are equally valid.
So there is no objective preference of one over the other, hence there
can be no objective meaning to the SMA.

In order to fix this, we have to identify a particular UTM which we will
use to measure the K. complexity. There has to be some particular Turing
machine which is preferred by the universe. You could choose one and
produce an "objective theory of consciousness", but this would be yet
another ad hoc rule which would make the theory less persuasive.

Hal