Re: MGA 1

From: Hal Finney <hal.domain.name.hidden>
Date: Sun, 23 Nov 2008 09:39:07 -0800 (PST)

Allow me to try analyzing MGA 1 in the context of what I call the UDASSA
framework, which I have discussed here on this list in years past. First
I will briefly review UDASSA.

In UDASSA, the measure of an observer experience, an observer moment, or
for that matter anything that can be thought of as an information pattern,
is its measure in the Universal Distribution, a mathematically defined
probability distribution which is basically the probability that a given
Universal Turing Machine will output that bit pattern, given a random
program as input. This turns out to be approximately equal to 1/2 to the
power of AC, where AC is the algorithmic complexity of the bit pattern,
i.e. the length of the shortest program that outputs that bit pattern.
More precisely, the measure is actually the sum of contributions from all
programs that output that pattern. For each program, if its length is L,
its contribution to the measure of that pattern is 1/2 to the L power.
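
To make that sum concrete, here is a small Python sketch. The enumeration
of (program, output) pairs is a made-up toy -- no such enumeration is
actually computable for a real universal machine -- but it shows how each
program of length L that outputs the pattern contributes 1/2 to the L power:

from typing import Iterable, Tuple

def universal_measure(target: str,
                      programs: Iterable[Tuple[str, str]]) -> float:
    """Approximate the Universal Distribution measure of `target`.

    `programs` is assumed to be an enumeration of (program_bits,
    output_bits) pairs for some fixed universal Turing machine -- a
    purely hypothetical enumeration.  Each program of length L whose
    output equals the target pattern contributes 2**-L to the measure.
    """
    measure = 0.0
    for program_bits, output_bits in programs:
        if output_bits == target:
            measure += 2.0 ** -len(program_bits)
    return measure

# Toy example: two hypothetical programs output the pattern "1011",
# one 5 bits long and one 9 bits long; the shorter one dominates.
toy_programs = [("00110", "1011"), ("010011011", "1011"), ("111", "0000")]
print(universal_measure("1011", toy_programs))  # 2**-5 + 2**-9 = 0.033203125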

UDASSA also assumes that experiences that have higher measure are more
likely to be experienced; hence we would predict that we are more likely
to experience something with high measure than something with low measure.
This is the ASSA part.

The first step to analyzing most of our thought experiments is to assume
that consciousness can be represented as an abstract information pattern
of some yet-to-be-determined nature. Assume that as our understanding
of psychology and brain physiology grows, we are able to achieve a
mathematical definition of consciousness, such that any system which
implements a pattern which meets various criteria would be said to be
conscious. Brains presumably would then turn out to be conscious because
they implement (or "instantiate") patterns that follow these rules.

In the UDASSA framework, we would apply this understanding and model
of consciousness slightly differently. Rather than asking whether a
particular brain or other system implements a given consciousness, or
more generally asking whether it is conscious at all, we would ask what
contribution the system in question makes to the measure of a given,
mathematically-defined, conscious experience. Systems which we would
conventionally say "are conscious", like brains, would be ones which
make a large contribution; systems which do not, like rocks, would make
virtually no contribution to the measure.

The manner in which a brain makes a contribution to the measure of the
abstract information pattern representing its conscious experience is
like this: The measure of the abstract consciousness is based on the
size of the shortest program which can output it. A short program to
output a given person's conscious experience would work as follows: start
a universe off
with a Big Bang in a simple initial state and with simple physical laws;
run it for a while; and then look at a given location in space-time for
patterns of activity which can be specified by simple rules, and record
those patterns. The rules in this case would be those corresponding to
neural events in the brain, in whatever form is necessary to output the
appropriate mathematical representation of consciousness.
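
Schematically, such a program would have the following shape. Everything
in this Python sketch is an illustrative stand-in (the stubbed-out laws,
initial state, coordinates and extraction rules are not meant to be
realistic); the point is only that the program's length is dominated by
the coordinates and the extraction rules, not by the raw brain data:

def run_universe(initial_state, laws, steps):
    """Hypothetical: evolve a universe from a simple initial state
    under simple laws for some number of steps (stubbed out here)."""
    state = initial_state
    for _ in range(steps):
        state = laws(state)
    return state

def extract_pattern(universe_state, location, rules):
    """Hypothetical: look at a given space-time location and record
    the activity picked out by simple rules (neural events, above)."""
    return rules(universe_state, location)

def conscious_experience_program():
    # The whole program consists of: simple laws, a simple initial
    # state, a space-time coordinate, and simple extraction rules.
    initial_state = 0                        # stand-in Big Bang state
    laws = lambda s: s + 1                   # stand-in physical laws
    location = (42, 7, 13, 10**9)            # stand-in space-time coordinate
    rules = lambda state, loc: (state, loc)  # stand-in extraction rules
    universe = run_universe(initial_state, laws, steps=1000)
    return extract_pattern(universe, location, rules)

print(conscious_experience_program())  # -> (1000, (42, 7, 13, 1000000000))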

Some years back on this list, I estimated that a particular observer
moment might correspond to a program of this nature with a length in
the tens of thousands of bits. This is very short considering that
the raw data of neural activity would be billions of bits; any program
which tried to use a rock or some other non-conscious source as its
raw material for outputting the bit pattern would have to hard-code the
entire bit pattern within itself. Since the contribution of a program
is inversely exponential in the length of the program, the enormous
economy of brain-scanning type programs means that they would contribute
essentially all the measure to conscious experiences, rocks essentially
none, and hence that we "really are" brains in this sense.
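
Using those order-of-magnitude estimates (which are assumptions, not
measurements), the disparity in contributions is easy to compute on a
log scale:

import math

# A brain-scanning program of a few tens of thousands of bits versus a
# program that hard-codes billions of bits of raw neural data.  Each
# contributes 2**-L, so compare on a log10 scale; the raw ratio would
# overflow any float.
scan_program_bits = 30_000         # "tens of thousands of bits"
hardcoded_bits    = 1_000_000_000  # "billions of bits"

log10_ratio = (hardcoded_bits - scan_program_bits) * math.log10(2)
print(f"scan program outweighs the hard-coded one by ~10^{log10_ratio:.0f}")
# -> on the order of 10^300,000,000; essentially all the measure comes
#    from the brain-scanning route.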

Turning now to the zombie based thought experiments, let us think of Alice
whose brain malfunctions for a while but who gets lucky due to cosmic ray
strikes and so the overall brain pattern proceeds unchanged. What impact,
if any, would such events have on her consciousness in this framework?

Under UDASSA, we would not ask if she is conscious or a zombie. We would
ask what contribution this instance of her experience makes to the measure
of the information pattern representing her conscious experience during
this time (that is, what her experience would be if all were working
well). If her brain still contributes about as much as it would if it
were working right, we'd say that she is conscious. If it contributes
almost nothing, we'd say she was not conscious and was, in that sense,
a zombie. The UDASSA framework also allows for intermediate levels of
contribution, although due to the exponential nature of the Universal
Distribution, even small increases in program size will greatly reduce
the amount of contribution.

Assuming that the shortest possible program for outputting the
mathematical representation of Alice's conscious experience is based on
the brain scan concept sketched above, what happens when a neuron stops
working? Well, the brain scan is not going to work the same way. Now,
if it is a single neuron, it's likely that there would be no noticeable
effect on consciousness. The brain is an imperfect machine and must have
a degree of redundancy and error correction. It is likely that the fact
that one neuron has stopped working would be caught and corrected by the
error-correction part of the brain-scan program, and its output would not
be changed. So although the input is different, the output is probably
the same and so we would say that Alice is not a zombie. Fundamentally
this is because the brain is immune to noise at this level (we assume).
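
A toy illustration of the kind of error correction I have in mind (the
redundant-channel scheme is an assumption made for the sake of the
example, not a claim about how the brain or the optimal scan program
actually works):

from collections import Counter

def majority_decode(redundant_readings):
    """Toy error correction: the same logical neural event is read from
    several redundant channels, so a single failed channel (one dead
    neuron) is outvoted and the decoded output -- and hence the
    program's output -- is unchanged."""
    value, _ = Counter(redundant_readings).most_common(1)[0]
    return value

# Three redundant channels carry the same bit; one neuron fails and
# reports 0 instead of 1.  The decoded value is still 1.
print(majority_decode([1, 0, 1]))  # -> 1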

Now let us suppose a more serious failure, thousands of neurons, enough
that it really would make a difference in her experience. But we assume
that just through luck, cosmic rays activate those neurons so that there
is no long term change in her thought processes or behavior. The rest
of her brain works normally. In this case, the brain scanning program
would possibly output a different result. It would be working by tracking
neural events, synaptic activity and so on. Depending on the details,
the fact that the neurons continued to fire in the right patterns might
not be good enough.

Let's suppose that the neurons are firing but their synapses are broken
and are not releasing neurotransmitters. Suppose it turns out that
the optimal brain-scanning program studies synaptic activity as well
as overall neural activity. In that case it would output a different
result. In order to get the program to output the same thing as it would
have if the brain weren't broken, we would have to make it more complex,
so that it treats the cosmic ray activity as just as good as normal
neural stimulation for making neurons fire and for producing the desired
output pattern. This
would complicate the program and probably make it substantially larger, at
least hundreds of bits. This would then decrease its contribution to the
measure of Alice's conscious experience by at least a factor of 2 to the
100th power. Alice would have to be thought of as a zombie in this case.
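
For scale, that penalty factor works out to about 1.3 times 10 to the 30th:

# Plain arithmetic, nothing assumed beyond the "hundreds of bits"
# estimate above: an extra 100 bits of program length costs a factor
# of 2**100 in measure.
penalty = 2 ** 100
print(penalty)           # 1267650600228229401496703205376
print(f"{penalty:.2e}")  # about 1.27e+30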

Now, this is based on the assumption that the optimal brain-scanning
program would be disrupted by whatever aspect it is about Alice's brain
which is "broken". By playing around with various ways of doing brain
scanning and various ways a brain might break, we can get different
answers. But at a minimum, if Alice's neurons had never been wired up
to each other in her whole life, and all her life they had fired by sheer
luck in the same patterns they would have had if properly
connected, then it seems clear that no simple program could extract the
proper causal connections among the neurons that would be a necessary
prerequisite to analyze their logical relationships and output a concise
mathematical representation of her consciousness. So in that case she
would certainly be a zombie. A momentary interruption, with the causal
channels still in place but perhaps temporarily blocked, might still be
tolerated, again depending on the details.

Note that in principle, then, whether Alice would be a zombie is an
empirical question that can be solved via sufficient study of psychology
and brain physiology (to understand what characterizes consciousness),
and computer science (to learn what kinds of programs can most efficiently
translate raw brain measurements into the mathematical representation
of conscious experience).

This framework also allows answers to the various other thought
experiments which have been raised, such as whether a sufficiently
detailed recording of the brain's activity would be conscious. UDASSA
suggests that not only is the answer yes (assuming that a simple program
can infer the same logical relationships as it could from the actual
brain itself), but that actually such a recording might contribute
vastly more to the measure of the conscious experience than an ordinary
brain! That is because the recording persists, and so the program to
output the mathematical representation of the conscious experience
can be shorter due to the reduced precision necessary in specifying
the time coordinate where the scan should start. Hence such "static"
recordings can apparently produce experiences with much higher measure
than ordinary brains. We might therefore endorse something like Nick
Bostrom's Simulation Argument (simulation-argument.com) with the proviso
that not only are we living in simulations, but that the simulations
are recorded in persistent storage in some form.
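
As a rough illustration of the point about the time coordinate (the
durations below are made-up assumptions, chosen only to show the effect
of coarser time precision on program length):

import math

# A live brain state must be located in time to (say) one-second
# precision within roughly 10**17 seconds of cosmic history, while a
# recording that persists for a million years can be located far more
# coarsely.  Fewer bits for the coordinate means a shorter program and
# hence exponentially more measure.
cosmic_history_s   = 1e17
live_window_s      = 1.0
recording_window_s = 3.15e13   # about a million years, in seconds

bits_live      = math.log2(cosmic_history_s / live_window_s)
bits_recording = math.log2(cosmic_history_s / recording_window_s)
bits_saved     = bits_live - bits_recording

print(f"time coordinate: ~{bits_live:.0f} bits vs ~{bits_recording:.0f} bits")
print(f"measure multiplied by roughly 2**{bits_saved:.0f} "
      f"~= {2**bits_saved:.1e}")

The particular numbers are arbitrary, but any persistence at all buys an
exponential bonus in measure.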

Hal Finney
