Information content of the brain

From: <hal.domain.name.hidden>
Date: Mon, 1 Feb 1999 18:47:57 -0800

Information content of the brain

Ralph Merkle, at http://www.merkle.com/cryo/techFeas.html#return5, has
some estimates of the information content of the brain. He is taking a
very conservative view and overestimating the content by looking at how
much information would be necessary to record every molecule in the brain.
He comes up with 50 bits per molecule, with 4E25 molecules in the brain
(99% of which are water). This leads to an estimate of about 2E27 bits
to represent the entire brain.

Another estimate he quotes is just for the information content of the
synapses in the cerebral cortex, which is 1E13 bits, and Merkle even
suggests that long-term memories might involve only 1E9 bits.

The gap between 1E13 and 2E27 (let alone 1E9) is embarrassingly large,
big enough to make it questionable whether these estimates are of any
value whatsoever. However let us proceed and see what we can do with
these numbers.


Simulating a brain in isolation

Suppose we want to write a program which will produce a simulation of
a typical human brain, as it experiences a life typical of those we
live today. Let's suppose we want to simulate all its activity over a
70 year lifetime. We don't want to simulate the rest of the universe,
we'll just simulate the brain and make it "think" it is interacting with
the universe.

The actual brain simulation program itself will probably not be very
big compared to the larger numbers above. It needs to simulate the
behavior of neurons, and possibly of other systems like the chemicals
which diffuse from the bloodstream. This will not be trivial, but the
chemistry involved is not all that complex. Big programs today are a
few hundred megabytes (a few times 1E9 bits), which is small compared
to 1E13 bits, and the simulator should not need to be bigger than that.
So the part of the program that simulates brain internals is basically
free.
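
To see how lopsided that comparison is (my unit conversion, nothing
more):

    # A few hundred megabytes of simulator code, in bits, versus the
    # 1e13-bit synapse estimate.
    simulator_bits = 300e6 * 8          # 300 MB -> 2.4e9 bits
    print(simulator_bits / 1e13)        # ~0.00024: a fraction of a percent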

We also need to simulate the brain's input. This is tricky, since we
don't want to simulate the universe that would be giving it the input.
What we need to do is to hard-wire the input which the brain will receive
over that 70 year period. Everything the brain will see, hear, touch,
smell and taste over 70 years is going to be sitting in a table that our
program will provide.

How much information is this? How much "bandwidth" does the brain consume?

TV is about 1E7 bits per second. If we increase this by a factor of
10 it would roughly account for our visual input (we don't see as much
as it seems like; only a small part of our visual field is seen in high
resolution). That gives 1E8 bits per second for vision. Vision is the
primary sense modality for humans, but let's multiply it by a factor of
5 to account for the other senses (handwaving madly here!). That gives
5E8 bits per second of input, over all modalities, for the human brain.
Over 70 years, which is 2E9 seconds, that gives a total of 1E18 bits
of input.
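
The same estimate as a few lines of Python, using the figures above:

    # Lifetime sensory input, from the handwaved per-second rates.
    vision_bits_per_sec = 1e7 * 10                  # TV rate times 10
    total_bits_per_sec  = vision_bits_per_sec * 5   # all modalities
    lifetime_seconds    = 70 * 365.25 * 24 * 3600   # about 2.2e9 s
    print(f"{total_bits_per_sec * lifetime_seconds:.0e} bits")
    # -> 1e+18 bits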

This falls between the estimates above of the brain's information
content. When we encode this information in a table so that we can
present it to our simulated brain, the table becomes part of the size
of the program. The result is to narrow our bracket, putting it at
about 1E18 to 2E27 bits.

That is the information for this method of simulating an entire human
lifetime, without trying to actually simulate the universe which the
human appears to live in.


Simulating a universe which evolves a brain

We then ask how hard it would be to simulate a brain the old-fashioned
way, by setting up a universe which uses the laws of physics that we
observe, and running it.

We face a problem right at the beginning here. Our universe appears
to have a random component. So we have some choices. We can simulate
the universe as it appears, and include some mechanism in the program to
make random selections as specified by quantum mechanical measurements.
This mechanism could either be a software random number generator, or a
table of random bits which would be used to make the random choices.
Or, we could adopt a many-worlds approach, and simulate the universe
state without collapse.

In any case, the physics involved is probably pretty simple. We don't
have a full theory, but our best shots at it, the superstring and
membrane theories that try to marry QM and relativity, really aren't
very complicated. I'm sure you could capture their essence in a few
megabytes or even kilobytes.

The initial conditions, as far as we can tell, are also simple. Matter
appears to have been quite uniformly distributed in the Big Bang. This
bodes well for us, as it takes little information to specify a uniform
distribution, at least if we don't care about the details.


Simulating a random universe which has wave-function collapse

However, incorporating the randomness of QM measurements introduces a
problem. If we take the first approach, using a software random number
generator, it is unlikely that we will evolve humans, let alone humans
who observe a universe identical with ours. With all the random events
that happened between the formation of the universe and our evolution,
the chances of repeating that exactly are virtually nil.

If we used the second approach, a table of random values, we might be
able to force events to occur in such a way that humans would evolve.
But this would mean specifying, in this table, the outcome of every
quantum event that produces measurement-like wave function collapse.
These occur potentially billions of times per second, in billions of
places per cubic centimeter, throughout the entire extent of the
universe. I haven't tried to work it out in detail, but if there are
1E80 particles in the universe (as some people estimate) and the
universe is 1E18 seconds old, then even one event per particle per
second gives 1E98 events; with billions per second we are talking
about a very big number, much more than 1E100.
This amount of information would be overwhelmingly larger than the
stand-alone brain estimates.
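
Even a deliberately low count supports that conclusion (one event per
particle per second, far below the "billions per second" above):

    # Lower bound on the size of the random-outcome table,
    # at one bit per collapse event.
    particles   = 1e80
    age_seconds = 1e18
    print(f"{particles * age_seconds:.0e} events")   # -> 1e+98
    # Billions of events per particle per second pushes this
    # well past 1e100.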


Simulating a many-worlds universe

The third approach, simulating many-worlds, has the advantage that
we don't have to put in any randomness. Somewhere in the multiverse,
people are going to evolve. But is that good enough? The multiverse
is incredibly huge. Instead of collapsing the wave function for every
atomic event, we now split it. All those 1E100+ events are going to
result in distinct universes.

Here is where Wei's point about *locating* the intelligent mind comes
into play. In our first example, we created the mind directly as the
output of the simulation, taking approximately 1E23 bits, plus or minus
five orders of magnitude. With the many-worlds approach, we can create
it with a very small program, probably 1E7 bits or so, but it is lost
in a maze of 1E100+ universes. Is this method cheating?
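
For what it's worth, the 1E23 figure is just the geometric midpoint of
the 1E18-to-2E27 bracket (my reconstruction of where the number comes
from):

    import math

    low, high = 1e18, 2e27
    print(f"{math.sqrt(low * high):.1e}")   # ~4.5e+22, call it 1e23
    print(math.log10(high / low) / 2)       # ~4.65 orders each way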


Simulating a brain's entire lifetime with a counting program

After all, if all we wanted to do was to create a series of mind states,
we have (upper bound) 2E27 bits of state, with the state changing about
100 times per second (faster than the shortest time intervals we can
detect), over 2E9 seconds, for a total of about 1E39 bits to just record
the brain as a series of states. This is worse than our estimate above
where we used a program to actually simulate the brain; here we aren't
trying to simulate it, just hard-coding all the states it will go through.
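
The multiplication, spelled out:

    # Raw recording of brain states over a lifetime.
    bits_per_state    = 2e27    # high-end estimate from above
    states_per_second = 100
    lifetime_seconds  = 2e9
    total = bits_per_state * states_per_second * lifetime_seconds
    print(f"{total:.0e} bits")  # -> 4e+38, about 1e39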

A trivial counting program, once it counts its way up through numbers
about 1E39 bits long (that is, up to about 2^1E39), will produce a
number which encodes the entire 1E39 bits of a human brain's state all
through its lifetime. And there we have it, a fully simulated brain,
70 years of human lifetime in living color, produced by a program which
is about 1E2 bits long.
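
A minimal sketch of such a counting program (written out legibly here,
so it is longer than 1E2 bits, but the idea is the same):

    from itertools import count

    def all_numbers_as_bits():
        """Emit the binary numeral of every nonnegative integer."""
        for n in count(0):
            yield bin(n)[2:]

    # Somewhere in this stream sits the particular ~1e39-bit numeral
    # that encodes the brain's entire 70-year state history.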


Accounting for the costs to localize the brain

We can apply Wei's approach, which is to count the size of the program
needed to locate the brain, and add it to the size of the program that
produces the output. With the counting program, it will take 1E39 bits
to specify where in the output that brain is located. Add that to the
1E2 bits of the counting program and the overall cost is still 1E39
bits, so high that this program contributes very little to the measure
of such a mind.
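
In other words, under Wei's accounting the two quantities simply add,
and the locating term swamps everything (a sketch of the bookkeeping,
with illustrative numbers from above):

    def cost(program_bits, locate_bits):
        # Wei's measure, roughly: bits to generate the output plus
        # bits to point at the mind within that output.
        return program_bits + locate_bits

    print(f"{cost(1e2, 1e39):.0e}")   # counting program: ~1e+39
    print(f"{cost(1e18, 0):.0e}")     # direct brain simulation: ~1e+18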

In the case of many-worlds, then, the challenge is to estimate how hard
it would be to locate a human-like brain, or even more, a specific human
brain, among the 1E100+ universes it has created.

Finding a human brain, without asking for a specific one, should not
be that hard. We could create a pattern or template which we would use
to look for a match. It could be a specification for a generic brain,
perhaps a young person's brain. The program to search through the
multiverse and look for matches to this would be easy to write.
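
A hedged sketch of what that template search might look like (the
state encoding and match criterion here are invented purely for
illustration):

    def similarity(region, template):
        # Placeholder metric: fraction of matching bits.
        matches = sum(r == t for r, t in zip(region, template))
        return matches / len(template)

    def find_brain(universe_states, template, threshold=0.95):
        # Scan (location, region) pairs for a close match to a
        # generic brain template.
        for location, region in universe_states:
            if similarity(region, template) >= threshold:
                return location
        return None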

The actual information to encode the generic brain would probably be
somewhat less than the estimates above for a specific human brain's
state, but it's not clear how much less. Conservatively, we could use the
same values as above, 1E13 to 2E27. This would dominate the size of
our multiverse simulation and would be the size of program we would
need to create and locate a human mind.

The main thing we have saved here is that once we have found the mind,
we don't have to simulate the universe. We already have that simulation
running. The only way we could find a mind is if it has evolved naturally
within the simulated universe, so if it is a typical human mind it must
have evolved on a simulated world which is the equivalent of Earth.


Are we in a real universe?

If the estimates of mind information are towards the low end, near 1E13,
then this method uses a considerably smaller program than creating the
mind from scratch. We save the effort needed to hard-code inputs
which mimic the effect of a universe. Simulating the universe itself is
easier. This makes sense, because the universe is lawful, and lawful
means predictable, which also means compressible.

However, if we use the higher estimates of information in the mind, then
we have a problem. It appears to be just as cheap to actually simulate
just the brain and provide a "fake" universe for it to interact with,
as it would be to provide a real universe. The cost is dominated by the
need to simulate the brain state.

I think Merkle's high estimates of brain state are intended to be
extremely conservative for his purposes, since he wants to show that
the brain could be recorded even at such an absurdly fine level of
detail. In actuality, we could probably simulate brains at a much
higher level of abstraction and still produce behavior which is
indistinguishable from
our own. In that case, creating a real universe to live in looks like
the most economical approach.

Hal