
From: Hal Finney <hal.domain.name.hidden>

Date: Wed, 3 Jul 2002 14:53:27 -0700

Brent Meeker writes:

> Does Wolfram have a criterion for discriminating between "random" and
> "structured", or is it just how it looks?

I am only a quarter of the way through the book, but at this point he
seems to base his criterion mostly on how the output looks. I understand
from book reviews that at some point he will bring in the notion of
computational systems that have the power of universal computers, and
show that some simple systems are able to do this. These are all
inherently of class 4 (structured), I believe.
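
To make the objects under discussion concrete, here is a minimal sketch
(mine, not Wolfram's code) of the elementary cellular automata the book
surveys: a row of binary cells in which each cell's next state is set by
itself and its two neighbors, according to an 8-bit rule number. Rule 30
is the standard example of class 3 (random-looking) behavior, and rule
110 of class 4; rule 110 is the one later shown to be computationally
universal.

    def step(cells, rule):
        """One generation of a 1-D elementary CA with wraparound edges."""
        n = len(cells)
        new = []
        for i in range(n):
            # Read the 3-cell neighborhood as a number from 0 to 7 ...
            hood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
            # ... and look up that bit of the 8-bit rule number.
            new.append((rule >> hood) & 1)
        return new

    def show(rule, width=79, steps=30):
        cells = [0] * width
        cells[width // 2] = 1          # single black cell in the middle
        for _ in range(steps):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells, rule)

    show(30)    # class 3: the center column looks random
    show(110)   # class 4: localized structures that interact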

The first 6 (of 12) chapters, which are as far as I have read, are
essentially an observationally oriented discussion of the behavior of
simple computing systems. Wolfram reminds me of an 18th-century scientist
exploring a new part of the natural world, say an early microscopist or
astronomer. He is just trying to get the basic classifications into
place, observing similarities among different systems. But instead of
studying plants and animals, he is studying computational systems.

In our terms, this portion of the book can be thought of as a
naturalist's beginning explorations of the multiverse, the collection of
all universes produced by computer programs. His main result so far seems
to be that it doesn't matter what the basis of your computational engine
is: CAs, Turing machines, numeric recurrence relations, and partial
differential equations all produce the same basic classes of patterns.
He examines 10-20 different ways of implementing simple computational
systems and shows that they all look much the same. The main difference
is that some produce structure in as many as ~10% of programs, while for
others it is only perhaps one in a million.
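
The kind of tally behind those fractions is easy to reproduce for the
256 elementary CA rules. Here is a rough sketch that uses
compressibility of the output as a crude stand-in for "how it looks";
the scoring heuristic is my own, not Wolfram's, so take the ranking as
illustrative only.

    import zlib

    def score(rule, width=101, steps=200):
        """Crude randomness score: how poorly the rule's output compresses."""
        cells = [0] * width
        cells[width // 2] = 1
        rows = []
        for _ in range(steps):
            rows.append(bytes(cells))
            # Same update as step() above, written as a comprehension.
            cells = [(rule >> ((cells[(i - 1) % width] << 2)
                               | (cells[i] << 1)
                               | cells[(i + 1) % width])) & 1
                     for i in range(width)]
        data = b"".join(rows)
        return len(zlib.compress(data)) / len(data)

    ranked = sorted(range(256), key=score)
    print("most orderly rules:", ranked[:5])
    print("most random rules: ", ranked[-5:])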

I see some connections between Wolfram's explorations and the results of
Greg Chaitin, the creator of Algorithmic Information Theory. For years,
Chaitin has emphasized the need to approach mathematics as an
experimental science, but no one has taken him very seriously. Wolfram's
work suggests to me an approach to putting this program into action. For
example, here is a quote from Chaitin I ran into last night, from a
recent essay, http://www.cs.auckland.ac.nz/CDMTCS/chaitin/amsci.html:

    A case in point is elementary number theory, where there are some
    very difficult questions. Consider the prime numbers. Individual
    prime numbers behave in a very unpredictable way, if you're
    interested in their detailed structure. It's true that there are
    statistical patterns. There's a thing called the prime number
    theorem that predicts fairly accurately the overall average
    distribution of the primes. But as for the detailed distribution of
    individual prime numbers, that looks pretty random.

    So I began to think that maybe the inherent randomness in
    mathematics provided a deeper reason for all this incompleteness.

Wolfram has a chapter on patterns in the integers, and several of his
examples of randomness are based on the distribution of primes. Is the
randomness of this distribution caused by the same effects Wolfram has
identified, the ones that make so many simple computational systems
produce similarly random output? If we understand that randomness is a
frequent consequence of simple systems, then it is not surprising that
primes are random in the details of their distribution.
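
Chaitin's two claims about the primes, statistical regularity versus
detailed randomness, are easy to check numerically. A small sketch
(mine, for illustration):

    from math import log

    def primes_up_to(n):
        """Sieve of Eratosthenes."""
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                for q in range(p * p, n + 1, p):
                    sieve[q] = False
        return [i for i, isp in enumerate(sieve) if isp]

    ps = primes_up_to(100000)

    # Statistical pattern: the prime number theorem says pi(x) ~ x / ln x.
    for x in (1000, 10000, 100000):
        print(x, sum(1 for p in ps if p <= x), round(x / log(x)))

    # Detailed structure: the gaps between consecutive primes look irregular.
    print([q - p for p, q in zip(ps, ps[1:])][:25])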

Another connection between Chaitin and Wolfram has to do with the
underpinnings of the multiverse. From Chaitin and AIT we know that any
universal computer can emulate any other using a fixed-size program.
This means that universes agree on their relative measures to within a
constant factor (whose value depends on which system is emulating which
other system). Therefore, in the limit of large programs, where this
constant factor becomes insignificant, universes have the same relative
measures regardless of the details of the computational substrate that
runs the multiverse. (I interpret this to mean that we can never find
out what kind of computer the multiverse is based on, or equivalently,
that it doesn't matter.)
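
Spelled out in AIT terms (my notation, not from the original post): if
universal machine U' can emulate U via a translation prefix of fixed
length c, and a universe's measure is taken to fall off as 2 to the
minus the length of the shortest program producing it, then

    K_{U'}(x) \;\le\; K_{U}(x) + c
    \quad\Longrightarrow\quad
    m_{U'}(x) \;=\; 2^{-K_{U'}(x)} \;\ge\; 2^{-c}\, 2^{-K_{U}(x)} \;=\; 2^{-c}\, m_{U}(x)

Running the emulation the other way gives the matching upper bound, so
the two measures agree up to fixed multiplicative constants, and once
K_U(x) is much larger than the translation constants the distortion is
negligible.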

This seems to contradict, somewhat, Wolfram's experiments, which show
that different computational systems have probabilities of displaying
complex behavior that differ by a few orders of magnitude. Presumably
the reason is that Wolfram is dealing only with simple programs. If it
were possible to extend his results to larger programs, we might predict
on the basis of AIT that all computational systems would converge to
equal probabilities for the different classes of behavior.

I understand that Wolfram introduces some ideas about computational
equivalence in the last chapter of the book, but I don't think he goes
in exactly this direction.

Hal Finney

Received on Wed Jul 03 2002 - 15:05:34 PDT

