
From: Hal Finney <hal.domain.name.hidden>

Date: Tue, 13 Jan 2004 10:06:04 -0800

Georges Quenot writes:

> I do not believe in either case that a simulation with this level
> of detail can be conducted on any computer that can be built in
> our universe (I mean a computer able to simulate a universe
> containing a smaller computer doing the calculation you considered
> with a level of accuracy sufficient to ensure that the simulation
> of the behavior of the smaller computer would be meaningful).
> This is only a theoretical speculation.

What about the idea of simulating a universe with simpler laws using such

a technique? For example, consider a 2-D or 1-D cellular automaton (CA)

system like Conway's "Life" or the various systems considered by Wolfram.

Suppose we sought to construct a consistent history of such a CA system

by first starting with purely random values at each point in space and

time. Now, obviously this arrangement will not satisfy the CA rules.

But then we go through and start modifying things locally so as to

satisfy the rules. We move around through the mesh in some pattern,

repeatedly making small modifications so as to provide local obedience

to the rules. Eventually, if we take enough time, we ought to reach a

point where the entire system satisfies the specified rules.
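The repair process described above can be sketched concretely. Here is a minimal Python version for a 1-D elementary CA; the choice of Rule 110 and of periodic boundary conditions are my illustrative assumptions, not part of the original proposal. We fill a spacetime grid with random bits, then visit rule-violating cells in random order, setting each to the value its three parents dictate, and repeat until nothing is violated (row 0 is never touched, so consistency propagates downward and the loop terminates):

```python
import random

RULE = 110  # illustrative choice; any elementary rule number 0-255 works

def step_value(left, center, right):
    """The value the rule assigns to a cell, given its three parents."""
    return (RULE >> (left * 4 + center * 2 + right)) & 1

def violations(grid):
    """All cells (t, x), t >= 1, that break the rule (space is periodic)."""
    T, N = len(grid), len(grid[0])
    return [(t, x)
            for t in range(1, T) for x in range(N)
            if grid[t][x] != step_value(grid[t - 1][(x - 1) % N],
                                        grid[t - 1][x],
                                        grid[t - 1][(x + 1) % N])]

def repair(grid, rng):
    """Visit rule-breaking cells in random order, fixing each locally,
    and repeat until the whole spacetime history is consistent."""
    N = len(grid[0])
    while True:
        bad = violations(grid)
        if not bad:
            return grid
        rng.shuffle(bad)
        for t, x in bad:
            grid[t][x] = step_value(grid[t - 1][(x - 1) % N],
                                    grid[t - 1][x],
                                    grid[t - 1][(x + 1) % N])

# Start from purely random values at every point in space and time.
rng = random.Random(1)
T, N = 8, 16
history = repair([[rng.randint(0, 1) for _ in range(N)]
                  for _ in range(T)], rng)
```

Note that because an elementary CA is deterministic forward in time, the repaired history ends up being whatever evolves from the (random) first row; the relaxation viewpoint becomes more interesting for rule systems without a preferred time direction, where no single sweep order suffices.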

Now, I'm not sure how to combine this process with Georges' proposal to

maximize some criterion such as the gradient of orderliness. I suppose

you could simply repeat this process many times, saving or remembering

the best solution found so far. But it would be nice if you could

combine the two steps somehow, looking for valid solutions which also

scored highly in the desired optimization property.
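The repeat-and-keep-the-best option can be sketched as follows. Since a deterministic CA's consistent histories are exactly the forward evolutions of some first row, each trial here just evolves a random first row of Rule 110; the `activity` criterion is an illustrative stand-in of my own, not Georges' gradient-of-orderliness measure:

```python
import random

def rule110_row(prev):
    """One forward step of Rule 110 on a periodic row."""
    N = len(prev)
    return [(110 >> (prev[(x - 1) % N] * 4 + prev[x] * 2
                     + prev[(x + 1) % N])) & 1
            for x in range(N)]

def random_consistent_history(T, N, rng):
    """One valid spacetime history: forward evolution of a random row."""
    grid = [[rng.randint(0, 1) for _ in range(N)]]
    for _ in range(T - 1):
        grid.append(rule110_row(grid[-1]))
    return grid

def best_of(trials, T, N, score, seed=0):
    """Repeat the build-a-valid-history step, keeping the top scorer."""
    rng = random.Random(seed)
    return max((random_consistent_history(T, N, rng)
                for _ in range(trials)), key=score)

def activity(grid):
    """Stand-in criterion: total number of live cells in the history."""
    return sum(sum(row) for row in grid)

best = best_of(20, 8, 16, activity)
```

A cleverer combined search would bias the local repair moves themselves toward high-scoring configurations, in the spirit of simulated annealing, rather than scoring only finished solutions.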

Among simple CA models are ones which have been shown to be universal,

meaning that you can set up systems which do computation within the CA

"universe", and those systems could do various sorts of sequential

calculations. Let's suppose, as Georges' ideas might suggest, that

some optimization principle can implicitly promote the formation of such

sequential computational systems within the simulated universe.

To get back to Wei's question, it would seem that when we do manage to

create such a universe using non-sequential optimization as described

above, there would be no particular need for the early steps of the

simulated computation to be stabilized before the later steps. The order

in which stabilization occurs in any given run could be essentially

arbitrary or random.

Hal

Received on Tue Jan 13 2004 - 13:08:29 PST


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:09 PST