
From: Stephen Paul King <stephenk1.domain.name.hidden>

Date: Sun, 25 Jan 2004 13:02:27 -0500

Dear Jesse,

----- Original Message -----

From: "Jesse Mazer" <lasermazer.domain.name.hidden>

To: <everything-list.domain.name.hidden>

Sent: Wednesday, January 07, 2004 9:45 PM

Subject: RE: Is the universe computable?

> David Barrett-Lennard wrote:
>
> > Georges Quenot wrote:
> >
> > > Also I feel some confusion between the questions "Is the universe
> > > computable?" and "Is the universe actually 'being' computed?".
> > > What links do the participants see between them?
> >
> > An important tool in mathematics is the idea of an isomorphism between
> > two sets, which allows us to say *the* integers or *the* Mandelbrot set.
> > This allows us to say *the* computation, and the device (if any) on
> > which it is run is irrelevant to the existence of the computation. This
> > relates to the idea of the Platonic existence of mathematical objects.
> >
> > This makes the "confusion" between the above questions irrelevant.
> >
> > I think it was John Searle (who argues that computers can't be aware)
> > who said "A simulation of a hurricane is not a hurricane, therefore a
> > simulation of mind is not mind". His argument breaks down if
> > *everything* is a computation - because we can define an isomorphism
> > between a computation and the simulation of that computation.
> >
> > - David
>
> Isn't there a fundamental problem deciding what it means for a given
> simulated object to implement some other computation? Philosopher David
> Chalmers discusses the similar question of how to decide whether a given
> physical object is implementing a particular computation in his paper
> "Does a Rock Implement Every Finite-State Automaton?", available here:
>
> http://www.u.arizona.edu/~chalmers/papers/rock.html
>
> --Jesse Mazer
*

I am VERY interested in this question because it is part of a hypothesis

that I am working on as a model of interactions within Prof. Hitoshi

Kitada's theory of Local Time.

In the Chalmers paper that you reference, we find:

begin quote

***

For a Putnam-style counterexample to be possible, every component state must

be sensitive to every previous component state. The most straightforward way

to do this is as follows: build an implementation in which state [a,b,c]

with input I transits into state [abcI,abcI,abcI] (where abcI is a

concatenation of a, b, c, and I). Now, we are assured that for every

resultant component state, there is a unique candidate for the preceding

state and input. Then we can construct the natural mapping from strings abcI

(in various positions) onto substates of the CSA, without fear of troubles

with recombination. A recombined state such as [a,b',c'] will transit into a

new state with unique component states in every position, each of which can

be mapped to the appropriate CSA substate.

But this sensitivity comes at a price. A system like this will suffer from

an enormous combinatorial explosion, getting three times bigger at every

time-step. If the strings that make up each component have length L at one

point, within 100 time-steps they will have length 3^{100}L, which is about

5·10^{47} times larger. In a very short time, the system will be larger than

the known universe! CSAs that are candidates to be bases for cognition will

have many more than three components, so the situation there will only be

worse. Here, the "implementing" system will reach the boundaries of the

universe in number of steps corresponding to a fraction of a second in the

life of a brain. So there is no chance that any realistic system could ever

qualify as an implementation in this manner.

***

end quote
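The tripling construction in the passage above can be sketched directly. This is a minimal toy model, assuming a three-component state and a single fixed input symbol "I" (both illustrative choices, not from Chalmers' paper):

```python
# Toy sketch of the Putnam-style construction Chalmers describes:
# a three-component state [a, b, c] with input I transits into
# [abcI, abcI, abcI], so every component "remembers" the entire
# previous state plus the input, and component length roughly
# triples at every time-step.

def step(state, inp):
    """One transition: [a, b, c] + I -> [abcI, abcI, abcI]."""
    merged = "".join(state) + inp
    return [merged, merged, merged]

state = ["a", "b", "c"]
for _ in range(10):
    state = step(state, "I")

# Per-component length after n steps is (3**(n + 1) - 1) // 2,
# so the blow-up scales like 3**n; Chalmers' figure for 100 steps
# follows from 3**100 being roughly 5 * 10**47.
print(len(state[0]))  # → 88573
```

Even at this toy scale the exponential growth is plain: ten steps take a one-character component to nearly 90,000 characters.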

It is this "combinatorial explosion" that I have been addressing in
terms of NP-completeness, and I have proposed that we consider the possibility
that the necessary "computational power" is available to QM systems but not
to classical ("realistic") systems. As an example, please read:

http://arxiv.org/abs/quant-ph/0304128

It has been pointed out by Feynman and Deutsch that classical systems

can be simulated with arbitrary precision by a quantum computation that has

"sufficient resources", and these "resources" are the Hilbert space

dimensions of the QM system that is doing (via its unitary evolution?) the

computing.

http://citeseer.nj.nec.com/gramss94speed.html

http://beige.ucs.indiana.edu/B679/
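As a back-of-the-envelope illustration of why those "resources" are identified with Hilbert space dimensions (the function names below are my own, not from the cited papers): the state of n qubits lives in a Hilbert space of dimension 2^n, so a brute-force classical simulation must track 2^n complex amplitudes.

```python
# Back-of-the-envelope sketch (illustrative names, not from the
# cited papers): n qubits span a Hilbert space of dimension 2**n,
# so a brute-force classical simulation stores 2**n amplitudes.

def hilbert_dim(n_qubits):
    """Dimension of the state space of n qubits."""
    return 2 ** n_qubits

def classical_sim_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a dense statevector (complex128 = 16 bytes)."""
    return hilbert_dim(n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, hilbert_dim(n), classical_sim_bytes(n))
# A dense statevector for 50 qubits already needs 2**54 bytes,
# about 18 petabytes -- the classical cost grows exponentially
# while the quantum system simply evolves unitarily.
```

This is only the naive dense-statevector accounting, but it makes concrete the sense in which the Hilbert space dimension is the "resource" a quantum system brings to the computation.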

My conjecture is that the unitary evolution of an arbitrary QM system is
equivalent to the computational behavior of a quantum computer.

One idea that I have proposed informally is that "an experienced object

is indistinguishable from the *best possible* simulation of the object".

The reasoning that I am using here follows a line similar to the one
Turing used in his famous "Test" for intelligence, combined with an inversion
of Wolfram's observation that an arbitrary "physical system" cannot be
simulated better or "faster" than it is actually experienced to evolve.

http://www.stephenwolfram.com/publications/articles/physics/85-undecidability/2/text.html

"This paper has suggested that many physical systems are computationally

irreducible, so that their own evolution is effectively the most efficient

procedure for determining their future."

The key then becomes a question of finding a formal way to describe how
the simulation of the "classical aspects" of one QM system by another
leads to a notion of communication between them.

Kindest regards,

Stephen

Received on Sun Jan 25 2004 - 13:18:21 PST


This archive was generated by hypermail 2.3.0: Fri Feb 16 2018 - 13:20:09 PST