
From: Christopher Maloney <dude.domain.name.hidden>

Date: Fri, 12 Nov 1999 22:29:55 -0500

Marchal wrote:

> This is linked to something said by Russell Standish in his paper
> and in some posts. Russell Standish claims that (I quote him):
>
>     Each self-consistent mathematical structure is completely described
>     by a finite set of symbols, axioms and rules.
>
> This is totally incorrect, I'm afraid. It is known that you cannot
> find a recursively enumerable (RE) set of axioms to specify the
> structure (N,+,x) of the natural numbers with addition and
> multiplication. Each RE description of N admits plenty of
> non-isomorphic models which belong to any reasonable set of
> mathematical structures.

I don't understand this. Do you have a reference? I thought that
such a system as N, with operations of addition and multiplication,
is easily definable as a formal system. I must be missing what
you mean by "structure".
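For concreteness, the formal system I have in mind is something like
first-order Peano arithmetic (my own summary of the standard axioms, not
anyone's words from this thread):

```latex
% First-order Peano arithmetic (PA): a recursively enumerable axiom set
\begin{align*}
  &\forall x\; S(x) \neq 0 \\
  &\forall x \forall y\; (S(x) = S(y) \rightarrow x = y) \\
  &\forall x\; x + 0 = x \qquad \forall x \forall y\; x + S(y) = S(x + y) \\
  &\forall x\; x \times 0 = 0 \qquad \forall x \forall y\; x \times S(y) = x \times y + x \\
  &\bigl[\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\bigr]
    \rightarrow \forall x\,\varphi(x)
    \quad \text{(induction schema, one axiom per formula } \varphi\text{)}
\end{align*}
```

PA is easy to write down; if I follow you, your point is that by the
Gödel and Löwenheim-Skolem theorems no such RE first-order theory pins
down (N,+,x) up to isomorphism -- perhaps that is what you mean by the
theory failing to capture the "structure".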

> And things are worse with sets, categories, etc.
>
> Of course you can guess this from the fact that the set of RE
> theories is countable, and the set of possible mathematical
> structures is ... much, much bigger, to say the least.

> People should realize that notions like "definability",
> "provability" etc. are highly relative mathematical concepts, by
> which I mean that they are formalism-dependent.
> "Computability" is the first and the last (until now) purely
> absolute concept, which doesn't depend on the formalism chosen.
> Gödel said that with the computability concept there is some
> kind of miracle. The miracle is the apparent truth of Church's
> thesis.

If this hadn't been a reference to Gödel, I'd have been tempted
to dismiss it. As it is, I'm merely quite skeptical. It seems
to me that computability is just a mathematical formalism like
any other, where you define symbols, axioms, and rules. Whenever
I hear anyone say that mathematical truths are "relative", my
guard goes up. It's one of the things I didn't like about David
Deutsch's book.

IMO, mathematical truths are fundamentally tautologies. They are
equivalent to "If A then A". I am not one to question formal
propositional calculus (the rules for making syllogisms), and
therefore I adhere to the belief that mathematical truths are
absolute, because they are tautologies. I.e., any correct proof
of a theorem T based on axioms A, B, and C could be stated
simply, "If A, B, and C, then T".

So I certainly don't see that computability theory could be in
any way more fundamental.
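To make the tautology picture concrete, here's a little brute-force
sketch (the names and the toy axioms are my own invention): restate a
proof as the single conditional "if the axioms, then the theorem", and
check that it holds under every truth assignment.

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Brute-force check: evaluate the formula under every truth assignment."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# A toy "proof": from axioms A, A->B, B->C, conclude theorem C.
# The whole proof restated as one conditional "If the axioms, then C"
# (implication P -> Q written as (not P) or Q):
proof_as_conditional = lambda a, b, c: (not (a and (not a or b) and (not b or c))) or c

print(is_tautology(proof_as_conditional, 3))  # True: a valid proof is a tautology
print(is_tautology(lambda a, b, c: c, 3))     # False: C alone is not a tautology
```

The conclusion C is contingent on its own, but the conditional as a
whole comes out true in every row of the table -- which is all I mean
by calling mathematical truths tautologies.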

> That is why the Tegmark or GSlevy approach, although very interesting,
> is still ill-defined. Tegmark will need some powerful constructive
> axiom to give a sufficiently precise sense to the "whole set
> of mathematical structures" so that he can make the white
> rabbit disappear, by isolating the right prior.

His axiom was explicit: there is one of each structure, up to
isomorphism. Making anything useful out of that might be difficult.

> Schmidhuber has chosen the "right" (absolute) objective realm:
> the universe of computations, but still doesn't realise that
> with comp, the prior cannot be defined on the computations,
> but only on the relative continuations, and this in a "trans-universe"
> manner (cf. the UDA). The Universal Dovetailer Argument (UDA, as I
> now call it, hoping Chris appreciates)

Much better! =:-)

> shows also that we must
> take into account the continuum.
>
> It seems that the UDA makes some links between the Tegmark and
> Schmidhuber approaches, indeed, because it shows that with comp
> some prior needs non-RE mathematical structures.

Which brings me to my question: what is your basis for establishing
the measure? I think I understand Juergen's now: it's based on
what he calls a "universal prior", where the measure of each program
is related to the chance of guessing it -- and is therefore
directly related to its length.
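If I've got the idea right, it can be sketched with a toy calculation
(my own illustration, not Juergen's actual construction): a binary
program of length L has probability 2^-L of being produced by fair
coin flips, so shorter programs carry exponentially more measure.

```python
def guessing_probability(program: str) -> float:
    """Toy universal-prior weight: the chance of producing this exact
    binary program by flipping a fair coin once per bit."""
    assert set(program) <= {"0", "1"}, "binary programs only"
    return 2.0 ** -len(program)

print(guessing_probability("1011"))          # 0.0625
print(guessing_probability("101100111000"))  # 0.000244140625 -- far less likely
```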

I don't think I understand Hal's post of several weeks ago where he
argued that the measure must be related not only to the structure
but also to the ability to locate that structure within the
universe. But this question is motivated by his post. I thought
that I was comfortable with the idea that a Universal Program, or a
Universal Dovetailer, or a Great Programmer, would naturally result
in our witnessing a coherent universe. I was thinking that Hal and
others weren't giving enough consideration to the explanatory power
of the principle of computational indeterminacy.

But I've since changed my mind. I am now completely stumped, to the
core.

For one thing, I reject Juergen's principle because it is ad hoc.
I insist that any measure must result from zero information -- that
is the whole appeal of the AUH, after all.

I also reject James Higgo's contention that the WAP can explain it
all. After all, I can think of plenty of scenarios where I survive
but the universe nevertheless acts chaotically. This doesn't
explain the very deep connections I see between my memories and
what I experience now -- such as the fact that my dog responds to
the name "Steve".

Tegmark says "one of each structure up to isomorphism". But now I
keep thinking that that won't do at all, either. As the newcomer
Fritz just recently said:

    Considering that every possible state does exist in some world,
    it seems safe for me to conclude that there is only one world
    corresponding to every state, and the chance of finding
    ourselves in any possible universe is just as likely as any
    other. The result would be total chaos.

If there's one of each, and each has an equal measure, then how come
I don't find myself embroiled in a chaotic universe?

Same problem, as far as I can see, with any computational theory.
Tegmark recently wrote that (paraphrasing) any Turing machine could
be described as a function on the natural numbers, f(n). Its
state at any time would be n. Then, its state at the next time
interval would be f(n), then f(f(n)), and so on. This is just a
HLUT (Huge Look-Up Table). Now, if all Turing machines exist in
equal measure, then it seems to me, once again, that we should
expect chaos.
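(The f(n) picture is easy to write out. The table below is a tiny
stand-in HLUT with made-up values, just to show the iteration; nothing
about it comes from Tegmark beyond the idea itself.)

```python
# A tiny stand-in "Huge Look-Up Table": state n maps to next state f(n).
# The values are arbitrary; any total function on the states would do.
hlut = {0: 2, 1: 0, 2: 1}

def run(f, n, steps):
    """Trace the machine's history: n, f(n), f(f(n)), ..."""
    history = [n]
    for _ in range(steps):
        n = f[n]
        history.append(n)
    return history

print(run(hlut, 0, 4))  # [0, 2, 1, 0, 2]
```

Any arbitrary table of this kind defines an equally good "machine",
which is exactly why equal measure over all of them looks like chaos
to me.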

I think you've probably written on your solution to this before,
Bruno, so perhaps I should look back through the archives. Perhaps
you could send me a keyword or two to use.

> I have still a question for Russell: what is the meaning of
> giving a "physical existence" to a mathematical structure? This is
> not at all a clear statement for me.

I don't have a problem with this at all. Tegmark, again, was pretty
clear: ME == PE (Mathematical Existence == Physical Existence).

> Bruno

Great posts recently, everyone! I do very much enjoy this group!


--
Chris Maloney
http://www.chrismaloney.com

"Donuts are so sweet and tasty."
    -- Homer Simpson

Received on Fri Nov 12 1999 - 19:37:56 PST

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:06 PST