
From: <juergen.domain.name.hidden>

Date: Thu, 8 Feb 2001 17:54:12 +0100

Algorithmic theories of everything (TOEs) are limited to universe histories describable by finite computer algorithms. The histories themselves may nevertheless be infinite, computable by forever-running programs. To predict a future we need some conditional probability distribution on possible futures, given the past. Algorithmic TOEs are limited to distributions computable in the limit: there must be some (possibly forever-running) finite computer program that approximates with arbitrary precision the probability of any possible future.
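As a toy illustration of "computable in the limit" (a hypothetical sketch, not the construction from the referenced work): a mixture over countably many Bernoulli hypotheses assigns every future a conditional probability that no finite computation yields exactly, but that a forever-running program approximates ever more closely by summing ever more hypotheses.

```python
from fractions import Fraction

def mixture_prediction(past_bits, depth):
    """Approximate P(next bit = 1 | past) under a countable mixture of
    Bernoulli(1/(k+2)) hypotheses with prior weights 2**-(k+1).
    (Both the hypothesis class and the prior are illustrative assumptions.)
    Truncating the sum at `depth` hypotheses gives a finite computation;
    letting depth grow forever, the estimates converge to the true mixture
    prediction -- computable in the limit, never exactly at any finite step."""
    num = Fraction(0)
    den = Fraction(0)
    ones = sum(past_bits)
    zeros = len(past_bits) - ones
    for k in range(depth):
        w = Fraction(1, 2 ** (k + 1))        # prior weight of hypothesis k
        p = Fraction(1, k + 2)               # its Bernoulli parameter
        like = p ** ones * (1 - p) ** zeros  # likelihood of the observed past
        num += w * like * p                  # mass on futures whose next bit is 1
        den += w * like
    return num / den

past = [1, 0, 1]
# successive truncations: each deeper sum is a finer approximation
estimates = [float(mixture_prediction(past, d)) for d in (1, 2, 4, 8, 16)]
```

Each element of `estimates` is a better approximation than the last; the forever-running program simply emits them one after another.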

What about NONalgorithmic TOEs? For instance, once one assumes that the cardinality of possible futures equals the cardinality of the real numbers, one has made a NONalgorithmic assumption.

I postulate that the restricted, algorithmic TOEs are preferable to nonalgorithmic TOEs, for two reasons: they are simpler, yet there is no evidence that they are too simple. They are simpler not just in some vague, model-dependent, quantitative way but in a fundamental, qualitative way, because they are fully describable by finitely many bits of information, while NONalgorithmic TOEs are not. We can write a book that completely describes an algorithmic TOE. We cannot write a book that completely describes a nonalgorithmic TOE.

Algorithmic TOEs vs. nonalgorithmic TOEs: it is really describable things vs. nondescribable things.

I join those who claim that things one cannot describe do not exist. For instance, most real numbers do not exist. Huh? Isn't there a well-known set of axioms that uniquely characterizes the real numbers? No, there is not. The Löwenheim-Skolem Theorem implies that any first-order theory with an uncountable model, such as the real numbers, also has a countable model. No existing proof concerning any property of the real numbers really depends on the "continuum of real numbers", whatever that may be. Our vague ideas of the continuum are just that: vague ideas without formal anchor.

Some might think there is an algorithmic way of going beyond algorithmic TOEs: writing a never-halting program that outputs all beginnings of all possible universe histories in a continuum of histories. Let us call this an "ensemble approach": http://www.idsia.ch/~juergen/everything/html.html

But while there is a clever ensemble approach that needs only countably many time steps to output ALL infinite histories that are INDIVIDUALLY computable within countable time, and to output their FINITE complete descriptions, there is no ensemble approach that outputs ALL histories of a continuum: http://rapa.idsia.ch/~juergen/toesv2/node27.html
The ensemble approach is firmly grounded in the realm of algorithmic TOEs and cannot go beyond them. An ensemble approach can output all incompressible finite strings, all with finite descriptions, but it cannot output anything that is not computable in the limit. In particular, the ensemble approach cannot provide us with things such as infinite random strings, which simply do not exist!
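The countable enumeration that an ensemble approach can deliver is easy to sketch (a minimal toy, assuming binary histories): a never-halting generator emits every finite beginning of every infinite binary history in a dovetailed order, yet at no step has it produced a completed infinite history, let alone continuum-many of them.

```python
from itertools import islice

def all_history_prefixes():
    """Never-halting enumeration of every nonempty finite binary string,
    i.e. every finite beginning of every binary universe history.
    Each prefix appears after countably many steps; no step ever yields
    a completed infinite history, so the continuum stays out of reach."""
    length = 1
    while True:
        for n in range(2 ** length):
            yield format(n, "b").zfill(length)  # all strings of this length
        length += 1

# the first few prefixes, dovetailed by length
first = list(islice(all_history_prefixes(), 6))
```

Every finite prefix eventually appears, which is exactly the sense in which the ensemble approach "outputs all beginnings".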

The algorithmic constraint represents a strong, plausible, beautiful, satisfying bias towards describability. What we cannot even describe we must ignore anyway. The algorithmic constraint by itself is already sufficient to show that only histories with finite descriptions can have nonvanishing probability, and that only histories with short descriptions can have large probability: http://rapa.idsia.ch/~juergen/toesv2/
More is not necessary to demonstrate that weird universes with flying rabbits etc. are generally unlikely. Why assume more than necessary?
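The counting argument behind "only histories with short descriptions can have large probability" fits in a few lines (a toy sketch; the particular prefix-free codewords are illustrative assumptions, not the measure from the referenced work): weights of the form 2**-len(code) over any prefix-free code sum to at most 1, by the Kraft inequality, so fewer than 2**n histories can ever receive probability above 2**-n.

```python
from math import isclose

# A hypothetical prefix-free code for five histories: no codeword is a
# prefix of another, so the weights 2**-len(c) sum to at most 1 (Kraft).
codes = ["0", "10", "110", "1110", "1111"]
weights = [2.0 ** -len(c) for c in codes]
total = sum(weights)  # 1/2 + 1/4 + 1/8 + 1/16 + 1/16

# Only the shortest codewords can carry large probability: any history
# with weight above 2**-3 must have a description shorter than 3 bits,
# and there are fewer than 2**3 such descriptions in total.
big = [c for c, w in zip(codes, weights) if w > 2 ** -3]
```

Long-description histories (flying rabbits included) are crowded out automatically: the total budget of 1 cannot fund many heavy weights.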

Provocative (?), actually straightforward conclusion: henceforth we should ignore nonalgorithmic TOEs.

Received on Thu Feb 08 2001 - 09:04:37 PST


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:07 PST