
From: Georges Quenot <Georges.Quenot.domain.name.hidden>

Date: Mon, 12 Jan 2004 09:54:52 +0100

Hal Finney wrote:

> Georges Quenot writes:
> > Considering the kind of sets of equations we have figured out up
> > to now, completely specifying our universe from them seems to
> > require two additional things:
> >
> > 1) The specification of boundary conditions (or any other
> >    equivalent additional constraint).
> >
> > 2) The selection of a set of global parameters.
> >
> > My suggestion is that for 1), instead of specifying initial
> > conditions (which might be problematic for a number of reasons),
> > one could use another form of additional high-level constraint,
> > namely that the solution universe should be "as much as possible
> > more ordered on one side than on the other". Of course, this
> > relies on the possibility of giving this a sound sense, which
> > implies being able to find a canonical way to tell whether one
> > solution of the set of equations is more "more ordered on one
> > side than on the other" than another solution.
>
> I think this is a valid approach, but I would put it into a larger
> perspective. The program you describe, if we were to actually
> implement it, would have these parts: it has a certain set of laws
> of physics; it has a certain order-measuring function (perhaps
> equivalent to what we know as entropy); and it has a goal of
> finding conditions which maximize the difference in this function's
> values from one side to the other of some data structure that it
> is modifying or creating, and which represents the universe.

That's it. I would say that this is a clever reformulation back
into the context of the computational perspective. However, I do
not find this perspective larger.
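The three-part program described above can be sketched in a few lines. The sketch below is purely illustrative and all its concrete choices are assumptions not taken from this discussion: a rule-110 elementary cellular automaton stands in for the "laws of physics", compressed size serves as a crude disorder measure, and plain random search plays the role of the optimizer.

```python
import random
import zlib


def step(cells, rule=110):
    """One update of an elementary cellular automaton (periodic boundary).
    Rule 110 is an arbitrary stand-in for the 'laws of physics'."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]


def disorder(cells):
    """Crude disorder proxy: compressed size of the cell string
    (a stand-in for the order-measuring function / entropy)."""
    return len(zlib.compress(bytes(cells)))


def order_gradient(initial, steps=64):
    """Disorder at the last time slice minus disorder at the first:
    the quantity to be maximized across the space-time history."""
    history = [initial]
    for _ in range(steps):
        history.append(step(history[-1]))
    return disorder(history[-1]) - disorder(history[0])


def search(width=64, tries=200, seed=0):
    """Random search over initial conditions for the largest gradient."""
    rng = random.Random(seed)
    best, best_g = None, float("-inf")
    for _ in range(tries):
        cand = [rng.randint(0, 1) for _ in range(width)]
        g = order_gradient(cand)
        if g > best_g:
            best, best_g = cand, g
    return best, best_g
```

A "universe" here is just the stack of rows produced by `step`; the search selects the history whose two temporal "sides" differ most in compressibility, which is the toy analogue of the ordered-to-disordered gradient.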

> It would not be particularly difficult to implement a "toy" version
> of such a program based on some simple laws of physics, and perhaps,
> as you suggest, our own universe might be the result of an instance
> of such a program which is not all that much larger or more complex.
>
> In the context of the All Universe Principle as interpreted by
> Schmidhuber, all programs exist, and all the universes that they
> generate exist. This program that you describe is one of them, and
> the universe that is thus generated is therefore part of the
> multiverse.

> So to first order, there is nothing particularly surprising or
> problematical in envisioning programs like this as contributing to
> the multiverse, along with the perhaps more naively obvious programs
> which perform sequential simulation from some initial conditions.
> All programs exist, including ones which create universes in even
> more strange or surprising ways than these.
>
> By the way, Wolfram's book (wolframscience.com) does consider some
> non-sequential simulations as models for simple 1- and 2-dimensional
> universes. These are what he calls "Systems Based on Constraints",
> discussed in his chapter 5.

> Where I think your idea is especially interesting is the possibility
> that the program which creates our universe via this kind of
> optimization technique (maximizing the difference in complexity)
> might be much shorter than a more conventional program which creates
> our universe via specifying initial conditions. Shorter programs are
> considered to have larger measure in the Schmidhuber model, hence it
> is of great importance to discover the shortest program which
> generates our universe, and if optimization rather than sequential
> simulation does lead to a much shorter program, that means our
> universe has much higher measure than we might have thought.

In the more classical mathematical perspective, I would say that
this principle could be considered as an additional axiom from
which a lot could be derived, leading (possibly) to a description
of universes much shorter in axiom count than many alternatives.
An even more general axiom would be that "if a symmetry has to be
broken, it has to be broken as much as possible", things having to
be either as symmetrical as possible or as asymmetrical as possible.

> However, I don't think we can evaluate this possibility in a
> meaningful way until we have a better understanding of the physics
> of our own universe.

Yes, and maybe it will remain difficult even once we finally figure
out which laws are to be used.

> I am somewhat skeptical that this particular optimization principle
> is going to work, because our universe's disorder gradient is
> dominated by the Big Bang's decay to heat death, and these
> cosmological phenomena don't necessarily seem to require the kinds
> of atomic and temporal structures that lead to observers.

I know that the near-big-bang decay to heat death dominates, but it
might be that however small the remainder is, it could still be
enough to make a difference. Also, the remainder "operates" on a
much longer time-scale, and this could somehow balance things.

It is certainly too early to decide whether this optimization
principle is actually useful and whether the optimal point would
actually turn out to be our type of universe. I am not so confident
that it would, but I don't think either that this can be ruled out
yet.

> If you look at Tegmark's paper http://www.hep.upenn.edu/~max/toe.html
> which lists a number of the physical-constant coincidences necessary
> for life, not all of them would have cosmological importance and
> change the order-to-disorder gradient of the universe.

Maybe the principle would work only for some of the global
parameters, others being justified by other principles, and yet
others really being free ones.

> > It might well be that this additional constraint can also be
> > used for selecting the appropriate set of global parameters for
> > the set of equations considered in 2). It does not seem
> > counter-intuitive that the sets of global parameters that allow
> > for the maximization of the gradient of order among all possible
> > solutions, considering all possible values for the global
> > parameters, would be precisely those for which SASs emerge and
> > therefore those we see in our universe: universes not able to
> > generate complex enough substructures to be self-aware would
> > probably equally fail to exhibit large gradients of order, and
> > vice versa.
>
> Certainly an interesting suggestion. Again, when we look at the
> larger view of all possible programs, we have optimization programs
> which have some parameters fixed, and optimization programs which
> allow the parameters to vary as part of the optimization process.
> The latter programs would tend to be smaller since they don't have
> to store the values of the fixed parameters; but on the other hand
> the need to allow for varying the parameters may add some
> complexity, so it might be that particularly simple values of the
> parameters can be accommodated without increasing program size.

Within the mathematical perspective, specifying a real value for a
parameter requires additional axioms. Of the uncountable number of
reals, only a countable number can be specified using a finite set
of axioms. It is not clear that universes associated (via global
parameters) with those reals that cannot be specified using a
finite set of axioms have the same mathematical existence as those
that can (the first series of universes would need an infinite set
of axioms to be described, while the others only need a finite set
of axioms). It might be that the universes with a smaller number
of axioms would be more generic and have more chance or weight to
exist. From this perspective, the fewer the axioms, the better.
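The countability claim above can be spelled out as a standard cardinality argument: a finite set of axioms is a finite string over a finite alphabet, and there are only countably many such strings.

```latex
% Finite specifications over a finite alphabet \Sigma form the set
% \Sigma^*, a countable union of finite sets, hence countable:
\[
  |\Sigma^{*}| \;=\; \Bigl|\,\bigcup_{n=0}^{\infty} \Sigma^{n}\Bigr|
  \;=\; \aleph_{0}
  \;<\; 2^{\aleph_{0}} \;=\; |\mathbb{R}| .
\]
% Each finite set of axioms pins down at most one real, so at most
% countably many reals are finitely specifiable; in the sense of
% cardinality, almost all reals admit no finite description.
```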

Georges Quénot.

> > The hypothesis of the maximization of the gradient of order seems
> > even Popper-falsifiable. At least one prediction can be made:
> >
> > Given the set of equations that describe our universe and the
> > corresponding set of global parameters, if we can find a canonical
> > way to compare the relative global gradient of order within the
> > universes that satisfy this set of equations:
> >
> > 1) It could be possible to determine the subset of universes
> >    that maximize the gradient for each set of global parameters
> >    (comparing all possible universes for a given set of global
> >    parameters), these being called "optimal" for this set of
> >    global parameters.
> >
> > 2) It could be possible to determine the sets of global parameters
> >    that maximize the gradient in an absolute way (comparing
> >    optimal universes for all possible sets of global parameters).
> >
> > The prediction is that the set of global parameters that we
> > observe is one of those that maximize the gradient of order
> > within the corresponding optimal universes.

> Yes, that's a good prediction, and you may be right that we could
> already take some steps towards testing it. Tegmark's paper can be
> interpreted as providing some such tests.

> > Maybe also the constraint could be used at a third level, if it
> > can remain consistent, as a means to select the appropriate set
> > of equations.
>
> Yes, or at least for part of the equations. As with the case of
> parameters, all different versions of such programs would exist,
> and the real question is which one is shortest.

> From Wolfram's book, though, I can't escape the suspicion that no
> such programs will turn out to be the shortest ones, but that there
> will be some much smaller program that lacks the logic and clean
> division of function that I describe above (the three parts, etc.)
> but which still manages to create our universe. Once we move away
> from sequential simulation and start considering optimization and
> other more exotic techniques, it is difficult to avoid taking the
> next step and considering random programs. Wolfram's first few
> chapters amount to a taxonomy of how random programs behave, and
> his tentative conclusion is that a substantial fraction of them
> generate complex-looking structure.
>
> It may be a leap of faith to suppose that our highly intricate and
> ordered universe could be generated by some incomprehensible mess
> of a program, something much smaller and tighter than any human
> programmer could create or perhaps even understand. But there is
> some historical basis for the idea that random programs can be more
> efficient in size than human-designed ones; I recall that in one of
> the early Artificial Life experiments, the original replicator
> carefully designed as the initial seed soon self-improved to be
> even smaller than the human designer had thought possible.

Received on Mon Jan 12 2004 - 03:55:48 PST


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:09 PST