Re: what relation do mathematical models have with reality?

From: Wei Dai <weidai.domain.name.hidden>
Date: Sun, 31 Jul 2005 00:50:08 +0800

Hal Finney wrote:
> No doubt this is true. But there are still two somewhat-related problems.
> One is, you can go back in time to the first replicator on earth, and
> think of its evolution over the ages as a learning process. During this
> time it learned this "intuitive physics", i.e. mathematics and logic.
> But how did it learn it? Was it a Bayesian-style process? And if so,
> what were the priors? Can a string of RNA have priors?

I'd say that biological evolution bears little resemblance to Bayesian
learning: Bayesian learning assumes logical omniscience, whereas evolution
has essentially no capacity for logical deduction.
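
To spell out what "logical omniscience" buys you: before a Bayesian
reasoner can update at all, it must compute the exact likelihood of the
evidence under every hypothesis it entertains, i.e. work out the relevant
logical consequences of each hypothesis. Here is a toy sketch in Python
(the coin hypotheses and all the names are invented for illustration, not
part of anyone's argument):

    # A Bayesian update requires computing P(data | h) exactly for
    # every hypothesis h. Evolution performs no such computation;
    # it only keeps whatever happens to survive.
    # (The three coin hypotheses are made up for illustration.)

    hypotheses = {
        "fair coin": 0.5,        # P(heads | h)
        "two-headed coin": 1.0,
        "two-tailed coin": 0.0,
    }
    prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

    def update(prior, observed_heads):
        # Bayes' rule: posterior(h) is proportional to
        # prior(h) * P(observation | h).
        unnorm = {h: prior[h] * (p if observed_heads else 1.0 - p)
                  for h, p in hypotheses.items()}
        z = sum(unnorm.values())
        return {h: v / z for h, v in unnorm.items()}

    print(update(prior, observed_heads=True))
    # -> fair: 1/3, two-headed: 2/3, two-tailed: 0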

> And more abstractly, if you wanted to design a perfect learning machine,
> one that makes observations and optimally produces theories based on
> them, do you have to give it prior beliefs and expectations, including
> math and logic? Or could you somehow expect it to learn those? But to
> learn them, what would be the minimum you would have to give it?
>
> I'm trying to ask the same question in both of these formulations.
> On the one hand, we know that life did it, it created a very good (if
> perhaps not optimal) learning machine. On the other hand, it seems like
> it ought to be impossible to do that, because there is no foundation.

Suppose we create a large number of robots with plenty of computational
power but random programs, and set them to compete against each other for
limited resources in a computable environment. If the initial number is
sufficiently large, we can expect that the ones that survive in the end
will approximate Bayesian reasoners with priors that assign actual reality
a significant probability. We can further expect that the priors will
mostly be UDist, because that is the simplest prior under which the actual
environment has a significant probability. Thus we have created a
foundation out of nothing. Actual evolution can be seen as a more efficient
version of this process.
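
To make the thought experiment concrete, here is a toy sketch in Python.
Everything in it -- the periodic environment, the Agent class, the
population size -- is invented for illustration; real "random programs"
would be arbitrary code, and UDist (the universal distribution, which
weights each program p by roughly 2^-l(p)) does not reduce to periodic
predictors. But it shows the selection effect: the agents left standing
are exactly those whose built-in expectations happened to match the actual
environment.

    import random

    random.seed(0)  # reproducible toy run

    def environment(t):
        # The actual (computable) environment: a periodic bit stream.
        return (t // 3) % 2

    class Agent:
        """A stand-in for a robot with a random program, reduced here
        to a random periodic bit-predictor (a gross simplification)."""
        def __init__(self):
            self.period = random.randint(1, 8)
            self.phase = random.randint(0, 7)

        def predict(self, t):
            return ((t + self.phase) // self.period) % 2

    # Large initial population of random agents.
    agents = [Agent() for _ in range(10000)]

    # Competition: any agent that mispredicts the environment is
    # eliminated (it loses the contest for limited resources).
    for t in range(50):
        bit = environment(t)
        agents = [a for a in agents if a.predict(t) == bit]

    # The survivors are the agents whose built-in period and phase --
    # their "prior" -- assigned the actual environment a high
    # probability, i.e. matched it.
    print(len(agents), "survivors out of 10000")
    print({(a.period, a.phase) for a in agents})

A few hundred of the ten thousand survive, all with the period and phase
that reproduce the environment exactly, even though nothing ever told any
agent what the environment was.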

Now suppose one of these surviving robots has an interest in philosophy. We
might expect it to notice that its learning process resembles that of a
Bayesian reasoner with UDist as its prior, and therefore to invent a
Schmidhuberian-style philosophy as self-justification. I wonder whether
this is what has happened in our own case as well.
