Re: Optimal Prediction

From: Bill Jefferys <bill.domain.name.hidden>
Date: Wed, 3 Apr 2002 08:45:14 -0600

At 10:59 AM +0200 4/3/02, Juergen Schmidhuber wrote:
>The theory of inductive inference is Bayesian, of course.
>But Bayes' rule by itself does not yield Occam's razor.

"By itself?" No one said it did. Of course assumptions must be made.
At minimum one always has to choose priors in Bayesian inference.

Our paper shows that there is a Bayesian interpretation that yields
something very suggestive of Ockham's razor. It is appealing in the
following sense: suppose one has a "simple" hypothesis and a "complex"
one, where "simple" means that the prior probability is concentrated
(you don't have many knobs to tweak) and "complex" means that the
prior is vague and spread out (the opposite). Then the "simple"
hypothesis will be favored over the "complex" one unless the data lie
well away from where the "simple" hypothesis has placed its bets.
Bayesians distinguish this from Ockham's formulation by calling it
the "Bayesian Ockham's razor", recognizing that it is not what
William of Ockham wrote, "Entia non sunt multiplicanda sine
necessitate" (or one of his other genuine formulations).
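The trade-off described above can be sketched numerically. This toy
setup is my own illustration, not anything from Jim's and my paper: a
single observation x is drawn with unit noise around an unknown
parameter, the "simple" hypothesis puts a concentrated prior on that
parameter, the "complex" hypothesis a diffuse one, and comparing the
marginal likelihoods (evidences) shows the automatic penalty on the
spread-out prior.

```python
import math

def evidence(x, prior_sd, noise_sd=1.0):
    """Marginal likelihood p(x | H) for one observation x ~ N(theta, noise_sd^2)
    with prior theta ~ N(0, prior_sd^2).

    Integrating theta out gives a normal density in x whose variance is the
    sum of the noise variance and the prior variance."""
    var = noise_sd ** 2 + prior_sd ** 2
    return math.exp(-x ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

SIMPLE_SD = 0.5   # concentrated prior: the "simple" hypothesis bets near theta = 0
COMPLEX_SD = 5.0  # diffuse prior: the "complex" hypothesis hedges over many values

# Data near where the simple hypothesis placed its bets: simple is favored.
print(evidence(0.5, SIMPLE_SD) > evidence(0.5, COMPLEX_SD))  # True

# Data well away from those bets: the complex hypothesis wins.
print(evidence(8.0, SIMPLE_SD) < evidence(8.0, COMPLEX_SD))  # True
```

The diffuse prior spreads its probability mass thinly, so it pays a
penalty whenever the data happen to fall where the concentrated prior
had already committed its mass; that penalty is the "Bayesian Ockham's
razor" at work.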

Please don't read more into our article than is there.

"By itself." First you said that the AP "by itself" has no predictive
power. I missed the "by itself" so misunderstood you, but when I
understood what you were saying I agreed. Now you say that Bayes'
rule "by itself" does not yield Ockham's razor. Jim and I never said
that it did. I am hard pressed to see how anything nontrivial
relating to the real world can be gotten from any principle "by
itself," so I don't regard these comments as very profound, or very
useful.

[Remainder of article snipped]

Bill
Received on Wed Apr 03 2002 - 10:15:42 PST
