At 2:39 PM -0800 3/28/02, Hal Finney wrote:
>Bill Jefferys, <bill.domain.name.hidden>, writes:
>> >> Ockham's razor is a consequence of probability theory, if you look at
>> >> things from a Bayesian POV, as I do.
>>
>> This is well known in Bayesian circles as the Bayesian Ockham's
>> Razor. A simple discussion is found in the paper that Jim Berger and
>> I wrote:
>>
>> http://bayesrules.net/papers/ockham.pdf
>
>This is an interesting paper; however, it uses a slightly unusual
>interpretation of Ockham's Razor. Usually the Razor is stated as saying
>that the simpler theory is preferred, or, as your paper says, "an
>explanation of the facts should be no more complicated than is
>necessary." However, the bulk of your paper seems to use a different
>definition, under which the simpler theory is the one that is more
>easily falsified and makes sharper predictions.
>
>I think most people have an intuitive sense of what "simpler" means,
>and while being more easily falsified frequently goes along with being
>simpler, the two aren't exactly the same. It is true that a theory with
>many parameters is both more complex and often less easily falsified,
>because it has more knobs to tweak to try to match the facts. So the
>two concepts often do go together.
>
>But not always. You give the example of a strongly biased coin being
>a simpler hypothesis than a fair coin. I don't think that is what
>most people mean by "simpler". If anything, the fair coin seems like
>the simpler hypothesis (by the common meaning), since a biased coin
>has a parameter to tweak: the degree of bias.
That depends on whether you know the degree of bias. If you are
choosing between a two-headed coin and a fair coin, the two-headed
coin is simpler, since it can explain only one outcome, whereas a fair
coin would be consistent with any outcome. On the other hand, if you
don't know the bias, then between a fair coin and a coin with unknown
bias, the fair coin is simpler. This automatically pops out when you
do the analysis.
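To make that concrete, here is a toy sketch of my own (it is not taken
from the paper, and the data sets are invented for illustration). It
computes the marginal likelihood of n tosses with k heads under three
models: a two-headed coin, a fair coin, and a coin of unknown bias
with a uniform prior on the bias:

    # Toy sketch: marginal likelihoods for three coin models,
    # given n tosses of which k came up heads.
    from math import comb

    def marg_two_headed(n, k):
        # A two-headed coin predicts heads every time: a sharp,
        # easily falsified hypothesis.
        return 1.0 if k == n else 0.0

    def marg_fair(n, k):
        # A fair coin spreads its probability over all 2**n sequences.
        return 0.5 ** n

    def marg_unknown_bias(n, k):
        # Unknown bias p with a uniform prior: integrate
        # p**k * (1-p)**(n-k) over p, which equals 1 / ((n+1) * C(n, k)).
        return 1.0 / ((n + 1) * comb(n, k))

    for n, k in [(10, 10), (20, 10)]:
        print((n, k), marg_two_headed(n, k), marg_fair(n, k),
              marg_unknown_bias(n, k))

With ten heads in ten tosses the two-headed coin dominates; with ten
heads in twenty it is falsified outright and the fair coin beats the
unknown-bias coin by a factor of about 3.7, with no penalty term put
in by hand.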
>By equating "simpler" with "more easily falsified" you are able to tie it
>into the Bayesian paradigm, which essentially deals with falsifiability.
>A more easily falsified theory gets a Bayesian boost when it happens to
>be correct, because that was a priori unlikely. But I don't think you
>can legitimately say that this is a Bayesian version of Ockham's Razor,
>because you have to use this rather specialized definition of simple,
>which is more restricted than what people usually mean when they are
>discussing Ockham.
Regardless, it is called the Bayesian Ockham's razor in the
literature. I will agree that there are some differences between it
and the "philosopher's" Ockham's razor, and Jim and I (and other
Bayesians) don't claim otherwise. The interesting thing is that a
Bayesian approach automatically penalizes models with more parameters
relative to those with fewer; it does not rely on _ad hockery_.
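To put a rough number on that penalty in the simplest setting I can
think of, here is another toy sketch of my own (the Gaussian models,
the prior width tau = 10, and the datum x = 0.5 are all invented for
illustration). Model A fixes the mean at zero; model B adds a free
mean with a broad prior. Integrating out the extra parameter spreads
B's predictions over a much wider range of data, so a datum near zero
favors A automatically:

    # Toy sketch: the automatic Ockham penalty for an extra parameter.
    from math import exp, pi, sqrt

    def marg_fixed(x):
        # Model A: x ~ Normal(0, 1); no adjustable parameters.
        return exp(-x * x / 2.0) / sqrt(2.0 * pi)

    def marg_free_mean(x, tau=10.0):
        # Model B: x ~ Normal(mu, 1) with mu ~ Normal(0, tau**2).
        # Integrating mu out gives x ~ Normal(0, 1 + tau**2).
        s2 = 1.0 + tau * tau
        return exp(-x * x / (2.0 * s2)) / sqrt(2.0 * pi * s2)

    x = 0.5  # a datum consistent with the simpler model
    print(marg_fixed(x) / marg_free_mean(x))  # roughly 9, favoring A

The Bayes factor of about 9 in favor of the fixed-mean model is the
Ockham penalty at work: it comes entirely out of the marginal
likelihoods, not from any complexity term added by hand.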
Bill
Received on Fri Mar 29 2002 - 08:00:11 PST