Re: Bayes Destroyed?

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Fri, 28 Aug 2009 10:21:06 -0700

marc.geddes wrote:
>
>
> On Aug 27, 7:35 pm, Bruno Marchal <marc....domain.name.hidden> wrote:
>
>> Zermelo Fraenkel theory has full transfinite induction power, but is
>> still limited by Gödel's incompleteness. What Gentzen showed is that
>> you can prove the consistency of ARITHMETIC by a transfinite induction
>> up to epsilon_0. This shows only that transfinite induction up to
>> epsilon_0 cannot be done in arithmetic.
>
> Yes. That's all I need for the purposes of my criticism of Bayes.
> Since ZF theory has full transfinite induction power, it is more
> powerful than arithmetic.
>
> The analogy I was suggesting was:
>
> Arithmetic = Bayesian Inference
> Set Theory = Analogical Reasoning
>
> If the above match-up is valid, then since Set/Category Theory is
> more powerful than Arithmetic, it follows that analogical reasoning
> is more powerful than Bayesian Inference,

Analogies are only suggestive - not proofs.

> and Bayes cannot be the
> foundation of rationality as many logicians claim.
>
> The above match-up is justified by Brown and Porter, who show that
> there's a close match-up between analogical reasoning and Category
> Theory.

But did Brown and Porter justify Arithmetic = Bayesian inference? It
seems to me that Bayesian math is just rules of inference for
reasoning, with probabilities replacing the modal operators
"necessary" and "possible".
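
For concreteness, that calculus is just a few standard identities
(textbook probability, nothing specific to the papers cited in this
thread), in LaTeX notation:

    % Product rule, sum rule, and Bayes' theorem:
    P(A \wedge B) = P(A \mid B)\, P(B), \qquad
    P(A) + P(\lnot A) = 1, \qquad
    P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}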


> See:
>
> ‘"Category Theory: an abstract setting for analogy and
> comparison" (Brown, Porter)
>
> http://www.maths.bangor.ac.uk/research/ftp/cathom/05_10.pdf
>
> ‘Comparison’ and ‘Analogy’ are fundamental aspects of knowledge
> acquisition. We argue that one of the reasons for the usefulness
> and importance of Category Theory is that it gives an abstract
> mathematical setting for analogy and comparison, allowing an
> analysis of the process of abstracting and relating new concepts.’
>
> This shows that analogical reasoning is the deepest possible form of
> reasoning, and goes beyond Bayes.
>
>
>> I agree with your criticism of Bayesianism, because it is a good tool
>> but not a panacea, and it does not work for the sort of credibility
>> measure we need in artificial intelligence.
>
> The problem of priors in Bayesian inference is devastating. Simple
> priors only work for simple problems, and complexity priors are
> uncomputable.

Look at WinBUGS or R. They compute with some pretty complex priors -
that's what Markov chain Monte Carlo methods were invented for.
Complex =/= uncomputable.
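
To make that concrete, here is a minimal sketch in plain Python (my
own toy illustration, not WinBUGS or R code; the coin-flip model and
the bimodal prior are invented for the example) of Metropolis-Hastings
MCMC sampling a posterior whose prior is non-conjugate and has no
closed-form posterior:

    import math
    import random

    random.seed(0)
    data = [1, 1, 0, 1, 1, 1, 0, 1]          # coin flips, 1 = heads

    def log_prior(theta):
        # A deliberately awkward bimodal prior on (0, 1),
        # with most of its mass near 0.2 and 0.8.
        if not 0.0 < theta < 1.0:
            return float("-inf")
        mix = (math.exp(-(theta - 0.2) ** 2 / 0.005)
               + math.exp(-(theta - 0.8) ** 2 / 0.005))
        return math.log(mix)

    def log_post(theta):
        # Unnormalized log posterior: log prior + binomial log likelihood.
        if not 0.0 < theta < 1.0:
            return float("-inf")
        h = sum(data)
        t = len(data) - h
        return log_prior(theta) + h * math.log(theta) + t * math.log(1 - theta)

    theta, lp = 0.5, log_post(0.5)
    samples = []
    for _ in range(20000):
        prop = theta + random.gauss(0.0, 0.1)   # random-walk proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random() + 1e-300) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    burned = samples[5000:]                     # drop burn-in
    print("posterior mean:", sum(burned) / len(burned))

The sampler only ever needs the posterior up to a normalizing
constant, which is exactly why awkward priors are no obstacle to
computation.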

> The deeper problem of different models cannot be
> solved by Bayesian inference at all:

Actually, Bayesian inference gives a precise and quantitative meaning
to Occam's razor in selecting between models.

http://quasar.as.utexas.edu/papers/ockham.pdf
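
The mechanism, standard in the model-selection literature (see the
paper above), is that a model's evidence integrates its likelihood
over its prior, so a model that spreads its prior thinly over a large
parameter space pays an automatic complexity penalty. In LaTeX
notation:

    % Posterior odds = Bayes factor times prior odds:
    \frac{P(M_1 \mid D)}{P(M_2 \mid D)}
      = \frac{P(D \mid M_1)}{P(D \mid M_2)} \cdot \frac{P(M_1)}{P(M_2)},
    \qquad
    P(D \mid M_i) = \int P(D \mid \theta, M_i)\, P(\theta \mid M_i)\, d\theta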


>
> See:
> http://74.125.155.132/search?q=cache:_XQwv9eklmkJ:eprints.pascal-network.org/archive/00003012/01/statisti.pdf+%22bayesian+inference%22+%22problem+of+priors%22&cd=9&hl=en&ct=clnk&gl=nz
>
>
> "One of the most criticized issues in the Bayesian approach is related
> to
> priors. Even if there is a consensus on the use of probability
> calculus to
> update beliefs, wildly different conclusions can be arrived at from
> different
> states of prior beliefs.

A feature, not a bug.


> While such differences tend to diminish with increasing amount of
> observed data, they are a problem in real situations where the
> amount of data is always finite.

And beliefs do not converge, even in probability - compare Islam and
Judaism. Why would any correct theory of degrees of belief suppose
that finite data should remove all doubt?

> Further, it is only true that posterior beliefs eventually coincide
> if everyone uses the same set of models and all prior distributions
> are mutually continuous, i.e., assign non-zero probabilities to the
> same subsets of the parameter space (‘Cromwell’s rule’, see [67];
> these conditions are very similar to those guaranteeing consistency
> [8]). As an interesting sidenote, a Bayesian will always be sure
> that her own predictions are ‘well-calibrated’, i.e., that empirical
> frequencies eventually converge to predicted probabilities, no
> matter how poorly they may have performed so far [22].
>
> It is actually somewhat misleading to speak of the aforementioned
> criticism as the ‘problem of priors’, as it were, since what is
> meant is often at least as much a ‘problem of models’: if a
> different set of models is assumed, differences in beliefs never
> vanish even with the amount of data going to infinity."

But some models are more probable than others.
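
A toy numeric sketch in plain Python (my own illustration, not from
the quoted paper) of what that means, which also shows the Cromwell's
rule point above - a model given zero prior probability can never be
revived by data, while between models with non-zero priors the data
decide:

    from math import comb

    heads, tails = 9, 1
    n = heads + tails

    # Model 1: the coin is fair (theta fixed at 1/2).
    evidence_fair = 0.5 ** n

    # Model 2: theta uniform on [0, 1]. The evidence integrates the
    # likelihood over the prior:
    #   int_0^1 theta^h (1 - theta)^t dtheta = 1 / ((n + 1) * C(n, h))
    evidence_unif = 1.0 / ((n + 1) * comb(n, heads))

    for p_fair in (0.5, 0.0):          # equal priors, then Cromwell's rule
        num = p_fair * evidence_fair
        den = num + (1.0 - p_fair) * evidence_unif
        print("prior P(fair) =", p_fair,
              "-> posterior P(fair | data) =", num / den)

With nine heads in ten flips the evidence favors the flexible model
about 9 to 1; with a 5-5 split it would favor the fair coin instead -
Occam's razor done with arithmetic.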

Brent

