Re: Predictions & duplications

From: Juho Pennanen <juho.pennanen.domain.name.hidden>
Date: Thu, 25 Oct 2001 15:36:20 +0300

juergen wrote:

> Russell, at the risk of beating a dead horse: a uniform measure is _not_ a
> uniform probability distribution. Why were measures invented in the first
> place? To deal with infinite sets. You cannot have a uniform probability
> distribution on infinitely many things.


The last sentence is trivially true when 'probability distributions' are
defined as probability measures on discrete sets. But one could also
regard that as irrelevant word-play about definitions.

There are uniform probability (density) functions on infinite sets, e.g.
the uniform distribution on the interval [0,1], which gives measure
(probability) t to each interval [x,x+t] contained in [0,1]. Obviously,
this distribution gives measure 0 to any singleton {x}. Still it is
uniform, though not a 'probability distribution' if one insists on the
most restrictive definition.
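To spell out the singleton case (writing \mu for this uniform measure on
[0,1] and using the continuity of measures along the shrinking intervals
[x, x+t]):

  \mu([x, x+t]) = t \quad \text{for } [x, x+t] \subseteq [0,1],
  \qquad
  \mu(\{x\}) = \lim_{t \to 0^+} \mu([x, x+t]) = \lim_{t \to 0^+} t = 0.

So each single point has probability 0, while every subinterval of
positive length t has probability exactly t.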

Similarly, there is a natural probability measure on the set of all
infinite strings of 0's and 1's. It is 'uniform' in the sense that no
string, and no place in a string, is in a privileged position. To define
it, let n_0 be the set of all strings that have a 0 at the n-th place
and n_1 the set of all strings that have a 1 at the n-th place, for
every n. Then set m(n_0)=m(n_1)=1/2 for all n, and take the places to be
independent, so that any cylinder set fixing k places gets measure
2^-k. Using the standard definition of measure that has been cited on
this list (the Kolmogorov axioms), m then has a unique extension to a
measure on the set of all infinite strings.
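For concreteness, here is a small Python sketch of what that assignment
amounts to (the function names are mine, not from any standard library):
a cylinder set that fixes k places gets measure 2^-k, and a Monte Carlo
check with fair coin flips agrees with that value.

import random

def cylinder_measure(constraints):
    """Measure, under m, of the cylinder set of infinite 0/1 strings
    whose bit at place n equals constraints[n] for the finitely many
    places n listed in `constraints`.  Since each place is a fair,
    independent coin flip, fixing k places gives measure 2**-k."""
    return 2.0 ** -len(constraints)

def monte_carlo_check(constraints, samples=100_000, seed=0):
    """Estimate the same measure by sampling: flip a fair coin for each
    constrained place and count how often all the fixed bits match."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.randint(0, 1) == bit for bit in constraints.values())
        for _ in range(samples)
    )
    return hits / samples

if __name__ == "__main__":
    print(cylinder_measure({3: 0}))    # n_0 with n = 3: 0.5
    fixed = {1: 0, 4: 1, 7: 1}         # fix three places
    print(cylinder_measure(fixed))     # 0.125
    print(monte_carlo_check(fixed))    # roughly 0.125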

So there may be no 'uniform probability distribution' on the set of all
strings, but there is the natural probability measure, which in many
cases is exactly as useful.

Juho
