
From: Jesse Mazer <lasermazer.domain.name.hidden>

Date: Tue, 05 Oct 2004 04:12:53 -0400

Brent Meeker wrote:

>Of course in the real world you have some idea about how much
>money is in play, so if you see a very large amount you infer it's
>probably the larger amount. But even without this assumption of
>realism it's an interesting problem, and taken as stated there's
>still no paradox. I saw this problem several years ago and here's
>my solution. It takes the problem as stated, but I do make one
>small additional restrictive assumption:
>
>Let: s = envelope with smaller amount is selected.
>     l = envelope with larger amount is selected.
>     m = the amount in the selected envelope.
>
>Since any valid resolution of the paradox would have to work for
>ratios of money other than two, also define:
>
>     r = the ratio of the larger amount to the smaller.
>
>Now here comes the restrictive assumption, which can be thought of
>as a restrictive rule about how the amounts are chosen, which I
>hope to generalize away later. Expressed as a rule, it is this:
>
>     The person putting in the money selects, at random (not
>necessarily uniformly), the smaller amount from a range (x1, x2)
>such that x2 < r*x1. In other words, the range of possible
>amounts is such that the larger and smaller amounts do not overlap.
>Then, for any interval (x, x+dx) of the range for the smaller
>amount with probability p, there is a corresponding interval
>(r*x, r*x + r*dx) with probability p for the larger amount. Since
>the latter interval is longer by a factor of r,
>
>     P(l|m)/P(s|m) = r .
>
>In other words, no matter what m is, it is r times more likely to
>fall in a large-amount interval than in a small-amount interval.
>
>But since l and s are the only possibilities (and here's where I
>need the non-overlap),
>
>     P(l|m) + P(s|m) = 1 ,
>
>which implies
>
>     P(s|m) = 1/(1+r) and P(l|m) = r/(1+r) .
>
>Then the rest is straightforward algebra. The expected values are:
>
>     E(don't switch) = m
>
>     E(switch) = P(s|m)rm + P(l|m)m/r
>               = [1/(1+r)]rm + [r/(1+r)]m/r
>               = m
>
>and no paradox.
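The algebra in Brent's derivation can be double-checked mechanically; here is a quick sketch in Python using exact fractions (my own verification, not part of the quoted post):

```python
from fractions import Fraction

def expected_switch(m, r):
    """E(switch) = P(s|m)*r*m + P(l|m)*(m/r), with
    P(s|m) = 1/(1+r) and P(l|m) = r/(1+r) as derived above."""
    p_s = Fraction(1) / (1 + r)
    p_l = r / (1 + r)
    return p_s * r * m + p_l * m / r

# For several ratios r and amounts m, switching has exactly the same
# expected value as keeping the selected envelope.
for r in (Fraction(2), Fraction(3), Fraction(7, 2)):
    for m in (Fraction(1), Fraction(10), Fraction(123, 4)):
        assert expected_switch(m, r) == m
```

Because the arithmetic is exact (no floating point), the equality E(switch) = m holds identically, not just approximately.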

This is right, but it's a pretty special case--there are an infinite number
of possible probability distributions the millionaire could use when
deciding how much money to put in one envelope, even if we assume he always
puts double in the other. For example, he could use a distribution that
gives him a 1/2 probability of putting between 0 and 1 dollars in one
envelope (assume the dollar amounts can take any positive real value, and he
uses a flat probability distribution to pick a number between 0 and 1), a
1/4 probability of putting in between 1 and 2 dollars, a 1/8 probability of
putting in between 2 and 3 dollars, and in general a 1/2^n probability of
putting in between n-1 and n dollars. This would ensure there was some
nonzero probability that *any* positive real amount could be found in either
envelope.
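A distribution like that is easy to realize concretely. The sketch below (my own illustration, not from the original posts) samples the smaller amount by first choosing the interval (n-1, n) with probability 1/2^n and then drawing uniformly within it:

```python
import random

random.seed(0)  # fixed seed just so the demonstration is repeatable

def sample_smaller_amount():
    """Pick the interval (n-1, n) with probability 1/2^n
    (a geometric choice of n), then draw uniformly inside it."""
    n = 1
    while random.random() >= 0.5:  # each failed coin flip moves to the next interval
        n += 1
    return (n - 1) + random.random()

# Roughly half of all samples should land in (0, 1), a quarter in (1, 2),
# and so on -- yet every positive real amount has nonzero density.
samples = [sample_smaller_amount() for _ in range(100_000)]
share_below_one = sum(1 for x in samples if x < 1) / len(samples)
```

The seed and sample count are arbitrary; the point is just that the 1/2^n weighting gives positive probability to every interval (n-1, n) while the total probability still sums to 1.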

The basic paradox is that the argument tries to show that the average
expected payoff from picking the second envelope is higher than the average
expected payoff from sticking with the first one, *regardless of what amount
you found in the first envelope*--in other words, even without opening the
first envelope you'd be better off switching to the second, which doesn't
make sense since the envelopes are identical and your first pick was random.
But it's not actually possible that, regardless of what you found in the
first envelope, there would always be a 50% chance the other envelope
contained half that and a 50% chance it contained double that. For that to
be true, the amount in the first envelope would have to be picked using a
flat probability distribution which is equally likely to give any number
from 0 to infinity, and as I said that's impossible. But my argument was not
really sufficiently general either, because it doesn't rule out other
possibilities, like a 55% chance the other envelope contained half what was
found in the first envelope and a 45% chance it contained double, in which
case your average expected payoff would still be higher if you switched.

A truly general argument would have to show that, for any logically possible
probability distribution the millionaire uses to pick the amounts in the
envelopes, the average expected payoff from switching will always be exactly
equal to the average expected winnings from sticking with your first choice.
There are two different ways this can be true:

Possibility #1: it may be that you know enough about the probability
distribution that opening the envelope and seeing how much is inside allows
you to refine your evaluation of the average expected payoff from switching.
I gave an example of this in my post, where the millionaire picks an amount
from 1 to a million to put in one envelope and puts double that in the
other; in that case, if you open your first pick and find an amount greater
than a million, you know you hold the larger envelope, so the average
expected payoff from switching is half what you found. But even if the
average expected payoff may vary depending on what you find in the first
envelope, the weighted average of all these possible average expected
payoffs should be equal to the average amount found in the first envelope.

In other words, if you take the sum over x (or the integral, if you allow
continuous amounts of money) of

(average expected payoff from switching | amount found in first envelope was x) * P(finding x in first envelope)

...it should be the case that this is equal to the average amount found in
the first envelope, i.e. the sum over x of

x * P(finding x in first envelope)

For example, if the millionaire flips a coin to decide whether to put 1 or
2 dollars in one envelope, then the following outcomes are equally likely:

envelopes contain 1 and 2 dollars, first envelope I pick contains 1 dollar
envelopes contain 1 and 2 dollars, first envelope I pick contains 2 dollars
envelopes contain 2 and 4 dollars, first envelope I pick contains 2 dollars
envelopes contain 2 and 4 dollars, first envelope I pick contains 4 dollars

Thus the sum above would be

(average expected payoff from switching = 2 dollars | first envelope contained 1 dollar) * (1/4 probability of finding 1 dollar in first envelope)
+ (average expected payoff from switching = 2.5 dollars | first envelope contained 2 dollars) * (1/2 probability of finding 2 dollars in first envelope)
+ (average expected payoff from switching = 2 dollars | first envelope contained 4 dollars) * (1/4 probability of finding 4 dollars in first envelope)

= 0.5 + 1.25 + 0.5 = 2.25

This is indeed equal to the average amount found in the first envelope, or
1*1/4 + 2*1/2 + 4*1/4 = 2.25. This means that *before* you open the first
envelope, you can rest assured that the average expected payoff from
switching is equal to the average expected payoff from sticking with that
envelope.
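The bookkeeping in this example is small enough to check directly; here is a short sketch (my own, simply enumerating the four equally likely cases listed above):

```python
# The four equally likely outcomes, as (amount found, amount after switching):
outcomes = [(1, 2), (2, 1), (2, 4), (4, 2)]

avg_found = sum(found for found, _ in outcomes) / len(outcomes)
avg_after_switch = sum(other for _, other in outcomes) / len(outcomes)

# Both averages equal 2.25, matching the weighted sum in the text.
assert avg_found == avg_after_switch == 2.25
```

Averaging over the four outcomes directly is the same calculation as the weighted sum over conditional expectations; it just groups the terms differently.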

Possibility #2: it may be that opening the first envelope *doesn't* allow
you to refine your calculation of the average expected payoff from
switching. Your example was like this: although you knew the millionaire
used a flat probability distribution to pick an amount between x1 and x2 to
put in one envelope, and always put double that amount in the other
envelope, you don't actually know the values of x1 and x2, so opening the
first envelope gives you no additional information about the average
expected payoff from switching. In cases like this, whatever amount x you
found in the first envelope, the average expected payoff from switching must
also be equal to x. This should be true for any logically possible
probability distribution (or class of probability distributions, as in your
example) where opening one envelope gives you no new information about the
average expected amount in the second.
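Possibility #2 can also be illustrated by simulation. The sketch below follows Brent's setup (smaller amount drawn from a range (x1, x2) with x2 < 2*x1), with x1 and x2 freshly drawn each round so the player learns nothing from the opened amount; the ranges, seed, and round count are arbitrary choices of mine, not from the thread:

```python
import random

random.seed(1)  # fixed seed for repeatability

def play_one_round():
    """Smaller amount is uniform on (x1, x2) with x2 < 2*x1, so the
    larger and smaller amounts can't overlap; x1 and x2 are hidden
    from the player and re-drawn every round."""
    x1 = random.uniform(1, 100)
    x2 = random.uniform(x1, 2 * x1)
    smaller = random.uniform(x1, x2)
    envelopes = [smaller, 2 * smaller]
    random.shuffle(envelopes)
    return envelopes[0], envelopes[1]  # (amount kept, amount if you switch)

rounds = [play_one_round() for _ in range(100_000)]
avg_stay = sum(kept for kept, _ in rounds) / len(rounds)
avg_switch = sum(other for _, other in rounds) / len(rounds)
# The two averages agree to within simulation noise: switching gains nothing.
```

This doesn't prove the general claim, of course; it only shows that in this particular class of distributions the always-switch strategy has the same average payoff as standing pat.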

Dunno how you'd prove all this, though; it would probably require making
some general statements about which infinite probability distributions are
allowable and which are not (like the infinite flat distribution). I don't
think you'd need separate proofs for possibility #1 and possibility #2
above, since the second possibility would probably just be a special case
of the first--although it could be that possibility #1 only happens when
you know the specific probability distribution, and possibility #2 only
happens when you only know the probability distribution was a member of a
particular class of distributions, as in your example.

Jesse

Received on Tue Oct 05 2004 - 04:14:52 PDT


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:10 PST