Re: Another tedious hypothetical

From: rmiller <rmiller.domain.name.hidden>
Date: Tue, 07 Jun 2005 15:54:02 -0500

At 02:45 PM 6/7/2005, Jesse Mazer wrote:
(snip)


>Of course in this example Feynman did not anticipate in advance what
>licence plate he'd see, but the kind of "hindsight bias" you are engaging
>in can be shown with another example. Suppose you pick 100 random words
>out of a dictionary, and then notice that the list contains the words
>"sun", "also", and "rises"...as it so happens, that particular 3-word
>"gestalt" is also part of the title of a famous book, "the sun also rises"
>by Hemingway. Is this evidence that Hemingway was able to anticipate the
>results of your word-selection through ESP? Would it be fair to test for
>ESP by calculating the probability that someone would title a book with
>the exact 3-word gestalt "sun, also, rises"? No, because this would be
>tailoring the choice of gestalt to Hemingway's book in order to make it
>seem more unlikely, in fact there are 970,200 possible 3-word gestalts you
>could pick out of a list of 100 possible words, so the probability that a
>book published earlier would contain *any* of these gestalts is a lot
>higher than the probability it would contain the precise gestalt "sun,
>also, rises". Selecting a precise target gestalt on the basis of the fact
>that you already know there's a book/story containing that gestalt is an
>example of hindsight bias--in the Heinlein example, you wouldn't have
>chosen the precise gestalt of Szilard/lens/beryllium/uranium/bomb from a
>long list of words associated with the Manhattan Project if you didn't
>already know about Heinlein's story.
>
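
For what it's worth, your 970,200 checks out: it's the number of *ordered*
triples you can draw from 100 distinct words. A quick check, sketched in
Python (3.8+, for math.perm):

    from math import comb, perm

    print(perm(100, 3))  # 100 * 99 * 98 = 970200 ordered 3-word gestalts
    print(comb(100, 3))  # 161700 if word order is ignored

But the arithmetic was never my complaint.
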
In two words: Conclusions first.
Can you really offer no scientific procedure to evaluate Heinlein's
story? At the cookie-jar level, can you at least grudgingly admit that the
name "Szilard" sure looks like "Silard"? Sounds like it too. Or is that a
coincidence as well? What are the odds? They should be calculable: how many
stories written in 1939 include the names of Los Alamos scientists in
conjunction with the words "bomb" and "uranium"?
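
Since the question is whether it's calculable: here's a minimal sketch in
Python of how the base rate could be framed. Every number in it is a
hypothetical placeholder (nobody here has surveyed 1939 magazine fiction),
and the null rate p0 would itself have to be argued for:

    # A sketch only: every figure here is a stand-in, not data.
    n_stories = 500   # placeholder: 1939-era stories one might survey
    p0 = 0.001        # placeholder null rate: chance any one story pairs a
                      # real scientist's name with "bomb" and "uranium"

    # Probability that at least one surveyed story shows the gestalt by
    # chance alone, under the null:
    p_at_least_one = 1 - (1 - p0) ** n_stories
    print(f"P(>=1 chance match in {n_stories} stories) = {p_at_least_one:.3f}")
    # about 0.39 with these placeholder numbers

The fight is over p0 and the survey frame, not over whether the arithmetic
can be done.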

You're shaking your head. This, I assume, is already a done deal for you.

And that, in my view, is the heart of the problem. Rather than swallow
hard and look at this in an unbiased fashion, you seem glued to the
proposition that (1) it's intractable, or (2) it's not worth analyzing
because the answer is obvious.

If your answer is (1), then fine. Let others worry about it. But if your
answer is (2), then congratulations---you've likely committed a Type II
error. In all of your posts, you seem to present reasons why the Heinlein
story should not be investigated because (I'm paraphrasing, of course) it's
"obviously" not worthy of investigation. You exclude ALL the
evidence---even the Bonferroni correction doesn't do that. Logically, if
you exclude all the evidence, then the probability that you'll miss
something goes to 1. One hundred percent.

When one chooses to use, say, Spearman correlation coefficients to evaluate
multiple pairs, the usual protocol involves the Bonferroni correction--in
which the alpha (often 0.05) is divided by some function of the number of
pairs evaluated, usually simply the number of pairs. A thousand pairs? Then
the alpha should be divided by a thousand, and the resultant threshold
treated as equivalent to a single-test alpha of 0.05. Problem is, this sort
of trick costs you statistical power. You won't declare something
significant when it isn't, but you may also throw out a value that truly is
important. As the Type I error risk goes down, the Type II error risk goes
up: reducing alpha increases beta, the probability of making a Type II
error. There are reputable statisticians who suggest not using the
Bonferroni at all. In my own work, I evaluate cancer rates against
radioisotopes in nuclear fallout---but I require a very high Z score for
significance.
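
To put numbers on that trade-off, here's a minimal sketch in Python (scipy
assumed; a one-sided z-test and a 3-standard-error effect are my stand-ins
for the Spearman setting, not anything from this thread):

    from scipy.stats import norm

    alpha, m = 0.05, 1000    # familywise alpha, number of pairs tested
    alpha_adj = alpha / m    # Bonferroni-corrected per-test alpha

    def power_one_sided_z(a, effect):
        """Power of a one-sided z-test when the true effect sits
        `effect` standard errors above the null."""
        z_crit = norm.ppf(1 - a)              # critical value for alpha a
        return 1 - norm.cdf(z_crit - effect)  # P(reject | real effect)

    effect = 3.0  # a genuinely large effect: 3 standard errors
    for a in (alpha, alpha_adj):
        p = power_one_sided_z(a, effect)
        print(f"alpha = {a:.0e}: power = {p:.2f}, beta = {1 - p:.2f}")
    # alpha = 5e-02: power ~ 0.91, beta ~ 0.09
    # alpha = 5e-05: power ~ 0.19, beta ~ 0.81

The same correction that protects against false positives is what sends
beta up.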

I've yet to see a good protocol defined here to evaluate the Heinlein
story; most prefer to fall back onto the soft couch of bias and
prejudgment. But in doing so, your beta goes through the roof--and you
guarantee that you'll never recognize *anything* as significant. It would
be far easier and more scientifically sound to just admit that you are
aware of no tools that can properly evaluate it.

PS: Note I haven't mentioned anything about proof or causation---merely
the ability to apply the scientific method, properly free of bias, to a
set of circumstances. So far (as with the Thompkins quote) it looks like
"conclusions first, justification later."

Hope your drug company doesn't use the same protocol. Because
*that* wouldn't be right, would it? ;-)


RM