At 09:54 27/07/04 -0700, Hal Finney wrote:
>I am confused about how "belief" works in this logical reasoner of type 1.
>Suppose I am such a reasoner. I can be thought of as a theorem-proving
>machine who uses logic to draw conclusions from premises. We can imagine
>there is a numbered list of everything I believe and have concluded.
>It starts with my premises and then I add to it with my conclusions.
OK.
>In this case my premises might be:
>
>1. Knights always tell the truth
>2. Knaves always lie
>3. Every native is either a knight or a knave
>4. A native said, "you will never believe I am a knight".
>
>
>Now we can start drawing conclusions. Let t be the proposition that
>the native is a knight (and hence tells the truth). Then 3 implies:
>
>5. t or ~t
>
>Point 4 leads to two conclusions:
>
>6. t implies ~Bt
>7. ~t implies Bt
>
>Here I use ~ for "not", and Bx for "I believe x." I am ignoring some
>complexities involving the future tense of the word "will" but I think
>that is OK.
Perfect. Here "Hal believes p" means that sooner or later Hal will
assert, believe, or prove p. It means p belongs to the list you mentioned.
>However now I am confused. How do I work with this letter B? What kind
>of rules does it follow?
>
>I understand that Bx, I believe x, is merely a shorthand for saying that
>x is on my list of premises/conclusions.
Correct. This means Bx is equivalent to "Hal believes x". The
only difference is that "Bx" is supposed to be in the language of the machine.
>If I ever write down "x" on
>my numbered list, I could also write down "Bx" and "BBx" and "BBBx" as far
>as I feel like going. Is this correct?
Well, not necessarily. Unless you are a "normal machine", which
I hope you are!
So let us accept the following definition: a machine is normal when,
if it ever asserts x, it will sooner or later assert Bx.
Normality is a form of self-awareness: when the machine believes x,
it will believe Bx, that is, it will believe that it believes x.
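To illustrate (the line numbers here are hypothetical, just continuing
your own list-keeping convention): suppose some proposition x appears
on your list. Normality lets the list grow upward:

   n.   x
   n+1. Bx      (normality applied to line n)
   n+2. BBx     (normality applied to line n+1)
   ...

So yes, for a normal machine your Bx, BBx, BBBx, ... are all eventually
there; it is the normality rule which licenses each step.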
>But what about the other direction? From Bx, can I deduce x? That's
>pretty important for this puzzle. If Bx merely is a shorthand for
>saying that x is on my list, then it seems fair to say that if I ever
>write down Bx I can also write down x. But this seems too powerful.
You are right. It is powerful, but rather fair also.
Let us define a machine to be stable when that is the case: if the
machine ever believes Bx, it will believe x.
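A small illustration in the same notation (again with hypothetical
line numbers): stability runs the tower in the other direction.

   m.   BBt
   m+1. Bt      (stability applied to line m)
   m+2. t       (stability applied to line m+1)

Note carefully what stability is: a rule about what the machine does
when Bx is on its list. It is not the claim that the machine believes
the proposition (Bx implies x); that distinction will matter below.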
>So what are the correct rules that I, as a simple machine, can follow for
>dealing with the letter B?
Actually we will be interested in many sorts of machines. But I do
hope you are both normal and stable. Actually I'm sure you are.
>The problem is that the rules I proposed here lead to a contradiction.
>If x implies Bx, then I can write down:
>
>8. t implies Bt
>
>Note, this does not mean that if he is a knight I believe it, but rather
>that if I ever deduce he is a knight, I believe it, which is simply the
>definition of "believe" in this context.
Here you are mistaken. It is funny because you clearly see
the mistake, given that you say "attention, (t implies Bt) does
not mean 'if he is a knight then I believe it'". But of course (t implies Bt)
*does* mean "if he is a knight then I believe it". What you intend is only
(and here I add a slight correction) "if I ever deduce he is a knight, I
will deduce I believe he is a knight", which really is, in the machine
language: (Bt implies BBt) instead of (t implies Bt).
To be sure: a machine is normal if, for any proposition p, when the machine
believes p, it will believe Bp. But this is equivalent to saying
that for any proposition p, the proposition (Bp implies BBp) is true
*about* the machine.
Same remark for stability: you can say a machine is stable
if all the propositions (BBp implies Bp) are true about the machine.
This does not mean the stable or normal machine will ever believe
that it is stable or normal. You have (momentarily) confused a proposition
being true *about* a machine with its being *believed* by the machine.
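To make the contrast fully explicit in the machine's own notation
(a restatement, nothing new):

   normality, true about the machine:    Bp implies BBp
   normality, believed by the machine:   B(Bp implies BBp)
   stability, true about the machine:    BBp implies Bp
   stability, believed by the machine:   B(BBp implies Bp)

The first of each pair can hold of the machine without the B-prefixed
version ever appearing on its list.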
>But 6 and 8 together mean that t implies a contradiction, hence I can
>conclude:
>
>9. ~t
>
>He is a knave. 7 then implies
>
>10. Bt
>
>I believe he is a knight. And if Bx implies x, then:
>
>11. t
>
>and I have reached a contradiction with 9.
>
>So I don't think I am doing this right.
By taking the above correction into account, you should be able
to prove, with t still the same proposition (that is, t <-> ~Bt), that
(in case you are a normal reasoner of type 1):
if you are consistent, then t is not provable (believable, assertable by you);
if you are consistent and stable, then ~t is not provable either.
That's Gödel's first incompleteness theorem (once we eliminate
the KK island from the reasoning, to be sure). Bravo.
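Here is a sketch of the two halves, using your own premises 6 and 7
(nothing beyond normality, stability, and type-1 reasoning is assumed):

   First half. Suppose you ever assert t. Being normal, you then
   assert Bt. But by 6 (t implies ~Bt) and type-1 reasoning, from t
   you also assert ~Bt. So both Bt and ~Bt are on your list: you are
   inconsistent. Hence, if you are consistent, t never appears.

   Second half. Suppose you ever assert ~t. By 7 (~t implies Bt) and
   type-1 reasoning, you then assert Bt. Being stable, from Bt you
   assert t. So both t and ~t are on your list: again inconsistent.
   Hence, if you are consistent and stable, ~t never appears either.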
To sum up: any consistent, normal, stable reasoner of type 1 meeting
a knight saying "you will never believe I'm a knight" will be
forever incomplete (incomplete = there is a proposition, like t,
which is neither provable nor refutable).
And so a machine cannot be omniscient because, although
the KK island does not exist, the diagonalization lemma
will supply the necessary believable self-referential propositions.
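(Recall what the diagonalization lemma supplies, stated in the present
shorthand: for a predicate like ~B it yields a fixed point, that is,
a proposition t with

   t <-> ~Bt

which is exactly what the knight's statement encoded; so no island
is needed.)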
The distinction between "true about the machine" and
"believed by the machine" is the key to the G / G* distinction.
Bruno
http://iridia.ulb.ac.be/~marchal/