Re: Overcoming Incompleteness

From: Jesse Mazer <lasermazer.domain.name.hidden>
Date: Thu, 24 May 2007 22:31:33 -0400

Russell Standish:

>
>You are right when it comes to the combination of two independent
>systems A and B. What the original poster proposed was a
>self-simulating, or self-aware, system. In this case, consider the liar
>type paradox:
>
> I cannot prove this statement
>
>Whilst I cannot prove this statement, I do know it is true, simply
>because if I could prove the statement it would be false.

Yes, but Gödel statements are more complex than verbal statements like the
one above: they actually encode the complete rules of the theorem-proving
system into the statement itself. A better analogy might be if you were an upload
(see http://en.wikipedia.org/wiki/Mind_transfer) living in a self-contained
deterministic computer simulation, and the only messages you could send to
the outside world were judgments about whether particular mathematical
theorems were true (once you make a judgment, you can't take it back; for
any judgment your program makes, there must be a halting program that can
show that you'll definitely make that judgment after some finite number of
steps). Suppose you know the complete initial conditions X and dynamical
rules Y of the simulation. Then suppose you're given a mathematical theorem
Z which you can see qualifies as an "encoding" of the statement "the
deterministic computer simulation with initial conditions X and dynamical
rules Y will never output theorem Z as a true statement." You can see
intuitively that it should be true if your reasoning remains correct, but
you can't be sure that, after the simulation has been running for a
million years or so, you won't decide to output that statement in a
fit of perversity; nor can you actually come up with a rigorous *proof* that
you'll never do that, since you can't find any shortcut to predicting the
system's behavior aside from actually letting the simulation run and seeing
what happens.
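The predicament can be made concrete with a toy sketch (my own illustration, not anything from the original discussion: the generator, the theorem strings, and the function names are all assumptions). A deterministic "simulation" emits theorems in order, Z is a statement referring to that simulation, and the only general way to check Z is to run the steps and look:

```python
from itertools import islice

# Toy sketch of the thought experiment above.  The generator, the theorem
# strings, and the function names are illustrative assumptions.
def run_simulation():
    """A deterministic 'upload': yields the theorems it judges true, in order."""
    step = 0
    while True:
        yield f"theorem #{step}"   # stand-in for whatever it actually outputs
        step += 1

# Z is the self-referential statement "this simulation never outputs Z".
# A real Goedel sentence would encode X and Y arithmetically; a plain
# string is enough to make the point here.
Z = "the simulation with initial conditions X and rules Y never outputs Z"

def z_holds_up_to(n):
    """The only general way to check Z: run n steps and look.  This can
    confirm Z for any finite number of steps, but never prove it outright."""
    return all(theorem != Z for theorem in islice(run_simulation(), n))

print(z_holds_up_to(1000))   # True: Z has held for the first 1000 steps
```

Of course, no finite run settles the question, which is exactly the point: confirmation for a million years is still not a proof.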

The same thing would be true even if you replaced an individual in a
computer simulation with a giant simulated community of mathematicians who
could only output a given theorem if they had a unanimous vote, and where
the size of the community was constantly growing, so the probability of
errors should be ever-diminishing. Although they might hope never to make
an error even if the simulation ran forever, they couldn't rigorously
prove this unless they found some shortcut for predicting their own
community's behavior better than just letting the program run and seeing
what would happen (if they did find such a shortcut, it would have strange
implications for their own feeling of free will!).
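The unanimous-vote variant can be sketched the same way (again a toy model of my own; the CRC-based "judgment" rule and all names are assumptions). At round r the community has r+1 members, a theorem is output only if every member endorses it, and what the community will ever output is knowable only by running the rounds:

```python
import zlib

def endorses(member, theorem):
    """Deterministic stand-in for one mathematician's judgment; a tiny
    fraction of (member, theorem) pairs count as 'errors'."""
    digest = zlib.crc32(f"{member}:{theorem}".encode())
    return digest % 10_000 != 0

def community_outputs(n_rounds):
    """Run the growing community for n_rounds; at round r there are r+1
    members, and a theorem is output only on a unanimous vote.  As with
    the single upload, there is no shortcut to just running the program."""
    out = []
    for round_no in range(n_rounds):
        theorem = f"candidate theorem #{round_no}"
        if all(endorses(i, theorem) for i in range(round_no + 1)):
            out.append(theorem)
    return out

print(len(community_outputs(100)))  # deterministic, but only learned by running
```

The growing membership makes an erroneous unanimous output ever less likely per round, but nothing in the model rules one out over an infinite run.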

Jesse



--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list.domain.name.hidden
To unsubscribe from this group, send email to everything-list-unsubscribe.domain.name.hidden
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---
Received on Thu May 24 2007 - 22:31:48 PDT

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:14 PST