LauLuna wrote:
>
>
>On 29 jun, 02:13, "Jesse Mazer" <laserma....domain.name.hidden> wrote:
> > LauLuna wrote:
> >
> > >For any Turing machine there is an equivalent axiomatic system;
> > >whether we could construct it or not is of no significance here.
> >
> > But for a simulation of a mathematician's brain, the axioms wouldn't be
> > statements about arithmetic which we could inspect and judge whether they
> > were true or false individually, they'd just be statements about the initial
> > state and behavior of the simulated brain. So again, there'd be no way to
> > inspect the system and feel perfectly confident the system would never
> > output a false statement about arithmetic, unlike in the case of the
> > axiomatic systems used by mathematicians to prove theorems.
> >
>
>Yes, but this is not the point. For any Turing machine that performs
>mathematical reasoning there is also an equivalent mathematical axiomatic
>system; if we are sound Turing machines, then we could never know that
>mathematical system to be sound, even though its axioms are the same
>ones we use.
I agree: a simulation of a mathematician's brain (or of a giant simulated
community of mathematicians) cannot be a *knowably* sound system, because we
can't do the trick of examining each axiom and seeing that it is individually
a correct statement about arithmetic, as we can with the normal axiomatic
systems used by mathematicians. But that doesn't mean it's unsound either--it
may in fact never produce a false statement about arithmetic; it's just that
we can't be sure of this in advance, and the only way to find out is to run
it forever and check.
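To make the "run it forever and check" point concrete, here is a toy sketch
in Python (purely my own illustration: the generator, the names, and the
deliberate slip-up at n = 91 are all hypothetical, not anything anyone has
actually built). Pretend the simulated system only ever emits claims that are
mechanically decidable, like "n is prime", so a checker can verify each
output as it appears. Even in this toy setting there is no finite inspection
that certifies soundness in advance; all the checker can ever report is "no
false output so far".

import itertools

def is_prime(n):
    # Decide a simple arithmetic claim mechanically.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def simulated_system():
    # Hypothetical stand-in for the uploaded brain: it emits claims of
    # the form ('prime', n) or ('composite', n) forever, and is wired to
    # make one deliberate mistake (91 = 7 * 13, but it calls 91 prime).
    for n in itertools.count(2):
        claim = ('prime', n) if is_prime(n) else ('composite', n)
        if n == 91:
            claim = ('prime', n)
        yield claim

def watch(system, limit=200):
    # Verify each claim as it appears. A false claim can be caught when it
    # shows up, but no prefix of the stream certifies the whole stream.
    for step, (kind, n) in enumerate(itertools.islice(system, limit)):
        ok = is_prime(n) if kind == 'prime' else not is_prime(n)
        if not ok:
            return "false claim at step %d: %s %d" % (step, kind, n)
    return "no false claim in the first %d outputs (proves nothing about the rest)" % limit

print(watch(simulated_system()))

Running this prints the false claim at step 89 (n = 91); remove the slip-up
line and the checker just keeps reporting "nothing false so far", which is
the most anyone can ever say about such a system.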
But Penrose was not just arguing that human mathematical ability can't be
based on a knowably sound algorithm; he was arguing that it must be
*non-algorithmic*. I think my thought-experiment shows why this doesn't make
sense--we can see that Gödel's theorem doesn't prove that an uploaded brain
living in a closed computer simulation S would think any differently from us,
just that it wouldn't be able to correctly output a statement about arithmetic
equivalent to "the simulation S will never output this statement". But this
doesn't show that the uploaded mind is somehow not self-aware, or that we
know something it doesn't, since *we* can't correctly judge that statement
to be true either! It might very well be that the simulated brain will slip
up and make a mistake, giving that statement as output even though the act
of doing so proves it's a false statement about arithmetic--we have no way
to prove this will never happen; the only way to know is to run the program
forever and see.
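For the record, the statement I have in mind is just the standard diagonal
construction applied to the formal system equivalent to S (the predicate name
Out_S below is my own shorthand, not standard notation). Let Out_S(x) say,
inside arithmetic, that S eventually outputs the sentence with Gödel number
x. The diagonal lemma then gives a sentence G_S with

    G_S \;\leftrightarrow\; \neg\,\mathrm{Out}_S(\ulcorner G_S \urcorner)

If S is sound it never outputs G_S (outputting it would make G_S false), so
G_S is true; but drawing that conclusion requires the premise that S is
sound, which is exactly what neither S nor we can verify just by inspecting S.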
>
>And the impossibility has to be a logical impossibility, not merely a
>technical or physical one since it depends on Gödel's theorem. That's
>a bit odd, isn't it?
No, I don't see anything very odd about the idea that human mathematical
ability can't be a knowably sound algorithm--it is no more odd than the
idea that for some cellular automata there is no shortcut to knowing whether
they'll reach a certain state other than actually simulating them, as
Wolfram suggests in "A New Kind of Science". In fact I'd say it fits nicely
with our feeling of "free will": there should be no way to be sure in
advance that we won't break some rules we have been told to obey, apart
from actually "running" us and seeing what we end up doing.
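Just to make the Wolfram point concrete, here is a minimal Rule 110 sketch in
Python (again my own illustration; the fixed-width row and zero boundaries
are simplifying assumptions). As far as anyone knows, the only general way to
learn what such a row looks like after t steps is to actually compute all t
steps; there is no closed-form shortcut.

def rule110_step(cells):
    # One step of the Rule 110 cellular automaton on a fixed-width row,
    # treating both boundaries as 0.
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    padded = [0] + cells + [0]
    return [table[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

# Start from a single live cell at the right edge and just run it.
row = [0] * 63 + [1]
for step in range(30):
    print(''.join('#' if c else '.' for c in row))
    row = rule110_step(row)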
Jesse