On 29 Jun, 19:10, "Jesse Mazer" <laserma....domain.name.hidden> wrote:
> LauLuna wrote:
>
> >On 29 Jun, 02:13, "Jesse Mazer" <laserma....domain.name.hidden> wrote:
> > > LauLuna wrote:
>
> > > >For any Turing machine there is an equivalent axiomatic system;
> > > >whether we could construct it or not, is of no significance here.
>
> > > But for a simulation of a mathematician's brain, the axioms wouldn't be
> > > statements about arithmetic which we could inspect and judge whether they
> > > were true or false individually, they'd just be statements about the initial
> > > state and behavior of the simulated brain. So again, there'd be no way to
> > > inspect the system and feel perfectly confident the system would never
> > > output a false statement about arithmetic, unlike in the case of the
> > > axiomatic systems used by mathematicians to prove theorems.
>
> >Yes, but this is not the point. For any Turing machine exercising
> >mathematical skills there is also an equivalent mathematical axiomatic
> >system; if we are sound Turing machines, then we could never know that
> >mathematical system to be sound, even though its axioms are the very
> >ones we use.
>
> I agree, a simulation of a mathematician's brain (or of a giant simulated
> community of mathematicians) cannot be a *knowably* sound system, because we
> can't do the trick of examining each axiom and seeing they are individually
> correct statements about arithmetic as with the normal axiomatic systems
> used by mathematicians. But that doesn't mean it's unsound either--it may in
> fact never produce a false statement about arithmetic; it's just that we
> can't be sure in advance, and the only way to find out is to run it forever
> and check.
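Concretely, running-and-checking is only a semi-decision procedure for
unsoundness: a false output refutes soundness in finite time, but no finite
run ever confirms it. A minimal sketch, where enumerate_outputs and
is_true_of_naturals are hypothetical placeholders (the latter stands for
truth in the standard model of arithmetic, which is not computable in
general, so this is an idealization rather than a practical test):

    def check_soundness(enumerate_outputs, is_true_of_naturals):
        # Halts iff the machine ever asserts a false arithmetic statement.
        for n, statement in enumerate(enumerate_outputs()):
            if not is_true_of_naturals(statement):  # assumed truth oracle
                return f"unsound at output {n}: {statement!r}"
        # Never reached for a machine with infinitely many outputs:
        # no finite amount of checking ever *confirms* soundness.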
Yes, but how can it be logically impossible for us to acknowledge as
sound the very principles and rules we ourselves are using?
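The impossibility in question is essentially Gödel's second incompleteness
theorem. A rough formalization, assuming T is a consistent, recursively
axiomatizable extension of PA whose theorems are exactly our mathematical
output:

    \mathrm{Sound}(T) \;\Rightarrow\; \mathrm{Con}(T)
    T \vdash \mathrm{Con}(T) \;\Rightarrow\; T \text{ is inconsistent} \qquad (\text{G\"odel II})
    \text{hence, if } T \text{ is sound: } T \nvdash \mathrm{Con}(T)

So a sound T can never prove its own consistency, let alone its own
soundness, and anyone whose mathematical reasoning coincides with T would
inherit that blind spot.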
>
> But Penrose was not just arguing that human mathematical ability can't be
> based on a knowably sound algorithm, he was arguing that it must be
> *non-algorithmic*.
No, in Shadows of the Mind he argues exactly what I say. He then goes on
to argue why it is implausible that a sound algorithm representing human
intelligence would fail to be knowably sound.
>
>
> >And the impossibility has to be a logical impossibility, not merely a
> >technical or physical one, since it depends on Gödel's theorem. That's
> >a bit odd, isn't it?
>
> No, I don't see anything very odd about the idea that human mathematical
> abilities can't be a knowably sound algorithm--it is no more odd than the
> idea that there are some cellular automata where there is no shortcut to
> knowing whether they'll reach a certain state or not other than actually
> simulating them, as Wolfram suggests in "A New Kind of Science".
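For concreteness, Wolfram's irreducibility claim concerns systems like
Rule 110. A minimal sketch, assuming nothing beyond the standard rule
table: to learn whether a given cell ever turns on, no general shortcut is
known -- you simply run the automaton step by step:

    RULE_110 = {  # (left, center, right) -> next state of the center cell
        (1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
        (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0,
    }

    def step(cells):
        # One synchronous update on a ring (periodic boundary).
        n = len(cells)
        return [RULE_110[(cells[(i-1) % n], cells[i], cells[(i+1) % n])]
                for i in range(n)]

    def first_time_on(cells, target, max_steps):
        # First step at which cell `target` is 1, else None.
        for t in range(max_steps):
            if cells[target] == 1:
                return t
            cells = step(cells)
        return None

    # Example: a single live cell on a ring of 64; when does cell 10 light up?
    row = [0] * 64
    row[32] = 1
    print(first_time_on(row, 10, 200))  # answer found only by simulating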
The point is that the axioms are exactly our axioms!
> In fact I'd
> say it fits nicely with our feeling of "free will", that there should be no
> way to be sure in advance that we won't break some rules we have been told
> to obey, apart from actually "running" us and seeing what we actually end up
> doing.
I don't see how to reconcile free will with computationalism either.
Regards