On Jul 5, 10:14 pm, "Jesse Mazer" <laserma....domain.name.hidden> wrote:
> LauLuna wrote:
>
> >On 29 jun, 19:10, "Jesse Mazer" <laserma....domain.name.hidden> wrote:
> > > LauLuna wrote:
>
> > > >On 29 jun, 02:13, "Jesse Mazer" <laserma....domain.name.hidden> wrote:
> > > > > LauLuna wrote:
>
> > > > > >For any Turing machine there is an equivalent axiomatic system;
> > > > > >whether we could construct it or not, is of no significance here.
>
> > > > > But for a simulation of a mathematician's brain, the axioms
> > > > > wouldn't be statements about arithmetic which we could inspect
> > > > > and judge whether they were true or false individually, they'd
> > > > > just be statements about the initial state and behavior of the
> > > > > simulated brain. So again, there'd be no way to inspect the
> > > > > system and feel perfectly confident the system would never
> > > > > output a false statement about arithmetic, unlike in the case
> > > > > of the axiomatic systems used by mathematicians to prove
> > > > > theorems.
>
> > > >Yes, but this is not the point. For any Turing machine exercising
> > > >mathematical skills there is also an equivalent mathematical
> > > >axiomatic system; if we are sound Turing machines, then we could
> > > >never know that mathematical system to be sound, even though its
> > > >axioms are the same ones we use.
>
> > > I agree, a simulation of a mathematician's brain (or of a giant
> > > simulated community of mathematicians) cannot be a *knowably* sound
> > > system, because we can't do the trick of examining each axiom and
> > > seeing that they are individually correct statements about
> > > arithmetic as with the normal axiomatic systems used by
> > > mathematicians. But that doesn't mean it's unsound either--it may
> > > in fact never produce a false statement about arithmetic; it's just
> > > that we can't be sure in advance, and the only way to find out is
> > > to run it forever and check.
>
> >Yes, but how can it be logically impossible for us to acknowledge as
> >sound the very principles and rules we are using?
>
> The axioms in a simulation of a brain would have nothing to do with the
> high-level conceptual "principles and rules" we use when thinking about
> mathematics, they would be axioms concerning the most basic physical laws
> and microscopic initial conditions of the simulated brain and its simulated
> environment, like the details of which brain cells are connected by which
> synapses or how one cell will respond to a particular electrochemical signal
> from another cell. Just because I think my high-level reasoning is quite
> reliable in general, that's no reason for me to believe a detailed
> simulation of my brain would be "sound" in the sense that I'm 100% certain
> that this precise arrangement of nerve cells in this particular simulated
> environment, when allowed to evolve indefinitely according to some
> well-defined deterministic rules, would *never* make a mistake in reasoning
> and output an incorrect statement about arithmetic (or even that it would
> never choose to intentionally output a statement it believed to be false
> just to be contrary).
But again, for any set of such 'physiological' axioms there is a
corresponding, equivalent set of 'conceptual' axioms, and it is all the
same logically impossible for us to know that the second set is sound.
No consistent (and sufficiently strong) system S can prove the
soundness of any system S' equivalent to S: otherwise S' would prove
its own soundness, hence its own consistency, and so would be
inconsistent by Gödel's second theorem. And this is just what is odd.
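To spell out that step (my own sketch; Sound(X) abbreviates the
arithmetized claim that every arithmetical sentence provable in X is
true, and Con(X) the consistency of X):

\begin{align*}
&1.\ S \vdash \mathrm{Sound}(S') && \text{(assumption, for contradiction)}\\
&2.\ S' \vdash \mathrm{Sound}(S') && \text{($S$ and $S'$ prove the same theorems)}\\
&3.\ S' \vdash \mathrm{Sound}(S') \rightarrow \mathrm{Con}(S') && \text{(a sound system cannot prove $0=1$)}\\
&4.\ S' \vdash \mathrm{Con}(S') && \text{(from 2 and 3)}\\
&5.\ S' \text{ is inconsistent} && \text{(Gödel's second theorem, $S'$ being recursively axiomatizable and strong enough)}
\end{align*}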
> > > But Penrose was not just arguing that human mathematical ability
> > > can't be based on a knowably sound algorithm, he was arguing that
> > > it must be *non-algorithmic*.
>
> >No, he argues in Shadows of the Mind exactly what I say. He goes on
> >to argue why a sound algorithm representing human intelligence is
> >unlikely to be unknowably sound.
>
> He does argue that as a first step, but then he goes on to conclude what I
> said he did, that human intelligence cannot be algorithmic. For example, on
> p. 40 he makes quite clear that his arguments throughout the rest of the
> book are intended to show that there must be something non-computational in
> human mental processes:
>
> "I shall primarily be concerned, in Part I of this book, with the issue of
> what it is possible to achieve by use of the mental quality of
> 'understanding.' Though I do not attempt to define what this word means, I
> hope that its meaning will indeed be clear enough that the reader will be
> persuaded that this quality--whatever it is--must indeed be an essential
> part of that mental activity needed for an acceptance of the arguments of
> 2.5. I propose to show that the appreciation of these arguments must involve
> something non-computational."
>
> Later, on p. 54:
>
> "Why do I claim that this 'awareness', whatever it is, must be something
> non-computational, so that no robot, controlled by a computer, based merely
> on the standard logical ideas of a Turing machine (or equivalent)--whether
> top-down or bottom-up--can achieve or even simulate it? It is here that the
> Godelian argument plays its crucial role."
Yes, he ultimately argues for that.
> His whole Godelian argument is based on the idea that for any computational
> theorem-proving machine, by examining its construction we can use this
> "understanding" to find a mathematical statement which *we* know must be
> true, but which the machine can never output--that we understand something
> it doesn't.
I'd say this is rather Lucas's argument. Penrose's is like this:
1. Mathematicians are not using a knowably sound algorithm to do math.
2. If they were using any algorithm whatsoever, they would be using a
knowably sound one.
3. Ergo, they are not using any algorithm at all.
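A compressed gloss of the Gödelian step behind premise 1 (my own
notation, not Penrose's): for any sufficiently strong formal system F,
Gödel's construction gives a sentence G(F) asserting its own
unprovability in F, so that

\mathrm{Sound}(F) \;\Rightarrow\; F \nvdash G(F) \;\Rightarrow\; G(F) \text{ is true}.

Whoever *knows* F to be sound thereby knows a truth F can never prove,
so F cannot capture that person's mathematical ability. This rules out
only *knowably* sound algorithms, which is exactly why premise 2 is
needed.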
>But I think my argument shows that if you were really to build a
> simulated mathematician or community of mathematicians in a computer, the
> Godel statement for this system would only be true *if* they never made a
> mistake in reasoning or chose to output a false statement to be perverse,
> and that therefore there is no way for us on the outside to have any more
> confidence about whether they will ever output this statement than they do
> (and thus neither of us can know whether the statement is actually a true or
> false theorem of arithmetic).
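Put formally (my restatement, not Jesse's wording): run forever, the
simulated community enumerates a set of output theorems, i.e. a formal
system S, and Gödel's construction gives

G(S) \;\leftrightarrow\; S \nvdash G(S),

so G(S) is true exactly if the simulation never outputs it, whether
from sound reasoning, error, or sheer perversity; its truth value just
*is* the open question about their future behavior.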
>
> It's true that on p. 76, Penrose does restrict his conclusions about "The
> Godelian Case" to the following statement (which he denotes 'G'):
>
> "Human mathematicians are not using a knowably sound algorithm in order to
> ascertain mathematical truth."
>
> I have no objection to this proposition on its own, but then in Chapter 3,
> "The case for non-computability in mathematical thought" he does go on to
> argue (as the chapter title suggests) that this proposition G justifies the
> claim that human reasoning must be non-computable. In discussing objections
> to this argument, he dismisses the possibility that G might be correct but
> that humans are using an unknowable algorithm, or an unsound algorithm, but
> as far as I can see he never discusses the possibility I have been
> suggesting, that an algorithm that faithfully simulated the reasoning of a
> human mathematician (or community of mathematicians) might be both knowable
> (in the sense that the beings in the simulation are free to examine their
> own algorithm) and sound (meaning that if the simulation is run forever,
> they never output a false statement about arithmetic), but just not knowably
> sound (meaning that neither they nor us can find a *proof* that will tell us
> in advance that the simulation will never output a false statement, the only
> way to check is to run it forever and see).
>
> > > >And the impossibility has to be a logical impossibility, not merely a
> > > >technical or physical one since it depends on Gödel's theorem. That's
> > > >a bit odd, isn't it?
>
> > > No, I don't see anything very odd about the idea that human
> > > mathematical abilities can't be a knowably sound algorithm--it is
> > > no more odd than the idea that there are some cellular automata
> > > where there is no shortcut to knowing whether they'll reach a
> > > certain state or not other than actually simulating them, as
> > > Wolfram suggests in "A New Kind of Science".
>
> >The point is that the axioms are exactly our axioms!
>
> Again, the "axioms" would be detailed statements about the initial
> conditions and behavior of the most basic elements of the simulation--the
> initial position and velocity of each simulated molecule along with rules
> for the molecules' behavior, perhaps--not the sort of high-level conceptual
> axioms we use in our minds when thinking about mathematics. If we can't even
> predict whether some very simple cellular automata will ever reach a given
> state, I don't see why it should be surprising that we can't predict whether
> some very complex physical simulation of an immortal brain and its
> environment will ever reach a given state (the state in which it decides to
> output the system's Godel statement, whether because of incorrect reasoning
> or just out of contrariness).
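To make the cellular-automaton analogy concrete, here is a minimal
Python sketch (the rule number, the target condition and the step bound
are my own illustrative choices, not anything from Wolfram's book or
Jesse's message):

    # Elementary cellular automaton, rule 30 (a standard Wolfram example).
    RULE = 30

    def step(cells):
        """Advance one generation; the row grows by one cell on each side."""
        padded = [0, 0] + cells + [0, 0]
        return [(RULE >> (4 * padded[i] + 2 * padded[i + 1] + padded[i + 2])) & 1
                for i in range(len(padded) - 2)]

    def first_time_reaching(target, max_steps=1000):
        """Return the first generation with at least `target` live cells,
        or None if we give up -- which proves nothing either way."""
        cells = [1]  # start from a single live cell
        for t in range(max_steps):
            if sum(cells) >= target:
                return t
            cells = step(cells)
        return None  # not "never": we just stopped looking

    print(first_time_reaching(50))

If rule 30 is computationally irreducible, as Wolfram conjectures, no
cleverness lets us skip the loop; on the analogy, the same would go for
predicting whether a simulated brain ever outputs its system's Gödel
statement.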
>
> > > In fact I'd say it fits nicely with our feeling of "free will",
> > > that there should be no way to be sure in advance that we won't
> > > break some rules we have been told to obey, apart from actually
> > > "running" us and seeing what we actually end up doing.
>
> >I don't see how to reconcile free will with computationalism either.
>
> I am only talking about the feeling of free will which is perfectly
> compatible with ultimate determinism (see
> http://en.wikipedia.org/wiki/Compatibilism), not the philosophical idea
> of "libertarian free will" (see
> http://en.wikipedia.org/wiki/Libertarianism_(metaphysics)) which requires
> determinism to be false. If we had some unerring procedure for predicting
> whether other people or even ourselves would make a certain decision in the
> future, it's hard to see how we could still have the same subjective sense
> of making choices whose outcomes aren't certain until we actually make them.
>
> Jesse
>