Re: Are conscious beings always fallible?

From: Eric Hawthorne <egh.domain.name.hidden>
Date: Mon, 19 Jan 2004 23:29:10 -0800

How would they ever know that, I wonder?
"Well let's see. I'm conscious and I'm not fallible. Therefore...." ;-)

David Barrett-Lennard wrote:

> I'm wondering whether the following demonstrates that a computer that can
> only generate "thoughts" which are sentences derivable from some underlying
> axioms (and therefore can only generate "true" thoughts) is unable to think.
>
> This is based on the fact that a formal system can't define truth for
> sentences written within that formal system (roughly, Tarski's
> undefinability theorem; forgive me if I've worded this badly).
>
> Somehow we would need to support free parameters within quoted
> expressions, e.g. to specify the rule
>
> It is a good idea to simplify "x+0" to "x"
>
> It is not clear that language reflection can be supported in a completely
> general way. If it can, does this eliminate the need for a meta-language?
> How does this relate to the claim above?
>
> - David
>
>
I don't see the problem with representing a logical meta-language (and a
meta-metalanguage, etc., if necessary) in a computer. It's a bit tricky to
get the semantics to work out correctly, I think, but there's nothing
"extra-computational" about doing higher-order theorem proving.

http://www.cl.cam.ac.uk/Research/HVG/HOL/

This is an example of an interactive (i.e. partly human-steered)
higher-order theorem prover. I think that with enough work someone could
get one of these kinds of systems doing some useful higher-order logic
reasoning on its own, for certain kinds of problem domains anyway.
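
As a toy illustration of letting such a rule run unattended, here is a
continuation of the sketch above (it reuses match, substitute, lhs, and
rhs from there). It rewrites subterms repeatedly until the term stops
changing, which is the skeleton of an automated simplifier, though nothing
like the real search problems a prover faces:

def rewrite_once(term, lhs, rhs):
    """Apply the rule at the root if it matches, else recurse into subterms."""
    binding = match(lhs, term, {})
    if binding is not None:
        return substitute(rhs, binding)
    if isinstance(term, App):
        return App(term.op, tuple(rewrite_once(a, lhs, rhs) for a in term.args))
    return term

def normalize(term, lhs, rhs):
    """Keep rewriting until a fixed point is reached."""
    while True:
        new = rewrite_once(term, lhs, rhs)
        if new == term:
            return term
        term = new

nested = App('+', (App('+', ('y', 0)), 0))   # the object-level term (y+0)+0
print(normalize(nested, lhs, rhs))           # prints 'y'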

Eric
Received on Tue Jan 20 2004 - 02:33:10 PST
