
From: David Barrett-Lennard <davidbl.domain.name.hidden>

Date: Tue, 20 Jan 2004 00:43:59 +0800

I'm wondering whether the following demonstrates that a computer that can

only generate "thoughts" which are sentences derivable from some underlying

axioms (and therefore can only generate "true" thoughts) is unable to think.

This is based on the fact that a formal system can't understand sentences

written down within that formal system (forgive me if I've worded this

badly). For example, the expression "3+4" is both equal and not equal to "7",

depending on whether you read it as a value or as a string of symbols. If a

mathematician were wearing special glasses that mapped every expression into

the object represented by that expression, she couldn't possibly make sense of what she was doing. E.g.

"3+4=7" wouldn't even appear as "7=7", but simply as "true". An interesting

theorem would be replaced by "true", which is not very interesting!
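This collapse can be sketched in Python (a toy illustration of my own, not part of the original argument; `eval` plays the role of the glasses):

```python
# The "special glasses": a map from each expression to the object it denotes.
expr = "3+4 == 7"

# Unquoted view (through the glasses): all syntactic structure collapses.
print(eval(expr))   # True -- the whole theorem reduced to a bare truth value

# Quoted view (without the glasses): the sentence itself, with structure
# a mathematician can actually reason about.
print(expr)         # 3+4 == 7
print(len(expr))    # 8 characters of syntax
```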

When a mathematician thinks about the expression "3+4=7", there seems to be

a duality of perspectives that must be juggled simultaneously. I think this

duality is intrinsic to thinking. It seems to be associated with the

relationship between a language and a meta-language.

There are some good theorem-proving programs around today. The output (and

internal trace) of these programs always corresponds to true sentences,

making them infallible (but only in the sense of never making a mistake).

However, every step is based on rules expressed in a meta-language where

the underlying mathematical truth seems to be isolated to the computer

scientist who wrote the program.

Suppose a computer has a rule "It is a good idea to simplify 3+4 to

7". If this rule is applied to itself, the semantics of the rule changes.

Clearly "3+4" is not equal to "7" as strings, for the purposes of the meta-language used to

represent both the computer's line of reasoning and the rules that it uses

to advance its line of reasoning.
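The self-application problem can be made concrete with a small Python sketch (my illustration, not from the post): rewriting the rule's own text destroys its meaning.

```python
# A rule stated without quotation marks around the expressions it mentions.
rule = "It is a good idea to simplify 3+4 to 7"

# Applying the rule to its own statement, as if 3+4 and 7 were interchangeable:
mangled = rule.replace("3+4", "7")
print(mangled)  # It is a good idea to simplify 7 to 7
# The rule now says nothing useful: its semantics has changed, because at the
# meta-level "3+4" and "7" are distinct strings, not equal numbers.
```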

This suggests that thoughts must be expressed in a meta-language that can't

directly be modelled on productions from a set of "true" axioms. Does this

demonstrate that any conscious being must be fallible?

Suppose we allow an expression to have sub-expressions *quoted* within

it. E.g. we may write a rule like this:

It is a good idea to simplify "3+4" to "7"

Somehow we would need to support free parameters within quoted expressions.

E.g. to specify the rule:

It is a good idea to simplify "x+0" to "x"
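Such a rule with a free parameter can be sketched as a rewrite over quoted expressions. Here is a deliberately crude Python version (my own illustration), where a regex group stands in for the free parameter x:

```python
import re

def simplify_plus_zero(expr: str) -> str:
    # Rewrite rule: it is a good idea to simplify "x+0" to "x".
    # The group (\w+) plays the role of the free parameter x, restricted to a
    # single identifier or literal. Matching arbitrary sub-expressions would
    # need genuine structural reflection -- which is exactly the hard part.
    return re.sub(r"(\w+)\+0\b", r"\1", expr)

print(simplify_plus_zero("y+0"))      # y
print(simplify_plus_zero("y+0+z"))    # y+z
print(simplify_plus_zero("(a*b)+0"))  # unchanged: here x is not one token
```

The crudeness is the point: a textual pattern captures only a fragment of the quotation mechanism the rule seems to require.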

It is not clear that language reflection can be supported in a completely

general way. If it can, does this eliminate the need for a meta-language?

How does this relate to the claim above?

- David

Received on Mon Jan 19 2004 - 11:41:30 PST

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:09 PST