
From: Bruno Marchal <marchal.domain.name.hidden>

Date: Thu, 9 Jun 2005 11:34:07 +0200

On 9 Jun 2005, at 01:19, Jonathan Colvin wrote:

> I don't believe in observers, if by "observer" one means to assign
> special ontological status to mental states over any other arrangement
> of matter.

I don't believe in matter, if by "matter" one means to assign special
ontological status to some substance, by which is meant (following
Aristotle) anything entirely determined by its parts.

> This is similar to the objection to the classic interpretation of QM,
> whereby an "observation" is required to collapse the WF (how do you
> define "observer"? A rock? A chicken? A person?).

Yes, but Everett did succeed in explaining the apparent collapse by
defining an observer as "just" a classical memory machine.

> But this was in response to a comment that "it was time to get serious
> about observer-moments". An observer is such a poorly defined and
> nebulous thing that I don't think one can get serious about it.

My definition is that an observer is a universal (Turing) machine. With

Church's thesis we can drop the "Turing" qualification.

Actually an observer is a little more: it is a sufficiently "rich"
universal machine.

To be utterly precise (as in my thesis), an observer is a Löbian
machine, by which I mean any machine which is able to prove
"Ex P(x) -> Provable('Ex P(x)')" for any decidable predicate P(x).
Here "Ex P(x)" means there is a natural number x such that P(x), and
"Provable" is the provability predicate studied by Gödel, Löb and many
others.
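As a sketch in standard provability-logic notation (writing the box for the machine's arithmetized provability predicate, an assumption of notation only, not of content), the Löbian condition above is the formal Sigma_1-completeness schema:

```latex
% Löbian (Sigma_1-completeness) condition: for every decidable
% predicate P(x), the machine proves
\exists x\, P(x) \;\rightarrow\; \Box\, \exists x\, P(x)
% where \Box\,\varphi abbreviates Provable("\varphi"), the
% Gödel provability predicate of the machine itself.
```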

But then I need to say more about provability logic to explain the
nuances between the scientist machine, the knowing machine, the
observing machine, etc. You can look at my SANE paper for an overview.

> I'd note that your definition is close to being circular: "an observer
> is something sufficiently similar to me that I might think I could
> have been it". But how do we decide what is "sufficient"? The
> qualities you list (consciousness, perception etc.) are themselves
> poorly defined or undefinable.
*

Consciousness can be considered as a first person view of the result of
an automatic bet on the existence of a model (in the logician's sense)
of oneself. From this we can explain why "consciousness" is not
representable in the language of a machine. And consciousness gets a
role: self-speeding oneself up relative to one's most probable
computational histories.

It should develop in any self-moving mechanical entity.

I define variants of the "first person view" by applying Theaetetus'
definition of knowledge (and "Popperian" variants) to the Gödel
self-referential provability predicate.
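As a sketch of what such variants look like (assuming the modal rendering used in the SANE paper referenced above; the box is again the provability predicate, and the diamond is its dual):

```latex
% Theaetetus: knowledge = true (justified) belief. Applied to
% provability, with \Diamond p \equiv \neg\Box\neg p:
%   believable (provable):          \Box p
%   knowable   (Theaetetus):        \Box p \wedge p
%   observable ("Popperian" variant, belief consistent with truth):
%                                   \Box p \wedge \Diamond p
```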

Perhaps you could try to tell me what you mean by "matter"?

Bruno

http://iridia.ulb.ac.be/~marchal/

Received on Thu Jun 09 2005 - 05:43:24 PDT


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:10 PST