
From: Alastair Malcolm <amalcolm.domain.name.hidden>

Date: Sun, 14 Nov 1999 16:11:38 -0000

I think our views could be quite close here - if only I could persuade you

to forget about illogical universes!

----- Original Message -----

From: Russell Standish <R.Standish.domain.name.hidden>

> In the Schmidhuber plenitude, non-wff bitstrings are perfectly

> reasonable universes. It is not clear whether they would have

> self-aware substructures, and what the SASes would make of their universe.

This is one area of your scheme that is not clear to me. As I have mentioned

before, the potential roles of TMs (or mappings in general) should not be

confused:

(1) Role of TM solely as an inference engine (implementing the specific

transition rules of Modus Ponens, substitutivity and so on) - this creates

at least two problems if it is applied to your scheme:

(a) An inference engine can do nothing with a non-wff (it is outside the

context of its application).

(b) Why do we have this specific TM - why not others? So we have to go to

(2):

(2) A set of UTMs operating all possible rules on bitstrings. But now the

term non-wff becomes defunct, since whatever program runs operates on the

bits regardless of whether they denote symbols or not - I cannot see how

anything coherent comes out the other end. The Tegmark plenitude as built

from wff strings is likely to be an insignificant minority of universes

anyway in this scenario. If it all depends on the interpretation of the

output bits, then why have the UTM process in the first place - why not just

have all possible bit combinations? We are now operating outside the

context of wffs and non-wffs anyway.
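[A quick toy illustration of the minority claim above - my own sketch, not part of the original exchange. Using a made-up six-symbol propositional grammar (wff := atom | -wff | (wff>wff)), one can count how few strings of each length are well-formed:]

```python
# Toy grammar (my construction): atoms p, q; negation '-'; implication '>'.
# Counts wffs among all symbol strings of length n, showing wff strings
# are a vanishing minority of arbitrary strings - the "insignificant
# minority" point above.
from itertools import product

ALPHABET = "pq->()"

def is_wff(s):
    """True iff s parses completely as: atom | -wff | (wff>wff)."""
    ok, rest = _parse(s)
    return ok and rest == ""

def _parse(s):
    # Recursive-descent parse; returns (succeeded, remaining input).
    if not s:
        return False, s
    if s[0] in "pq":            # atom
        return True, s[1:]
    if s[0] == "-":             # negation
        return _parse(s[1:])
    if s[0] == "(":             # (wff>wff)
        ok, rest = _parse(s[1:])
        if not (ok and rest[:1] == ">"):
            return False, s
        ok, rest = _parse(rest[1:])
        if not (ok and rest[:1] == ")"):
            return False, s
        return True, rest[1:]
    return False, s

for n in (1, 3, 5, 7):
    total = len(ALPHABET) ** n
    wffs = sum(is_wff("".join(t)) for t in product(ALPHABET, repeat=n))
    print(n, wffs, total, wffs / total)  # fraction shrinks rapidly with n
```

[The fraction of wffs falls off quickly as length grows, and the same only gets worse for richer grammars.]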

Assuming your scheme either comes under (2), or doesn't involve UTMs at

all, I can't quite see where it differs from our earlier scheme based on all

possible interpretations of bitstrings (which as far as I understood it

featured a minimal bit specification of our TOE as the representation of the

SAS-compatible universe with the highest measure). Is your scheme using a

UTM or mapping as some kind of part-interpreter from a random bit string to

a frog-level (or phenomenological) universe specification? (In short, how

does the interpretation relate to the mapping/TM-process?)

> You could interpret them as anything you like - UTM programs, axioms

> of a mathematical theory, the works of Shakespeare. I'm not sure why

> you are asking this though.

Again, to try to see how your scheme differs from our earlier one.

> > Note that many other systems (for example pre-programmed microchips,

> > brains, other mathematical mappings) can implement transitional rules -

> > there is nothing special about UTM sequential processors.

> Only that a UTM can implement any set of transitional rules. This is

> not necessarily true of the other entities you mention.

Mathematical mappings can also implement other transition rules. If all

possible transition rules are necessary for your scheme, it suffers from the

problems of (2) above.

> OK - I was using the dream idea as analogy, not asserting that dreams

> are real universes. My point was that human beings, and by somewhat

> iffy generalisation all SASes, will attempt to interpret random (or

> nonsense) data (in the K-C sense), and will partially succeed, even at

> the expense of logical consistency. I'm not convinced that the

> universe has to be logically consistent, or even logical at all.
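[An illustrative aside on "random in the K-C sense" - my sketch, not part of the original messages. Kolmogorov-Chaitin randomness means incompressibility; a general-purpose compressor like zlib is only a crude stand-in for K-C complexity, but it shows the contrast:]

```python
# Patterned data has a short description (low K-C complexity); typical
# random data does not. zlib is only an upper bound on K-C complexity,
# but the contrast is already stark at this length.
import os
import zlib

patterned = b"ab" * 5000          # highly regular 10,000-byte string
random_ish = os.urandom(10000)    # typical random string, same length

print(len(zlib.compress(patterned)))   # compresses to a few dozen bytes
print(len(zlib.compress(random_ish)))  # stays near 10,000 bytes
```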

Information represents what is true about (say) a universe. If you have

something that is both true and false (that is, a logical inconsistency), it

cannot be represented as information - information theory does not apply,

and so any logical-inconsistency-involving extensions to the 'Tegmark

plenitude' cannot be represented in the 'Schmidhuber plenitude'.
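[The representability point can be made concrete with a toy sketch - again mine, not the poster's. Take a possible world over n atomic propositions to be a truth assignment, i.e. a function from propositions to {True, False}; there are 2**n such consistent states, encodable in n bits, and a "state" making P both true and false is simply not among them:]

```python
# Every information-theoretically representable world over PROPS is a
# function from propositions to {True, False}: 2**n states, n bits each.
from itertools import product

PROPS = ["P", "Q"]

worlds = [dict(zip(PROPS, vals))
          for vals in product([True, False], repeat=len(PROPS))]
print(len(worlds))  # 4 consistent worlds, so 2 bits suffice

# An "inconsistent world" would need P -> True and P -> False at once;
# no function (hence no bitstring code) expresses that, so no encodable
# world satisfies both conditions.
inconsistent = [w for w in worlds if w["P"] is True and w["P"] is False]
print(len(inconsistent))  # 0
```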

(I'll try to read your paper next week)

Alastair

Received on Sun Nov 14 1999 - 08:18:36 PST


This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:06 PST