>
> I think our views could be quite close here - if only I could persuade you
> to forget about illogical universes!
>
> ----- Original Message -----
> From: Russell Standish <R.Standish.domain.name.hidden>
> > In the Schmidhuber plenitude, non-wff bitstrings are perfectly
> > reasonable universes. It is not clear whether they would have
> > self-aware substructures, and what the SASes would make of their universe.
>
> This is one area of your scheme that is not clear to me. As I have mentioned
> before, the potential roles of TM's (or mappings in general) should not be
> confused:
>
> (1) Role of TM solely as an inference engine (implementing the specific
> transition rules of Modus Ponens, substitutivity and so on) - this creates
> at least two problems if it is applied to your scheme:
> (a) An inference engine can do nothing with a non-wff (it is outside the
> context of its application).
> (b) Why do we have this specific TM - why not others? So we have to go to
This is indeed the role of the TM. As far as problem (a) is concerned,
all bitstrings are interpretable as valid programs. However, the
output could be rubbish, and may correspond to a non-wff, an
illogical universe, or even "nothing at all" - complete randomness.
The thing to keep in mind here is that there are two levels of
description: the bitstring/UTM level, and the mathematical-structure
interpretation of that bitstring.
Problem (b) I address in the paper. The bitstrings need to be
interpreted by whatever SAS inhabits the world so constructed; no
other TM is required. I believe it is not even necessary for the SAS
to be a UTM; however, given our current state of knowledge, it is
simpler to assume that it is. It would appear that we humans are
capable of universal computation.
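To make the two levels of description concrete, here is a minimal
sketch (in Python) of a toy machine for which every bitstring is a
valid program. The particular opcodes, step limit and sample strings
are arbitrary choices for illustration only - nothing in the argument
depends on them:

    import itertools

    def run(bits, steps=64):
        """Decode a bitstring, two bits per opcode, into a toy
        4-instruction machine acting on one counter, and return
        whatever the machine emits."""
        ops = [bits[i:i+2] for i in range(0, len(bits) - 1, 2)]
        acc, out = 0, []
        for op in itertools.islice(itertools.cycle(ops), steps):
            if op == '00':
                acc += 1           # increment the counter
            elif op == '01':
                acc -= 1           # decrement the counter
            elif op == '10':
                out.append(acc)    # emit the current counter value
            else:
                acc = 0            # reset the counter
        return out

    # Every 8-bit string is a legal program at the bitstring/UTM level.
    # Most outputs look like rubbish; a few look structured.  Reading
    # any mathematical structure into the output is a second,
    # interpretive step.
    for n in (0, 1, 37, 170, 201, 255):
        bits = format(n, '08b')
        print(bits, run(bits))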
> (2):
>
> (2) A set of UTM's operating all possible rules on bitstrings. But now the
> term non-wff becomes defunct, since whatever program runs, operates on the
> bits regardless of whether they denote symbols or not - I cannot see how
> anything coherent comes out the other end.
The problem with a full set of UTMs is that it doesn't lead to a
consistent measure.
> The Tegmark plenitude as built
> from wff strings is likely to be an insignificant minority of universes
> anyway in this scenario. If it all depends on the interpretation of the
> output bits, then why have the UTM process in the first place - why not just
> have all possible bit combinations? We are now anyway operating outside the
> context of wff's and non-wffs.
>
The UTM defines a measure structure; otherwise all you have is the
uniform measure, in which each bitstring is as likely as any other
and carries no information. As stated before, the SAS itself really
is the computational structure that defines the information
structure. Any other possibility seems to require the ad hoc
existence of something that smacks of being G.O.D.
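To spell out what I mean by a measure structure: the standard
construction (essentially the Solomonoff/Levin universal prior - I am
sketching the textbook version here, not the detailed construction in
the paper) weights each output string by the programs of a prefix UTM
U that produce it, so that compressible strings dominate, whereas the
uniform measure treats every length-n string alike:

    % Universal measure induced by a prefix UTM U (p ranges over
    % halting programs of U), versus the uniform measure on length-n
    % strings, which is maximum entropy - no information.
    \[
      m_U(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}
             \;\approx\; 2^{-K_U(x)},
      \qquad
      \lambda(x) \;=\; 2^{-n} \quad \text{for all } x \in \{0,1\}^n .
    \]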
> Assuming your scheme either comes under (2), or doesn't involve UTM's at
> all, I can't quite see where it differs from our earlier scheme based on all
> possible interpretations of bitstrings (which as far as I understood it
> featured a minimal bit specification of our TOE as the representation of the
> SAS-compatible universe with the highest measure). Is your scheme using a
> UTM or mapping as some kind of part-interpreter from a random bit string to
> a frog-level (or phenomenological) universe specification? (In short, how
> does the interpretation relate to the mapping/TM-process?)
>
> > You could interpret them as anything you like - UTM programs, axioms
> > of a mathematical theory, the works of Shakespeare. I'm not sure why
> > you are asking this though.
>
> Again, to try to see how your scheme differs from our earlier one.
>
I don't believe what I'm saying now really differs from previous
schemes - if anything, it is only an evolutionary refinement of them.
> > > Note that many other systems (for example pre-programmed microchips,
> > > brains,
> > > other mathematical mappings) can implement transitional rules - there is
> > > nothing special about UTM sequential processors.
> >
> > Only that a UTM can implement any set of transitional rules. This is
> > not necessarily true of the other entities you mention.
>
> Mathematical mappings can also implement other transition rules. If all
> possible transition rules are necessary for your scheme, it suffers from the
> problems of (2) above.
>
> > OK - I was using the dream idea as analogy, not asserting that dreams
> > are real universes. My point was that human beings, and by somewhat
> > iffy generalisation all SASes will attempt to interpret random (or
> > nonsense) data (in the K-C sense), and will partially succeed, even at
> > the expense of logical consistency. I'm not convinced that the
> > universe has to be logically consistent, or even logical at all.
>
> Information represents what is true about (say) a universe. If you have
> something that is both true and false (that is, a logical inconsistency), it
> cannot be represented as information - information theory does not apply,
> and so any logical-inconsistency-involving extensions to the 'Tegmark
> plenitude' cannot be represented in the 'Schmidhuber plenitude'.
>
The result that in an inconsistent theory every statement is
trivially provable surely only holds in logical universes. Why
shouldn't a universe that is not rigidly governed by the laws of
logic be partially inconsistent? Remember, all of these are possible
in the Schmidhuber plenitude.
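For reference, the result in question is the usual "explosion"
derivation - from a single contradiction any statement B follows -
but note that every step below presupposes exactly the rules of
inference that an illogical universe need not respect:

    % Ex falso quodlibet, in ordinary classical/intuitionistic logic.
    \begin{align*}
      1.\;& A \wedge \neg A   && \text{(the inconsistency)}\\
      2.\;& A                 && \text{(1, $\wedge$-elimination)}\\
      3.\;& \neg A            && \text{(1, $\wedge$-elimination)}\\
      4.\;& A \vee B          && \text{(2, $\vee$-introduction)}\\
      5.\;& B                 && \text{(3, 4, disjunctive syllogism)}
    \end{align*}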
> (I'll try to read your paper next week)
>
> Alastair
>
>
>
>
----------------------------------------------------------------------------
Dr. Russell Standish Director
High Performance Computing Support Unit,
University of NSW Phone 9385 6967
Sydney 2052 Fax 9385 6965
Australia R.Standish.domain.name.hidden
Room 2075, Red Centre
http://parallel.hpc.unsw.edu.au/rks
----------------------------------------------------------------------------
Received on Sun Nov 14 1999 - 16:35:28 PST