Re: Fwd: COUNTERFACTUALS/Implementations

From: Christopher Maloney <dude.domain.name.hidden>
Date: Fri, 09 Jul 1999 22:43:19 -0400

GSLevy.domain.name.hidden wrote:
>
>
> Beginning of Quote
> <<
> ""Claude E. Shannon, an American mathematician, the founder of information
> theory, derived a concept called mutual information according to which, the
> amount of information contained in a given message transmitted from a source
> to a destination database, is not fixed but depends on the information
> already available at the destination database. In other words information is
> a relative quantity. For example, if you are told that "a day on earth has 24
> hours," the amount of information transmitted to you with 27 characters,
> including spaces, is zero because I have not added anything new to your
> knowledge base. However if I tell you "a day on Mars has 23 hours," the
> amount of information carried by these 25 characters is probably significant
> since you are less likely to know this fact. Thus, the information transfer
> between us is relative to our mutual states of mind. By the same token,
> perception of the world is relative to our frame of mind..."
> >>
> End Of Quote
>
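
A quick aside: Shannon's point here is easy to make concrete. The
information content of a message, in bits, is just log2 of one over the
probability the receiver assigned to it beforehand. A toy Python sketch
(the priors are invented for illustration, of course):

    import math

    def surprisal_bits(prior_probability):
        # Shannon information content, in bits, of a message, given
        # the probability the receiver assigned to it in advance.
        return math.log2(1.0 / prior_probability)

    # A fact the receiver already knows: prior essentially 1, so the
    # message carries zero information.
    print(surprisal_bits(1.0))         # 0.0 bits

    # A fact the receiver is unlikely to know: small prior, so a
    # message of similar length carries many bits.
    print(surprisal_bits(1.0 / 1024))  # 10.0 bits

Same characters on the wire, very different information delivered; the
difference lives entirely in the receiver's prior knowledge.
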
> How does this affect our concept of consciousness? Each self can be viewed as
> a "Godelian?" machine with his own set of axioms and rules. When the behavior
> of a "thinking" entity, A, is predictable, from the point of view of a second
> "thinking" entity, B, then from B's point of view, A has no free will.
> Otherwise, if A is not predicatble, then A has free will. The degenerate case
> occurs when A looks at himself. Can A predict his own thoughts (or his own
> actions)? Obviously yes and no, for as soon as he makes the prediction he
> also has the thought!
>
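
(Interjecting here: the degenerate case can be made vivid with a toy
Python sketch. This is not a model of a mind, just an illustration of a
system whose act of predicting its own next state changes the very state
the prediction was about:

    import hashlib

    class Agent:
        def __init__(self, state):
            self.state = state

        def next_thought(self):
            # The next "thought" is a deterministic function of the
            # current state.
            return hashlib.sha256(self.state.encode()).hexdigest()[:8]

        def predict_own_next_thought(self):
            prediction = self.next_thought()
            # Forming the prediction is itself a thought, which
            # updates the state the prediction described.
            self.state += prediction
            return prediction

    a = Agent("initial state")
    predicted = a.predict_own_next_thought()
    print(predicted == a.next_thought())  # False: making the
                                          # prediction invalidated it

An outside observer B, who does not have to store the prediction inside
A, has no such problem.)
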
> So my point of view, I guess, is not that computationalism (if I understand
> the term) causes consciousness, but that it is the BREAKDOWN of computationalism
> that does. Consciousness arises at the border between what can be
> computationally known and what is, for Godelian reasons, beyond computation.
> It requires one to REFLECT on oneself, in a kind of infinite recursion, to
> experience the Self (Le Moi). It is kind of a logical black hole, a blind
> spot of the mind, as I have explained earlier.

I just don't buy this at all. Whenever I read things like this, hinting
at some sort of magic produced by self-reference, I get uneasy. The way
I understand it is that for any intelligent, information-processing
system, thinking about something involves making internal models of some
(perhaps external) phenomenon. The models are necessarily simplified;
detail is elided. While it might be possible for an AI to know, in
principle, all of its own internal workings down to the lowest level, it
would nevertheless be impossible for it to "simulate itself". Using a
simple von Neumann description, a perfect-fidelity self-simulation would
involve emulating its own program, *and duplicating all of its internal
data*. The latter is impossible: the duplicate would itself be part of
the internal data, and so would have to contain its own duplicate, and
so on without end.
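
Here's a crude Python sketch of that regress (purely illustrative, no
claim about any real architecture): a machine that tries to build a
perfect-fidelity model of itself must duplicate all its data, including
the model, which must in turn contain its own model, and so on.

    class Machine:
        def __init__(self, data):
            self.data = data  # all of the machine's internal data

        def perfect_self_model(self, depth=0):
            # A perfect-fidelity model duplicates ALL internal data,
            # and the model itself is part of that data, so the
            # duplication never bottoms out.
            if depth > 5:  # cut the regress off for the demo
                raise RecursionError("self-model never terminates")
            model = Machine(dict(self.data))
            model.data["self_model"] = model.perfect_self_model(depth + 1)
            return model

    m = Machine({"memory": [1, 2, 3]})
    try:
        m.perfect_self_model()
    except RecursionError as err:
        print(err)  # the duplication never terminates

A simplified model, by contrast, just leaves the self-model out (or
shrinks it), which is why imperfect self-models are possible and perfect
ones are not.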

So when we're thinking about ourselves, we're just manipulating and
mangling a model, the same as with any other model of any external
object, like a cat or a poem.

(More below.)


> Consciousness is a consequence
> of the infinity of the MW.... it is also related to the experience of the
> divine (for the religious among you). EACH SELF IS HIS OWN PERSPECTIVE OF THE
> MW.
>
> Here is a little story that extends Newcomb's paradox, again an excerpt from
> my book (excuse its lack of conciseness; also, please keep in mind that this
> is copyrighted material).
>

Bravo on the following quote (and on the one above, BTW). I really like
your writing style! If you published your book, I would definitely get a
copy! This story reminded me of "The Princess Ineffabelle", by Stanislaw
Lem, reprinted in "The Mind's I". Also, when I think about it, it's
similar to another story in that collection, Raymond Smullyan's "An
Epistemological Nightmare".

Yet it reinforces, for me, what I wrote above: that the only way to get
a perfect simulation is to actually reproduce the conscious process. Thus
a conscious entity can never perfectly know itself, and thus, subjectively,
there is no problem with free will.



> Beginning of Quote
> <<
> Relativity of Free Will
> A wonderful way to illustrate the relativity of free will is the Newcomb
> paradox, named after its originator, William A. Newcomb, a theoretical
> physicist at the University of California, Lawrence Livermore Laboratory.
> This paradox was first published by Robert Nozick, a philosopher at
> Harvard University, and reprinted by Martin Gardner in Scientific American
> (July 1973 and March 1974).
>
> [snip]


-- 
Chris Maloney
http://www.chrismaloney.com
"Knowledge is good"
-- Emil Faber
Received on Fri Jul 09 1999 - 20:40:20 PDT
