Fwd: COUNTERFACTUALS/Implementations

From: <GSLevy.domain.name.hidden>
Date: Sat, 3 Jul 1999 18:11:19 EDT

Marchal writes:
<<Physical realities are inside-(relative)-views by machines belonging
to a vast collection of -implementation independent- sharable dreams
(computationnal histories).>>

and
<<You could have said <<How do we make precise what amounts to
philosophical ramble. And how do we leave the field of Metaphysics and enter
the field of ...Information Science, or Engineering ...>>>>

and
<<- 2) ... untill you understand that with comp there is just no physical
activities at all, only relative believes in *physical activities* by
kind of *mean* machines (and the means are taken on the whole set of
infinite computationnal histories, ...>>

As I said, I am an engineer and am not too familiar with philosophical
terms. My approach has been to carry Galileo's and Einstein's principle of
relativity to its ultimate conclusion. Each "self" is an observer, unique in
his or her perspective on the MW. I do not agree at all with your phrase
<<"*mean* machines (and the means are taken on the whole set of
infinite computationnal histories, ..">>. This smacks of Wheeler's
participatory universe, which I also reject. It is like saying
that there is an Ether which defines absolute motion in the Universe. In
contrast, I believe that each "beholder" or observer carries his own frame of
reference. Each Self has his own perspective, or IS HIS OWN PERSPECTIVE, of
the MW.

Going back to relativity theory: its principles can be extended to the domain
of information theory and consciousness through Shannon's concept of "mutual
information," which I like to call "relative information." Here is an excerpt
from my book:

Beginning of Quote
<<
"Claude E. Shannon, an American mathematician and the founder of information
theory, derived a concept called mutual information, according to which the
amount of information contained in a given message transmitted from a source
to a destination is not fixed but depends on the information already
available at the destination. In other words, information is a relative
quantity. For example, if you are told that "a day on earth has 24 hours,"
the amount of information transmitted to you with those 27 characters,
including spaces, is zero, because I have not added anything new to your
knowledge base. However, if I tell you "a day on Mars has about 24.6 hours,"
the amount of information carried by these 34 characters is probably
significant, since you are less likely to know this fact. Thus, the
information transfer between us is relative to our mutual states of mind. By
the same token, perception of the world is relative to our frame of mind..."
>>
End Of Quote
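Shannon's point can be made concrete in a few lines of code. The sketch below
is my own illustration, not from the book: it treats a message's information
content as the surprisal log2(1/p), where p is the probability the receiver
already assigned to the message's content (the function name and the example
probabilities are invented for illustration).

```python
import math

def surprisal_bits(prior_probability: float) -> float:
    """Bits of information a message carries, relative to the receiver.

    prior_probability: how likely the receiver already considered the
    message's content to be, before hearing it.
    """
    return math.log2(1.0 / prior_probability)

# A fact the receiver already knows with certainty carries no information:
print(surprisal_bits(1.0))     # "a day on earth has 24 hours" -> 0.0 bits

# A fact the receiver thought unlikely (say, 1 chance in 16) carries more:
print(surprisal_bits(1 / 16))  # -> 4.0 bits
```

The same string thus carries different amounts of information to different
receivers, which is exactly the "relative information" idea in the excerpt.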

How does this affect our concept of consciousness? Each self can be viewed as
a "Godelian" machine with his own set of axioms and rules. When the behavior
of a "thinking" entity, A, is predictable from the point of view of a second
"thinking" entity, B, then from B's point of view A has no free will.
Otherwise, if A is not predictable, then A has free will. The degenerate case
occurs when A looks at himself. Can A predict his own thoughts (or his own
actions)? Obviously yes and no, for as soon as he makes the prediction he
also has the thought!
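As a toy illustration of this asymmetry (entirely my own sketch, with an
invented update rule standing in for A's "axioms and rules"), model A's
stream of thought as a deterministic function of its current state:

```python
def next_thought(state: int) -> int:
    # Invented toy rule standing in for A's deterministic axioms and rules.
    return (31 * state + 7) % 101

# B's frame: B knows A's rule and state, and B's computation does not
# disturb A, so A is perfectly predictable -- for B, A has no free will.
a_state = 42
b_prediction = next_thought(a_state)

# A's frame: to predict its own next thought, A must run the very rule
# that produces thoughts, so making the prediction IS having the thought.
a_prediction = next_thought(a_state)  # A computes the prediction...
a_state = a_prediction                # ...and in doing so already thinks it
```

From the outside the prediction and the thought are two different events;
from the inside they are the same event, which is the degenerate case.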

So my point of view, I guess, is not that computationalism (if I understand
the term) causes consciousness, but that it is the BREAKDOWN of
computationalism that does. Consciousness arises at the border between what
can be computationally known and what is, for Godelian reasons, beyond
computation. It requires one to REFLECT on oneself, in a kind of infinite
recursion, to experience the Self (Le Moi). It is a kind of logical black
hole, a blind spot of the mind, as I have explained earlier. Consciousness is
a consequence of the infinity of the MW... it is also related to the
experience of the divine (for the religious among you). EACH SELF IS HIS OWN
PERSPECTIVE OF THE MW.

Here is a little story that extends Newcomb's paradox, again an excerpt from
my book (excuse its lack of conciseness; also, please keep in mind that this
is copyrighted material).

Beginning of Quote
<<
Relativity of Free Will
A wonderful way to illustrate the relativity of free will is the Newcomb
paradox, named after its originator, William A. Newcomb, a theoretical
physicist at the University of California's Lawrence Livermore Laboratory.
The paradox was first published by Robert Nozick, a philosopher at Harvard
University, and reprinted by Martin Gardner in Scientific American (July 1973
and March 1974).

This paradox involves a God-like or extraterrestrial Being with an
extraordinarily precise knowledge of human psychology, capable of predicting
choices that humans make. His ability to second-guess people has been
unquestionably proven in numerous tests involving large numbers of people
making choices. Never once has this Being been wrong. He presents a human,
that we’ll call Alice, with two boxes, B1 and B2, and explains to her that
she will be given the opportunity to choose either B2 only, or both B1 and
B2. B1 will contain $1,000, and B2 will contain either $1 million or nothing. He
also explains that he already knows what her decision will be. If she decides
to select only B2, he will, before she actually makes her choice, deposit
$1,000 in B1 and $1 million in B2. If she decides to pick both B1 and B2 he
will deposit $1,000 in B1 and nothing in B2. He then asks Alice to leave the
room temporarily so that he can deposit money in the boxes. When Alice
returns, he asks her to make her choice. What should she do?

If she decides to take both boxes, the Being would most probably have left
B2 empty, and she will get only the $1,000 in B1. Contrariwise, if she
decides to take only B2, the Being would have put the million in it. This
argument appears to favor choosing only B2.

On the other hand, what the Being has done cannot be undone. Reverse
causality does not exist and present actions cannot influence the past. If
Alice were to select only B2, it would not change the fact that B1 contains
$1,000. So why not take it too? Clearly it is to Alice's advantage to take
both boxes.

Alice’s first reaction is to question the validity of the test and the
credibility of the Being. Given the laws of nature as she knows them, there
is no possible way for such a being to exist, she argues. She is then
assailed by self-doubt. Is it possible, after all, that her own consciousness
and free will are just a figment of somebody’s imagination? Is it possible
that the Being is God? Unable to resolve these issues, she confides in her
co-workers, her friends, and her clergyman. What could possibly motivate God
to give her such a test, she asks....

Implicit in the statement of this paradox is that Alice's brain operates
according to purely deterministic rules and that the Being uses this fact to
predict her decision. Also implicit is the assumption that Alice is making
conscious decisions. In fact, both of these assumptions could be challenged. As
explained previously, the brain may not operate according to purely
deterministic rules and such a perfectly prescient Being may be a physical
impossibility. In addition Alice’s consciousness may exist only in
relativistic terms. What appears to be a free choice for Alice may not be for
the super Being. How is this possible? We can illustrate these ideas by
extending Newcomb’s paradox.

As it turns out, Alice is a very good AI engineer. Her human simulation
program, the Advanced Digital Automatic Machine or ADAM has won the Turing
Olympiads several years in a row, having repeatedly convinced or fooled the
judges into believing that it is human. Whether the judges were "convinced"
or "fooled," of course, depended on the frame of reference of the observer
since consciousness is relative, as was already explained in the second
section. Alice was of the firm opinion that the judges were fooled, but her
boss and coworkers were more kind. They believed that she had truly created a
conscious machine.

Having unsuccessfully grappled with questions such as who the Being is and
why he gave her this test, she decides to conduct an experiment to find out
how ADAM would react to the same test. After examining ADAM’s programming
very carefully, she finds that one of the parameters used by ADAM’s
subroutines has a high value. As a result, ADAM has a tendency to assert its
independence from authority; therefore, if ADAM is ever faced with the
Newcomb test, it will pick both boxes.

She then gives the Newcomb test to ADAM, herself playing the role of the
super Being. The results are interesting. First, ADAM goes into a flurry of
digital activity. It questions the validity of the test itself and the
credibility of Alice, claiming that a being such as Alice is physically
impossible. It then laments that its own consciousness may be an
illusion. It even links to chat rooms on the Internet to discuss with real
humans the theological implications of Alice being God and what could
possibly be her motive in giving this test. Finally, ADAM makes its choice.
And as Alice has predicted, it picks both boxes…

Annoyed with the predictability of her own program, Alice turns the computer
off. After pondering for five minutes how to make her program unpredictable,
she shrugs her shoulders and walks away. She then goes back to the super
Being and picks both boxes…

This example illustrates the relativity of free will. ADAM is a very good
human simulation program which has passed numerous Turing tests. From the
point of view of the judges, there is no question that ADAM is conscious and
has free will. From the point of view of Alice, however, ADAM is very
predictable and has fooled the judges into believing that it is human. It
consists of 5,000,000 lines of code which she has spent the past ten years of
her life writing. By the same token, from the point of view of her
colleagues, Alice appears to be conscious and to have free will, but from the
point of view of the Being, she is a very predictable ensemble of atoms.

Consciousness and free will are relativistic concepts that depend on the
frame of reference of the observer and in this case, the frame of reference
is the mind of the observer. When the observer is a superior being in
comparison with the observed, the observed is perceived as having no free
will. When the observer is an inferior being, the observed is perceived as
having undisputed free will. When the observer observes himself, then the
perception of free will becomes undecidable. This is precisely the situation
where consciousness arises.

>>
End of Quote
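The two competing arguments in the excerpt (take both boxes vs. take only B2)
can be compared numerically. Here is a minimal sketch, assuming only the
payoffs stated in the story; the function name and the accuracy parameter p
are mine, introduced to cover predictors that are right with probability p
rather than always:

```python
def expected_payoff(choice: str, p: float) -> float:
    """Expected winnings in Newcomb's problem against a predictor
    who is correct with probability p."""
    if choice == "one_box":          # take only B2
        # With probability p the Being foresaw this and put $1M in B2.
        return p * 1_000_000
    elif choice == "two_box":        # take both B1 and B2
        # B1's $1,000 is certain; B2 holds $1M only if the Being erred.
        return 1_000 + (1 - p) * 1_000_000
    raise ValueError(choice)

# Against the story's perfect predictor (p = 1), one-boxing dominates:
print(expected_payoff("one_box", 1.0))   # expected $1,000,000
print(expected_payoff("two_box", 1.0))   # expected $1,000
# Against a coin-flipping "predictor" (p = 0.5), two-boxing dominates:
print(expected_payoff("two_box", 0.5))   # expected $501,000
```

The paradox lives in whether p = 1 is coherent at all, which is exactly the
question Alice (and ADAM) struggle with.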



George Levy

attached mail follows:



George Levy wrote:

> How do we make precise what amounts to
>philosophical ramble. And how do we leave the field of Metaphysics and enter
>the field of Physics.


You are not helping me very much, here, George, :-(

You could have said <<How do we make precise what amounts to
philosophical
ramble. And how do we leave the field of Metaphysics and enter
the field of ...Information Science, or Engineering ...>>

I guess it's my fault. I should have read more carefully one of your
post
where you ask for an explanation. Here is your post :

>In a message dated 99-06-30 11:20:07 EDT, marchal.domain.name.hidden writes:
>
><< Precisely: Maudlin and me have proved that:
>
> NOT comp OR NOT sup-phys
>
> i.e. computationalism and physical supervenience thesis are incompatible. >>
>
>Forgive me for I am only a lowly engineer. Does the above mean that
>according
>to Marchal and Maudlin consciousness is either due to "software" or
>"hardware" but not both? Using these terms would make it much easier for me
>to understand.

Put in these terms, and simplifying a bit, what Maudlin and I have shown is
that

EITHER the appearance of hardware and consciousness
is explain(able) by the theory of possible softwares (computer science, ...)

OR the computationalist hypothesis is false.

That is why I ask for, ultimately, a serious consideration on Church's
thesis.

I agree with Wheeler that physics cannot explain the origin of the physical
laws by itself. Physics no doubt inspired our issues, but physics per se
doesn't help, for in some sense physics IS the issue.
See also my post to Jacques M Mallah.

Bruno
Received on Sat Jul 03 1999 - 15:13:32 PDT
