Re: Consciousness is information?

From: Bruno Marchal <marchal.domain.name.hidden>
Date: Sun, 24 May 2009 07:54:29 +0200

OK. So, now, Kelly, just to understand what you mean by your theory, I
have to ask you what your theory predicts in the case of
self-multiplication.
You have to see that, personally, I don't have a theory other than the
assumption that the brain is emulable by a Turing machine, and by
brain I mean any portion of my local neighborhood needed for surviving
the comp functional substitution. This is the comp hypothesis.

We are both modal realists(*), and, true, worlds (histories) with
white rabbits exist, and from the inside they are as actual as our
present state. But then I say that, as a consequence of the comp hyp,
there is a relative probability or credibility measure on those
histories. To see where those probabilities come from, you have to
understand that 1) you can be multiplied (that is: read, copied (cut)
and pasted in Washington AND Moscow, say), and 2) you are multiplied
(by 2^aleph_zero, at each instant, with a comp definition of instant
not related in principle to any form of physical time).
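
To make the counting intuition concrete, here is a toy sketch of mine
(only an illustration of a naive counting measure over iterated W/M
self-duplications, written in Python, with an arbitrary number of
steps N; it is not the comp derivation itself):

    import itertools
    from collections import Counter

    N = 16  # number of successive W/M self-duplications (toy "instants")

    # Every length-N sequence of W/M labels is one first-person history;
    # the naive measure here is just counting over the 2^N histories.
    histories = itertools.product("WM", repeat=N)

    # How "lawful" does a typical history look?  Tally the number of W's.
    distribution = Counter(h.count("W") for h in histories)

    for k in sorted(distribution):
        share = distribution[k] / 2 ** N
        print(f"{k:2d} W's out of {N}: relative measure {share:.4f}")

    # Most of the counting measure sits on histories with close to N/2
    # W's: the typical first-person record looks like a fair coin
    # sequence, while extreme records (all W, say) get a vanishing
    # share of the measure as N grows.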

What does your theory predict concerning your expectations in such an
experience/experiment?

The fact is that your explanation, that we are in a typical universe
because such universes exist as well, just does not work with the comp
hyp. It does not work because it does not explain why we REMAIN in
those typical worlds. It seems to me that, as far as I can put meaning
on your view, the probability that I will see a white rabbit in two
seconds is as great as the probability that I will see anything else,
and this is in contradiction with the facts. What makes us stay in
apparently lawful histories?

What does your theory predict about agony and death, from the first
person point of view? This is an extreme case where comp is markedly
in opposition to "Aristotelian naturalism".

Maybe you could study the UDA, and tell me directly at which step your
"theory" departs from the comp hyp. It has to depart, because you say
below that we are in a quantum reality by chance, whereas the comp hyp
explains why we have to be (even after death) in a quantum reality.

Bruno

(*) Once and for all, when I say I am a modal realist, I really mean
this: "I have an argument showing that the comp theory imposes modal
realism". I am really not defending any theory. I am just showing
that the comp theory leads to precise and verifiable/refutable facts.
I am a logician: all that I show people is that IF you believe this,
THEN you have to believe that. It is part of my personal religion that
my personal religion is personal and private (and evolvable).



On 23 May 2009, at 23:56, Kelly Harmon wrote:

>
> On Sat, May 23, 2009 at 8:47 AM, Bruno Marchal <marchal.domain.name.hidden>
> wrote:
>>
>>
>>> To repeat my
>>> earlier Chalmers quote, "Experience is information from the inside;
>>> physics is information from the outside." It is this subjective
>>> experience of information that provides meaning to the otherwise
>>> completely abstract "platonic" symbols.
>>
>>
>> I have insisted on this since well before Chalmers. We agree on
>> this. But then you associate consciousness with the experience of
>> information. This is what I told you. I can understand the relation
>> between consciousness and information content.
>
> Information. Information content. Hmmmmmmmm. Well, I'm not entirely
> sure what you're saying here. Maybe I don't have a problem with this,
> but maybe I do. Maybe we're really saying the same thing here, but
> maybe we're not. Hmmmmm.
>
>
>>> Note that I don't have Bruno's fear of white rabbits.
>>
>> Then you disagree with all readers of David Lewis, including David
>> Lewis himself, who recognized this inflation of too many realities
>> as a weakness of his modal realism. My point is that the comp
>> constraints lead to a solution of that problem, indeed a solution
>> close to the quantum Everett solution. But the existence of white
>> rabbits, and thus the correctness of comp, remains to be tested.
>
> True, Lewis apparently saw it as a cost, BUT not so high a cost as to
> abandon modal realism. I don't even see it as a high cost; I see it
> as a logical consequence. Again, it's easy to imagine a computer
> simulation/virtual reality in which a conscious observer would see
> disembodied talking heads and flying pigs. So it certainly seems
> possible for a conscious being to be in a state of observing an
> unattached talking head.
>
> Given that it's possible, why wouldn't it be actual?
>
> The only reason to think that it wouldn't be actual is that our
> external objectively existing physical universe doesn't have physical
> laws that can easily lead to the existence of such talking heads being
> observed. But once you've abandoned the external universe and
> embraced platonism, then where does the constraint against observing
> talking heads come from?
>
> Assuming platonism, I can explain why "I" don't see talking heads:
> because every possible Kelly is realized, and that includes a Kelly
> who doesn't observe disembodied talking heads and who doesn't know
> anyone who has ever seen such a head.
>
> So given that my observations aren't in conflict with my theory, I
> don't see a problem. The fact that nothing that I could observe would
> ever conflict with my theory is also not particularly troubling to me
> because I didn't arrive at my theory as a means of explaining any
> particular observed fact about the external universe.
>
> My theory isn't intended to explain the contingent details of what I
> observe. It's intended to explain the fact THAT I subjectively
> observe anything at all.
>
> Given that it seems theoretically possible to create a computer
> simulation that would manifest any imaginable conscious being
> observing any imaginable "world", including schizophrenic beings
> observing psychedelic realities, I don't see why you are trying to
> constrain the platonic realities that can be experienced to those that
> are extremely similar to ours.
>
>
>> It is just a question of testing a theory. You seem to say something
>> like "if the theory predicts that water on the fire will typically
>> boil, and experience does not confirm that typicality (the water
>> freezes regularly), then it means we are just very unlucky". But
>> then all theories are correct.
>
> I say there is no water. There is just our subjective experience of
> observing water. Trying to constrain a Platonic theory of
> consciousness so that it matches a particular observed physical
> reality seems like a mistake to me.
>
> Is there a limit to what we could experience in a computer simulated
> reality? If not, why would there be a limit to what we could
> experience in Platonia?
>
>
>>> The double-aspect principle stems from the observation that there
>>> is a
>>> direct isomorphism between certain physically embodied information
>>> spaces and certain phenomenal (or experiential) information spaces.
>>
>> This can be shown false in quantum theory without collapse, and more
>> easily with the comp assumption.
>> No problem if you tell me that you reject both Everett and comp.
>> Chalmers seems in some places to accept both Everett and comp,
>> indeed. He explained to me that he stops at step 3. He believes that
>> after a duplication you feel yourself to be simultaneously in both
>> places, even assuming comp. I think, and can argue, that this is
>> nonsense. Nobody defends this on the list. Are you defending an idea
>> like that?
>
> I included the Chalmers quote because I think it provides a good image
> of how abstract information seems to supervene on physical systems.
> BUT by quoting the passage I'm not saying that I think that this
> appearance of supervenience is the source of consciousness. I still
> buy into the Putnam mapping view that there is no 1-to-1 mapping from
> information or computation to any physical system, which of course
> makes physicalism untenable as an explanation for consciousness.
>
> As for Everettian MWI, I don't think that quantum mechanics has
> anything to do with conscious experience. The fact that we see a
> world which is apparently quantum mechanical in nature is a
> coincidence. A fluke. In keeping with an unconstrained platonic
> theory of consciousness, I would expect that there are other conscious
> observers who experience other very different worlds where they make
> observations that are not consistent with quantum mechanics.
>
>
>> Perhaps. I don't see the relevance. It is quite coherent with comp
>> that some form of meaning can be approached in this or similar ways.
>> Assuming comp, what can be considered lacking is the self-reference
>> of the universal machine involved in the attribution of meaning.
>
> I included the LSA discussion because I think it gives a good image of
> how I see information being structured in a platonic sense, as
> relationships between symbols, and also because it said some
> interesting things about the symbol grounding problem.
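>
> As a purely illustrative sketch of that "relationships between
> symbols" picture (not a reproduction of the LSA discussion itself;
> the terms, the counts and the use of numpy are my own invented
> example):
>
>     import numpy as np
>
>     # Toy term-context count matrix: rows are symbols, columns are
>     # contexts in which they occur.  The numbers are made up.
>     terms = ["dog", "cat", "bone", "carburetor"]
>     X = np.array([
>         [2, 0, 1, 0],   # "dog"
>         [1, 1, 0, 0],   # "cat"
>         [1, 0, 2, 0],   # "bone"
>         [0, 0, 0, 3],   # "carburetor"
>     ], dtype=float)
>
>     # A low-rank decomposition turns each symbol into a vector defined
>     # only by its pattern of co-occurrence with the other symbols --
>     # pure relational structure, with no external referent.
>     U, s, Vt = np.linalg.svd(X, full_matrices=False)
>     vectors = U[:, :2] * s[:2]
>
>     def cosine(a, b):
>         return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
>
>     # "dog" comes out closer to "bone" than to "carburetor", purely
>     # from the web of relationships between the symbols.
>     print(cosine(vectors[0], vectors[2]))   # dog vs bone
>     print(cosine(vectors[0], vectors[3]))   # dog vs carburetor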
>
>
>> With comp, those other "sensory modalities" are coded before being
>> processed by the brain, or the universal machine under consideration.
>
> I agree: other sensory modalities are just more ungrounded tokenized
> information that is included in the web of relationships which
> ultimately, when consciously experienced "from the inside", provides
> meaning to the otherwise purely abstract "ungrounded" platonic
> symbols.
>
>
>> Kelly, the question is: do we disagree?
>
> I've wondered that too. It could be that we only differ on a few
> relatively minor points.
>
>
>> I criticize your statement
>> "consciousness = information" for vagueness, but only BECAUSE you
>> have
>> oppose it to the computationalist hypothesis,
>
> I don't deny that there are computational/arithmetical descriptions of
> how instances of consciousness can be related. We agree on that. I
> just question what role, OTHER than describing the possible
> relationships between sets of information, computation plays.
> Given that many algorithms can produce the same output from the same
> input, I am inclined to say that it's the output that matters for
> consciousness, not the algorithm.
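>
> For instance (a toy sketch in Python, with two arbitrary algorithms of
> my own choosing):
>
>     def sum_loop(n):
>         # Algorithm 1: accumulate term by term.
>         total = 0
>         for i in range(1, n + 1):
>             total += i
>         return total
>
>     def sum_formula(n):
>         # Algorithm 2: Gauss's closed form, a very different computation.
>         return n * (n + 1) // 2
>
>     # Different algorithms, identical input/output behaviour.
>     assert all(sum_loop(n) == sum_formula(n) for n in range(100))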
>
> It seems to me that connections between instances of consciousness
> are implied, but that there's nothing "real" actually binding these
> instances together other than the subjective feelings of continuity
> arising from the memory each instance has of previous instances. But
> "memory" would seem to be an informational/data-related concept, I'd
> think, not an algorithmic one.
>
> If an algorithm results in the overwriting or erasure of memory, then
> there is no longer the flow of conscious experience. The algorithm
> doesn't provide that "subjective" connection between instances of
> consciousness. The information stored in memory does.
>
> >

http://iridia.ulb.ac.be/~marchal/



