Brent Meeker writes:
> > A non-conscious computation cannot be *useful* without the manual/interpretation,
> > and in this sense could be called just a potential computation, but a conscious
> > computation is still *conscious* even if no-one else is able to figure this out or
> > interact with it. If a working brain in a vat were sealed in a box and sent into
> > space, it could still be dreaming away even after the whole human race and all
> > its information on brain function had been destroyed in a supernova explosion. As far
> > as any alien is concerned who comes across it, the brain might be completely
> > inscrutable, but that would not make the slightest difference to its conscious
> > experience.
>
> Suppose the aliens re-implanted the brain in a human body so they could interact with
> it. They ask it what it was "dreaming" all those years. I think the answer might
> be, "Years? What years? It was just a few seconds ago I was in the hospital for an
> appendectomy. What happened? And who are you guys?"
Maybe so; even more likely, the brain would simply die. But these are contingent facts about
human brains, whereas thought experiments require only theoretical possibility.
> >>>>> then it can be seen as implementing more than one computation
> >>>>> simultaneously during the given interval.
> >>>>
> >>>> AFAICS that is only true in terms of dictionaries.
> >>>
> >>> Right: without the dictionary, it's not very interesting or relevant to *us*.
> >>> If we were to actually map a random physical process onto an arbitrary
> >>> computation of interest, that would be at least as much work as building and
> >>> programming a conventional computer to carry out the computation. However,
> > > doing the mapping does not make a difference to the *system* (assuming we aren't
> > > going to use the mapping to interact with the system). If we say that under a certain
> >>> interpretation - here it is, printed out on paper - the system is implementing
> >>> a conscious computation, it would still be implementing that computation if we
> >>> had never determined and printed out the interpretation.
>
> And if you added the random values of the physical process as an appendix to the
> manual, would the manual itself then be a computation (the record problem)? If so,
> how would you tell whether it were a conscious computation?
The actual physical process then becomes almost irrelevant. In the limiting case, all of the
computation is contained in the manual. But the manual's physical existence makes no
difference to whether or not the computation is implemented, since it makes no difference
to the actual physical activity of the system, and the theory under consideration is that
consciousness supervenes on this physical activity. If we drop the qualifier "almost", the
result is close to Bruno's theory, according to which the physical activity is irrelevant
and the computation is "run" by virtue of its status as a Platonic object. As I understand
it, Bruno arrives at this position because it seems less absurd than the idea that consciousness
supervenes on any and every physical process, while Maudlin finds both ideas absurd and
concludes that there is something wrong with computationalism.
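To make the "dictionary" point concrete, here is a minimal sketch in Python (purely
illustrative; the target computation, the state encoding and all the names are my own
arbitrary choices) of how an after-the-fact mapping lets a random sequence of physical
states "implement" any computation you like:

import random

# "Physical process": a random sequence of otherwise meaningless states.
physical_states = [random.getrandbits(32) for _ in range(8)]

# Target computation: the successive states we want to read into the process
# (here, a 3-bit counter: 000, 001, ..., 111).
target_states = [format(n, "03b") for n in range(8)]

# The "manual": built post hoc by stipulating, for each time step, which
# computational state the observed physical state represents.
manual = {(t, phys): comp
          for t, (phys, comp) in enumerate(zip(physical_states, target_states))}

# Under this interpretation the random process "implements" the counter:
for t, phys in enumerate(physical_states):
    print("step %d: physical state %10d -> computational state %s"
          % (t, phys, manual[(t, phys)]))

All of the structure lives in the manual and none in the physical process, and the
mapping holds only for this one recorded run - which is just the limiting case described
above, where the computation is contained entirely in the manual.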
> >> The problem remains that the system's own self-awareness, or lack thereof, is
> >> not observer-relative. Something has to give.
> >
> >
> > Self-awareness is observer-relative with the observer being oneself. Where is the
> > difficulty?
>
> Self-awareness is awareness of some specific aspect of a construct called "myself".
> It is not strictly reflexive awareness of being aware of being aware... So in
> an abstract computation it is just one part of the computation standing in some relation
> we identify as "awareness" to some other part of the computation. I think
> it is a matter of constructing a narrative for memory in which "I" is just another
> player.
I don't think "self-awareness" captures the essence of consciousness. We commonly think
that consciousness is associated with intelligence, which is perhaps why it is often stated
that a recording cannot be conscious, since a recording will not adapt to its environment in
the manner we normally expect of intelligent agents. However, consider the experience of
pain when you put your hand over a flame. There is certainly intelligent behaviour associated
with this experience - learning to avoid it - but there is nothing "intelligent" about the raw
experience of pain itself. It simply seems that when certain neurons in the brain fire, you
experience a pain, as reliably and as stupidly as flicking a switch turns on a light. When an
infant or an animal screams in agony it is not engaging in self-reflection, and for that matter
neither is a philosopher: acute pain usually displaces every other concurrent conscious
experience. A being that has a recording of a painful experience played over and over into
the relevant neural pathways may not be able to interact meaningfully with its environment,
but it will be hellishly conscious nonetheless.
Stathis Papaioannou