Re: computationalism and supervenience

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Wed, 06 Sep 2006 21:22:15 -0700

Stathis Papaioannou wrote:
> Peter Jones writes:
>
>
>>>>>> But if implementing a particular computation depends on an observer, or
>>>>>> a dictionary, or some such, it is not the case that everything implements
>>>>>> every computation unless it can be shown that every dictionary somehow
>>>>>> exists as well.
>>>>>
>>>>> The computation provides its own observer if it is conscious, by
>>>>> definition.
>>>>
>>>> But "providing its own observer", if computationalism is true, must be a
>>>> computational property, ie. a property possesed only by particular
>>>> programmes. However, if any system can be interpreted as running every
>>>> programme, everysystems has the self-observation property, if interpretedt
>>>> he right way.
>>>>
>>>> IOW, once you introduce interpretation-dependence, you can't get away from
>>>> it.
>>>
>>> That's right: if there is at least one physical system, then every computation
>>> is implemented, although we can only interact with them at our level if they
>>> are implemented on a conventional brain or computer, which means we have the
>>> means to interpret them at hand. The non-conscious computations are "there" in
>>> the trivial sense that a block of marble contains every possible statue of a
>>> given size.
>>
>> All the computations are merely potential, in the absence of interpreters and
>> dictionaries, whether conscious or not.
>>
>>
>>> The conscious computations, on the other hand, are there and self-aware
>>
>> Not really. They are just possibilities.
>>
>>
>>> even though we cannot interact with them, just as all the statues in a block
>>> of marble would be conscious if statues were conscious and being embedded in
>>> marble did not render them unconscious.
>>
>> But that gets to the heart of the paradox. You are suggesting that conscious
>> computations are still conscious even though they don't exist and are mere
>> possibilities! That is surely a /reductio/ of one of your premisses.
>
>
> A non-conscious computation cannot be *useful* without the manual/interpretation,
> and in this sense could be called just a potential computation, but a conscious
> computation is still *conscious* even if no-one else is able to figure this out or
> interact with it. If a working brain in a vat were sealed in a box and sent into
> space, it could still be dreaming away even after the whole human race and all
> their information on brain function are destroyed in a supernova explosion. As far
> as any alien is concerned who comes across it, the brain might be completely
> inscrutable, but that would not make the slightest difference to its conscious
> experience.

Suppose the aliens re-implanted the brain in a human body so they could interact with
it. They ask it what it was "dreaming" all those years. I think the answer might
be, "Years? What years? It was just a few seconds ago I was in the hospital for an
appendectomy. What happened? And who are you guys?"

>
>>>>> then it can be seen as implementing more than one computation
>>>>> simultaneously during the given interval.
>>>>
>>>> AFAICS that is only true in terms of dictionaries.
>>>
>>> Right: without the dictionary, it's not very interesting or relevant to *us*.
>>> If we were to actually map a random physical process onto an arbitrary
>>> computation of interest, that would be at least as much work as building and
>>> programming a conventional computer to carry out the computation. However,
>>> doing the mapping does not make a difference to the *system* (assuming we
>>> aren't going to use it to interact with it). If we say that under a certain
>>> interpretation - here it is, printed out on paper - the system is implementing
>>> a conscious computation, it would still be implementing that computation if we
>>> had never determined and printed out the interpretation.

And if you added the random values of the physical process as an appendix in the
manual, would the manual itself then be a computation (the record problem)? If so,
how would you tell whether it was a conscious computation?

>>
>> The problem remains that the system's own self-awareness, or lack thereof, is
>> not observer-relative. Something has to give.
>
>
> Self-awareness is observer-relative with the observer being oneself. Where is the
> difficulty?

Self-awareness is awareness of some specific aspect of a construct called "myself".
It is not strictly reflexive awareness of being aware of being aware... So in
the abstract computation it is just one part of the computation having some relation
we identify as "awareness" relative to some other part of the computation. I think
it is a matter of constructing a narrative for memory in which "I" is just another
player.

Brent Meeker



Received on Thu Sep 07 2006 - 00:24:13 PDT
