Hal,
Here I agree with everything you say. Functionalism presupposes
computationalism, but computationalism makes functionalism false.
Exit functionalism. Even Maudlin makes this confusion. I repeat that
both the thought experiments and Gödel's incompleteness show that if
we are machines, then we cannot know which machine we are, nor can we
know our substitution level for sure. We can only bet on it,
empirically (and religiously!). We can also deduce that our
experiences supervene on the continuum of computational histories
appearing below our substitution level (those comp histories that we
cannot distinguish). This explains, qualitatively, the quantum facts,
i.e. why matter behaves as if it were emerging from an infinity of
"parallel" computations.
Now I cannot take seriously the Mallah-Chalmers problem of a rock
instantiating finite-state automata, given that with comp,
consciousness arises from the possible behavior of an infinity of
non-finite-state automata (universal machines). Mallah and Chalmers
are adding a naïve "solution of the mind-body problem" (where minds
are attached to concrete computations) on top of a naïve,
Aristotelian view of matter which already contradicts comp (by the
UDA).
Bruno
On 21 June 2006, at 08:11, Hal Finney wrote:
>
> Russell Standish <r.standish.domain.name.hidden> writes:
>> On Tue, Jun 20, 2006 at 09:35:12AM -0700, "Hal Finney" wrote:
>>> I think that one of the fundamental principles of your COMP hypothesis
>>> is the functionalist notion that it does not matter what kind of system
>>> instantiates a computation. However, I think this founders on the
>>> familiar paradoxes over what counts as an instantiation. In principle
>>> we can come up with a continuous range of devices which span the
>>> alternatives from non-instantiation to full instantiation of a given
>>> computation. Without some way to distinguish these, there is no
>>> meaning to the question of when a computation is instantiated; hence
>>> functionalism fails.
>>>
>>
>> I don't follow your argument here, but it sounds interesting. Could you
>> expand on this more fully? My guess is that ultimately it will depend
>> on an assumption like the ASSA.
>
> I am mostly referring to the philosophical literature on the problems of
> what counts as an instantiation, as well as responses considered here
> and elsewhere. One online paper is Chalmers' "Does a Rock Implement
> Every Finite-State Automaton?", http://consc.net/papers/rock.html.
> Jacques Mallah (who seems to have disappeared from the net) discussed
> the issue on this list several years ago.
>
> Now, Chalmers (and Mallah) claimed to have a solution to decide when
> a physical system implements a calculation. But I don't think these
> solutions work; at least, they admit gray areas. In fact, I think
> Mallah came up with the same basic idea I am advocating, that there is
> a degree of
> instantiation and it is based on the Kolmogorov complexity of a program
> that maps between physical states and corresponding computational
> states.
>
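To make the flavor of such a graded measure concrete, here is a minimal
sketch (my own illustration, not Mallah's or Chalmers' construction).
Kolmogorov complexity is uncomputable, so the zlib-compressed size of a
state-correspondence table stands in for the length of the shortest
mapping program, and the names (mapping_complexity,
degree_of_instantiation, the toy traces) are invented for the example:

  import zlib

  def mapping_complexity(physical_trace, computational_trace):
      """Crude stand-in for the Kolmogorov complexity of a program that
      maps physical states to computational states: the compressed size
      of the state-correspondence table (K itself is uncomputable)."""
      table = sorted(set(zip(physical_trace, computational_trace)))
      return len(zlib.compress(repr(table).encode(), 9))

  def degree_of_instantiation(physical_trace, computational_trace):
      """Higher when the correspondence is simple and lawful, lower when
      the mapping must memorize an arbitrary state-by-state table."""
      return 1.0 / mapping_complexity(physical_trace, computational_trace)

  # A counter read off a clock-like system needs only a simple mapping;
  # reading the same counter into unrelated "rock" microstates needs an
  # arbitrary lookup table, so it scores lower.
  clock   = list(range(1000))
  counter = [t % 16 for t in range(1000)]
  rock    = [hash(("thermal noise", t)) for t in range(1000)]
  print(degree_of_instantiation(clock, counter) >
        degree_of_instantiation(rock, counter))   # True

Of course, where to put a threshold on such a score is exactly the
fuzziness complained about below.
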
> For functionalism to work, though, it seems to me that you really need
> to be able to give a yes or no answer to whether something implements a
> given calculation. Fuzziness will not do, given that changing the system
> may kill a conscious being! It doesn't make sense to say that someone is
> "sort of" there, at least not in the conventional functionalist view.
>
> A fertile source of problems for functionalism involves the question
> of whether playbacks of passive recordings of brain states would be
> conscious. If not (as Chalmers and many others would say, since they
> lack the proper counterfactual behavior), this leads to a machine with a
> dial which controls the percentage of time its elements behave according
> to a passive playback versus behaving according to active computational
> rules. Now we can turn the knob and have the machine gradually move from
> unconsciousness to full consciousness, without changing its behavior in
> any way as we twiddle the knob. This invokes Chalmers' "fading qualia"
> paradox and is again fatal for functionalism.
>
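For concreteness, a toy version of the dial machine (a sketch only; the
update rule, the DialMachine class and the dial parameter are invented
here for illustration, not taken from Chalmers or Maudlin):

  import random

  def update_rule(state):
      """Toy 'active' computation: a fixed deterministic update."""
      return [(3 * x + 1) % 17 for x in state]

  class DialMachine:
      """At each step, every element either recomputes its next value
      from the rule (active) or copies it from a recording of the very
      same run (passive playback).  The dial sets the fraction of active
      elements; the outward behavior is identical at every setting,
      which is the point of the thought experiment."""
      def __init__(self, initial_state, steps, dial):
          self.dial = dial  # 0.0 = pure playback, 1.0 = fully active
          # Pre-record the whole run once, using the active rule.
          self.recording = [initial_state]
          for _ in range(steps):
              self.recording.append(update_rule(self.recording[-1]))

      def run(self):
          state = self.recording[0]
          for t in range(1, len(self.recording)):
              computed = update_rule(state)
              state = [computed[i] if random.random() < self.dial
                       else self.recording[t][i]  # passive playback
                       for i in range(len(state))]
              yield state

  # Every dial setting yields exactly the same sequence of states:
  init = [1, 2, 3, 4]
  runs = [list(DialMachine(init, steps=5, dial=d).run())
          for d in (0.0, 0.5, 1.0)]
  assert runs[0] == runs[1] == runs[2]

Whatever one wants to say about the consciousness of such a run, nothing
in the behavior distinguishes the dial settings.
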
> Maudlin's machines, which we have also mentioned on this list from time
> to time, further illustrate the problems in trying to draw a bright line
> between implementations and clever non-implementations of computations.
>
> In short I view functionalism as being fundamentally broken unless there
> is a much better solution to the implementation question than I am aware
> of. Therefore we cannot assume a priori that a brain implementation and
> a computational implementation of mental states will be inherently the
> same. And I have argued in fact that they could have different
> properties.
>
> Hal Finney
>
http://iridia.ulb.ac.be/~marchal/