2009/9/2 Brent Meeker <meekerdb.domain.name.hidden>:
> But the physical implementation (cause?) is invariant in its functional
> relations. That's why two physical implementations which are different
> at some lower level can be said to implement the same computation at a
> higher level. I see nothing incoherent in saying that two physically
> different computers perform the same computation. So if mental states
> are certain kinds of computations (either physically realized or in
> Platonia) they can be realized on different, i.e. non-invariant,
> physical processes. What's incoherent about that?
I wonder what you mean by "either physically realized or in Platonia"?
ISTM that there is not one assumption here, but two. If computation
is restricted to the sense of physical realisation, then there is
indeed nothing problematic in saying that "two physically different
computers perform the same computation". We can understand what is
meant without ambiguity; 'different' is indeed different, and any
identity is thus non-physical (i.e. relational). But 'realisation' of
such relational identity in Platonia in the form of an invariant
experiential state is surely something else entirely: i.e. if it is a
supplementary hypothesis to PM it is dualism. The point of Bruno's
argument is to force a choice between the attachment of experience to
physical process or computation; but not both at the same time.
> And that's where my idea that the context/environment is essential
> comes in. It defines the level at which functions must be the same; in
> other words, when we say yes to the doctor we are assuming that he will
> replace our brain so that it has the same input/output at the level of
> our afferent and efferent nerves and hormones (roughly speaking). Then
> we would continue to exist in and experience this world. This is why we
> would hesitate to say yes to the doctor if he proposed to also simulate
> the rest of the world with which we interact, e.g. in a rock, because
> it would mean our consciousness would be in a different world - not
> this one, which due to its much greater complexity would not be
> emulable.
Yes, this could make sense. But what you're saying is that if we knew
the correct substitution level, be it "at the level of our afferent
and efferent nerves and hormones", or some different or finer
analysis, we would in effect have reproduced whatever is 'physically'
relevant to consciousness. Whether this is indeed possible at any
functional level above the atomic may in the end be resolvable
empirically; it can't simply be assumed a priori on the basis of
computational theory. In point of fact you haven't actually appealed
to software here, but rather to highly specific details of physical
implementation, and this is a hardware issue, as we computer
programmers are wont to say. But I guess the 'yes doctor' is really
about where the distinction between hardware and software merges
experientially. And then the import of MGA is that if the gap closes
at any level above atom-for-atom substitution, any attachment of
experience to 'PM' below that level becomes spurious for CMT.
David
>
> David Nyman wrote:
>> 2009/9/2 Brent Meeker <meekerdb.domain.name.hidden>
>>
>>
>>>> I'm afraid that still doesn't work. I realise it's counterintuitive,
>>>> but this is the point - to recalibrate the intuitions. 'Standard' CTM
>>>> postulates that the mind is a computation implemented by the brain,
>>>> and hence in principle implementable by any physical process capable
>>>> of instantiating the equivalent computation. Bruno's 'version' starts
>>>> with this postulate and then shows that the first part of the
>>>> hypothesis - i.e. that the mind is computational - is incompatible
>>>> with the second part - i.e. that it is implemented by some
>>>> specifically distinguishable non-computational process.
>>>>
>>> That's the step I don't grasp. I see that the MGA makes it plausible
>>> that the mind could be a computation divorced from all physical
>>> processes - but not that it must be. Maybe you can explain it.
>>>
>>
>> Well, I'll recapitulate what insight I possess.
>>
>> As I see it, both MGA and Olympia are intended to show how
>> postulating, on the basis of PM, that invariant mental states
>> supervene qua computatio, as Bruno would say, on non-invariant
>> physical causes is flatly incoherent - i.e. it leads to absurd
>> consequences.
> But the physical implementation (cause?) is invariant in its functional
> relations. That's why two physical implementations which are different
> at some lower level can be said to implement the same computation at a
> higher level. I see nothing incoherent in saying that two physically
> different computers perform the same computation. So if mental states
> are certain kinds of computations (either physically realized or in
> Platonia) they can be realized on different, i.e. non-invariant,
> physical processes. What's incoherent about that?
>
> And that's where my idea that the context/environment is essential
> comes in. It defines the level at which functions must be the same; in
> other words, when we say yes to the doctor we are assuming that he will
> replace our brain so that it has the same input/output at the level of
> our afferent and efferent nerves and hormones (roughly speaking). Then
> we would continue to exist in and experience this world. This is why we
> would hesitate to say yes to the doctor if he proposed to also simulate
> the rest of the world with which we interact, e.g. in a rock, because
> it would mean our consciousness would be in a different world - not
> this one, which due to its much greater complexity would not be
> emulable.
>
>> This, as you know, has always been the brunt of my own
>> argument: i.e. that *any* plausible ascription of 'state' under PM
>> must be justified physically and hence the postulates are prima facie
>> self-contradictory.
>
> I have no idea what that means. What postulates? What does it mean for
> a state to be ascribed and for that to be justified physically?
>
>> The whole notion of computational invariance in a
>> physical, as opposed to mathematical, sense seems to me a confusion
>> arising from the failure to distinguish outcomes from processes.
>>
> That's certainly a confusion - but not one I've heard on this list. I
>
>
>> Anyway, MGA/Olympia proceed by reducing the foregoing to a
>> demonstration that a formally invariant computation putatively
>> implementing a correspondingly invariant mental state can ex hypothesi
>> be shown to supervene on either minimal or zero physical activity.
>>
> But only by isolating a bit of computation from the rest of the universe.
> And it doesn't show that a computation supervenes on zero physical
> activity. And even if it did show that, it would not follow that mental
> computation *does* supervene on computation realized in Platonia with
> zero physical activity.
>
>
>> This is an absurd conclusion, so the hypothesis that motivates it -
>> i.e. CTM+PM - is thus shown to be contradictory and must be abandoned,
>> not merely in this case, but in general: i.e. the exception has broken
>> the rule. This is forced unless you can show where the logic goes
>> wrong.
>>
> No, even if the conclusion is wrong that only shows that *some* step in
> the argument is wrong NOT that the conjunction of the computationalist
> theory of mind and primary matter is self contradictory. I don't even
> see where the argument uses PM to reach its conclusion. Maybe CTM+UD is
> a simpler explanation of the world, a return to Platonic idealism, but I
> don't see that its contrary is contradictory.
>
> Brent
>
>> Given the foregoing, the contradiction can in principle be resolved by
>> abandoning one or the other component of the conjunction. That is, we
>> can retain PM, but with the proviso that mind can no longer be
>> attached to matter qua computatio. Alternatively, we can retain CTM,
>> explicitly extended to a comp theory of mind-body, but with the
>> realisation that it can't be justified as such qua PM. Your preferred
>> choice is not forced by the argument, but the choice itself is forced
>> else the contradiction can't be resolved. That's it.
>>
>> Actually, because of my prior queasiness about CTM+PM, I don't find
>> this dichotomy so very surprising, because CTM+PM always struck me as
>> an unjustifiable attempt to conjure a ghost from the machine to stand
>> in for mind. It only seems odd because the coherence of the a priori
>> assumption of CTM in the face of PM is not usually challenged and
>> destroyed in so explicit a manner. Nonetheless, if it's a ghost we're
>> after, we can still snare one by abandoning any appeal to the machine
>> (the physical one that is). And in so doing, we can if we like, and
>> in denial of Occam, go on imagining a primitively physical machine out
>> there somewhere, but since ghosts and machines can't interact, this
>> turns out to be the sort of difference that makes no difference.
>>
>> David
Received on Wed Sep 02 2009 - 16:56:25 PDT