Colin Geoffrey Hales wrote:
> >
> >
> > Quentin Anciaux writes:
> >
> >> But the point is to assume this "nonsense" to reach a "conclusion", to
> >> see where it leads. Why imagine a "possible" zombie which is
> >> functionally identical if there weren't any dualistic view in the first
> >> place! Only in a dualistic framework is it possible to imagine
> >> something functionally equivalent to a human yet lacking consciousness;
> >> the other view is that functional equivalence *requires* consciousness
> >> (you can't have functional equivalence without consciousness).
> >
> > I think it is logically possible to have functional equivalence but
> > structural difference, with a consequent difference in conscious state,
> > even though external behaviour is the same.
> >
> > Stathis Papaioannou
>
> Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
> Replace every neuron with a silicon "functional equivalent" and (b) hold
> the external behaviour identical.
>
> If the 'structural difference' (accounting for consciousness) has a
> critical role in function, then the assumption of identical external
> behaviour is logically flawed.
Chalmers argues in the opposite direction: that if
the external behaviour is the same, the phenomenal
consciousness must be in sync, because it is absurd
to go through the motions of laughing and grimacing
without feeling anything.
> This is the 'philosophical zombie'. Holding
> the behaviour to be the same is a meaningless impossibility in this
> circumstance.
How do you know?
> In the case of Chalmers' silicon replacement, it is assumed that everything
> that was being done by the neuron is duplicated.
Everything relevant, anyway.
> What the silicon model
> assumes is a) that we know everything there is to know and b) that silicon
> replacement/modelling/representation is capable of delivering everything,
> even if we did 'know everything' and put it in the model. Bad, bad,
> arrogant assumptions.
a) Chalmers is as entitled to assume complete understanding of
neurology as Frank Jackson is in the Mary story.
b) It doesn't have to be silicon. The assumption is that
there is something other than a neuron which can replicate
the relevant functioning. Well, maybe there isn't. Chalmers
doesn't know there is. You don't know there isn't.
> This is the endless loop that comes about when you make two contradictory
> assumptions without being able to know that you have, explore the
> consequences, and decide you are right/wrong, when the whole scenario is
> actually meaningless because the premises are flawed. You can be very
> right/wrong in terms of the discussion (philosophy) but say absolutely
> nothing useful about anything in the real world (science).
It's a thought experiment.
> So you've kind of hit upon the real heart of the matter.
>
> Colin Hales