RE: UDA revisited and then some

From: Stathis Papaioannou <stathispapaioannou.domain.name.hidden>
Date: Wed, 29 Nov 2006 16:33:59 +1100

Colin Hales writes:
> > I think it is logically possible to have functional equivalence but
> > structural difference, with a consequent difference in conscious state,
> > even though external behaviour is the same.
> >
> > Stathis Papaioannou
>
> Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
> Replace every neuron with a silicon "functional equivalent" and (b) hold
> the external behaviour identical.
I would guess that such a 1-for-1 replacement brain would in fact have the same
phenomenal consciousness (PC) as the biological original, although this is not a logical
certainty. But what I was thinking of was the equivalent of copying the "look and feel"
of a piece of software without having access to the source code. Computers may one day
be able to copy the "look and feel" of a human not by directly modelling neurons but by
completely different mechanisms. Even if such computers were conscious, there seems no
good reason to assume that their experiences would be similar to those of a similarly
behaving human.
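
As a rough software analogy: two programs can behave identically from the outside
while working quite differently inside. Here is a toy Python sketch of that idea
(the sorting functions are just illustrative stand-ins for "same behaviour,
different structure"):

    # Two structurally different implementations with identical external
    # behaviour: from the outside, only inputs and outputs are observable.
    def sort_by_insertion(xs):
        """Builds the sorted result by shifting elements one at a time."""
        result = list(xs)
        for i in range(1, len(result)):
            key, j = result[i], i - 1
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key
        return result

    def sort_by_merging(xs):
        """Recursively splits and merges: a very different internal process."""
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    # Externally indistinguishable, internally nothing alike.
    assert sort_by_insertion([3, 1, 2]) == sort_by_merging([3, 1, 2]) == [1, 2, 3]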
 
> If the 'structural difference' (accounting for consciousness) has a
> critical role in function then the assumption of identical external
> behaviour is logically flawed. This is the 'philosophical zombie'. Holding
> the behaviour to be the same is a meaningless impossibility in this
> circumstance.
We can assume that the structural difference makes a difference to consciousness but
not external behaviour. For example, it may cause spectrum reversal.
 
> In the case of Chalmers' silicon replacement, the assumption is that
> everything being done by the neuron is duplicated. What the silicon model
> assumes is a) that we know everything there is to know and b) that silicon
> replacement/modelling/representation is capable of delivering everything,
> even if we did 'know everything' and put it in the model. Bad, bad,
> arrogant assumptions.
Well, it might just not work, and you end up with an idiot who slobbers and stares into
space. Or you might end up with someone who can do calculations really well but displays
no emotions. But it's a thought experiment: suppose you use whatever advanced technology
it takes to create a being with *exactly* the same behaviours as a biological human. Can
you be sure that this being would be conscious? Can you be sure that this being would be
conscious in the same way you and I are conscious?
 
> This is the endless loop that comes about when you make two contradictory
> assumptions without being able to know that you are doing so, explore the
> consequences and decide you are right/wrong, when the whole scenario is
> actually meaningless because the premises are flawed. You can be very
> right/wrong in terms of the discussion (philosophy) but say absolutely
> nothing useful about anything in the real world (science).
I agree that the idea of a zombie identical twin (i.e. same brain, same behaviour but no PC)
is philosophically dubious, but I think it is theoretically possible to have a robot twin
which is, if not unconscious, at least differently conscious.
Stathis Papaioannou