Re: QTI & euthanasia (draft)

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Sun, 09 Nov 2008 09:56:30 -0800

Kory Heath wrote:
>
> On Nov 7, 2008, at 9:34 AM, Brent Meeker wrote:
>> I think I agree with Bruno that it is *logically* possible, e.g.
>> accidental zombies. It's just not nomologically possible.
>
> I'm not sure what counts as an "accidental zombie". Do you mean
> something like the following:
>
> I can write a very short computer program that accepts ascii
> characters as input, and then spews out a random series of characters
> as output, and then accepts more input, etc. It's logically possible
> for me to have a "conversation" with this program in which the program
> just happens (by accident) to pass the Turing Test with flying colors.
>
> Is this what you mean by an "accidental" zombie? If so, it's important
> to understand that this is not a zombie at all by Dennett's definition
> (unless I've really misunderstood Dennett). A zombie is something
> that's physically indistinguishable from a physical conscious entity
> and yet isn't conscious.

It's sort of what I meant, except I imagined a kind of robot that, like your
Turing test program, had its behavior run by a random number generator but just
happened to behave as if it were conscious. I'm not sure where you would draw
the line between the accidentally convincing conversation and the accidentally
behaving robot to say one was a philosophical zombie and the other wasn't.
Since the concept is just a hypothetical, it's a question of semantics.
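
For concreteness, a minimal sketch of the kind of program Kory describes might
look like the following (Python is an arbitrary choice and the names are
illustrative; no particular language is specified above). It ignores its input
entirely and emits random characters, so any run that passed a Turing test
would do so purely by accident:

    import random
    import string

    # Characters the program may emit: printable ASCII without control codes.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "

    def random_reply(length=80):
        # Each character is drawn uniformly at random; nothing depends on the input.
        return "".join(random.choice(ALPHABET) for _ in range(length))

    while True:
        try:
            user_input = input("> ")   # accept ASCII characters as input
        except EOFError:
            break                      # stop when the "conversation" ends
        print(random_reply())          # spew out a random series of characters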

> That program might be accidentally behaving
> as if it were conscious, but if you had the proper instruments to
> examine it physically, you would be able to conclude exactly that:
> it's a random number generator that's accidentally behaving as though
> it were conscious. Dennett would claim that a random number generator
> that passes a Turing Test is logically possible (but extraordinarily
> unlikely), and he'd happily claim that it's not conscious. He'd claim
> that zombies are something different, and that they're logically
> impossible. (He's also used words like "unimaginable" and "incoherent".)

OK. It's just that the usual definition is strictly in terms of behavior and
doesn't consider inner workings.

My own view is that someday we will understand a lot about the inner workings of
brains; enough that we can tell what someone is thinking by monitoring the
firing of neurons, and that we will be able to build robots that really do
exhibit conscious behavior (although see John McCarthy's website for why we
shouldn't do this). When we've reached this state of knowledge, questions about
qualia and what consciousness is will be seen to be the wrong questions. They
will be like asking where life is located in an animal.

Brent Meeker
