Re: MGA 1 bis (exercise)

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Thu, 20 Nov 2008 10:38:18 -0800

Kory Heath wrote:
>
> On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
>> So I'm puzzled as to how to answer Bruno's question. In general I
>> don't believe in zombies, but that's in the same way I don't believe
>> my glass of water will freeze at 20 degC. It's an opinion about what
>> is likely, not what is possible.
>
> I take this to mean that you're uncomfortable with thought experiments
> which revolve around logically possible but exceedingly unlikely
> events.

I think you really mean nomologically possible. I'm not uncomfortable with
them; I just maintain a little skepticism. For one thing, what is nomologically
possible or impossible is often reassessed. Less than a century ago the
experimental results of Elitzur, Vaidman, Zeilinger, et al. on delayed choice,
interaction-free measurement, and other QM phenomena would all have been
dismissed in advance as "logically" impossible.

> I think that's understandable, but ultimately, I'm on the
> philosopher's side. It really is logically possible - although
> exceedingly unlikely - for a random-number-generator to cause a robot
> to walk around, talk to people, etc. It really is logically possible
> for a computer program to use a random-number-generator to generate a
> lattice of changing bits that "follows" Conway's Life rule. Mechanism
> and materialism need to answer questions about these scenarios,
> regardless of how unlikely they are.
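
Kory's lattice scenario is easy to make concrete. Here's a minimal Python
sketch; the 8x8 grid, the toroidal wrapping, and the trial count are just
illustrative choices of mine, not anything from Kory's post:

import random

def life_step(grid):
    # One step of Conway's Life on an n x n toroidal grid of 0/1 cells.
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            nxt[i][j] = 1 if live == 3 or (grid[i][j] == 1 and live == 2) else 0
    return nxt

def random_frame(n):
    # A "next frame" produced by coin flips, ignoring the Life rule entirely.
    return [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

n = 8
start = random_frame(n)
lawful = life_step(start)
# A random frame matches the lawful one with probability 2**(-n*n),
# about 5e-20 for an 8x8 grid.
trials = 100_000
hits = sum(random_frame(n) == lawful for _ in range(trials))
print(f"{hits} of {trials} random frames happened to obey the Life rule")

Run it and the hit count stays at zero, and would for any grid you'd care
about: that is the sense in which the scenario is logically possible but
never worth expecting.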

I don't disagree with that. My puzzlement about how to answer Bruno's question
comes from the ambiguity in what we mean by a philosophical zombie. Do we mean
its outward actions are the same as those of a conscious person? For how long?
Under what circumstances? I can easily make a robot that acts just like a
sleeping person. I think Dennett changes the question by referring to
neurophysiological "actions". Does he suppose wetware can't be replaced by
hardware?

In general, when I'm asked if I believe in philosophical zombies, I say no,
because I'm thinking that the zombie must outwardly behave like a conscious
person in all circumstances over an indefinite period of time, yet have no inner
experience. I rule out an accidental zombie accomplishing this as too improbable
- not impossible. In other words, if I were constructing a robot that had to act
as a conscious person would over a long period of time in a wide variety of
circumstances, I would have to build into the robot some kind of inner attention
module that selected what was important to remember, compressed it into a short
representation, and linked it to other memories. And this would be an inner
narrative. Similarly for the other "inner" processes (a toy sketch follows
below). I don't know if that's really what it takes to build a conscious robot,
but I'm pretty sure it's something like that. And I think once we understand how
to do this, we'll stop worrying about "the hard problem of consciousness".
Instead we'll talk about how efficient the inner narration module is, or the
memory confabulation module, or the visual imagination module. Talk about
consciousness will seem as quaint as talk about the elan vital does now.
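
Here is the toy sketch, with the caveat that the salience threshold, the
truncation standing in for compression, and the word-overlap linking are all
placeholders invented for illustration, not a serious design:

from dataclasses import dataclass, field

@dataclass
class Memory:
    summary: str                      # compressed short representation
    salience: float                   # how important it seemed at the time
    links: list = field(default_factory=list)  # related earlier memories

class AttentionModule:
    # Score incoming events, store only the salient ones in compressed
    # form, and link each new memory to related old ones. The ordered
    # store is the "inner narrative".
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.narrative = []

    def attend(self, event, salience):
        if salience < self.threshold:
            return None               # unremarkable events are never stored
        m = Memory(summary=event[:40], salience=salience)
        for old in self.narrative:    # naive linking: any shared words
            if set(old.summary.split()) & set(m.summary.split()):
                m.links.append(old)
        self.narrative.append(m)
        return m

robot = AttentionModule()
robot.attend("saw a red ball roll under the table", 0.9)
robot.attend("ambient hum of the refrigerator", 0.1)    # filtered out
robot.attend("the red ball reappeared by the door", 0.8)
print([m.summary for m in robot.narrative])
print(len(robot.narrative[-1].links), "link(s) back to earlier memories")

The last print shows the newest memory linked back to the earlier red-ball
memory: a crude first step toward the kind of inner narrative described above.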

Brent


>
> -- Kory


