Re: QTI & euthanasia

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Sun, 02 Nov 2008 12:51:33 -0800

I disagree with the first, but I agree with the second. I don't think
qualia (which are conscious by definition) form a system. Logical
inference might seem to be a case where conscious thoughts do form a
system: each thought follows from the previous by some rules of
inference, and we have abstracted that system - it's called "logic" or
"mathematics". But that system doesn't include the feelings or qualia
about the inferences: "that's interesting", "that's trivial", etc. And
I don't think those can form a system, because they come from
unconscious brain processes that have been built into us by evolution.
There's no abstract system of conscious thought that produces
curiosity, fear, lust, etc. To take those into account we need to
consider unconscious, i.e. physical, processes, which are not qualia.

On your second point I agree. The conscious meaning of neurons lighting
up depends on a huge context (though probably not the whole universe)
including evolution.

Brent

Michael Rosefield wrote:
> But I do think the nature of conscious qualia, as an abstract system,
> is interesting and non-trivial. Each person is their own universe -
> there is something more to feelings than just a neuron lighting up,
> they are part of an integrated dynamic.
>
>
> 2008/11/2 Brent Meeker <meekerdb.domain.name.hidden>
>
>
> Michael Rosefield wrote:
> > I think there are so many different questions involved in this
> > topic that it's going to be hard to sort them out. There's 'what
> > produces our sense of self', 'how can continuity of identity be
> > quantified', 'at what point do differentiated substrates produce
> > different consciousnesses', 'can the nature of consciousness be
> > captured through snapshots of mental activity, or only through a
> > dynamic interpretation taken over a period of time?'... and it's
> > far too late for me to attempt to unravel all that!
> >
> > My feeling, though, is that once you've managed to assign some
> > informational entity as being a conscious mind, then you could track
> > it through time.
> But notice that everything you say about paths and variables and
> measure applies to any system. Saying it is a conscious process
> doesn't change anything.
>
> My guess is that eventually we'll be able to create AI/robots that
> seem as intelligent and conscious as, for example, dogs seem. We'll
> also be able to partially map brains, so that we can say that when
> these neurons do this the person is thinking thus and so. Once we
> have this degree of understanding and control, questions about
> "consciousness" will no longer seem relevant. They'll be like the
> questions that philosophers asked about life before we understood the
> molecular functions of living systems. They would ask: Where is the
> life? Is a virus alive? How does life get passed from parent to
> child? The questions won't get answered; they'll just be seen as the
> wrong questions.
>
> Brent
> "One cannot guess the real difficulties of a problem before having
> solved it."
> --- Carl Ludwig Siegel
>
> > If you tweaked some physical variables, then much like a Monte
> > Carlo simulation you could see potential paths it could follow.
> > Given enough variables and tweaking, you might be able to fully
> > populate the state-space according to what question we're asking,
> > and it would seem to me to be all about measure theory. Of course,
> > this doesn't say anything yet about any characteristics of the
> > conscious mind itself, which is undoubtedly of importance.
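> >
> > To make that concrete - and this is only a minimal sketch in
> > Python, where the dynamics, the noise model, and all the numbers
> > are invented stand-ins for the real unknowns - the tweaking might
> > look like:
> >
> >     import random
> >
> >     def evolve(state, noise):
> >         # toy stand-in for one step of the actual process dynamics
> >         return state + 0.1 * state * (1.0 - state) + noise
> >
> >     def final_state(x0, steps=50):
> >         x = x0
> >         for _ in range(steps):
> >             x = evolve(x, random.gauss(0.0, 0.01))
> >         return x
> >
> >     # tweak the uncertain starting variable many times; the
> >     # fraction of paths ending in a region estimates its measure
> >     finals = [final_state(random.uniform(0.2, 0.4))
> >               for _ in range(10000)]
> >     print(sum(1 for x in finals if x > 0.5) / len(finals))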
> >
> >
> >
> > 2008/11/2 Brent Meeker <meekerdb.domain.name.hidden>
> >
> >
> > What are you calling "the process" when you've made two copies
> > of it?
> >
> > Brent
> >
> > Michael Rosefield wrote:
> > > But, given that they are processes, then by definition they are
> > > characterised by changing states. If we have some uncertainty
> > > regarding the exact mechanics of that process, or the external
> > > input, then we can draw an extradimensional state-space in which
> > > the degrees of uncertainty correspond to new variables. If we can
> > > place bounds on the uncertainty, then we can certainly produce a
> > > kind of probability mapping as to future states of the process.
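> > >
> > > As a minimal sketch of that mapping (in Python; the process
> > > mechanics and the bounds here are made up purely for
> > > illustration): sample inside the bounds, run the process, and
> > > coarse-grain where it ends up.
> > >
> > >     import random
> > >     from collections import Counter
> > >
> > >     LOW, HIGH = 0.0, 1.0  # assumed bounds on the uncertain input
> > >
> > >     def step(x):
> > >         # toy stand-in for the exact mechanics of the process
> > >         return 3.7 * x * (1.0 - x)
> > >
> > >     counts = Counter()
> > >     for _ in range(10000):
> > >         x = random.uniform(LOW, HIGH)
> > >         for _ in range(20):
> > >             x = step(x)
> > >         counts[round(x, 1)] += 1  # coarse-grain the future state
> > >
> > >     # relative frequencies ~ a probability mapping over states
> > >     for state, n in sorted(counts.items()):
> > >         print(state, n / 10000)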
> > >
> > >
> > > 2008/11/2 Brent Meeker <meekerdb.domain.name.hidden>
> > >
> > >
> > > Kory Heath wrote:
> > > > On Oct 31, 2008, at 1:58 PM, Brent Meeker wrote:
> > > >
> > > >
> > > >> I think this problem is misconceived as being about probability
> > > >> of survival.
> > > >>
> > > >
> > > > In the case of simple teleportation, I agree. If I step into a
> > > > teleporter, am obliterated at one end, and come out the other
> > > > end "changed" - missing a bunch of memories, personality traits,
> > > > etc. - it doesn't seem quite correct to ask the question,
> > > > "what's the probability that that person is me?" It seems more
> > > > correct to ask something like "what percentage of 'me' is that
> > > > person?" And in fact, this is the point I've been trying to make
> > > > all along - that we have to accept some spectrum of cases
> > > > between "the collection of molecules that came out is 100% me"
> > > > and "the collection of molecules that came out is 0% me".
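> > > >
> > > > One crude way to put a number on that spectrum - purely
> > > > illustrative, with made-up memories and traits standing in for
> > > > whatever the real inventory of a person would be:
> > > >
> > > >     # score the overlap between what went in and what came out
> > > >     before = {"memory_1", "memory_2", "memory_3", "trait_calm"}
> > > >     after = {"memory_1", "memory_3", "trait_calm"}
> > > >
> > > >     overlap = len(before & after) / len(before)
> > > >     print(f"{overlap:.0%} me")  # -> 75% me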
> > > >
> > > > The idea of probability enters the picture (or seems to) when
> > > > we start talking about multiple copies. If I step into a
> > > > teleporter, am obliterated, and out of teleporter A steps a copy
> > > > that's 100% me and out of teleporter B steps a copy that's 10%
> > > > me, what's the best way to view this situation? Subjectively,
> > > > what should I believe that I'm about to experience as I step
> > > > into that teleporter? It's hard for me not to think about this
> > > > situation in terms of probability - to think that I'm more
> > > > likely to find myself at A than B. It's especially hard for me
> > > > not to think in these terms when I consider that, in the case
> > > > when the thing that ends up in teleporter A is 100% me and the
> > > > thing that ends up in teleporter B is 0% me, the answer is
> > > > unambiguous: I should simply believe that I'm going to
> > > > subjectively experience ending up in teleporter A.
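> > > >
> > > > If one does run with the probability framing - and this is just
> > > > one way it might be formalized, not something I'm claiming is
> > > > right - the natural move is to normalize each copy's similarity
> > > > score into a subjective weight:
> > > >
> > > >     # illustrative only: weight each copy by its similarity to
> > > >     # the person who stepped in, normalized to sum to 1
> > > >     copies = {"teleporter A": 1.00, "teleporter B": 0.10}
> > > >     total = sum(copies.values())
> > > >     for name, similarity in copies.items():
> > > >         print(name, similarity / total)  # -> ~0.91 and ~0.09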
> > > >
> > > > I'm sympathetic to the argument that it's still not correct to
> > > > frame this problem in terms of probability. But I don't
> > > > understand how else to frame it. How do you (Brent) frame the
> > > > problem? Subjectively, what should I expect to experience (or
> > > > feel that I'm most likely to experience) when I step into a
> > > > teleporter, and I know that the thing that's going to come out
> > > > of Receiver A will be 100% me and the thing that's going to come
> > > > out of Receiver B will be 10% me?
> > > >
> > > > -- Kory
> > > >
> > > >
> > > The way I look at it, there is no "I". Kory-A and Kory-B are just
> > > two different processes. We can ask how similar each one is to
> > > the Kory that stepped into the teleporter, but there's no fact of
> > > the matter about which one is *really* Kory. And there's no sense
> > > to the question of what "I should expect to experience", because
> > > "I" is nothing but a process of experiencing anyway. We could
> > > make up some legal rule (which we would need if there really were
> > > teleporters), but it would have to be based on its social
> > > utility, not ontology.
> > >
> > > Brent


Received on Sun Nov 02 2008 - 15:51:52 PST
