Dear David,
do not expect from me the theoretical level of technicality-talk you get
from Bruno: I talk (and think) common sense (my own), and if the
theoretical technicalities sound strange, I return to my own thinking.
That's what I got, that's what I use (plagiarized from the Hungarian commie
joke: what is the difference between the people's democracy and a wife?
Nothing: that's what we got, that's what we love).
When I read your "questioning" the computer, I realized that you are
in the ballpark of the AI people (maybe also AL - sorry, Russell)
who select machine-accessible aspects for comparison.
You may ask about prejudice, shame (about goofed situations), humor (does a
computer laugh?), boredom, or preferential topics (you push for an
astronomical calculation and the computer says: I'd rather play some Bach
music now), sexual preference (even disinterestedness is slanted), or
laziness.
If you add untruthfulness in risky situations, you really have a human
machine with consciousness (whatever people say it is - I agree with your
evading that unidentified obsolete noumenon as much as possible).
I found Bruno's post well fitting - if only I had some hint of what
"...inner personal or self-referential modality..." may mean.
I could not 'practicalize' it.
I still frown at "abandoning (the meaning of) something but considering
items as pertaining to it" - a rough paraphrasing, I admit. To what?
I don't feel comfortable borrowing math methods for non-math explanations,
but that is my deficiency.
Now that we have arrived at the question I replied-added (sort of) to
Colin's question - let me ask it again: how would YOU know if you are
conscious?
(Conscious is more meaningful than cc-ness). Or rather: How would
you know if you are NOT conscious? Well, you wouldn't. If you can,
you are conscious. Computers?????
Have a good weekend,
John Mikes
On 6/20/07, David Nyman <david.nyman.domain.name.hidden> wrote:
>
>
> On Jun 5, 3:12 pm, Bruno Marchal <marc....domain.name.hidden> wrote:
>
> > Personally I don't think we can be *personally* mistaken about our own
> > consciousness even if we can be mistaken about anything that
> > consciousness could be about.
>
> I agree with this, but I would prefer to stop using the term
> 'consciousness' at all. To make a decision (to whatever degree of
> certainty) about whether a machine possessed a 1-person pov analogous
> to a human one, we would surely ask it the same sort of questions one
> would ask a human. That is: questions about its personal 'world' -
> what it sees, hears, tastes (and perhaps extended non-human
> modalities); what its intentions are, and how it carries them into
> practice. From the machine's point-of-view, we would expect it to
> report such features of its personal world as being immediately
> present (as ours are), and that it be 'blind' to whatever 'rendering
> mechanisms' may underlie this (as we are).
>
> If it passed these tests, it would be making similar claims to a
> personal world as we do, and deploying this to achieve similar ends.
> Since in this case it could ask itself the same questions that we can,
> it would have the same grounds for reaching the same conclusion.
>
> However, I've argued in the other bit of this thread against the
> possibility of a computer in practice being able to instantiate such a
> 1-person world merely in virtue of 'soft' behaviour (i.e.
> programming). I suppose I would therefore have to conclude that no
> machine could actually pass the tests I describe above - whether self-
> administered or not - purely in virtue of running some AI program,
> however complex. This is an empirical prediction, and will have to
> await an empirical outcome.
>
> David
>
> On Jun 5, 3:12 pm, Bruno Marchal <marc....domain.name.hidden> wrote:
> > Le 03-juin-07, à 21:52, Hal Finney a écrit :
> >
> >
> >
> > > Part of what I wanted to get at in my thought experiment is the
> > > bafflement and confusion an AI should feel when exposed to human ideas
> > > about consciousness. Various people here have proffered their own
> > > ideas, and we might assume that the AI would read these suggestions,
> > > along with many other ideas that contradict the ones offered here.
> > > It seems hard to escape the conclusion that the only logical response
> > > is for the AI to figuratively throw up its hands and say that it is
> > > impossible to know if it is conscious, because even humans cannot
> > > agree on what consciousness is.
> >
> > Augustine said about (subjective) *time* that he knows perfectly what it
> > is, but that if you ask him to say what it is, then he admits being
> > unable to say anything. I think that this applies to "consciousness".
> > We know what it is, although only in some personal and incommunicable
> > way.
> > Now this happens to be true also for many mathematical concepts.
> > Strictly speaking we don't know how to define the natural numbers, and
> > we know today that indeed we cannot define them in a communicable way,
> > that is without assuming the auditor knows already what they are.
> >
> > So what can we do? We can do what mathematicians do all the time. We
> > can abandon the very idea of *defining* what consciousness is, and try
> > instead to focus on principles or statements about which we can agree
> > that they apply to consciousness. Then we can search for (mathematical)
> > objects obeying such or similar principles. This can be made easier
> > by admitting some theory or realm for consciousness, like the idea that
> > consciousness could apply to *some* machines or to some *computational
> > events*, etc.
> >
> > We could agree for example that:
> > 1) each one of us knows what consciousness is, but nobody can prove
> > he/she/it is conscious;
> > 2) consciousness is related to an inner personal or self-referential
> > modality;
> > etc.
> >
> > This is how I proceed in "Conscience et Mécanisme". ("Conscience" is
> > the French for consciousness; "conscience morale" is the French for the
> > English "conscience".)
> >
> >
> >
> > > In particular I don't think an AI could be expected to claim that it
> > > knows that it is conscious, that consciousness is a deep and intrinsic
> > > part of itself, that whatever else it might be mistaken about it could
> > > not be mistaken about being conscious. I don't see any logical way it
> > > could reach this conclusion by studying the corpus of writings on the
> > > topic. If anyone disagrees, I'd like to hear how it could happen.
> >
> > As far as a machine is correct, when she introspects herself, she
> > cannot fail to discover a gap between truth (p) and provability (Bp).
> > The machine can discover correctly (but not necessarily in a completely
> > communicable way) a gap between provability (which can potentially
> > lead to falsities, despite correctness) and the incorrigible
> > knowability or knowledgeability (Bp & p), and then the gap between
> > those notions and observability (Bp & Dp) and sensibility (Bp & Dp &
> > p). Even without using the conventional name of "consciousness",
> > machines can discover semantical fixpoints playing the role of
> > non-expressible but true statements.
> > We can *already* talk with machines about those true unnameable things,
> > as have done Tarski, Gödel, Löb, Solovay, Boolos, Goldblatt, etc.
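> >
> > In symbols, the hierarchy of variants just listed can be restated
> > compactly (reading B as formal provability and Dp, i.e. ~B~p, as
> > consistency):
> >
> > ```latex
> > \begin{align*}
> > \text{truth}         &\colon\ p \\
> > \text{provability}   &\colon\ Bp \\
> > \text{knowability}   &\colon\ Bp \land p \\
> > \text{observability} &\colon\ Bp \land Dp \\
> > \text{sensibility}   &\colon\ Bp \land Dp \land p
> > \end{align*}
> > ```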
> >
> >
> >
> > > And the corollary to this is that perhaps humans also cannot
> > > legitimately
> > > make such claims, since logically their position is not so different
> > > from that of the AI. In that case the seemingly axiomatic question of
> > > whether we are conscious may after all be something that we could be
> > > mistaken about.
> >
> > This is an inference from "I cannot express p" to "I can express not
> > p", or from ~Bp to B~p. Many atheists reason like that about the
> > concept of "unnameable" reality, but it is a logical error.
> > Even for someone who is not willing to take the comp hyp into
> > consideration, it is a third person communicable fact that
> > self-observing machines can discover and talk about many non-3-provable
> > and sometimes even non-3-definable true "statements" about themselves.
> > Some
> > true statements can only be interrogated.
> > Personally I don't think we can be *personally* mistaken about our own
> > consciousness even if we can be mistaken about anything that
> > consciousness could be about.
> >
> > Bruno
> >
> > http://iridia.ulb.ac.be/~marchal/
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list.domain.name.hidden
To unsubscribe from this group, send email to everything-list-unsubscribe.domain.name.hidden
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---
Received on Fri Jun 22 2007 - 22:22:30 PDT