On 28 Jun 2007, at 17:56, David Nyman wrote:
> On 28/06/07, Bruno Marchal <marchal.domain.name.hidden> wrote:
>
> Hi Bruno
>
> The remarks you comment on are certainly not the best-considered or
> most cogently expressed of my recent posts. However, I'll try to
> clarify if you have specific questions. As to why I said I'd rather
> not use the term 'consciousness', it's because of some recent
> confusion and circular disputes (e.g. with Torgny, or about whether
> hydrogen atoms are 'conscious').
I am not sure that, in case of disagreement (like our "disagreement"
with Torgny), changing the vocabulary is a good idea. It will not
make the problem go away; on the contrary, there is a risk of
introducing obscurity.
> Some of the sometimes confused senses (not by you, I hasten to add!)
> seem to be:
>
> 1) The fact of possessing awareness
> 2) The fact of being aware of one's awareness
> 3) The fact of being aware of some content of one's awareness
So just remember that, as a first approximation, I identify these
with:
1) being conscious (Dt?) .... for those who have
followed the modal posts (Dx is for ~Beweisbar('~x'))
2) being self-conscious (DDt?)
3) being conscious of p (Dp?)
You can also have:
4) being self-conscious of something (DDp?).
Dp is really an abbreviation of the arithmetical proposition
~Beweisbar('~p'), where 'p' denotes the Gödel number describing p in
the language of the machine (by default, the language of first-order
arithmetic).
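Spelled out, as a rough first-approximation sketch (B is Gödel's
'beweisbar' provability predicate, D its dual, t any tautology, f the
constant false):

   Bp  ==  Beweisbar('p')               (the machine proves p)
   Dp  ==  ~B~p  ==  ~Beweisbar('~p')   (p is consistent for the machine)

   1) conscious:            Dt    (i.e. ~Bf: the machine is consistent)
   2) self-conscious:       DDt
   3) conscious of p:       Dp
   4) self-conscious of p:  DDp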
>
> So now I would prefer to talk about self-relating to a 1-personal
> 'world', where previously I might have said 'I am conscious', and that
> such a world mediates or instantiates 3-personal content.
This is ambiguous. The word 'world' is a bit problematic in my setting.
> I've tried to root this (in various posts) in a logically or
> semantically primitive notion of self-relation that could underlie 0,
> 1, or 3-person narratives, and to suggest that such self-relation
> might be intuited as 'sense' or 'action' depending on the narrative
> selected.
OK.
> But crucially such nuances would merely be partial takes on the
> underlying self-relation, a 'grasp' which is not decomposable.
Actually the elementary grasps are decomposable (into number
relations) in the comp setting.
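For example (a sketch, in the spirit of Kleene's arithmetization of
computability): a statement like

   STEP(i, j, k, n)  ==  'machine number i, on input j, is in state k
                          after n steps'

is a purely arithmetical relation among the numbers i, j, k, n, so any
grasp implemented by a computation decomposes into such number
relations.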
>
> So ISTM that questions should attempt to elicit the machine's
> self-relation to such a world and its contents: i.e. its 'grasp' of a
> reality analogous to our own. And ISTM the machine could also ask
> itself such questions, just as we can, if indeed such a world existed
> for it.
OK, but the machine cannot know that (just as we cannot know that).
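Formally, this is Gödel's second incompleteness theorem; in the
provability logic G it follows from Löb's formula B(Bp -> p) -> Bp
with p = f (a sketch):

   Dt  ->  ~B(Dt)

A consistent machine cannot prove its own consistency, so she cannot
know (prove) that she is conscious in the approximation above.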
>
> I realise of course that it's fruitless to try to impose my jargon on
> anyone else, but I've just been trying to see whether I could become
> less confused by expressing things in this way. Of course, a
> reciprocal effect might just be to make others more confused!
That is indeed the risk.
Best regards,
Bruno
>
> David
>>
>>
>> On 21 Jun 2007, at 01:07, David Nyman wrote:
>>
>> >
>> > On Jun 5, 3:12 pm, Bruno Marchal <marc....domain.name.hidden> wrote:
>> >
>> >> Personally I don't think we can be *personally* mistaken about our
>> >> own consciousness, even if we can be mistaken about anything that
>> >> consciousness could be about.
>> >
>> > I agree with this, but I would prefer to stop using the term
>> > 'consciousness' at all.
>>
>>
>> Why?
>>
>>
>>
>> > To make a decision (to whatever degree of certainty) about whether
>> > a machine possessed a 1-person pov analogous to a human one, we
>> > would surely ask it the same sort of questions one would ask a
>> > human. That is: questions about its personal 'world' - what it sees,
>> > hears, tastes (and perhaps extended non-human modalities); what its
>> > intentions are, and how it carries them into practice. From the
>> > machine's point-of-view, we would expect it to report such features
>> > of its personal world as being immediately present (as ours are),
>> > and that it be 'blind' to whatever 'rendering mechanisms' may
>> > underlie this (as we are).
>> >
>> > If it passed these tests, it would be making similar claims to a
>> > personal world as we do, and deploying this to achieve similar ends.
>> > Since in this case it could ask itself the same questions that we
>> > can, it would have the same grounds for reaching the same
>> > conclusion.
>> >
>> > However, I've argued in the other bit of this thread against the
>> > possibility of a computer in practice being able to instantiate such
>> > a 1-person world merely in virtue of 'soft' behaviour (i.e.
>> > programming). I suppose I would therefore have to conclude that no
>> > machine could actually pass the tests I describe above - whether
>> > self-administered or not - purely in virtue of running some AI
>> > program, however complex. This is an empirical prediction, and will
>> > have to await an empirical outcome.
>>
>>
>> Now I have big problems understanding this post. I must think ... (and
>> go).
>>
>> Bye,
>>
>> Bruno
>>
>>
>>
>> >
>> >
>> > On Jun 5, 3:12 pm, Bruno Marchal <marc....domain.name.hidden> wrote:
>> >> On 3 Jun 2007, at 21:52, Hal Finney wrote:
>> >>
>> >>
>> >>
>> >>> Part of what I wanted to get at in my thought experiment is the
>> >>> bafflement and confusion an AI should feel when exposed to human
>> >>> ideas about consciousness. Various people here have proffered their
>> >>> own ideas, and we might assume that the AI would read these
>> >>> suggestions, along with many other ideas that contradict the ones
>> >>> offered here. It seems hard to escape the conclusion that the only
>> >>> logical response is for the AI to figuratively throw up its hands
>> >>> and say that it is impossible to know if it is conscious, because
>> >>> even humans cannot agree on what consciousness is.
>> >>
>> >> Augustine said about (subjective) *time* that he knows perfectly
>> >> what it is, but that if you ask him to say what it is, then he
>> >> admits being unable to say anything. I think that this applies to
>> >> "consciousness". We know what it is, although only in some personal
>> >> and uncommunicable way.
>> >> Now this happens to be true also for many mathematical concepts.
>> >> Strictly speaking we don't know how to define the natural numbers,
>> >> and we know today that indeed we cannot define them in a
>> >> communicable way, that is, without assuming the auditor already
>> >> knows what they are.
>> >>
>> >> So what can we do? We can do what mathematicians do all the time.
>> >> We can abandon the very idea of *defining* what consciousness is,
>> >> and try instead to focus on principles or statements about which we
>> >> can agree that they apply to consciousness. Then we can search for
>> >> (mathematical) objects obeying such or similar principles. This can
>> >> be made easier by admitting some theory or realm for consciousness,
>> >> like the idea that consciousness could apply to *some* machines or
>> >> to some *computational events*, etc.
>> >>
>> >> We could agree for example that:
>> >> 1) each one of us knows what consciousness is, but nobody can prove
>> >> he/she/it is conscious;
>> >> 2) consciousness is related to inner personal or self-referential
>> >> modality;
>> >> etc.
>> >>
>> >> This is how I proceed in "Conscience et Mécanisme" ("conscience"
>> >> is the French for "consciousness"; "conscience morale" is the
>> >> French for the English "conscience").
>> >>
>> >>
>> >>
>> >>> In particular I don't think an AI could be expected to claim that
>> >>> it knows that it is conscious, that consciousness is a deep and
>> >>> intrinsic part of itself, that whatever else it might be mistaken
>> >>> about it could not be mistaken about being conscious. I don't see
>> >>> any logical way it could reach this conclusion by studying the
>> >>> corpus of writings on the topic. If anyone disagrees, I'd like to
>> >>> hear how it could happen.
>> >>
>> >> As long as a machine is correct, when she introspects herself she
>> >> cannot fail to discover a gap between truth (p) and provability
>> >> (Bp). The machine can discover correctly (but not necessarily in a
>> >> completely communicable way) a gap between provability (which can
>> >> potentially lead to falsities, despite correctness) and the
>> >> incorrigible knowability or knowledgeability (Bp & p), and then the
>> >> gap between those notions and observability (Bp & Dp) and
>> >> sensibility (Bp & Dp & p). Even without using the conventional name
>> >> "consciousness", machines can discover semantical fixpoints playing
>> >> the role of non-expressible but true statements.
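>> >> In summary (a rough map, not definitions; B is provability, D is
>> >> consistency, p is true):
>> >>
>> >>    truth:          p
>> >>    provability:    Bp
>> >>    knowability:    Bp & p
>> >>    observability:  Bp & Dp
>> >>    sensibility:    Bp & Dp & p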
>> >> We can *already* talk with machines about those true unnameable
>> >> things, as Tarski, Gödel, Löb, Solovay, Boolos, Goldblatt, etc.,
>> >> have done.
>> >>
>> >>
>> >>
>> >>> And the corollary to this is that perhaps humans also cannot
>> >>> legitimately make such claims, since logically their position is
>> >>> not so different from that of the AI. In that case the seemingly
>> >>> axiomatic question of whether we are conscious may after all be
>> >>> something that we could be mistaken about.
>> >>
>> >> This is an inference from "I cannot express p" to "I can express
>> >> not p", or from ~Bp to B~p. Many atheists reason like that about
>> >> the concept of "unnameable" reality, but it is a logical error.
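>> >> (A counterexample, as a sketch: for any consistent machine, the
>> >> Gödel-Rosser incompleteness theorem provides a sentence g with
>> >>
>> >>    ~Bg  and  ~B~g
>> >>
>> >> so from the non-provability of p nothing follows about the
>> >> provability of ~p.)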
>> >> Even for someone who is not willing to take the comp hyp into
>> >> consideration, it is a third-person communicable fact that
>> >> self-observing machines can discover and talk about many non
>> >> 3-provable and sometimes even non 3-definable true "statements"
>> >> about themselves. Some true statements can only be interrogated.
>> >> Personally I don't think we can be *personally* mistaken about our
>> >> own consciousness, even if we can be mistaken about anything that
>> >> consciousness could be about.
>> >>
>> >> Bruno
>> >>
>> >> http://iridia.ulb.ac.be/~marchal/