Re: How would a computer know if it were conscious?

From: David Nyman <david.nyman.domain.name.hidden>
Date: Fri, 29 Jun 2007 16:17:38 +0100

On 29/06/07, Bruno Marchal <marchal.domain.name.hidden> wrote:
BM: I am not sure that in case of disagreement (like our "disagreement"
with Torgny), changing the vocabulary is a good idea. This will not
make the problem go away; on the contrary, there is a risk of
introducing obscurity.

DN: Yes, this seems to be the greater risk. OK, in general I'll try to
avoid it where possible. I've taken note of the correspondences you
provided for the senses of 'consciousness' I listed, and the additional one.


BM: Actually the elementary grasps are decomposable (into number relations)
in the comp setting.

DN: Then are you saying that 'action' can occur without 'sense' - i.e. that
'zombies' are conceivable? This is what I hoped was avoided in the
intuition that 'sense' and 'action' are, respectively, 1-p and 3-p aspects
abstracted from a 0-p decomposable self-relation. The zombie then becomes
merely a category error. I thought that in COMP, number relations would be
identified with this decomposable self-relation. Ah... but by
'decomposable', I think perhaps you mean that there are of course
*different* number relations, so this would then entail that there is a
set of such fundamental relations such that *each* relation is individually
decomposable, yes?

BM: OK, but the machine cannot know that. (As we cannot know that.)

DN: Do you mean that the machine can't know for sure the correspondence
between its conscious world and the larger environment in which this is
embedded and to which it putatively relates? Then I agree of course, and as
you say, neither can we, for the sufficient reasons you have articulated.
So what I meant was that it would simply be in the same position that we
are, which seems self-evident.

Anyway, as I said, the original post was probably ill advised, and I retract
my quibbles about your terminology.

As to my point about whether such an outcome is likely vis-a-vis an AI
program, it wasn't of course prompted by any claims you made on this topic, but
was stimulated by another thread. My thought goes as follows. I seem to have
convinced myself that, on the COMP assumption that *I* am such a machine, it
is possible for other machines to instantiate conscious computations.
Therefore it would be reasonable for me to attribute consciousness to a
machine that passed certain critical tests, though not such that I could
definitely know or prove that it was conscious. Nonetheless, such quibbles
don't stop us from undertaking some empirical effort to develop machines
with consciousness. Two ways of doing this seem apparent. First, to copy
an existing such system (e.g. a human) at an appropriate substitution level
(as in your notorious gedanken experiment). Second, to arrange for some
initial system to undergo a process of 'psycho-physical' evolution (as
humans have done) such that its 'sense' and 'action' narratives
'self-converge' on a consistent 1p-3p interface, as in our own case.

In either of these cases, 'sense' and 'action' narratives 'self-converge',
rather than being 'engineered', and any imputation of consciousness (i.e.
the attribution of semantics to the computation) continues to be 1p
*self-attribution*, not a provable or definitely knowable 3p one. The
problem then seems to be: is there in fact a knowable method to 'design' all
this into a system from the outside: i.e. a way to start from an external
semantic attribution (e.g. an AI program) and then 'engineer' the sense and
action syntactics of the instantiation in such a way that they converge on a
consistent semantic interpretation from either the 1p or 3p pov? IOW, so that a
system thus engineered would be capable of passing the same critical tests
achievable by systems of the first two types. I can't see that we possess even a
theory of how this could be done, and as somebody once said, there's nothing
so practical as a good theory. This is why I expressed doubt about the
empirical outcome of any AI programme approached in this manner. ISTM that
references to Moore's Law etc. in this context are at present not much more
than promissory notes written in invisible ink on transparent paper.

David.

Le 28-juin-07, à 17:56, David Nyman a écrit :
>
> > On 28/06/07, Bruno Marchal < marchal.domain.name.hidden> wrote:
> >
> > Hi Bruno
> >
> > The remarks you comment on are certainly not the best-considered or
> > most cogently expressed of my recent posts. However, I'll try to
> > clarify if you have specific questions. As to why I said I'd rather
> > not use the term 'consciousness', it's because of some recent
> > confusion and circular disputes (e.g. with Torgny, or about whether
> > hydrogen atoms are 'conscious').
>
>
> I am not sure that in case of disagreement (like our "disagreement"
> with Torgny), changing the vocabulary is a good idea. This will not
> make the problem go away; on the contrary, there is a risk of
> introducing obscurity.
>
>
>
>
>
> > Some of the sometimes confused senses (not by you, I hasten to add!)
> > seem to be:
> >
> > 1) The fact of possessing awareness
> > 2) The fact of being aware of one's awareness
> > 3) the fact of being aware of some content of one's awareness
>
>
> So just remember that in a first approximation I identify this with
>
> 1) being conscious (Dt?) .... for those who have
> followed the modal posts. (Dx is for ~ Beweisbar (~x))
> 2) being self-conscious (DDt?)
> 3) being conscious of # (Dp?)
>
> You can also have:
>
> 4) being self-conscious of something (DDp?).
>
> Dp is really an abbreviation of the arithmetical proposition
> ~beweisbar('~p'). 'p' means the Gödel number describing p in the
> language of the machine (by default it is the first-order arithmetic
> language).
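
Spelled out in standard provability-logic notation, the identifications above read as follows. This is only a restatement of the four points and of the definition of D just given; the parenthetical consistency glosses are the usual readings of these formulas, added here for orientation rather than taken from the post:

\[
\begin{aligned}
\mathrm{D}x &\equiv \neg\,\mathrm{B}\,\neg x
  && \text{(D is the dual of B, i.e. of beweisbar)}\\
\text{1) conscious} &\sim \mathrm{D}\top
  && \text{(}\mathrm{D}\top\text{ asserts the machine's own consistency)}\\
\text{2) self-conscious} &\sim \mathrm{D}\mathrm{D}\top\\
\text{3) conscious of } p &\sim \mathrm{D}p
  && \text{(}\neg\,\mathrm{beweisbar}(\ulcorner \neg p \urcorner)\text{)}\\
\text{4) self-conscious of } p &\sim \mathrm{D}\mathrm{D}p
\end{aligned}
\]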
>
>
> >
> > So now I would prefer to talk about self-relating to a 1-personal
> > 'world', where previously I might have said 'I am conscious', and that
> > such a world mediates or instantiates 3-personal content.
>
> This is ambiguous. The word 'world' is a bit problematic in my setting.
>
>
> > I've tried to root this (in various posts) in a logically or
> > semantically primitive notion of self-relation that could underly 0,
> > 1, or 3-person narratives, and to suggest that such self-relation
> > might be intuited as 'sense' or 'action' depending on the narrative
> > selected.
>
> OK.
>
>
> > But crucially such nuances would merely be partial takes on the
> > underlying self-relation, a 'grasp' which is not decomposable.
>
>
> Actually the elementary grasps are decomposable (into number relations)
> in the comp setting.
>
>
> >
> > So ISTM that questions should attempt to elicit the machine's
> > self-relation to such a world and its contents: i.e. its 'grasp' of a
> > reality analogous to our own. And ISTM the machine could also ask
> > itself such questions, just as we can, if indeed such a world existed
> > for it.
>
> OK, but the machine cannot know that. (As we cannot know that.)
>
> >
> > I realise of course that it's fruitless to try to impose my jargon on
> > anyone else, but I've just been trying to see whether I could become
> > less confused by expressing things in this way. Of course, a
> > reciprocal effect might just be to make others more confused!
>
> It is the risk indeed.
>
>
> Best regards,
>
> Bruno
>
>
>
>
> >
> > David
> >>
> >>
> >> Le 21-juin-07, à 01:07, David Nyman a écrit :
> >>
> >> >
> >> > On Jun 5, 3:12 pm, Bruno Marchal < marc....domain.name.hidden> wrote:
> >> >
> >> >> Personally I don't think we can be *personally* mistaken about our
> >> >> own consciousness even if we can be mistaken about anything that
> >> >> consciousness could be about.
> >> >
> >> > I agree with this, but I would prefer to stop using the term
> >> > 'consciousness' at all.
> >>
> >>
> >> Why?
> >>
> >>
> >>
> >> > To make a decision (to whatever degree of certainty) about whether
> >> > a machine possessed a 1-person pov analogous to a human one, we
> >> > would surely ask it the same sort of questions one would ask a
> >> > human. That is: questions about its personal 'world' - what it
> >> > sees, hears, tastes (and perhaps extended non-human modalities);
> >> > what its intentions are, and how it carries them into practice.
> >> > From the machine's point-of-view, we would expect it to report such
> >> > features of its personal world as being immediately present (as
> >> > ours are), and that it be 'blind' to whatever 'rendering
> >> > mechanisms' may underlie this (as we are).
> >> >
> >> > If it passed these tests, it would be making similar claims on a
> >> > personal world as we do, and deploying this to achieve similar
> >> > ends. Since in this case it could ask itself the same questions
> >> > that we can, it would have the same grounds for reaching the same
> >> > conclusion.
> >> >
> >> > However, I've argued in the other bit of this thread against the
> >> > possibility of a computer in practice being able to instantiate
> >> > such a 1-person world merely in virtue of 'soft' behaviour (i.e.
> >> > programming). I suppose I would therefore have to conclude that no
> >> > machine could actually pass the tests I describe above - whether
> >> > self-administered or not - purely in virtue of running some AI
> >> > program, however complex. This is an empirical prediction, and will
> >> > have to await an empirical outcome.
> >>
> >>
> >> Now I have big problems understanding this post. I must think ... (and
> >> go).
> >>
> >> Bye,
> >>
> >> Bruno
> >>
> >>
> >>
> >> >
> >> >
> >> > On Jun 5, 3:12 pm, Bruno Marchal <marc....domain.name.hidden> wrote:
> >> >> Le 03-juin-07, à 21:52, Hal Finney a écrit :
> >> >>
> >> >>
> >> >>
> >> >>> Part of what I wanted to get at in my thought experiment is the
> >> >>> bafflement and confusion an AI should feel when exposed to human
> >> >>> ideas about consciousness. Various people here have proffered
> >> >>> their own ideas, and we might assume that the AI would read these
> >> >>> suggestions, along with many other ideas that contradict the ones
> >> >>> offered here. It seems hard to escape the conclusion that the only
> >> >>> logical response is for the AI to figuratively throw up its hands
> >> >>> and say that it is impossible to know if it is conscious, because
> >> >>> even humans cannot agree on what consciousness is.
> >> >>
> >> >> Augustine said about (subjective) *time* that he knows perfectly
> >> >> what it is, but that if you ask him to say what it is, then he
> >> >> admits being unable to say anything. I think that this applies to
> >> >> "consciousness". We know what it is, although only in some personal
> >> >> and uncommunicable way.
> >> >> Now this happens to be true also for many mathematical concepts.
> >> >> Strictly speaking we don't know how to define the natural numbers,
> >> >> and we know today that indeed we cannot define them in a
> >> >> communicable way, that is without assuming the auditor knows
> >> >> already what they are.
> >> >>
> >> >> So what can we do? We can do what mathematicians do all the time.
> >> >> We can abandon the very idea of *defining* what consciousness is,
> >> >> and try instead to focus on principles or statements about which
> >> >> we can agree that they apply to consciousness. Then we can search
> >> >> for (mathematical) objects obeying such or similar principles.
> >> >> This can be made easier by admitting some theory or realm for
> >> >> consciousness, like the idea that consciousness could apply to
> >> >> *some* machine or to some *computational events* etc.
> >> >>
> >> >> We could agree for example that:
> >> >> 1) each one of us knows what consciousness is, but nobody can prove
> >> >> he/she/it is conscious.
> >> >> 2) consciousness is related to inner personal or self-referential
> >> >> modality
> >> >> etc.
> >> >>
> >> >> This is how I proceed in "Conscience et Mécanisme". ("Conscience"
> >> >> is the French for consciousness; "conscience morale" is the French
> >> >> for the English "conscience".)
> >> >>
> >> >>
> >> >>
> >> >>> In particular I don't think an AI could be expected to claim that
> >> >>> it knows that it is conscious, that consciousness is a deep and
> >> >>> intrinsic part of itself, that whatever else it might be mistaken
> >> >>> about it could not be mistaken about being conscious. I don't see
> >> >>> any logical way it could reach this conclusion by studying the
> >> >>> corpus of writings on the topic. If anyone disagrees, I'd like to
> >> >>> hear how it could happen.
> >> >>
> >> >> As far as a machine is correct, when she introspects herself, she
> >> >> cannot not discover a gap between truth (p) and provability (Bp).
> >> >> The machine can discover correctly (but not necessarily in a
> >> >> completely communicable way) a gap between provability (which can
> >> >> potentially lead to falsities, despite correctness) and the
> >> >> incorrigible knowability or knowledgeability (Bp & p), and then the
> >> >> gap between those notions and observability (Bp & Dp) and
> >> >> sensibility (Bp & Dp & p). Even without using the conventional name
> >> >> of "consciousness", machines can discover semantical fixpoints
> >> >> playing the role of non-expressible but true statements.
> >> >> We can *already* talk with machines about those true unnameable
> >> >> things, as Tarski, Gödel, Löb, Solovay, Boolos, Goldblatt, etc. have
> >> >> done.
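
Collected in one place, the modal variants mentioned in this paragraph (this is only a tabulation of the formulas already given above, with B for provability and D for its dual; nothing is added):

\[
\begin{aligned}
&p && \text{truth}\\
&\mathrm{B}p && \text{provability}\\
&\mathrm{B}p \wedge p && \text{knowability (incorrigible)}\\
&\mathrm{B}p \wedge \mathrm{D}p && \text{observability}\\
&\mathrm{B}p \wedge \mathrm{D}p \wedge p && \text{sensibility}
\end{aligned}
\]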
> >> >>
> >> >>
> >> >>
> >> >>> And the corollary to this is that perhaps humans also cannot
> >> >>> legitimately make such claims, since logically their position is
> >> >>> not so different from that of the AI. In that case the seemingly
> >> >>> axiomatic question of whether we are conscious may after all be
> >> >>> something that we could be mistaken about.
> >> >>
> >> >> This is an inference from "I cannot express p" to "I can express
> >> not
> >> >> p". Or from ~Bp to B~p. Many atheist reason like that about the
> >> >> concept of "unameable" reality, but it is a logical error.
> >> >> Even for someone who is not willing to take the comp hyp into
> >> >> consideration, it is a third person communicable fact that
> >> >> self-observing machines can discover and talk about many non
> >> >> 3-provable
> >> >> and sometimes even non 3-definable true "statements" about them.
> >> Some
> >> >> true statements can only be interrogated.
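
The failure of the step from ~Bp to B~p can also be checked mechanically. Below is a minimal sketch, not from the original posts (the world numbering and variable names are invented for illustration): a three-world Kripke model in which ~Bp holds at the root world while B~p does not. The frame is finite, irreflexive and transitive, so the counterexample is good for the provability logic GL as well as for basic modal logic.

# A minimal Kripke-model check (illustrative sketch): world 0 sees
# worlds 1 and 2, and the atom p holds at world 1 only.  The frame is
# finite, irreflexive and transitive, hence also a GL-frame.

worlds = [0, 1, 2]
R = {0: {1, 2}, 1: set(), 2: set()}    # accessibility relation
p_worlds = {1}                         # worlds where p is true

def box(extension):
    """Worlds where 'B phi' holds: every accessible world satisfies phi."""
    return {w for w in worlds if all(v in extension for v in R[w])}

Bp = box(p_worlds)                     # worlds satisfying Bp
Bnotp = box(set(worlds) - p_worlds)    # worlds satisfying B~p

# At world 0, Bp fails (world 2 refutes p), so ~Bp holds there; but
# B~p fails too (world 1 satisfies p).  Hence ~Bp does not entail B~p.
print("~Bp at world 0:", 0 not in Bp)      # True
print("B~p at world 0:", 0 in Bnotp)       # False

Arithmetically this is just the familiar incompleteness situation: for a consistent machine and one of its undecidable sentences (a Rosser sentence, say), neither the sentence nor its negation is provable, so ~Bp holds without B~p.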
> >> >> Personally I don't think we can be *personally* mistaken about our
> >> >> own consciousness even if we can be mistaken about anything that
> >> >> consciousness could be about.
> >> >>
> >> >> Bruno
> >> >>
> >> >> http://iridia.ulb.ac.be/~marchal/
> >> >
> >> >
> >> > >
> >> >
> >> http://iridia.ulb.ac.be/~marchal/
> >>
> >> >>
> >>
> http://iridia.ulb.ac.be/~marchal/
>
> >
>
