Re: How would a computer know if it were conscious?

From: David Nyman <david.nyman.domain.name.hidden>
Date: Tue, 26 Jun 2007 22:10:49 +0100

On 26/06/07, John Mikes <jamikes.domain.name.hidden> wrote:

JM: You mean a hallucination of x, when your 'I just see x, hear x, feel
x' and so forth is included in your knowledge? Or even substitutes for
it? Maybe yes...

DN: "I am conscious of knowing x" is distinguishable from "I know x". The
former has already differentiated 'knowing x' and so now "I know [knowing
x]". And so forth. So knowing in this sense stands for a direct or
unmediated 'self-relation', a species of unity between knower and known -
hence its notorious 'incorrigibility'.

JM: But then can you differentiate? (Or is this not a reasonable question?)

DN: It seems that in the development of the individual at first there is no
such differentiation; then we find that we are 'thrown' directly into a
'world' populated with 'things' and 'other persons'; later, we differentiate
this from a distal 'real world' that putatively co-varies with it. Now we
are in a position to make a distinction between 'plural' or 'rational' modes
of knowing, and solipsistic or 'crazy' ones. But then it dawns that it's
*our world* - not the 'real' one - that's the 'hallucination'. No wonder
we're crazy! This evolutionarily-directed stance towards what we 'know' is
of course so pervasive that it's only a minority (like the lost souls on
this list!) who harbour any real concern about the precise status of such
correlations. Hence, I suppose, our continual state of confusion.

JM: The classic question: "Am I?" and the classical answer: "Who is
asking?"

DN: Just so. Crazy, like I say.

JM: Are you including 'humans' among the machines or the computers? And dogs?
Amoebas?

DN: Actually, I just meant to distinguish between 'machines' considered
physically and computational processes. I really have no idea of course
whether any non-human artefact will ever come to know and act in the sense
that a human does. My point was only to express my logical doubts that it
would ever do so in virtue of its behaving in a way that merely represents
*to us* a process of computation. However, the more I reason about this the
stranger it gets, so I guess I really 'dunno'.

JM: Bruno is right: accepting that 'any machine' is part of its "outside(?)
totality", i.e. embedded into its ambiance, I would be scared to
differentiate myself. There is no hermetic 'skin' - there are transitional
effects transcending back and forth; we just do not observe those outside
the 'topical boundaries' of our actual observation (the model, as I call it).

DN: Yes: all is relation (ultimately self-relation, IMO), and 'boundaries'
merely delimit what is 'observable'. In this context, what do you think
about Colin's TPONOG post?

Regards

David


On 6/23/07, David Nyman <david.nyman.domain.name.hidden> wrote:
> >
> > Hi John....
>
> ....(just your italics paragraphs quoted in this reply; "JM:" means
> present text):
>
> DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps
> re-phrase this as just: 'how do you know x?' And then the answers are of
> the type 'I just see x, hear x, feel x' and so forth. IOW, 'knowing x' is
> unmediated - 'objects' like x are just 'embedded' in the structure of
> the 'knower', and this is recursively related to more inclusive structures
> within which the knower and its environment are in turn embedded.
>
> JM: You mean a hallucination of x, when your 'I just see x, hear x, feel
> x' and so forth is included in your knowledge? Or even substitutes for it?
> Maybe yes...
> But then can you differentiate? (Or is this not a reasonable question?)
>
>
> ((to JM: ...know if you are NOT conscious? Well, you wouldn't.))
> DN: Agreed. If we 'delete the noumenon' we get: "How would you know if
> you are NOT?" or: "How would you know if you did NOT (know)?". To which we
> might indeed respond: "You would not know, if you were NOT", or: "You
> would not know, if you did NOT (know)".
> JM: The classic question: "Am I?" and the classical answer: "Who is
> asking?"
>
> DN: I think we need to distinguish between 'computers' and 'machines'. I
> can see no reason in principle why an artifact could not 'know', and be
> motivated by such knowing to interact with the human world: humans are of
> course themselves 'natural artifacts'.
>
> JM: Are you including 'humans' among the machines or the computers? And
> dogs? Amoebas?
> The main difference I see here is the 'extract' of the "human world" (or:
> "the world, as humans can interpret what they have learned") downsized to
> the choice of necessities WE liked to design into an artifact (motors,
> cellphones, AI, AL). Yes, we (humans etc.) are artefacts, but we 'use' a
> lot of capabilities (mental etc. gadgets) we either don't know at all, or
> just accept as 'being human' (or, as an extract of human traits, 'being
> dog'), with no urge to build such into a microwave oven or an AI.
> But then we are SSOO smart when we draw conclusions!!!!!
> DN: Bruno's approach is to postulate the whole 'ball of wax' as
> computation, so that any 'event', whether 'inside' or 'outside' the
> machine, is 'computed'.
>
> JM: Bruno is right: accepting that 'any machine' is part of its
> "outside(?) totality", i.e. embedded into its ambiance, I would be scared
> to differentiate myself. There is no hermetic 'skin' - there are
> transitional effects transcending back and forth; we just do not observe
> those outside the 'topical boundaries' of our actual observation (the
> model, as I call it).
>
> DN: The drift of my recent posts has been that even in this account,
> 'worlds' can emerge 'orthogonally' to each other, such that from their
> reciprocal perspectives, 'events' in their respective worlds will be
> 'imaginary'.
>
> JM: I can't say: I have no idea how the world works, except for the
> little I have interpreted into my 1st-person narrative. I accept
> "maybe"-s.
> And I have a way to 'express' myself: I use "I dunno".
>
> Have fun
>
> John
>
>
>
> David
>
> >
> >
> > > Dear David,
> > > do not expect from me the theoretical level of technicality-talk we
> > > get from Bruno: I talk (and think) common sense (my own), and if the
> > > theoretical technicalities sound strange, I return to my thinking.
> > >
> > > That's what I got, that's what I use (plagiarized from the Hungarian
> > > commie joke: what is the difference between the people's democracy
> > > and a wife? Nothing: that's what we got, that's what we love).
> > >
> > > When I read your "questioning" the computer, I realized that you are
> > > in the ballpark of the AI people (maybe also AL - sorry, Russell)
> > > who select machine-accessible aspects for comparing.
> > > You may ask about prejudice, shame (about goofed situations), humor
> > > (does a computer laugh?), boredom, or preferential topics (you push
> > > for an astronomical calculation and the computer says: I'd rather
> > > play some Bach music now),
> > > sexual preference (even disinterestedness is slanted), or laziness.
> > > If you add untruthfulness in risky situations, you really have a
> > > human machine with consciousness (whatever people say it is - I agree
> > > with your evading that unidentified obsolete noumenon as much as
> > > possible).
> > >
> > > I found Bruno's post well fitting - if only I had some hint of what
> > > "...inner personal or self-referential modality..." may mean.
> > > I could not 'practicalize' it.
> > > I still frown at "abandoning (the meaning of) something but
> > > considering items as pertaining to it" - a rough paraphrasing, I
> > > admit. Pertaining to what?
> > > I don't feel comfortable borrowing math methods for non-math
> > > explanations, but that is my deficiency.
> > >
> > > Now that we have arrived at the question I replied-added (sort of)
> > > to Colin's question -
> > > let me ask it again: how would YOU know if you are conscious?
> > > ('Conscious' is more meaningful than 'cc-ness'.) Or rather: how would
> > > you know if you are NOT conscious? Well, you wouldn't. If you can,
> > > you are conscious. Computers?????
> > >
> > > Have a good weekend
> > >
> > > John Mikes
> > >
> > >
> > >
> > > On 6/20/07, David Nyman <david.nyman.domain.name.hidden> wrote:
> > > >
> > > >
> > > > On Jun 5, 3:12 pm, Bruno Marchal <marc....domain.name.hidden> wrote:
> > > >
> > > > > Personally I don't think we can be *personally* mistaken about
> > > > > our own consciousness even if we can be mistaken about anything
> > > > > that consciousness could be about.
> > > >
> > > > I agree with this, but I would prefer to stop using the term
> > > > 'consciousness' at all. To make a decision (to whatever degree of
> > > > certainty) about whether a machine possessed a 1-person pov analogous
> > > > to a human one, we would surely ask it the same sort of questions one
> > > > would ask a human. That is: questions about its personal 'world' -
> > > > what it sees, hears, tastes (and perhaps extended non-human
> > > > modalities); what its intentions are, and how it carries them into
> > > > practice. From the machine's point of view, we would expect it to
> > > > report such features of its personal world as being immediately
> > > > present (as ours are), and that it be 'blind' to whatever 'rendering
> > > > mechanisms' may underlie this (as we are).
> > > >
> > > > If it passed these tests, it would be making similar claims on a
> > > > personal world as we do, and deploying this to achieve similar ends.
> > > > Since in this case it could ask itself the same questions that we can,
> > > > it would have the same grounds for reaching the same conclusion.
> > > >
> > > > However, I've argued in the other bit of this thread against the
> > > > possibility of a computer in practice being able to instantiate such a
> > > > 1-person world merely in virtue of 'soft' behaviour (i.e.
> > > > programming). I suppose I would therefore have to conclude that no
> > > > machine could actually pass the tests I describe above - whether
> > > > self-administered or not - purely in virtue of running some AI
> > > > program, however complex. This is an empirical prediction, and will
> > > > have to await an empirical outcome.
> > > >
> > > > David
> > > >
> > > > On Jun 5, 3:12 pm, Bruno Marchal <marc....domain.name.hidden> wrote:
> > > > > On 03-Jun-07, at 21:52, Hal Finney wrote:
> > > > >
> > > > >
> > > > >
> > > > > > Part of what I wanted to get at in my thought experiment is the
> > > > > > bafflement and confusion an AI should feel when exposed to human
> > > > > > ideas about consciousness. Various people here have proffered
> > > > > > their own ideas, and we might assume that the AI would read these
> > > > > > suggestions, along with many other ideas that contradict the ones
> > > > > > offered here. It seems hard to escape the conclusion that the
> > > > > > only logical response is for the AI to figuratively throw up its
> > > > > > hands and say that it is impossible to know if it is conscious,
> > > > > > because even humans cannot agree on what consciousness is.
> > > > >
> > > > > Augustine said about (subjective) *time* that he knows perfectly
> > > > > what it is, but that if you ask him to say what it is, then he
> > > > > admits being unable to say anything. I think that this applies to
> > > > > "consciousness". We know what it is, although only in some personal
> > > > > and uncommunicable way.
> > > > > Now this happens to be true also for many mathematical concepts.
> > > > > Strictly speaking we don't know how to define the natural numbers,
> > > > > and we know today that indeed we cannot define them in a
> > > > > communicable way, that is, without assuming the auditor already
> > > > > knows what they are.
> > > > >
> > > > > So what can we do? We can do what mathematicians do all the time.
> > > > > We can abandon the very idea of *defining* what consciousness is,
> > > > > and try instead to focus on principles or statements about which
> > > > > we can agree that they apply to consciousness. Then we can search
> > > > > for (mathematical) objects obeying such or similar principles.
> > > > > This can be made easier by admitting some theory or realm for
> > > > > consciousness, like the idea that consciousness could apply to
> > > > > *some* machines or to some *computational events*, etc.
> > > > >
> > > > > We could agree, for example, that:
> > > > > 1) each one of us knows what consciousness is, but nobody can
> > > > > prove he/she/it is conscious;
> > > > > 2) consciousness is related to an inner personal or
> > > > > self-referential modality;
> > > > > etc.
> > > > >
> > > > > This is how I proceed in "Conscience et Mécanisme". ("Conscience"
> > > > > is the French for consciousness; "conscience morale" is the French
> > > > > for the English "conscience".)
> > > > >
> > > > >
> > > > >
> > > > > > In particular I don't think an AI could be expected to claim
> > > > > > that it knows that it is conscious, that consciousness is a deep
> > > > > > and intrinsic part of itself, that whatever else it might be
> > > > > > mistaken about it could not be mistaken about being conscious. I
> > > > > > don't see any logical way it could reach this conclusion by
> > > > > > studying the corpus of writings on the topic. If anyone
> > > > > > disagrees, I'd like to hear how it could happen.
> > > > >
> > > > > As far as a machine is correct, when she introspects herself she
> > > > > cannot but discover a gap between truth (p) and provability (Bp).
> > > > > The machine can discover correctly (but not necessarily in a
> > > > > completely communicable way) a gap between provability (which can
> > > > > potentially lead to falsities, despite correctness) and the
> > > > > incorrigible knowability or knowledgeability (Bp & p), and then
> > > > > the gap between those notions and observability (Bp & Dp) and
> > > > > sensibility (Bp & Dp & p). Even without using the conventional
> > > > > name "consciousness", machines can discover semantical fixpoints
> > > > > playing the role of non-expressible but true statements.
> > > > > We can *already* talk with machines about those true unnameable
> > > > > things, as have done Tarski, Gödel, Löb, Solovay, Boolos,
> > > > > Goldblatt, etc.
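> > > > >
> > > > > [A worked restatement of the notation, added here for clarity -
> > > > > assuming, as is standard in provability logic and in Marchal's
> > > > > usage, that B is the provability box of Gödel-Löb logic and that
> > > > > Dp abbreviates ~B~p, the consistency of p:
> > > > >
> > > > >   truth:          p
> > > > >   provability:    Bp
> > > > >   knowability:    Bp & p
> > > > >   observability:  Bp & Dp
> > > > >   sensibility:    Bp & Dp & p
> > > > >
> > > > > For a correct machine, whenever Bp holds p is in fact true, so
> > > > > these variants coincide extensionally; but the machine cannot in
> > > > > general prove Bp -> p (Löb's theorem), which is why they remain
> > > > > provably distinct modalities - the "gaps" above.]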
> > > > >
> > > > >
> > > > >
> > > > > > And the corollary to this is that perhaps humans also cannot
> > > > > > legitimately make such claims, since logically their position is
> > > > > > not so different from that of the AI. In that case the seemingly
> > > > > > axiomatic question of whether we are conscious may after all be
> > > > > > something that we could be mistaken about.
> > > > >
> > > > > This is an inference from "I cannot express p" to "I can express
> > > > > not-p", or from ~Bp to B~p. Many atheists reason like that about
> > > > > the concept of an "unnameable" reality, but it is a logical error.
> > > > > Even for someone who is not willing to take the comp hyp into
> > > > > consideration, it is a third-person communicable fact that
> > > > > self-observing machines can discover and talk about many
> > > > > non-3-provable and sometimes even non-3-definable true
> > > > > "statements" about themselves. Some true statements can only be
> > > > > interrogated.
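> > > > >
> > > > > [A minimal illustration of why ~Bp does not yield B~p, added here
> > > > > in the notation above: for a correct machine, Gödel's theorem
> > > > > supplies a true sentence g with ~Bg (g is unprovable) and also
> > > > > ~B~g (its negation, being false, is unprovable too). So "I cannot
> > > > > prove p" never licenses "I can prove not-p".]
> > > > >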
> > > > > Personally I don't think we can be *personally* mistaken about
> > > > > our own consciousness even if we can be mistaken about anything
> > > > > that consciousness could be about.
> > > > >
> > > > > Bruno
> > > > >
> > > > > http://iridia.ulb.ac.be/~marchal/

Received on Tue Jun 26 2007 - 17:11:07 PDT

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:14 PST