Re: How would a computer know if it were conscious?

From: <>
Date: Sun, 03 Jun 2007 23:21:49 -0700

On Jun 3, 11:11 pm, "Stathis Papaioannou" <> wrote:

> Determining the motivational states of others does not necessarily involve
> feelings or empathy. It has been historically very easy to assume that other
> species or certain members of our own species either lack feelings or, if
> they have them, it doesn't matter. Moreover, this hasn't prevented people
> from determining the motivations of inferior beings in order to exploit
> them. So although having feelings may be necessary for ethical behaviour, it
> is not sufficient.

You are ignoring the distinction I made between three different kinds
of general intelligence. I gave three different definitions, remember:

*Pattern Recognition Intelligence
*Symbolic Reasoning Intelligence
*Reflective Intelligence

A mere 'determination of the motivational states of self and others'
does not by itself constitute *reflective intelligence* according to
my definitions. Not only must the motivational states of self/others
be determined and represented (this process by itself does not require
ethics or sentience), these representations must also be *reflected*
upon. Only this final step, I'm saying, leads to ethical behaviour.
Once you have a system performing *full* reflection correctly, you get
feelings. And, I maintain, there is no real difference between
feeling and motivation.

> Psychopaths are often very good at understanding other peoples' feelings, as
> evidenced by their ability to manipulate them. The main problem is that they
> don't *care* about other people; they seem to lack the ability to be moved
> by other peoples' emotions and lack the ability to experience emotions such
> as guilt. But this isn't part of a general inability to feel emotion, as
> they often present as enraged, entitled, depressed, suicidal, etc., and
> these emotions are certainly enough to motivate them. Psychopaths have a
> slightly different set of emotions, regulated in a different way compared to
> the rest of us, but are otherwise cognitively intact.

See what I said above about the distinction between three different
kinds of general intelligence. It's true that the psychopath can
indeed understand others in an *abstract* *intellectual* sense
(pattern recognition and symbolic reasoning intelligence), but what
the psychopath lacks is the ability to fully *reflect* upon this
understanding (reflective intelligence).

You yourself admit: 'psychopaths have a slightly different set of
emotions, regulated in a different way compared to the rest of us'.
Therefore it simply isn't true that the psychopath is 'cognitively
intact'. Again, the psychopath can obtain an abstract, intellectual
understanding of others, but lacks the ability to fully reflect upon
this information in order to directly experience it (as qualia).

It is documented that psychopaths lack the ability to experience the
full range of emotions - specifically, they appear unable to fully
experience certain negative emotions such as fear and sadness.
(Although they can, as you point out, experience *some* kinds of
emotions.) See the book 'Social Intelligence' (by Daniel Goleman) for
references about the emotional deficits of psychopaths.

> > Thus it appears that reflective
> > intelligence is automatically correlated with ethical behaviour. Bear
> > in mind, as I mentioned that: (1) There are in fact three kinds of
> > general intelligence, and only one of them ('reflective intelligence')
> > is correlated with ethics. The other two are not. A deficit in
> > reflective intelligence does not affect the other two types of general
> > intelligence (which is why for instance psychopaths could still score
> > highly in IQ tests). And (2) Reflective intelligence in human beings
> > is quite weak. This is the reason why intelligence does not appear to
> > be much correlated with ethics in humans. But this fact in no way
> > refutes the idea that a system with full and strong reflective
> > intelligence would automatically be ethical.
> Perhaps I haven't quite understood your definition of reflective
> intelligence. It seems to me quite possible to "correctly reason about
> cognitive systems", at least well enough to predict their behaviour to a
> useful degree, and yet not care at all about what happens to them.
> Furthermore, it seems possible to me to do this without even suspecting that
> the cognitive system is conscious, or at least without being sure that it is
> conscious.
> --
> Stathis Papaioannou-

I see you haven't understood my definitions. It may be my fault, due
to the way I worded things. You are of course quite right that 'it's
possible to correctly reason about cognitive systems at least well
enough to predict their behaviour to a useful degree and yet not care
at all about what happens to them'. But this is only pattern
recognition and symbolic reasoning intelligence, *not* fully
reflective intelligence. Reflective intelligence involves additional
representations enabling a system to *integrate* the aforementioned
abstract knowledge (and experience it directly as qualia). Without
this ability an AI would be unable to maintain a stable goal structure
under recursive self-improvement and would therefore remain limited.

You received this message because you are subscribed to the Google Groups "Everything List" group.
Received on Mon Jun 04 2007 - 02:22:07 PDT