Re: How would a computer know if it were conscious?

From: <marc.geddes.domain.name.hidden>
Date: Tue, 05 Jun 2007 03:16:40 -0000

On Jun 4, 11:15 pm, "Stathis Papaioannou" <stath....domain.name.hidden> wrote:
> On 04/06/07, marc.ged....domain.name.hidden <marc.ged....domain.name.hidden.com> wrote:
>
> > See you haven't understood my definitions. It may be my fault due to
> > the way I worded things. You are of course quite right that: 'it's
> > possible to correctly reason about cognitive systems at least well
> > enough to predict their behaviour to a useful degree and yet not care
> > at all about what happens to them'. But this is only pattern
> > recognition and symbolic intelligence, *not* fully reflective
> > intelligence. Reflective intelligence involves additional
> > representations enabling a system to *integrate* the aforementioned
> > abstract knowledge (and experience it directly as qualia). Without
> > this ability an AI would be unable to maintain a stable goal structure
> > under recursive self improvement and therefore would remain limited.
>
> Are you saying that a system which has reflective intelligence would be able
> to in a sense emulate the system it is studying, and thus experience a very
> strong form of empathy?

Yes

>That's an interesting idea, and it could be that
> very advanced AI would have this ability; after all, humans have the ability
> for abstract reasoning which other animals almost completely lack, so why
> couldn't there be a qualitative (or nearly so) rather than just a
> quantitative difference between us and super-intelligent beings?

But I don't think this is qualitatively different from what humans do
already. It does seem that our ability to feel involves, in part,
emulating other people's inner motivational states. See the research
on 'Mirror Neurons', or Daniel Goleman's 'Social Intelligence', which
discusses this.

http://en.wikipedia.org/wiki/Mirror_neurons

It seems that we humans are already pretty good at reflection on
motivation. Certainly reflection on motivation gives rise to
feelings. Emotions are the human strength, our 'cutting edge' so to
speak.

But remember that 'reflection on motivation' is only one kind of
reflection. There are other kinds of reflection that we humans are
not nearly so good at. I listed three general classes of reflection
above - one type of reflection we humans seem to be very poor at is
'reflection on abstract reasoning' (reflection on logic/mathematics).
With regard to this type of reflection we are in a position analogous
to that of the emotional retard. We have symbolic/abstract knowledge
of mathematics (symbolic and pattern recognition intelligence), but
this is not directly reflected in our conscious experience (or at
least it appears in our conscious awareness only very weakly). For
example, you may know (intellectually) that 2+2=4, but you do not
*consciously experience* this information. You are suffering from
'mathematical blindsight'. Now, a super-human ability to reflect on
math/logic *would* definitely be a qualitative difference between us
and super-intellects.

But here is something really cool: by intensely forcing yourself and
training yourself to think constantly about math/logic, it may be
possible for a human to partially draw math/logic into actual
conscious awareness! I can tell you that I claim to have done just
that... and the result is... very interesting ;) Suffice it to say
that I believe math/logic knowledge appears in consciousness as a
sort of 'Ontology-Scape'. Just as the ability to reflect on
motivation gives rise to emotional experience, so I believe the
ability to reflect on math/logic gives rise to a new kind of
conscious experience, which is what I call the 'Ontology-Scape'. As I
said, I am of the opinion that if you really force yourself and train
yourself, it's possible to partially draw this 'Ontology-Scape' into
your own conscious awareness.

>
> However, what would be wrong with a super AI that just had large amounts of
> pattern recognition and symbolic reasoning intelligence, but no emotions at
> all? It could work as the ideal disinterested scientist, doing theoretical
> physics without regard for its own or anyone else's feelings. You would
> still have to say that it was super-intelligent, even though it is an
> idiot from the reflective intelligence perspective. It also would pose no
> threat to anyone because all it wants to do and all it is able to do is
> solve abstract problems, and in fact I would feel much safer around this
> sort of AI than one that has real power and thinks it has my best interests
> at heart.

As I said, intelligence has three parts: pattern recognition,
symbolic reasoning, and reflective intelligence. You can't cut out a
third of real intelligence and expect your system to function
effectively! ;) A system missing reflective intelligence would have
serious cognitive deficits (in fact, for the reasons I explain below,
I believe such a system would be unable to improve itself).

>
> Secondly, I don't see how the ability to fully empathise would help the AI
> improve itself or maintain a stable goal structure. Adding memory and
> processing power would bring about self-improvement, perhaps even recursive
> self-improvement if it can figure out how to do this more effectively with
> every cycle, and yet it doesn't seem that this would require the presence of
> any other sentient beings in the universe at all, let alone the ability to
> empathise with them.

Self-improvement requires more than just extra hardware. It also
requires the ability to integrate new knowledge with an existing
knowledge base in order to create truly original (novel) knowledge.
But this appears to be precisely the definition of reflective
intelligence! Thus, it seems that a system missing reflective
intelligence simply cannot improve itself in an ordered way. To
improve, a current goal structure has to be 'extrapolated' into a
novel goal structure which nonetheless does not conflict with the
spirit of the old goal structure. But nothing but a *reflective*
intelligence can possibly make an accurate assessment of whether a
new goal structure is compatible with the old version! This stems
from the fact that comparing goal structures requires a *subjective*
value judgement, and it appears that only a *sentient* system can
make this judgement (since, as far as we know, ethics/morality is not
objective). This proves that only a *sentient* system (a *reflective
intelligence*) can possibly maintain a stable goal structure under
recursive self-improvement.
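Put crudely as code (again, just a hedged sketch with made-up names,
to show the shape of the argument): every self-improvement cycle has
to propose a new goal structure and then judge whether it keeps the
spirit of the old one, and it is that judgement step which, on my
view, only a reflective (sentient) system can supply - so the
judgement function below is deliberately left unimplemented.

# Sketch of the argument, not a workable algorithm -- hypothetical names.
def extrapolate(goals):
    # propose a novel goal structure from the current one (stub)
    return goals + ["new subgoal"]

def keeps_the_spirit(old_goals, new_goals):
    # The subjective value judgement. A purely pattern-recognising /
    # symbol-shuffling system has no way to compute this, which is the
    # crux of the argument above.
    raise NotImplementedError("requires reflective (subjective) judgement")

def recursive_self_improvement(goals, cycles=10):
    for _ in range(cycles):
        candidate = extrapolate(goals)
        if keeps_the_spirit(goals, candidate):  # the reflective step
            goals = candidate
    return goals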

>
> Finally, the majority of evil in the world is not done by psychopaths, but
> by "normal" people who are aware that they are causing hurt, may feel guilty
> about causing hurt, but do it anyway because there is a competing interest
> that outweighs the negative emotions.
>
> --
> Stathis Papaioannou

Yes, true. But see what I said about there being more than one kind
of reflection. Strong empathy and feelings alone (caused by
reflection on motivation) are not enough. The human brain is not
functioning as a fully reflective intelligence since, as I pointed
out, we don't have much ability to reflect on math/logic.

Incidentally, as regards our debate yesterday on psychopaths, there
appears to be some basis for thinking that the psychopath *does* have
a general inability to feel emotions. On the wiki:

http://en.wikipedia.org/wiki/Psychopath

"Their emotions are thought to be superficial and shallow, if they
exist at all."

"It is thought that any emotions which the primary psychopath exhibits
are the fruits of watching and mimicking other people's emotions."

So the supposed emotional displays could be faked. Thus it could well
be the case that there is an inability to 'reflect on motivation' (to
feel).


