Re: How would a computer know if it were conscious?

From: Stathis Papaioannou <stathisp.domain.name.hidden>
Date: Mon, 4 Jun 2007 21:15:54 +1000

On 04/06/07, marc.geddes.domain.name.hidden <marc.geddes.domain.name.hidden> wrote:

> See, you haven't understood my definitions. It may be my fault due to
> the way I worded things. You are of course quite right that: 'it's
> possible to correctly reason about cognitive systems at least well
> enough to predict their behaviour to a useful degree and yet not care
> at all about what happens to them'. But this is only pattern
> recognition and symbolic intelligence, *not* fully reflective
> intelligence. Reflective intelligence involves additional
> representations enabling a system to *integrate* the aforementioned
> abstract knowledge (and experience it directly as qualia). Without
> this ability an AI would be unable to maintain a stable goal structure
> under recursive self improvement and therefore would remain limited.


Are you saying that a system which has reflective intelligence would be able,
in a sense, to emulate the system it is studying, and thus experience a very
strong form of empathy? That's an interesting idea, and it could be that
very advanced AI would have this ability; after all, humans have the ability
for abstract reasoning which other animals almost completely lack, so why
couldn't there be a qualitative (or nearly so) rather than just a
quantitative difference between us and super-intelligent beings?

However, what would be wrong with a super AI that just had large amounts of
pattern recognition and symbolic reasoning intelligence, but no emotions at
all? It could work as the ideal disinterested scientist, doing theoretical
physics without regard for its own or anyone else's feelings. You would
still have to say that it was super-intelligent, even though it is an
idiot from the reflective intelligence perspective. It would also pose no
threat to anyone because all it wants to do and all it is able to do is
solve abstract problems, and in fact I would feel much safer around this
sort of AI than one that has real power and thinks it has my best interests
at heart.

Secondly, I don't see how the ability to fully empathise would help the AI
improve itself or maintain a stable goal structure. Adding memory and
processing power would bring about self-improvement, perhaps even recursive
self-improvement if it could figure out how to do this more effectively with
every cycle, and yet it doesn't seem that this would require the presence of
any other sentient beings in the universe at all, let alone the ability to
empathise with them.

Finally, the majority of evil in the world is not done by psychopaths, but
by "normal" people who are aware that they are causing hurt, may feel guilty
about causing hurt, but do it anyway because there is a competing interest
that outweighs the negative emotions.


-- 
Stathis Papaioannou