On 03/06/07, marc.geddes.domain.name.hidden <marc.geddes.domain.name.hidden> wrote:
> The third type of consciousness mentioned above is synonymous with
> 'reflective intelligence'. That is, any system successfully engaged
> in reflective decision theory would automatically be conscious.
> Incidentally, such a system would also be 'friendly' (ethical)
> automatically. The ability to reason effectively about one's own
> cognitive processes would certainly enable the ability to elaborate
> precise definitions of consciousness and determine that the system was
> indeed conforming to the aforementioned definitions.
How do you derive (a) ethics and (b) human-friendly ethics from reflective
intelligence? I don't see why an AI should decide to destroy the world,
save the world, or do anything at all to the world, unless it started off
with axioms and goals that pushed it in a particular direction.
--
Stathis Papaioannou
Received on Sun Jun 03 2007 - 05:20:58 PDT