Re: How would a computer know if it were conscious?

From: Jason <jasonresch.domain.name.hidden>
Date: Sun, 03 Jun 2007 19:09:27 -0000

What do others on this list think about Max Tegmark's definition of
consciousness:

"I believe that consciousness is, essentially, the way information
feels when being processed. Since matter can be arranged to process
information in numerous ways of vastly varying complexity, this
implies a rich variety of levels and types of consciousness."

Source: http://www.edge.org/q2007/q07_7.html

Jason

On Jun 3, 6:11 am, "Stathis Papaioannou" <stath....domain.name.hidden> wrote:
> On 03/06/07, marc.ged....domain.name.hidden <marc.ged....domain.name.hidden.com> wrote:
>
> > > How do you derive (a) ethics and (b) human-friendly ethics from reflective
> > > intelligence? I don't see why an AI should decide to destroy the world,
> > > save the world, or do anything at all to the world, unless it started off
> > > with axioms and goals which pushed it in a particular direction.
>
> > When reflective intelligence is applied to cognitive systems which
> > reason about teleological concepts (which include values, motivations
> > etc.) the result is conscious 'feelings'. Reflective intelligence,
> > recall, is the ability to correctly reason about cognitive systems.
> > When applied to cognitive systems reasoning about teleological
> > concepts, this means the ability to correctly determine the
> > motivational 'states' of self and others - as mentioned, doing this
> > rapidly and accurately generates 'feelings'. Since, as has been known
> > since Hume, feelings are what ground ethics, the generation of
> > feelings which represent accurate tokens about motivational states
> > automatically leads to ethical behaviour.
>
> Determining the motivational states of others does not necessarily involve
> feelings or empathy. It has historically been very easy to assume that other
> species, or certain members of our own species, either lack feelings or, if
> they have them, that they don't matter; nor has this assumption prevented
> people from determining the motivations of supposedly inferior beings in
> order to exploit them. So although having feelings may be necessary for
> ethical behaviour, it is not sufficient.
>
> > Bad behaviour in humans is due to a deficit in reflective
> > intelligence. It is known, for instance, that psychopaths have great
> > difficulty perceiving fear, sadness, and negative motivational
> > states in general. Correct representation of motivational states is
> > correlated with ethical behaviour.
>
> Psychopaths are often very good at understanding other people's feelings, as
> evidenced by their ability to manipulate them. The main problem is that they
> don't *care* about other people: they seem unable to be moved by other
> people's emotions, and unable to experience emotions such as guilt. But this
> isn't part of a general inability to feel emotion, as they often present as
> enraged, entitled, depressed, suicidal, etc., and these emotions are
> certainly enough to motivate them. Psychopaths have a slightly different set
> of emotions, regulated in a different way compared to the rest of us, but
> are otherwise cognitively intact.
>
> > Thus it appears that reflective
> > intelligence is automatically correlated with ethical behaviour. Bear
> > in mind, as I mentioned, that: (1) there are in fact three kinds of
> > general intelligence, and only one of them ('reflective intelligence')
> > is correlated with ethics; the other two are not. A deficit in
> > reflective intelligence does not affect the other two types of general
> > intelligence (which is why, for instance, psychopaths can still score
> > highly on IQ tests). And (2) reflective intelligence in human beings
> > is quite weak. This is why intelligence does not appear to be much
> > correlated with ethics in humans. But this fact in no way refutes the
> > idea that a system with full and strong reflective intelligence would
> > automatically be ethical.
>
> Perhaps I haven't quite understood your definition of reflective
> intelligence. It seems to me quite possible to "correctly reason about
> cognitive systems", at least well enough to predict their behaviour to a
> useful degree, and yet not care at all about what happens to them.
> Furthermore, it seems possible to me to do this without even suspecting that
> the cognitive system is conscious, or at least without being sure that it is
> conscious.
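>
> To make that distinction concrete, here is a toy sketch in Python (purely
> illustrative; the names MotivationModel, predict_behaviour, utility and
> empathy_weight are all invented for this example). It separates an
> accurate model of another agent's motivational state from the weight that
> state carries in one's own utility function: prediction needs only the
> model, while anything resembling ethical concern needs a nonzero weight.
>
>     from dataclasses import dataclass
>
>     @dataclass
>     class MotivationModel:
>         """One agent's (possibly accurate) model of another's state."""
>         fear: float    # predicted fear, 0..1
>         desire: float  # predicted strength of current goal, 0..1
>
>     def predict_behaviour(m: MotivationModel) -> str:
>         """Accurate prediction follows from the model alone."""
>         return "flee" if m.fear > m.desire else "pursue goal"
>
>     def utility(own_gain: float, other: MotivationModel,
>                 empathy_weight: float) -> float:
>         """Caring requires empathy_weight > 0; predicting does not."""
>         return own_gain - empathy_weight * other.fear
>
>     # A manipulator can model a victim perfectly, yet weight them at zero:
>     victim = MotivationModel(fear=0.9, desire=0.2)
>     print(predict_behaviour(victim))  # "flee" - the model is accurate
>     print(utility(1.0, victim, 0.0))  # 1.0 - exploitation costs nothing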
>
> --
> Stathis Papaioannou

