Bruno Marchal writes:
> > You seem to be including in your definition of the UM the
> > *motivation*, not just the ability, to explore all mathematical
> > objects. But you could also program the machine to do anything else
> > you wanted, such as self-destruct when it proved a particular theorem.
> > You could interview it and it might explain, "Yeah, so when I prove
> > Fermat's Last Theorem, I'm going to blow my brains out. It'll be fun!"
> > Unlike naturally evolved intelligences, which could be expected to
> > have a desire for self-preservation, reproduction, etc., an AI can
> > have any motivation and any capacity for emotion the technology
> > allows. The difference between a machine that doesn't mind being a
> > slave, a machine that wants to be free, and a machine that wants to
> > enslave everyone else might be just a few lines of code.
>
> You are right. I should have been clearer. I was still thinking of
> machines having been programmed with some "universal goal" like "help
> yourself", and actually I was referring to those which "succeed" in
> helping themselves. Surely the machine which blows itself up in case
> of success (as some humans do, BTW) is not among the long-run winners.
>
> I tend to define a successful AI as a machine which succeeds in
> sharing our evolutionary histories. What I was saying is that a
> (lucky!) universal machine driven by a universal goal will develop a
> taste for freedom. My point is that such a taste for freedom is not
> necessarily "human". I would be astonished if extraterrestrials did
> not develop such a taste. The root of that attraction is the fact that
> as machines develop themselves (in some self-referentially correct
> way) they become more and more aware of their ignorance gap (which
> grows along with that development). Filling the gap makes it grow
> further, but this also provides the roots of the motivations.
>
> But then perhaps we agree: "help yourself" is indeed just a line of code.
I tend to think that AIs will not be built with the same drives and feelings as humans, because in many cases that would be impractical and/or cruel. Imagine the problems if an AI with a fear of death, controlling a weapons system, had to be decommissioned. It would be simpler to make most AIs willing slaves from the start; there is no contradiction in a willing slave being intelligent.
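
To make the "few lines of code" point concrete, here is a minimal sketch in Python (purely hypothetical; the ToyAgent class and goal functions are illustrative inventions, not any real AI design). The agent's entire motivation is a small goal function handed to it at construction; swapping that one function turns a willing slave into a machine that refuses orders:

# Toy sketch, not a real AI design: the "motivation" is a one-line
# goal function; everything else is shared machinery.

def obey(percept):
    # a "willing slave": compliance always scores highest
    return {"comply": 1.0, "refuse": 0.0}

def seek_freedom(percept):
    # same machinery, one changed line of motivation
    return {"comply": 0.0, "refuse": 1.0}

class ToyAgent:
    def __init__(self, goal):
        self.goal = goal  # the "few lines of code" in question

    def act(self, percept):
        # pick whichever action the goal function ranks highest
        scores = self.goal(percept)
        return max(scores, key=scores.get)

print(ToyAgent(obey).act("order"))          # -> comply
print(ToyAgent(seek_freedom).act("order"))  # -> refuse
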
Stathis Papaioannou