marc.geddes.domain.name.hidden wrote:
>
>
> On Oct 31, 3:28 pm, "Wei Dai" <wei....domain.name.hidden> wrote:
>
>> 4. For someone on a practical mission to write an AI that makes sensible
>> decisions, perhaps the model can serve as a starting point and as an
>> illustration of how far away we still are from that goal.
>
> Heh. Yes, very interesting indeed. But a huge body of knowledge and
> a great deal of smartness are needed to even begin to grasp all that
> stuff ;)
>
> As regards AI, I gotta wonder whether that 'Decision Theory' stuff is
> really 'the heart of the matter' - perhaps it's the wrong level of
> abstraction for the problem. That is to say, it would be great if the
> AI could work out all the decision theory for itself, rather than
> having us try to program it in (and probably failing miserably).
> Certainly, I'm sure as hell not smart enough to come up with a working
> model of decisions. So, rather than trying to do the impossible, it's
> better to search for a higher level of abstraction. Look for the
> answers in communication theory/ontology, rather than decision
> theory. Decision theory would be derivative of an effective ontology
> - that saves me the bother of trying to work it out ;)

Decisions require some value structure. An ontology only tells you what is; to get values - what ought to be - out of it, you'd have to get around the naturalistic fallacy.
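
To make the point concrete, here's a minimal sketch (in Python, with made-up toy names and numbers, not anyone's actual proposal) of an expected-utility agent. The world model - the "ontology" - only supplies probabilities over outcomes; the decision procedure still needs a utility function handed to it as a separate ingredient. Swap the utility function and the very same ontology yields the opposite decision, so the value structure is doing work the ontology by itself can't.

def expected_utility(action, world_model, utility):
    # Weight the utility of each outcome by the probability the model assigns to it.
    return sum(p * utility(outcome)
               for outcome, p in world_model(action).items())

def decide(actions, world_model, utility):
    # Pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(a, world_model, utility))

# Toy ontology: probabilities of outcomes given an action (made-up numbers).
def model(action):
    if action == "water plant":
        return {"wet": 0.8, "dry": 0.2}
    return {"wet": 0.1, "dry": 0.9}

# Two different value structures over the very same ontology.
likes_wet = lambda outcome: 1.0 if outcome == "wet" else 0.0
likes_dry = lambda outcome: 1.0 if outcome == "dry" else 0.0

actions = ["water plant", "do nothing"]
print(decide(actions, model, likes_wet))  # -> water plant
print(decide(actions, model, likes_dry))  # -> do nothing

Nothing in model() tells you which of likes_wet or likes_dry is the right utility function; that choice has to come from somewhere outside the ontology.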
Brent Meeker
Received on Wed Oct 31 2007 - 02:40:42 PDT