Jef Allbright wrote:
> 
> Immediately upon hitting Send on the previous post, I noticed that I had
> failed to address a remaining point, below.
>> Brent Meeker wrote:
>> > Stathis Papaioannou wrote:
>> >> Jef Allbright writes:
>>
>> <snip>
>>
>> >>> Further, from this theory of metaethics we can derive a practical
>> >>> system of social decision-making based on (1) increasing
>> >>> fine-grained knowledge of shared values, and (2) application of
>> >>> increasingly effective principles, selected with regard to models
>> >>> of probable outcomes in a Rawlsian mode of broad rather than
>> >>> narrow self-interest.
>>
>> >> This is really quite a good proposal for building better societies,
>> >> and one that I would go along with, but meta-ethical problems arise
>> >> if someone simply rejects that shared values are important (eg.
>> >> believes that the values of the strong outweigh those of the weak),
>>
>> > Historically this problem has been dealt with by those who think
>> > shared values are important ganging up on those who don't.
>>
>> >> and ethical problems arise when it is time to decide what exactly
>> >> these shared values are and how they should best be promoted.
>>
>> > Aye, there's the rub.
>>
>> Because any decision-making is done within a limited context, but the 
>> consequences arise within a necessarily larger (future) context, we 
>> can never be sure of the exact consequences of our decisions.  
>> Therefore, we should strive for decision-making that is increasingly 
>> *right-in-principle*, given our best knowledge of the situation at the 
>> time. Higher-quality principles can be recognized by their greater 
>> scope of applicability and subtlety (more powerful but relatively 
>> fewer side-effects).
>>
> 
> It's an interesting question as to how we might best know our
> fine-grained human values across an entire population, given that we can
> hardly begin to express them ourselves, let alone their complex internal
> and external relationships and dependencies.  There's also the question
> of sufficient motivation, since very few of us would want to spend a
> great deal of time answering (and later updating) questionnaires.
> 
> The best (and possibly workable) idea I have is to use story-telling.
> It might take the form of a game of collaborative story-telling in
> which people contribute short scenarios whose characters' actions and
> interactions encode systems of values. Software could then analyze the
> text, extract significant features into a high-dimensional array of
> vectors, and from there perform principal component analysis,
> clustering, and rankings of association and similarity via
> unsupervised methods, with the higher-level information made available
> for visualization. This idea needs more fleshing out, and it might be
> possible to perform a limited validation of the concept using the
> existing (and growing) corpus of fictional literature available in
> digital form.
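[As a toy illustration of the pipeline Jef sketches — scenarios as text, features extracted into high-dimensional vectors, then principal component analysis and similarity rankings — here is a minimal Python sketch. The corpus, the bag-of-words features, and the two-component projection are all placeholder assumptions; a real system would need far richer feature extraction than word counts.]

```python
import numpy as np
from collections import Counter

# Toy corpus: short scenarios standing in for contributed stories.
scenarios = [
    "the strong take what they want from the weak",
    "neighbors share food and protect the weak together",
    "the community shares values and helps each other",
    "a warlord believes the strong outweigh the weak",
]

# Build a bag-of-words vocabulary and a (documents x terms) count matrix.
vocab = sorted({w for s in scenarios for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(scenarios), len(vocab)))
for row, s in enumerate(scenarios):
    for word, count in Counter(s.split()).items():
        X[row, index[word]] = count

# Principal component analysis via SVD of the mean-centered matrix:
# project each scenario onto the top two principal axes.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Xc @ Vt[:2].T  # shape: (n_scenarios, 2)

# Rank pairwise association by cosine similarity in feature space.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_0_3 = cosine(X[0], X[3])  # both "strong vs. weak" framings
sim_1_2 = cosine(X[1], X[2])  # both "sharing/cooperation" framings
```

[On this toy data the two "strong vs. weak" scenarios come out more similar to each other than the two sharing scenarios do, purely because they reuse the same surface vocabulary — which is exactly the limitation a fleshed-out version would have to overcome.]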
When people tell me, in defense of an omnibenevolent God, that this is the best of all possible worlds, I point out to them that in Hollywood movies, good always triumphs over evil...and these movies are widely recognized as unrealistic.
Brent Meeker
"No good deed goes unpunished."
        --- Clare Boothe Luce
 You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list.domain.name.hidden
To unsubscribe from this group, send email to everything-list-unsubscribe.domain.name.hidden
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
Received on Fri Dec 22 2006 - 16:55:43 PST