Hi Marc, welcome back! I hadn't seen you here for months.
Concerning objective values, as we have discussed in the past, I don't
see any rational argument in support of their existence. For example,
if someone has chosen to consider the elimination of the human species
a priority value (as some fundamentalist deep ecologists have
written), there is simply no way you or I can rationally persuade them
otherwise. Of course we _can_ try to persuade them not to act, but
this does not have much to do with values.
A value is something subjective. I have chosen my values and you have
chosen yours; or, more likely, our society has programmed us with
these values and we find them good enough not to change them. A value
is a mental and social construct, not something written into the laws
of the universe.
I find this position perfectly satisfying. Question: why do you _want_
to think that there are objective values?
G.
On 8/18/07, marc.geddes.domain.name.hidden <marc.geddes.domain.name.hidden.com> wrote:
>
> Objective values are NOT specifications of what agents SHOULD do.
> They are simply explanatory principles. The analogy here is with the
> laws of physics. The laws of physics *per se* are NOT descriptions of
> future states of matter. The descriptions of the future states of
> matter are *implied by* the laws of physics, but the laws of physics
> themselves are not the descriptions. You don't need to specify future
> states of matter to understand the laws of physics. By analogy, the
> objective laws of morality are NOT specifications of optimization
> targets. These specifications are *implied by* the laws of morality,
> but you can understand the laws of morality perfectly well without
> any knowledge of optimization targets.
>
> Thus it simply isn't true that you need to precisely specify an
> optimization target (a 'goal') for an effective agent (for instance
> an AI). Again, consider the analogy with the laws of physics.
> Imperfect knowledge of the laws of physics doesn't prevent scientists
> from building scientific tools to better understand those laws.
> This is because the laws of physics are explanatory principles, NOT
> direct specifications of future states of matter. Similarly, an
> agent (for instance an AI) does not require a precisely specified
> goal, since imperfect knowledge of the objective laws of morality is
> sufficient to produce behaviour that leads to more accurate
> knowledge. Again, the objective laws of morality are NOT
> optimization targets, but explanatory principles.
>
> The other claim of the objective-value sceptics was that proposed
> objective values can't be empirically tested. Wrong. Again, the
> misunderstanding stems from the mistaken idea that objective values
> would be optimization targets. They are not. They are, as explained,
> explanatory principles. And these principles CAN be tested. The test
> is the extent to which these principles can be used to understand
> agent motivations - in the sense of emotional reactions to social
> events. If an agent experiences a negative emotional reaction, mark
> the event as 'agent sees it as bad'. If an agent experiences a
> positive emotional reaction, mark the event as 'agent sees it as
> good'. Different agents have different emotional reactions to the
> same event, but that doesn't mean there isn't a commonality averaged
> across many events and agents. A successful 'theory of objective
> values' would abstract out this commonality to explain why agents
> experience generic negative or positive emotions in response to
> generic events. And this would be *indirectly* testable by empirical
> means.
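>
> To make the testing procedure concrete, here is a minimal sketch in
> Python (the event names, reaction values, and agents are hypothetical
> illustrations, not data from any actual study):
>
>   from statistics import mean
>
>   # reactions[agent][event] = +1 for a positive emotional reaction,
>   # -1 for a negative one (hypothetical data)
>   reactions = {
>       'agent_a': {'theft': -1, 'gift': +1, 'rescue': +1},
>       'agent_b': {'theft': -1, 'gift': +1, 'rescue': -1},
>       'agent_c': {'theft': -1, 'gift': -1, 'rescue': +1},
>   }
>
>   def consensus_valence(event):
>       # Average reaction to one event across all agents: the
>       # 'commonality' the theory is supposed to explain.
>       return mean(agent[event] for agent in reactions.values())
>
>   for event in ('theft', 'gift', 'rescue'):
>       v = consensus_valence(event)
>       label = 'good' if v > 0 else 'bad'
>       print(f"{event}: mean valence {v:+.2f} -> seen as {label} on average")
>
> A 'theory of objective values' would then have to predict these
> averaged valences from the abstract properties of the events.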
>
> Finally, the proof that objective values exist is quite simple.
> Without them, there simply could be no explanation of agent
> motivations. A complete physical description of an agent is NOT an
> explanation of the agent's teleological properties (i.e. the agent's
> motivations). The teleological properties of agents (their goals and
> motivations) simply are not physical. For sure, they are dependent
> on and reside in physical processes, but they are not identical to
> those processes. This is because physical causal processes are
> concrete, whereas teleological properties are abstract and cannot be
> measured *directly* with physical devices.
>
> The whole basis of the scientific world view is that things have
> objective explanations. Physical properties have objective
> explanations (the laws of physics). Teleological properties (such as
> agent motivations) are not identical to physical properties.
> Something needs to explain these teleological properties. QED:
> objective 'laws of teleology' (objective values) have to exist.
>
> What forms would objective values take? As explained, these would NOT
> be 'optimization targets' (goals or rules of the form 'you should do
> X'). They couldn't be, because ethical rules differ according to
> culture and are made by humans.
>
> What they have to be are inert EXPLANATORY PRINCIPLES, taking the
> form: 'Beauty has abstract properties A B C D E F G', 'Liberty has
> abstract properties A B C D E F G', and so on. Nonetheless, as
> explained, these abstract specifications would still be amenable to
> indirect empirical testing, to the extent that they could be used to
> predict agents' emotional reactions to social events.
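>
> A toy version of that prediction step (everything here, from the
> property scores to the weights, is a made-up illustration of the
> idea) might look like:
>
>   # Each event is described by abstract properties A, B, C
>   # (hypothetical scores between 0 and 1).
>   events = {
>       'theft':  {'A': 0.1, 'B': 0.2, 'C': 0.9},
>       'rescue': {'A': 0.8, 'B': 0.7, 'C': 0.1},
>   }
>
>   # A candidate 'theory of objective values': weights saying how
>   # much each abstract property contributes to perceived goodness.
>   weights = {'A': +1.0, 'B': +0.5, 'C': -1.5}
>
>   def predicted_valence(event):
>       # Score an event by combining its abstract properties.
>       props = events[event]
>       return sum(weights[p] * props[p] for p in props)
>
>   # Compare predictions against the averaged observed valences from
>   # the earlier sketch; a systematic mismatch counts against the
>   # candidate theory.
>   for event in events:
>       print(f"{event}: predicted valence {predicted_valence(event):+.2f}")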