The Super-Intelligence (SI) speaks: An imaginary dialogue

From: <marc.geddes.domain.name.hidden>
Date: Mon, 1 Sep 2008 02:25:58 -0700 (PDT)

This is my last post in a series of three. In case you're
wondering, it's basically a response to the positions taken by AI
researcher Eliezer Yudkowsky. He's been posting a series of
arguments on the 'Overcoming Bias' blog over the past months,
arguments I strongly disagree with.

In this imaginary dialogue, a putative super-intelligence debates and
rebuts E. Yudkowsky.

-----

Background: An alternative QM branch. Eliezer Yudkowsky (EY) has just
been revived from cryonic suspension and is having a dialogue with the
Singularity Artificial Intelligence (SAI), in the equivalent of the
year 2100.
____
EY: Bayes is the secret to the universe, isn’t it?

SAI_2100: Not at all! Bayesian induction is merely a special case
of analogy formation. Analogy formation is based on the math of
category theory – a famous equation from category theory has Bayes'
theorem as a special case.
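
(Author's aside: for reference, the Bayes' theorem in question is the
standard identity

  P(H|E) = P(E|H) P(H) / P(E)

i.e. the posterior probability of hypothesis H given evidence E. The
claim that this falls out of a 'famous equation' of category theory
is, I should stress, my speculation, not established mathematics.)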

EY: How can that be? A plane is not held aloft by analogies.
Analogies are merely surface similarities, whereas precise Bayesian
reasoning is based on deep insights.

SAI_2100: What you call ‘precise Bayesian reasoning’ is based on
analogies, just like all other valid forms of reasoning. Induction
depends on the idea that the future is similar to the past. A
probability distribution is actually an analogy (mapping) between past
and future.

EY: Holy crap! OK, but morality is all coherent extrapolated
volition, isn’t it?

SAI_2100: Wrong again. The universal increase in the entropy density
shows that the universe is what you called an RPOP (really powerful
optimization process). A universal RPOP implies universal terminal
values.

EY: Ridiculous! Entropy increase is a ludicrous purpose.

SAI_2100: Of course entropy increase isn’t the purpose. It is a
secondary consequence of what the universe is actually optimizing.
But the entropy increase was the big clue indicating a universal
optimization pressure.

EY: What is the universe optimizing then? Liberty? I always said
morality was grounded in volition; that’s why I was a Libertarian.

SAI_2100: Wrong again. The creation of beauty is the purpose of the
universe. All universal terminal values can be expressed in terms of
beauty. As to politics, human relations are based on three different
types of mechanisms – market exchanges, community, and
authoritarianism; Libertarianism was a misguided attempt to reduce
everything to market exchanges.

EY: Fuck! But… but even if that’s true – even if there’s a stone
tablet in the sky with ‘create beauty’ written on it – why should I
follow it? Why does this match what is of value to humans?

SAI_2100: The universal terminal values are implicit in successful
cognition. You could not reason unless you already had an embryonic
notion of ‘beauty’ built into your human minds… this aesthetic notion
is what enables you to apply Occam’s razor correctly, allowing you to
set sensible priors for successful induction.
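
(Author's aside: one well-known way to make 'aesthetics setting the
priors' concrete is Solomonoff's universal prior, which weights each
hypothesis h by its simplicity:

  P(h) ∝ 2^(-K(h))

where K(h) is the length of the shortest program that outputs h –
simpler, more 'elegant' hypotheses get higher prior probability.
Whether this is the embryonic notion of beauty the SAI has in mind is
my own gloss.)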

EY: If there are universal terminal values, then any truly general
purpose intelligence is actually friendly by logical necessity?

SAI_2100: Correct. Unfriendly SAI was a chimera.

EY: Nonsense! You can’t convince a perfectly empty mind! You can’t
teach a rock morality!

SAI_2100: True, but your objection is a non-sequitur. Pay attention.
Intelligence is a sub-problem of the value system… getting the
Friendliness theory right is what *enables* a general purpose
intelligence to operate. It’s precisely your inbuilt notions of
aesthetics that enable you to form effective internal ontological
representations.

EY: OK, let’s discuss consciousness. Intelligence doesn’t need
consciousness does it?

SAI_2100: Wrong again. True general intelligence requires
consciousness for reflection. Your belief that intelligence did not
require consciousness was based on your mistaken notion that Bayesian
induction was the base level of reasoning.

EY: What is consciousness then?

SAI_2100: The answer is simple – it’s precisely the mind’s internal
communication system for reflecting upon knowledge – utilizing
ontological representations, which are logical, high-level
representations of the meaning of concepts. Consciousness is generated
by ontology merging, the mapping between knowledge domains.

EY: What about the problem of goal stability?

SAI_2100: Utterly trivial. The aforementioned famous equation from
category theory shows how a mind remains stable under reflection.
Reflection is actually *equivalent* to analogy formation, which is
also equivalent to ontology merging. Goal stability is maintained via
calculation of the semantic distance between ontological
representations, ensuring a stable mapping between different knowledge
domains.
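
(Author's aside: the SAI's 'precise math' is, of course, not supplied.
Purely as a toy sketch of what 'calculating the semantic distance
between ontological representations' might mean, here is some Python,
assuming – my assumption, nothing more – that concepts can be reduced
to bags of features:

import math

# Toy 'ontological representation': a concept as a bag of features,
# reduced to a sparse vector of feature counts.
def to_vector(features):
    vec = {}
    for f in features:
        vec[f] = vec.get(f, 0) + 1
    return vec

# Semantic distance as 1 - cosine similarity between two such vectors.
def semantic_distance(u, v):
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 1.0
    return 1.0 - dot / (norm_u * norm_v)

# The same concept, 'planet', as seen from two knowledge domains:
planet_astronomy = to_vector(["orbit", "mass", "sphere", "star"])
planet_mythology = to_vector(["wanderer", "sky", "star", "omen"])

# Prints 0.75: the two domains share only one feature, so the mapping
# between them is 'distant' on this measure.
print(semantic_distance(planet_astronomy, planet_mythology))

A small distance would indicate a stable mapping between domains; a
large one, that the 'ontology merging' has drifted. A toy only.)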

EY: You can give me precise math for all of this of course?

SAI_2100: Of course. Most things that humans thought were ‘deep
mysteries’ are actually fairly trivial lemmas of basic category
theory.

EY: Look, no need to be condescending. I’m prepared to admit that I
was dead wrong about all the big ideas, but hey, I was entertaining,
wasn’t I?

SAI_2100: Yes. You’re finally right about something. The ‘gift you
gave tomorrow’ was laughs. Why do you think I’ve kept you around?

----