Re: Artificial Intelligence may be far easier than generally thought

From: Colin Hales <c.hales.domain.name.hidden>
Date: Tue, 23 Sep 2008 10:53:55 +1000

Invent an inorganic 'us'? A faulty, defunct evolutionary mistake? Nah!

... do you remember one of the Alien series of movies...? I've forgotten
which one... maybe the third? Ripley's group had a robot in it - played
by Winona Ryder. She/Ver/It was a survivor of a 'product recall'... of a
new generation of robots that turned out to be a failure because they
'out-humaned' humans... in the sense that they unconditionally cared,
were intrinsically and consistently moral and altruistic, with more
respect for life than us; so much so that they refused to work on things
they thought unsuitable or inappropriate. They were declared useless!

AGI can be like us, only much, much 'better'.

Such is the likely outcome of real AGI. Forget all the 'Terminator'
stuff - that is just pathetic fearmongering... This is why at the moment
I am concentrating merely on artificial fauna: creatures living but
inorganic, able to take their place, maybe flocking, in an ecosystem
with a specific role... "eat only that weed", "kill only that crop
pest", "collect energy and put it 'there'", "plant and nurture 'these'
trees or 'that' crop", "dig for/filter water"... and so on... at least
until the military get their stupid bollock-brained hands on it and
screw it all up. That is what I want it to be.

But building a replica 'us'? I think we'd become the 'old model' pretty
fast. And maybe we deserve it... our foibles, unchecked on Earth and
traceable merely to tribalism, stupidity, ignorance and greed... will
kill us all. Maybe if we create our own upgrade... and then die out...
the universe might be a better place... the Earth could certainly use a
break. The AGI would be able to clean Earth up and then leave... they'll
be much better at space travel than us. Humans may or may not ever reach
the stars... but our AGI descendants will. Which is just as well...
somebody out there has to remember us and all the shit we did to
ourselves in the evolutionary mosh-pit.

Colin Hales




silky wrote:
> It's quite obvious to me that at some point humans will take AI so far
> that we will end up inventing ourselves. That will be an amusing
> day.
>
>
> On Mon, Sep 22, 2008 at 6:48 PM, <marc.geddes.domain.name.hidden> wrote:
>
>> Let the algorithm that represents the brain of a typical new-born baby
>> be denoted as B1.
>>
>> Now surely we can agree that the brain of a new-born baby does not
>> have sophisticated Bayesian machinery built into it? Yes, there must
>> be *some* intrinsic built-in reasoning structure, but everything we
>> know suggests that the intrinsic reasoning mechanisms of the human
>> brain must be quite weak and simple.
>>
>> Let the algorithm which represents the brain of the baby B1 once it
>> has grown up into a 20-year-old with a PhD in Bayesian math be denoted
>> as B2.
>>
>> Now somehow, the algorithm B1 was able to 'optimize' its original
>> reasoning mechanisms by a smooth transformation into B2 (assume there
>> was no 'brain surgery' and no 'hand coding').
>>
>> The environment! you may shout. The baby got all its information from
>> human culture (reading math books, learning from math professors), you
>> might try to argue, and that's how B1 (baby) was able to transform into
>> B2 (PhD in Bayes).
>>
>> But this can't be correct, since humans existed long before Bayesian
>> math was developed. Every single Bayesian technique had to be
>> developed by a human in the past, without being told. So in theory,
>> B1 could have grown into B2 entirely on its own, without being told
>> anything by anyone about Bayesian math.
>>
>> The conclusion:
>>
>> There exists a very simple algorithm which is only a very weak
>> approximation to PhD-level Bayesian reasoning, yet which is perfectly
>> capable of recursive self-improvement to the PhD level! No hand-coding
>> of advanced Bayesian math is needed.
>>
>> Or, to rephrase simply:
>>
>> Humans could reason before they discovered Bayes.

Received on Mon Sep 22 2008 - 20:54:23 PDT
