RE: UDA revisited

From: Colin Geoffrey Hales <c.hales.domain.name.hidden>
Date: Sun, 26 Nov 2006 21:42:35 +1100 (EST)

>
>
> Colin Hales writes:
>
>> > You're being unfair to the poor zombie robots. How could they
>> > possibly tell if they were in the factory or on the benchtop
>> > when the benchtop (presumably) exactly replicates the sensory
>> > feeds they would receive in the factory?
>> > Neither humans nor robots, zombie or otherwise, should be
>> > expected to have ESP.
>>
>> Absolutely! But the humans have phenomenal consciousness in lieu of ESP,
>> which the zombies do not. To bench test "a human" I could not merely
>> replicate sensory feeds. I'd have to replicate the factory! The human is
>> connected to the external world (as mysterious as that may be, and it's
>> not ESP!). The zombie isn't, so faking it is easy.
>
> I don't understand why you would have to replicate the factory
> rather than just the sensory feeds to fool a human, but not a machine.
> It is part of the definition of a hallucination that it is
> indistinguishable from the real thing. People have done
> terrible things, including murder and suicide, because of auditory
> hallucinations. The hallucinations are so real to them that
> even when presented with contrary evidence,
> such as someone standing next to them denying that they
> heard anything, they insist
> it is not a hallucination: "I know what I heard, you must
> either be deaf or lying".

I don't know how to insert/overwrite the phenomenal scene activity/physics
directly. Artificial/virtual reality might do it. It works for airline
pilots.

But I'd have to create sensory stimuli sophisticated enough to be a useful
simulation of the world, including temperature, pressure and other
real-world phenomena. Rather more tricky.

Part of my long-term strategy for these things in process control products
is to eliminate the bench testing altogether! Take the unprogrammed but
intelligent machine that has phenomenal consciousness tailored to suit -
then teach it in situ, or teach it how to learn things and leave it to it.
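
To make that concrete, here is a minimal toy sketch (Python, nothing from a
real product) of what I mean by "teach it how to learn and leave it to it".
Everything in it (read_sensor, plant_step, the crude gain-update rule) is a
hypothetical stand-in; the only point is that the controller adapts from the
feedback it gets in place, rather than being validated against canned
sensory feeds on a bench.

import random

def read_sensor(state):
    # Hypothetical stand-in for a real plant measurement: a noisy reading.
    return state + random.gauss(0.0, 0.05)

def plant_step(state, control):
    # Hypothetical stand-in plant dynamics: the state decays and responds
    # to the applied control.
    return 0.9 * state + control + random.gauss(0.0, 0.02)

def learn_in_situ(setpoint=1.0, steps=500):
    gain = 0.1   # initial, unprogrammed "reflex"
    lr = 0.01    # learning rate for the online adaptation
    state = 0.0
    for _ in range(steps):
        error = setpoint - read_sensor(state)
        control = gain * error            # act on what it currently knows
        state = plant_step(state, control)
        new_error = setpoint - read_sensor(state)
        # Crude adaptation rule: nudge the gain up when the last action
        # shrank the error, and back down when it made the error worse.
        gain += lr * (abs(error) - abs(new_error))
    return gain

print("adapted gain:", learn_in_situ())
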
>
>> >> Now think about the touch... the same sensation of touch could
>> >> have been generated by a feather or a cloth or another finger
>> >> or a passing car. That context is what phenomenal
>> >> consciousness provides.
>> >
>> > But it is impossible to differentiate between different sources
>> > of a sensation unless the different sources generate a different
>> > sensation. If you close your eyes and the touch of a feather
>> > and a cloth feel the same, you can't tell which it was.
>> > If you open your eyes, you can tell a difference because
>> > the combined sensation (touch + vision) is different in the
>> > two cases. A machine that has touch receptors alone might not
>> > be able to distinguish between them, but a machine that has
>> > touch + vision receptors would be able to.
>> >
>>
>> Phenomenal scenes can combine to produce masterful, amazing
>> discriminations. But how does the machine, without already being told by
>> a human, know one from the other? Having done that, how can it combine
>> and contextualise that joint knowledge? You have to tell it how to learn.
>> Again, a priori knowledge ...
>
> A machine can tell one from the other because they are different.
> If they were the same, it would not be able to tell one from
> the other, and neither would a human, or a paramecium.
> As for combining, contextualising etc., that is what
> the information processing hardware + software does.
> In living organisms the hardware + software has evolved
> naturally while in machines it is artificial.
>
> I think it is possible that any entity, whether living or artificial,
> which processes sensory data and is able to learn and interact
> with its environment has a basic
> consciousness.
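
As an aside on the discrimination point quoted above, here is a toy sketch
(mine, not from the thread; the readings and names are invented) of why a
joint touch + vision reading can separate sources that touch alone cannot.

def identify(touch, vision):
    # Hypothetical discrete readings; real sensors would give continuous values.
    signatures = {
        ("light_contact", "white_plume"): "feather",
        ("light_contact", "woven_sheet"): "cloth",
    }
    return signatures.get((touch, vision), "unknown")

# Touch alone ("light_contact") is ambiguous between the two sources;
# the combined (touch, vision) reading is not.
print(identify("light_contact", "white_plume"))   # feather
print(identify("light_contact", "woven_sheet"))   # cloth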

You can call it a form of 'consciousness' I suppose... but unless it has
some physics of phenomenal consciousness happening in there to apprehend
the external world, it's a zombie and will be fundamentally limited to
handling all novelty with its unconscious reflexes, including whatever
a priori adaptive behaviour it happens to have.

> This would be consistent with your idea
> that zombies can't be scientists. What I cannot understand
> is your claim that machines are necessarily
> zombies.

No phenomenality = zombie. Simple.
This does not mean it is incapable of functioning successfully in a
certain habitat. What it does mean is that it cannot be a scientist,
because its habits/reflexes are all fixed. Its adaptation is fixed
a priori.

> Machines and living organisms are just special
> arrangements of matter following the laws of physics.
> What is the fundamental difference between them
> which enables one to be conscious and the other not?
>

 = The unknown physics of phenomenal consciousness.

....which none of the existing 'laws of physics' has ever predicted and
never will, because those laws were derived using it, and all they predict
is how things will appear in phenomenality when we look. Very useful,
but impotent ... predictably impotent ... when it comes to understanding
the appearance (phenomenality) generator. You can explain houses with
'the natural law of bricks', but you can't use that law to explain bricks.

The fact that we don't know this physics does not negate/deny the reality
of its role, nor does it license the assumption that something else can
duplicate it. We have it because if we didn't we wouldn't be here
("outta the gene pool, folks!").

Colin
(getting worn out!)

