Re: How would a computer know if it were conscious?

From: Russell Standish <lists.domain.name.hidden>
Date: Sat, 23 Jun 2007 12:28:26 +1000

On Sat, Jun 23, 2007 at 03:58:39PM +0100, David Nyman wrote:
> On 23/06/07, Russell Standish <lists.domain.name.hidden> wrote:
>
> RS: I don't think I ever really found myself in
> disagreement with you. Rather, what is happening is symptomatic of us
> trying to reach across the divide of C. P. Snow's two cultures. You are
> obviously comfortable with the world of literary criticism, and your
> style of writing reflects this. The trouble is that to someone brought
> up on a diet of scientific and technical writing, the literary paper
> may as well be written in ancient Greek. Gibberish doesn't mean
> rubbish or nonsense, just unintelligible.
>
> DN: It's interesting that you should perceive it in this way: I hadn't
> thought about it like this, but I suspect you're not wrong. I haven't
> consumed very much of your 'diet', and I have indeed read quite a lot of
> stuff in the style you refer to, although I often find it rather
> indigestible! But on the other hand, much of my professional experience has
> been in the world of computer programming, right back to machine code days,
> so I'm very aware of the difference between 'syntax' and 'semantics', and I
> know too well how consequences can diverge wildly from a difference of a
> single bit. How often have I heard the beleaguered self-tester wail "I
> didn't *mean* that!"

Interesting indeed. I wouldn't have guessed you to have been a
programmer. Perhaps you are one of those rare souls with a foot in
each camp. That could be very productive!

...

>
> However, in the spirit of the original topic of the thread, I would prefer
> to ask you directly about the plausibility (which, unless I've
> misunderstood, you support?) of an AI-program being in principle
> 'conscious'. I take this to entail that instantiating such a program
> thereby implements an 'observer' that can respond to and share a reality, in
> broadly the same terms, with human 'observers'. (I apologise in advance if
> any paraphrase or short-hand I adopt misrepresents what you say in TON):
>

It seems plausible, certainly.

> TON, as you comment in the book, takes the 'idealist' stance that 'concrete'
> notions emerge from observation. Our own relative status as observers
> participating in 'worlds' is then dependent on computational 'emergence'
> from the plenitude of all possible bit-strings. Let's say that I'm such an
> observer and I observe a 'computer' like the one I'm using now. The
> 'computer' is a 3-person 'concrete emergent' in my 1-person world, and that
> of the 'plurality' of observers with whom I'm in relation: we can 'interact'
> with it. Now, we collectively *impute* that some aspect of its 3-person
> behaviour (e.g. EM phenomena in its internal circuitry) is to be regarded as
> 'running an AI program' (i.e. ISTM that this is what happens when we
> 'compile and run' a program). In what way does such imputation entail the
> evocation - despite the myriad possible 'concrete' instantiations that might
> represent it - of a *stable* observer capable of participating in our shared
> '1-person plural' context? IOW, I'm concerned that two different categories
> are being conflated here: the 'world' at the 'observer level' that includes
> me and the computer, and the 'world' of the program, which is 'nested'
> inside this. How can this 'nested' world get any purchase on 'observables'
> that are 'external' to it?
>

It is no different to a conscious being instantiated in a new-born
baby (or an 18-month-old, or whenever babies actually become
conscious). In some Platonic sense, all possible observers are already
out there, but by physically instantiating one of them in our world,
we are in effect opening up a communication channel between ourselves
and the new consciousness.

> As I re-read this question, I wonder whether I've already willy-nilly fallen
> into the '2-cultures' gap again. But what I've asked seems to be directly
> related to the issues raised by 'Olympia and Klara', and by the substitution
> level dilemma posed by 'yes doctor'. Could you show me where - or if - I go
> wrong, or does the 'language game' make our views forever mutually
> unintelligible?
>
> David
>

This last post is perfectly lucid to me. I hope I've answered it
adequately.

Cheers


-- 
----------------------------------------------------------------------------
A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics                         	 
UNSW SYDNEY 2052         	         hpcoder.domain.name.hidden
Australia                                http://www.hpcoders.com.au
----------------------------------------------------------------------------