Re: How would a computer know if it were conscious?

From: David Nyman <david.nyman.domain.name.hidden>
Date: Sat, 23 Jun 2007 18:50:41 +0100

On 23/06/07, Brent Meeker <meekerdb.domain.name.hidden> wrote:

BM: But he could also switch from an account in terms of the machine level
causality to an account in terms of the computed 'world'. In fact he could
switch back and forth. Causality in the computed 'world' would have its
corresponding causality in the machine and vice versa. So I don't see why
they should be regarded as "orthogonal".

DN: Because the 'computational' description is arbitrary with respect to
the behaviour of the hardware. It is merely an imputation: one of an
infinite set of such descriptions that could be attributed to the same
hardware behaviour.
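
To make the point concrete, here is a minimal sketch (in Python, with names
invented purely for illustration): one fixed trace of machine states admits
two quite different 'computational' readings, and nothing in the trace itself
selects between them.

# The 'hardware behaviour': a fixed sequence of register values.
hardware_trace = [0b0001, 0b0010, 0b0100, 0b1000]

# Reading A: each state encodes the position of a simulated agent
# walking rightwards across a four-cell world.
def read_as_walker(state):
    return {"walker_position": state.bit_length() - 1}

# Reading B: the very same states encode a quantity being doubled.
def read_as_doubler(state):
    return {"doubled_value": state}

for state in hardware_trace:
    print(read_as_walker(state), read_as_doubler(state))

# Both readings fit the trace exactly; the trace does not prefer either.
# The computed 'world' is imputed by the interpreter, not fixed by the
# hardware behaviour.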


David Nyman wrote:
> > Hi John
> >
> > JM: You may ask about prejudice, shame (about goofed situations), humor
> > (does a computer laugh?), boredom, or preferential topics (you push for
> > an astronomical calculation and the computer says: I'd rather play some
> > Bach music now), sexual preference (even disinterestedness is slanted),
> > or laziness. If you add untruthfulness in risky situations, you really
> > have a human machine with consciousness.
> >
> > DN: All good, earthy, human questions. I guess my (not very exhaustive)
> > examples were motivated by some general notion of a 'personal world'
> > without this necessarily being fully human. A bit like 'Commander
> > Data', perhaps.
> >
> > JM: Now that we have arrived at the question I replied-added (sort of) to
> > Colin's question, let me ask it again: how would YOU know if you are
> > conscious?
> >
> > DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps
> > re-phrase this as just: 'how do you know x?' And then the answers are
> > of the type 'I just see x, hear x, feel x' and so forth. IOW, 'knowing
> > x' is unmediated - 'objects' like x are just 'embedded' in the structure
> > of the 'knower', and this is recursively related to more inclusive
> > structures within which the knower and its environment are in turn
> > embedded.
> >
> > JM: Or rather: How would you know if you are NOT conscious? Well, you
> > wouldn't.
> >
> > DN: Agreed. If we 'delete the noumenon' we get: "How would you know if
> > you are NOT?" or: "How would you know if you did NOT (know)?". To which
> > we might indeed respond: "You would not know, if you were NOT", or: "You
> > would not know, if you did NOT (know)".
> >
> > JM: If you can, you are conscious.
> >
> > DN: Yes, If you know, then you know.
> >
> > JM: Computers?????
> >
> > DN: I think we need to distinguish between 'computers' and 'machines'.
> > I can see no reason in principle why an artefact could not 'know', and
> > be motivated by such knowing to interact with the human world: humans
> > are of course themselves 'natural artefacts'. The question is whether a
> > machine can achieve this purely in virtue of instantiating a 'Universal
> > Turing Machine'. For me the key is 'interaction with the human world'.
> > It may be possible to conceive that some machine is computing a 'world'
> > with 'knowers' embedded in an environment to which they respond
> > appropriately based on what they 'know'. However such a world is
> > 'orthogonal' to the 'world' in which the machine that instantiates the
> > program is itself embedded. IOW, no 'event' as conceived in the
> > 'internal world' has any causal implication for any 'event' in the
> > 'external world', or vice versa.
> >
> > We can see this quite clearly in that an engineer could in principle
> > give a reductive account of the entire causal sequence of the machine's
> > internal function and interaction with the environment without making
> > any reference whatsoever to the programming, or 'world', of the UTM.
>
> But he could also switch from an account in terms of the machine level
> causality to an account in terms of the computed 'world'. In fact he could
> switch back and forth. Causality in the computed 'world' would have its
> corresponding causality in the machine and vice versa. So I don't see why
> they should be regarded as "orthogonal".
>
> Brent Meeker
