Stathis:
I am not 'debating' your position, just musing about expressions.
You wrote a very interesting passage below:
SP:
>...Perhaps there is a difference between intelligence and consciousness.
Intelligence must be defined operationally, as you have suggested, which
involves the intelligent agent interacting with the environment. A computer
hardwired with "input" is not a very useful device from the point of view of
an observer, displaying no more intelligence than a film of the screen
would....<
JM:
What I sense in your discussion with Peter is that a certain group of
qualia has been picked out (computer input) and argued about as being
consciousness, irrespective of other qualia findable in systems outside
that circle, which e.g. in 'human' consciousness have their input. A
limited model quality is matched to a wider background of interactions
and assigned to the generalized concept.
Speaking about intelligence may be an improvement: in my wording it
requires (besides a considerable knowledge base - memory) an "elasticity"
of the mind, to ponder the features according to (counterfactual? I am not
so familiar with the term) contradictory 'arguments' and to find one
outcome, not necessarily the obvious one. In this activity the 'mind'
includes 'more' than just the 'data fed into a computer' and may provide a
different entailment from that of a (limitedly) 'conscious' (Turing?)
machine.
In your earlier post you wrote:
SP:
>.There are those who argue that human cognition is fundamentally different
from classical computers due to quantum randomness, but even if this is the
case there is no reason to believe that it is necessarily the case. Brains
would have evolved to give rise to appropriate survival-enhancing behaviour,
which precludes random or erratic behaviour. A degree of unpredictability
would have to be present in order to avoid predators or catch prey, but
unpredictable does not necessarily mean random: it just has to be beyond the
capabilities of the predators or prey to predict. The unpredictability could
result from the effect of classical chaos, or simply from the complexity of
the behaviour which is in fact perfectly deterministic. No true randomness
is needed....<
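(An aside, purely my own illustration and not anything from Stathis's post:
what 'deterministic but practically unpredictable' can look like is easy to
show with the standard logistic map. The little Python sketch below uses an
exact rule with no randomness anywhere, yet two starting points differing
only in the ninth decimal soon give trajectories with no resemblance to
each other.)

    # Logistic map: x -> r*x*(1-x). Fully deterministic, yet so sensitive to
    # the starting value that long-run behaviour is practically unpredictable.
    def trajectory(x0, r=4.0, steps=30):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = trajectory(0.400000000)
    b = trajectory(0.400000001)   # differs from 'a' by one part in a billion

    for step in (0, 10, 20, 30):
        print(step, round(a[step], 6), round(b[step], 6))
    # Around step 30 the two runs no longer resemble each other, although
    # the same exact rule was applied to almost the same starting point.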
JM:
I dislike the term 'Q-randomness' for two reasons:
1. randomness is not part of a totally interconnected deterministic world
in which every change is triggered by the movement of the totality (my
vision), and
2. the "quantum" refers to a linear reductionist mathematical science in
which no randomness is feasible and nonlinear counterfactuals are not
contemplated, while (in my unprofessional opinion) they ARE included in
(live?) human cognition.
Unpredictability by whom? You mention the participants, but it may also be
a characteristic noted theoretically. Read on.
(Classical?) chaos IMO is a feature not (yet?) explained by our cognition
in the reductionist sciences - like 'emergence'. Once we learn more, it
becomes unchaos (or the emergence a regular result).
So some "model"-terms we use are ambiguous and incomplete, yet we draw
'definite' (generalized) conclusions from them.
(Cf. my previous post to Brent about 'model'.)
The best
John Mikes
----- Original Message -----
From: "Stathis Papaioannou" <stathispapaioannou.domain.name.hidden>
To: "Brent Meeker" <everything-list.domain.name.hidden>
Sent: Saturday, August 26, 2006 7:17 AM
Subject: RE: computationalism and supervenience
Brent Meeker writes:
> > What I meant was, if a computer program can be associated with
> > consciousness, then a rigid and deterministic computer program can also
> > be associated with consciousness - leaving aside the question of how
> > exactly the association occurs. For example, suppose I have a
> > conversation with a putatively conscious computer program as part of a
> > Turing test, and the program passes, convincing me and everyone else
> > that it has been conscious during the test. Then, I start up the
> > program again with no memory saved from the first run, but this time I
> > play it a recording of my voice from the first test. The program will
> > go through exactly the same responses as during the first run, but this
> > time to an external observer who saw the first run the program's
> > responses will be no more surprising than my questions on the recording
> > of my voice. The program itself won't know what's coming and it might
> > even think it is being clever by throwing in some "unpredictable"
> > answers to prove how free and human-like it really is. I don't think
> > there is any basis for saying it is conscious during the first run but
> > not during the second. I also don't think it helps to say that its
> > responses *would* have been different even on the second run had its
> > input been different, because that is true of any record player or
> > automaton.
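(A minimal sketch of the replay point, again only my own illustration with
a made-up toy program: a deterministic responder with no memory carried
over gives identical answers whether its questions arrive 'live' or from a
recording, and nothing inside it can tell the two sources apart.)

    import hashlib

    # Toy deterministic "interlocutor": the answer depends only on the
    # question text, so a rerun with the same questions must match exactly.
    CANNED = ["Interesting.", "Could you elaborate?", "I disagree.", "Perhaps."]

    def answer(question):
        digest = hashlib.sha256(question.encode()).digest()
        return CANNED[digest[0] % len(CANNED)]

    live_questions = ["Are you conscious?", "How do you know?"]
    first_run = [answer(q) for q in live_questions]

    recording = list(live_questions)           # the "recording" of the first run
    second_run = [answer(q) for q in recording]

    assert first_run == second_run             # byte-for-byte the same responses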
>
> I think it does help; or at least it makes a difference. I think you
> illegitimately move the boundary between the thing supposed to be
> conscious (I'd prefer "intelligent", because I think intelligence
> requires counterfactuals, but I'm not sure about consciousness) and its
> environment in drawing that conclusion. The question is whether the
> *recording* is conscious. It has no input. But then you say it has
> counterfactuals because the output of a *record player* would be
> different with a different input. One might well say that a record
> player has intelligence - of a very low level. But a record does not.
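(To put that record / record-player distinction in code - again just a toy
illustration of my own: the 'record player' is a function of its input, so
counterfactuals apply - a different input would have given a different
output - while the 'record' is a fixed sequence, insensitive to anything
in its environment.)

    # "Record player": the output depends on the input, so counterfactuals hold.
    def record_player(groove):
        return groove.upper()             # a different groove -> a different sound

    # "Record": a fixed output, the same whatever happens around it.
    RECORD = ("la", "la", "laa")

    print(record_player("first input"))       # FIRST INPUT
    print(record_player("another input"))     # different input, different output
    print(RECORD)                              # unchanged no matter what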
Perhaps there is a difference between intelligence and consciousness.
Intelligence must be defined operationally, as you have suggested, which
involves the intelligent agent interacting with the environment. A computer
hardwired with "input" is not a very useful device from the point of view
of an observer, displaying no more intelligence than a film of the screen
would. However, useless though it might be, I don't see why the computer
should not be conscious with the hardwired input if it is conscious with
the same input on a particular run from a variable environment. If the
experiment were set up properly, it would be impossible for the computer to
know where the input was coming from. Another way to look at it would be to
say that intelligence is relative to an environment but consciousness is
absolute. This is in keeping with the fact that intelligent behaviour is
third person observable but consciousness is not.
Stathis Papaioannou