On Thu, Mar 27, 2008 at 8:16 PM, Russell Standish <lists.domain.name.hidden> wrote:
>
> The situation is surely more subtle. To recognise a physical process
> as a computation requires an observer to interpret it as such. One of
> the key features of consciousness is the ability to recognise a certain
> process as self, so that (assuming comp) we can objectively say that
> some processes are conscious, because they recognise themselves as
> computations. Otherwise computation is just in the eye of the
> beholder, and so would consciousness be, which is absurd.
>
> I think it unlikely that the entire universe is conscious.
>
I think we agree in some sense about self-interpretation. Let me
explain what I believe regarding the thought experiment proposed by
John Searle.
From:
http://en.wikipedia.org/wiki/John_Searle#Artificial_intelligence
"Since then, Searle has come up with another argument against strong
AI. Strong AI proponents claim that anything that carries out the same
informational processes as a human is also conscious. Thus, if we
wrote a computer program that was conscious, we could run that
computer program on, say, a system of ping-pong balls and beer cups
and the system would be equally conscious, because it was running the
same information processes.

Searle argues that this is impossible, since consciousness is a
physical property, like digestion or fire. No matter how good a
simulation of digestion you build on the computer, it will not digest
anything; no matter how well you simulate fire, nothing will get
burnt. By contrast, informational processes are observer-relative:
observers pick out certain patterns in the world and consider them
information processes, but information processes are not
things-in-the-world themselves. Since they do not exist at a physical
level, Searle argues, they cannot have causal efficacy and thus cannot
cause consciousness. There is no physical law, Searle insists, that
can see the equivalence between a personal computer, a series of
ping-pong balls and beer cans, and a pipe-and-water system all
implementing the same program."

I am in complete disagreement with Searle's assertion that
consciousness is a physical property. I further disagree with his
assertion that a computer based on pipes and water or ping-pong balls
could not be conscious. I think you would agree that observers within
those computed realities can interpret the computations that create
their realities. When something is burned in
a simulation, the heat of the fire and smell of the smoke can be felt
by observers within that simulated reality. Where you and I might
diverge in opinion is that I think something still burns in a
simulated reality even if there are no observers within that reality
to sense it. It's the basic "If a tree falls in the woods..." idea. I
would say the simulation of the tree falling doesn't make a sound
without an observer in the simulation to hear it, but I would say a
tree still falls in that simulation, even without an interpreter at
that level of simulation. I am interested to know your opinion on
this and how, if at all, it differs from mine.
Regards,
Jason