Brent Meeker writes:
> >> > I make the claim that a rock can be conscious assuming that
> >> > computationalism is true; it may not be true, in which case neither
> >> > a rock nor a computer may be conscious. There is no natural syntax
> >> > or semantics for a computer telling us what should count as a "1" or
> >> > a "0", what should count as a red perception, and so on. These
> >> > things are determined by how the computer is designed to interact
> >> > with its environment, whether that means outputting the sum of two
> >> > numbers to a screen or interacting with a human to convince him that
> >> > it is conscious. But what if the environment is made part of the
> >> > computer? The constraint on meaning and syntax would then go, and
> >> > the vibration of atoms in a rock could be implementing any
> >> > computation, including any conscious computation, if such there are.
> >> >
> >> > John Searle, among others, believes this is absurd, and that
> >> > therefore it disproves computationalism. Another approach is that it
> >> > shows that it is absurd that consciousness supervenes on physical
> >> > activity of any sort, but we can keep computationalism and drop the
> >> > physical supervenience criterion, as Bruno has.
> >> >
> >> > Stathis Papaioannou
> >>
> >> I have a view that seems to me to be slightly different.
> >> Consciousness requires interaction with an environment; consciousness
> >> implicitly requires a distinction between "I" and "the world". So
> >> when you attribute consciousness to a rock, incorporating "the world"
> >> as part of the rock, while the remainder of the rock is "conscious",
> >> that raises problems. We can say that this part of the rock is
> >> conscious of that part, making some arbitrary division of the rock.
> >> But then it's not conscious in/of our universe.
> >
> > That's right: if it's conscious, then it's conscious in its own isolated
> > virtual universe. It's another means to a many worlds theory.
> >
> >> When you say there is no canonical syntax, which is what allows
> >> anything to be a computation of anything else, I think that
> >> overstates the case. Suppose a particular pair of iron atoms in the
> >> rock are magnetically aligned and the syntax counts that as "0" while
> >> anti-aligned counts as "1". Then what computation is implemented by
> >> "0000000..."? The arbitrariness of syntax supposedly allows this to
> >> be translated into "27" or some other number. But then the
> >> translation has to have all possible words in it and the relational
> >> meanings of those words, including the words for all the numbers in
> >> that world. This places a pretty strong restriction on the size of
> >> the rock-world - there are only some 10^25 atoms to do all this
> >> representing.
> >
> > The rock can make numbers and universes as big as you want through the
> > method of parallel computing. Suppose the rock had only a few distinct
> > physical states. If we place no restriction on what these states can
> > represent, then they can represent multiple binary strings or finite
> > numbers or sentences or whatever - all in parallel. Any serial
> > computation can be made up of multiple parallel computations,
>
> I find that doubtful - do you have a reference? Isn't it the definition of "incompressible" computation that there is no way faster than executing each step in sequence?
I'm not referring to speed, just to doing it. For example, a serial stream of consciousness
can be emulated by multiple shorter parallel streams; there is no way of knowing whether
you're being run serially or in parallel, how fast the real-world clock is running, and so on.
> > and vice
> > versa. You can't say, aha, we've used that string for "dog" so we can't
> > now use it for "cat", because who is going to patrol the universe to
> > enforce this rule? This is what you are left with if you eliminate the
> > constraint that the computation has to interact with an external observer.
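To make this concrete, a toy sketch may help. The state labels and both
interpretation maps below are invented purely for illustration; nothing here is
meant to be the actual physics of the rock:

# The same physical state sequence, read under two arbitrarily chosen
# interpretation maps, yields two different "computations".
physical_states = ["aligned", "aligned", "anti", "aligned", "anti"]

# Interpretation A: aligned -> 0, anti -> 1, read as a binary number.
map_a = {"aligned": "0", "anti": "1"}
value_a = int("".join(map_a[s] for s in physical_states), 2)  # "00101" -> 5

# Interpretation B: the opposite assignment, read the same way.
map_b = {"aligned": "1", "anti": "0"}
value_b = int("".join(map_b[s] for s in physical_states), 2)  # "11010" -> 26

Nothing in the states themselves favours map A over map B, or either of them
over any other dictionary; the choice has to be imposed from outside.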
>
> I think my objection is different. You are assuming there is a "we" to whom these strings represent something, but in the closed rock-world there is no "we". The representation must be this conscious part of the rock representing that other part of the rock. But how can one part represent the other? By having a dictionary that translates states of one into the other. But how is the dictionary encoded? It seems there's an infinite regress.
>
> How is this infinite regress avoided in our world? By consciousness not representing the rest of the world. The world is what it is and representation is not essential. I suppose this is somewhat like Peter's "primitive substance" whose only function is to distinguish things that exist from their representation.
Then there must be a way to distinguish true reality from virtual reality. This
is something like Colin Hale's idea that the environment participates in the
brain process that produces consciousness (which he aims to prove by
experiment, a staggering achievement if he succeeds), so that an isolated
virtual reality is impossible.
Of course, in the evolution of brains and in the design of digital computers,
the semantics is provided by the real world; similarly in human language, the
semantics is provided by the language speakers. Without the provision of
a semantic context it's gibberish. But there is a special quality that consciousness
has which other systems lack: it is conscious in and of itself. Take the following
steps:
1. A conscious computer is built to interact with its environment, say, digging holes.
2. A coprocessor is added (which need not itself be conscious) to provide the
computer with a virtual environment in which to dig the holes, so that all the input
that used to come from the real environment now comes from this coprocessor and all
the motor output from the computer goes to the coprocessor.
3. The computer and coprocessor are connected together and sealed in a capsule
with power input from a solar panel.
4. Everybody dies and there is no record left which will give a clue as to what the
original computer was meant to do or how it was designed.
I claim that at step 4 it is impossible to know that the computer was designed to dig
holes, but that it will still be digging holes in its own mind (maybe the same holes
after a while) until it breaks down. Do you think it stops being conscious at one of
steps 2 to 4?
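In case a sketch helps, here is a toy version of the closed loop in steps 2 and 3.
Everything in it (the function names, the trivial "world" model) is invented for
illustration and is of course nothing like a program that would actually be conscious:

def computer_step(sensed_depth):
    # The sealed-in program: decide how much to dig given what it senses.
    # Dig until the hole looks 5 units deep, then start a new hole.
    return 1 if sensed_depth < 5 else 0

def coprocessor_step(depth, dig_command):
    # The virtual environment: apply the motor output and return the next
    # sensory input. Stopping digging starts a fresh hole at depth zero.
    return 0 if dig_command == 0 else depth + dig_command

# Step 3: the loop closes on itself, with no external observer.
depth = 0
for _ in range(20):
    command = computer_step(depth)             # motor output
    depth = coprocessor_step(depth, command)   # sensory input for the next step

Once the capsule is sealed the loop just cycles through the same virtual holes,
and at step 4 nobody outside is left to say what the states were ever supposed
to mean.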
Stathis Papaioannou
> > I am aware that this is a very strange idea, perhaps even an absurd
> > idea, but I don't see any way out of it without ruining
> > computationalism, such as by saying that it's all bunk, or that only computations
> > that can interact with the environment at the level of their
> > implementation can be conscious. Because if you insist on the latter, it
> > implies something like ESP: the computer will know the difference
> > between a false sensory stimulus and one emanating from the
> > environment... possible, but not very Turing-emulable.
> >
> > Stathis Papaioannou