Re: Consciousness is information?

From: Bruno Marchal <marchal.domain.name.hidden>
Date: Wed, 29 Apr 2009 22:19:56 +0200

Maudlin's point is that the causal structure has no physical role, so
if you maintain the association of consciousness with the causal,
actually computational, structure, you have to abandon physical
supervenience. Or you reintroduce some magic, as if, during some
computations, neurons had some knowledge of the absence of other
neurons to which they are not related.
But read the movie-graph argument, which shows the same thing without
going through the question of counterfactuals. If you believe that
consciousness supervenes on the physical implementation, or even on
just one universal machine computation, then you will associate
consciousness with a description of that computation. But the
description, although it contains the genuine information, is just
not a computation at all. It misses the logical relations between the
steps, made possible by the universal machine. So you can keep
mechanism only by associating consciousness with the logical,
immaterial relations between the states. From inside there are
infinitely many such relations, and this means the physical has to
supervene on the sum of those relations "as seen from inside". By
Church's thesis and the logic of self-reference, those relations have
a non-trivial, redundant structure.

Bruno


On 29 Apr 2009, at 21:16, Jesse Mazer wrote:

> Bruno wrote:
>
>
> On 29 Apr 2009, at 00:25, Jesse Mazer wrote:
>
> and I think it's the idea behind Maudlin's Olympia thought
> experiment as well.
>
>
> >Maudlin's Olympia and the Movie Graph Argument are completely
> different. Those are arguments showing that computationalism is
> incompatible with the physical supervenience thesis. They show that
> consciousness is not related to any physical activity at all.
> Together with UDA1-7, they show that physics has to be reduced to a
> theory of consciousness based on a purely mathematical (even
> arithmetical) theory of computation, which exists by Church's thesis.
> The movie graph argument was originally only a tool for explaining
> how difficult the mind-body problem is, once we assume mechanism.
>
>
>
>
> OK, I hadn't been able to find Maudlin's paper online, but I finally
> located a pdf copy in a post from this list at http://www.mail-archive.com/everything-list.domain.name.hidden/msg07657.html
> ...now that I've read it, I see the argument is distinct from Chalmers'
> "Does a Rock Implement Every Finite-State Automaton?", although they
> are thematically similar in that they both deal with difficulties in
> defining what it means for a given physical system to "implement" a
> given computation. Chalmers' proposal was that the problem of a rock
> implementing every possible computer program could be avoided if we
> defined an "implementation" in terms of counterfactuals, but Maudlin
> argues that this contradicts the "supervenience thesis", which says
> that "the presence or absence of inert, causally isolated objects
> cannot affect the presence or absence of phenomenal states
> associated with a system", since two systems may have different
> counterfactual structures merely by virtue of an inert subsystem in
> one which *would have* become active if the initial state of the
> system had been slightly different.
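>
> To make the distinction concrete, here is a minimal sketch in Python
> (the machines and names are hypothetical, purely for illustration):
> two finite-state machines that produce identical histories on the
> input that actually occurs, yet differ on inputs that never occur,
> so a counterfactual definition of "implementation" can distinguish
> them even though their actual histories cannot.
>
>     # Two transition tables over states {0, 1} and inputs {'a', 'b'}.
>     # They agree on input 'a' (the only input that actually occurs)
>     # but differ on the never-used input 'b'.
>     machine_1 = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
>     machine_2 = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 0}
>
>     def run(machine, state, inputs):
>         history = [state]
>         for symbol in inputs:
>             state = machine[(state, symbol)]
>             history.append(state)
>         return history
>
>     actual = ['a', 'a', 'a']
>     # Same actual history, so actual activity cannot tell them apart...
>     assert run(machine_1, 0, actual) == run(machine_2, 0, actual)
>     # ...but the counterfactual (transition-table) structure differs.
>     assert machine_1 != machine_2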
>
> It seems to me that there might be ways of defining "causal
> structure" which don't depend on counterfactuals, though. One idea I
> had is that for any system which changes state in a lawlike way over
> time, all facts about events in the system's history can be
> represented as a collection of propositions, and then causal
> structure might be understood in terms of logical relations between
> propositions, given knowledge of the laws governing the system. As
> an example, if the system was a cellular automaton, one might have a
> collection of propositions like "cell 156 is colored black at time-
> step 36", and if you know the rules for how the cells are updated on
> each time-step, then knowing some subsets of propositions would
> allow you to deduce others (for example, if you have a set of
> propositions that tell you the states of all the cells surrounding
> cell 71 at time-step 106, in most cellular automata that would allow
> you to figure out the state of cell 71 at the subsequent time-step
> 107). If the laws of physics in our universe are deterministic, then
> you should in principle be able to represent all facts about the
> state of the universe at all times as a giant (probably infinite)
> set of propositions as well, and given knowledge of the laws,
> knowing certain subsets of these propositions would allow you to
> deduce others.
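>
> Here is a minimal sketch of that cellular-automaton example
> (assuming, just for concreteness, Conway's Game of Life as "the laws
> of the system"; the cell and time-step numbers are arbitrary):
>
>     # The update rule plays the role of "the laws governing the system".
>     def next_state(cell, neighbours):
>         """Deduce a cell's state at step t+1 (1 = alive, 0 = dead)
>         from its state and its eight neighbours' states at step t."""
>         alive = sum(neighbours)
>         if cell == 1:
>             return 1 if alive in (2, 3) else 0
>         return 1 if alive == 3 else 0
>
>     # Propositions about time-step 106: cell 71 is alive and exactly
>     # three of its neighbours are alive. The rule then entails the
>     # proposition "cell 71 is alive at time-step 107".
>     cell_71_at_106 = 1
>     neighbours_at_106 = [1, 1, 1, 0, 0, 0, 0, 0]
>     print(next_state(cell_71_at_106, neighbours_at_106))  # -> 1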
>
> "Causal structure" could then be defined in terms of what logical
> relations hold between the propositions, given knowledge of the laws
> governing the system. Perhaps in one system you might find a set of
> four propositions A, B, C, D such that if you know the system's
> laws, you can see that A&B imply C, and D implies A, but no other
> proposition or group of propositions in this set of four are
> sufficient to deduce any of the others in this set. Then in another
> system you might find a set of four propositions X, Y, Z and W such
> that W&Z imply Y, and X implies W, but those are the only deductions
> you can make from within this set. In this case you can say these
> two different sets of four propositions represent instantiations of
> the same causal structure, since if you map W to A, Z to B, Y to C,
> and X to D, then you can see an isomorphism in the logical relations.
> That's obviously a very simple causal structure involving only four
> events, but one might define much more complex causal structures and
> then check if there was any subset of events in a system's history
> that matched that structure. And the propositions could be
> restricted to ones concerning events that actually did occur in the
> system's history, with no counterfactual propositions about what
> would have happened if the system's initial state had been different.
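>
> As a sketch of how one might check such an isomorphism mechanically
> (the encoding is my own, just for illustration): represent each
> causal structure as a set of implications, and test whether a
> candidate mapping carries one set exactly onto the other.
>
>     # Each implication is (set of antecedent propositions, consequent).
>     structure_1 = {(frozenset({'A', 'B'}), 'C'), (frozenset({'D'}), 'A')}
>     structure_2 = {(frozenset({'W', 'Z'}), 'Y'), (frozenset({'X'}), 'W')}
>
>     # Candidate mapping from the second system's propositions to the first's.
>     mapping = {'W': 'A', 'Z': 'B', 'Y': 'C', 'X': 'D'}
>
>     def translate(structure, mapping):
>         return {(frozenset(mapping[p] for p in ants), mapping[c])
>                 for ants, c in structure}
>
>     # The two sets instantiate the same causal structure iff the
>     # translated implications coincide.
>     print(translate(structure_2, mapping) == structure_1)  # -> True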
>
> Thinking in this way, it's not obvious that Maudlin is right when he
> assumes that the original "Olympia" defined on p. 418-419 of the
> paper cannot be implementing a unique computation that gives rise to
> complex conscious experiences. It's true that the armature itself is
> not responding in any way to the states of successive troughs it
> passes over, but there is an aspect of the setup that might give the
> system a nontrivial causal structure, namely the fact that certain
> troughs may be connected by pipes to other troughs in the
> sequence, so that as the armature empties or fills one it is also
> emptying or filling the one it's connected to (this is done to
> emulate the idea of a Turing machine's read/write head returning to
> the same memory address multiple times, even though Olympia's
> armature just steadily progresses down the line of troughs in
> sequence--troughs connected by pipes are supposed to represent a
> single memory address). If we represented the Olympia system as a
> set of propositions about the state of each trough and the position
> of the armature at each time-step, then the fact that the armature's
> interaction with one trough changes the state of another trough the
> armature won't visit until a later step may be enough to give
> different programs markedly different causal structures, in spite of
> the fact that the armature itself is just dumbly moving from one
> trough to the next.
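>
> A toy model of that cross-time dependency (the trough indices and
> pipe layout are made up, just to illustrate the mechanism):
>
>     # Troughs hold water levels; some pairs are joined by pipes, so a
>     # write to one trough also changes a trough the armature will not
>     # visit until a later time-step.
>     troughs = [0, 0, 0, 0, 0, 0]
>     pipes = {1: 4, 4: 1}  # troughs 1 and 4 stand for one memory address
>
>     def armature_write(position, level):
>         """The armature dumbly writes at its current position, but a
>         pipe propagates the change to a later trough."""
>         troughs[position] = level
>         if position in pipes:
>             troughs[pipes[position]] = level
>
>     # The armature just walks down the line, trough by trough.
>     for position in range(len(troughs)):
>         armature_write(position, 1 if position == 1 else troughs[position])
>
>     # The write at step 1 already fixed what the armature finds at
>     # step 4: a cross-time dependency whose pattern depends on the
>     # program (i.e. on the pipe layout).
>     print(troughs)  # -> [0, 1, 0, 0, 1, 0]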
>

http://iridia.ulb.ac.be/~marchal/



