RE: Consciousness is information?

From: Jesse Mazer <lasermazer.domain.name.hidden>
Date: Fri, 15 May 2009 00:32:19 -0400

Hi Bruno, I meant to reply to this earlier:

From: marchal.domain.name.hidden
To: everything-list.domain.name.hidden
Subject: Re: Consciousness is information?
Date: Sat, 02 May 2009 14:45:13 +0200


On 30 Apr 2009, at 18:29, Jesse Mazer wrote:
Bruno Marchal wrote:

On 29 Apr 2009, at 23:30, Jesse Mazer wrote:
But I'm not convinced that the basic Olympia machine he describes doesn't already have a complex causal structure--the causal structure would be in the way different troughs influence each other via the pipe system he describes, not just the motion of the armature.
>But Maudlin succeeded in showing that in its particular running history, *that* causal structure is physically inert. Or it has a mysterious influence not related to the computation.


Maudlin only showed that *if* you define "causal structure" in terms of counterfactuals, then the machinery that ensures the proper counterfactuals might be physically inert. But if you reread my post at http://www.mail-archive.com/everything-list.domain.name.hidden/msg16244.html you can see that I was trying to come up with a definition of the "causal structure" of a set of events that did *not* depend on counterfactuals...look at these two paragraphs from that post, particularly the first sentence of the first paragraph and the last sentence of the second paragraph:
>It seems to me that there might be ways of defining "causal structure" which don't depend on counterfactuals, though. One idea I had is that for any system which changes state in a lawlike way over time, all facts about events in the system's history can be represented as a collection of propositions, and then causal structure might be understood in terms of logical relations between propositions, given knowledge of the laws governing the system. As an example, if the system was a cellular automaton, one might have a collection of propositions like "cell 156 is colored black at time-step 36", and if you know the rules for how the cells are updated on each time-step, then knowing some subsets of propositions would allow you to deduce others (for example, if you have a set of propositions that tell you the states of all the cells surrounding cell 71 at time-step 106, in most cellular automata that would allow you to figure out the state of cell 71 at the subsequent time-step 107). If the laws of physics in our universe are deterministic then you should in principle be able to represent all facts about the state of the universe at all times as a giant (probably infinite) set of propositions as well, and given knowledge of the laws, knowing certain subsets of these propositions would allow you to deduce others.
>"Causal structure" could then be defined in terms of what logical relations hold between the propositions, given knowledge of the laws governing the system. Perhaps in one system you might find a set of four propositions A, B, C, D such that if you know the system's laws, you can see that A&B imply C, and D implies A, but no other proposition or group of propositions in this set of four are sufficient to deduce any of the others in this set. Then in another system you might find a set of four propositions X, Y, Z and W such that W&Z imply Y, and X implies W, but those are the only deductions you can make from within this set. In this case you can say these two different sets of four propositions represent instantiations of the same causal structure, since if you map W to A, Z to B, Y to C, and D to X then you can see an isomorphism in the logical relations. That's obviously a very simple causal structure involving only 04 events, but one might define much more complex causal structures and then check if there was any subset of events in a system's history that matched that structure. And the propositions could be restricted to ones concerning events that actually did occur in the system's history, with no counterfactual propositions about what would have happened if the system's initial state had been different.


For a Turing machine running a particular program the propositions might be things like "at time-step 35 the Turing machine's read/write head moved to memory cell #82" and "at time-step 35 the Turing machine had internal state S3" and "at time-step 35 memory cell #82 held the digit 1". I'm not sure whether the general rules for how the Turing machine's internal state changes from one step to the next should also be included among the propositions; my guess is you'd probably need to do so in order to ensure that different computations had different "causal structures" according to the type of definition above...so, you might have a proposition expressing a rule like "if the Turing machine is in internal state S3 and its read/write head detects the digit 1, it changes the digit in that cell to a 0 and moves 2 cells to the left, also changing its internal state to S5." Then this set of four propositions would be sufficient to deduce some other propositions about the history of this computation, like "at time-step 36 the Turing machine's read/write head moved to memory cell #80" and "at time-step 36 the Turing machine had internal state S5."
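
Here is a minimal sketch of how that deduction could be mechanized (the encoding of the rule table is just my own illustrative convention):

# Rule table: (state, symbol_read) -> (symbol_to_write, head_offset, new_state).
rules = {("S3", "1"): ("0", -2, "S5")}

def step(t, state, head, tape):
    # From the propositions about time-step t, deduce those about t+1.
    symbol = tape.get(head, "0")
    new_symbol, offset, new_state = rules[(state, symbol)]
    tape[head] = new_symbol
    print(f"at time-step {t+1} the read/write head moved to cell #{head + offset}")
    print(f"at time-step {t+1} the machine had internal state {new_state}")
    return new_state, head + offset

tape = {82: "1"}          # "at time-step 35 memory cell #82 held the digit 1"
step(35, "S3", 82, tape)  # deduces the two time-step-36 propositions above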
So if we define causal structure in terms of relationships between propositions concerning the history of the Turing machine in this way, then look at propositions concerning the history of the Olympia machine described by Maudlin when it was emulating that Turing machine program, it's not clear to me whether it would be possible to map propositions about the original Turing machine to propositions about Olympia in such a way that you'd be able to show their causal structures were isomorphic (I think it is clear that such a mapping would be impossible in the case of your MGA 1 though, so if we identify OMs with causal structures this would suggest that the brain which functioned via random cosmic rays correcting errors would not have the same inner experience as the brain which was functioning correctly and did not require these cosmic rays).

But either way, what is clear is that the presence or absence of inert machinery designed to guarantee the correct counterfactuals would not affect the answer, since we'd only be looking at propositions about events that actually occurred in the course of the Olympia machine's operation. If it turned out there was an isomorphism between these propositions and the propositions about the operation of the original Turing machine, then that would show Maudlin was too quick to dismiss the original Olympia machine (the one lacking the counterfactual machinery) as giving rise to phenomenal experience (even though the armature behaves in a monotonous way, the way the troughs influence each other via pipes might be enough to ensure that the causal structure associated with Olympia's operation does depend on what program is being emulated). If there wasn't such an isomorphism, then there still wouldn't be an isomorphism even with the counterfactual machinery added, so that could make it clearer why the Olympia machine was not really "instantiating" the same computation as the original Turing machine.


>Maudlin shows that you can reduce almost arbitrarily the amount of physical activity for running any computation, and keep its computational genuineness through the use of inert material. So the isomorphism you introduce vanishes on the original Olympia (Pre-Olympia).
>Olympia *is* "Pre-Olympia" + Klara (the machinery, inert for the computation PI, needed for the counterfactuals), OK? Olympia runs the computation PI.



But what do you mean when you say the isomorphism vanishes? Do you mean that the causal structure of pre-Olympia would *not* be isomorphic to the causal structure of the original Turing machine that pre-Olympia was supposed to imitate (according to the definition of causal structure in terms of logical relations between propositions about the system's state at different moments)? If so, that would mean that regular Olympia (pre-Olympia + Klara) wouldn't have a causal structure isomorphic to the Turing machine either, since I was defining causal structure solely in terms of propositions about events that *do* occur in the system's history, meaning the extra counterfactual conditions provided by Klara are irrelevant to Olympia's causal structure, so Olympia's causal structure would be the same as pre-Olympia's.
If that's the case, why can't we postulate that consciousness supervenes on causal structure, since causal structure is after all part of the physical world? In fact one could say that physics is *only* concerned with "causality" in the sense of lawlike relations between propositions about observations, since the laws of physics tell us nothing about what particles or fields or wavefunctions "really are", only about how they interact with one another and how they can be used to predict the outcomes of measurements. So if we say consciousness supervenes on causal structure, then Olympia would not qualify as an instantiation of the observer-moments that the original Turing machine instantiated, in much the same way that a lookup table wouldn't qualify.
I don't have a problem with the idea that a giant lookup table is just a sort of "zombie", since after all the way you'd create a lookup table for a given algorithmic mind would be to run a huge series of actual simulations of that mind with all possible inputs, creating a huge archive of "recordings" so that later if anyone supplies the lookup table with a given input, the table just looks up the recording of the occasion in which the original simulated mind was supplied with that exact input in the past, and plays it back. Why should merely replaying a recording of something that happened to a simulated observer in the past contribute to the measure of that observer-moment? I don't believe that playing a videotape of me being happy or sad in the past will increase the measure of happy or sad observer-moments involving me, after all. And Olympia seems to be somewhat similar to a lookup table in that the only way to construct "her" would be to have already run the regular Turing machine program that she is supposed to emulate, so that you know in advance the order that the Turing machine's read/write head visits different cells, and then you can rearrange the positions of those cells so Olympia will visit them in the correct order just by going from one cell to the next in line over and over again.
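
In code, the lookup-table construction I have in mind looks something like this toy sketch, where simulate_mind and the list of inputs are placeholders standing in for whatever algorithmic mind is being recorded:

# Build phase: the mind is genuinely simulated once for every possible
# input, and each run is archived as a "recording".
def simulate_mind(inp):
    return f"response to {inp!r}"  # placeholder for a real simulation

all_possible_inputs = ["input A", "input B"]  # placeholder input space
table = {inp: simulate_mind(inp) for inp in all_possible_inputs}

# Replay phase: afterwards the table never computes anything, it just
# plays back the recording matching the supplied input.
def lookup_table_zombie(inp):
    return table[inp]

print(lookup_table_zombie("input A"))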
So: why can't the idea of consciousness supervening on causal structure be a possible strategy for avoiding the problem you talk about in step 8 of your UDA argument (if I am understanding it correctly), namely the idea that even if there was a physical universe it wouldn't be able to tell us anything about the measure of different computations? If we talk about the causal structure of a given computation, why can't we look at how frequently sets of physical events with an isomorphic causal structure occur in the physical universe, and derive a measure on physical implementations of computations in this way? Not that I personally would favor this approach to a philosophical "theory of everything", but would you say it isn't even a coherent possibility?
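
Just to show the flavor of such a measure, here is a crude toy (reusing the causally_isomorphic sketch from earlier, and obviously nothing like a real measure on the physical universe) that counts how many subsets of a system's event-propositions instantiate a given causal structure:

from itertools import combinations

def count_instantiations(pattern_rel, pattern_props, world_rel, world_props):
    # Count the subsets of world_props whose induced deduction relations
    # are isomorphic to the pattern; the count is the pattern's toy "measure".
    k, count = len(pattern_props), 0
    for subset in combinations(world_props, k):
        induced = {(pre, c) for pre, c in world_rel
                   if pre <= set(subset) and c in subset}
        if causally_isomorphic(pattern_rel, pattern_props, induced, subset):
            count += 1
    return count

# e.g. how often does "two events jointly imply a third" occur in a world
# with deduction relations P&Q imply R and Q&R imply S?
pattern = {(frozenset("ab"), "c")}
world = {(frozenset("PQ"), "R"), (frozenset("QR"), "S")}
print(count_instantiations(pattern, "abc", world, "PQRS"))  # prints 2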

If you take any finite subset of true propositions (P1, P2, P3, ..., PN), then these propositions will be logically interrelated in some particular way--it might be that if you start out taking P2 and P3 as axioms you can deduce P5 from this but you can't deduce P4, for example. I imagine representing each proposition as a dot in a diagram, and then arrows would show which individual dots or collections of dots in this finite set can be used to deduce other dots in the same finite set. This diagram would define a unique "causal structure" for this set of propositions, and then if you have a set of propositions about something different from arithmetic, like the history of a particular Turing machine computation...


>The history of a particular Turing machine computation does belong to arithmetic. Already to Robinson Arithmetic (roughly: Peano Arithmetic without the induction axioms). You need just a Sigma_1 complete theory for the ontology. It is enough to (meta)define a richer internal epistemology justifying why, "from inside", things appear (and in some sense are) much richer. This is not obvious, and technically relies on Gödel's completeness and incompleteness theorems, or Skolem's theorem. It is long to explain, yet very short to understand, and utterly clear, if you are aware of Solovay's theorem.

Are you saying that a notion similar to my definition of "causal structure" is already used in the areas of mathematics you're talking about, or when you say "the history of a particular Turing machine computation" are you talking about something unrelated to my definition of the computation's causal structure?
I also wonder if anything similar to this notion of causal structure could be found in category theory, since some layman's summaries I've read say that category theory defines mathematics in a purely relational way, where any mathematical object (or proposition?) is defined entirely by its relationships to other objects.




Maybe you could even make a TOE based on the idea that all that really "exists" is this infinite set of propositions about arithmetic, and that this infinite set defines a unique measure on all finite causal structures, based on how easy it is to find multiple "instantiations" of each finite causal structure within the infinite set of true propositions. I don't suppose this has any resemblance to your approach?

>UDA is an argument that if we (humans) are machines, it has to be that approach. It is the reversal physics/number theory. Physics is eventually the projection or limit of what the numbers can see when they look at themselves.
When you say "that approach", are you talking specifically about looking at isomorphisms between 1) logical relations among propositions about arithmetic, and 2) logical relations among propositions about the history of a Turing machine computation? Or were you saying that UDA takes an approach that is similar in some broader fashion?


...since I'm suggesting some kind of absolute measure on all causal structures, and if you identify particular causal structures with OMs, that would correspond to the ASSA, but you have said that your approach only uses the RSSA.

>There is no absolute measure on all "causal structures", still less on OMs, right! I would be an ant or a bacterium in two seconds!
I guess it would depend on how the measure was defined. It might not be defined just in terms of the numerical frequency with which a given causal structure appears in the world, but also in terms of things like how many other causal structures "remember" that causal structure in some sense (contain detailed information about it). That could perhaps give a measure which was biased towards more complex causal structures like human minds even though ants are much more common numerically.
Anyway I have no idea how you'd actually "count" the number of appearances of a given causal structure in the infinite set of propositions about arithmetic, so the idea of getting a measure on causal structures this way is very vague...

>Vague? I don't think so. Church's thesis makes this purely mathematical. Difficult? Sure. That is why UDA is followed by AUDA, where the case of probability or credibility (whatever) ONE is made entirely clear and formal. It is already proved that the physical local observations cannot be boolean, and there is already a well defined notion of quantization. UDA is also completely clear, even if it takes some time for some people to grasp some steps. That is normal, given that it is new and counterintuitive.

Well, I didn't mean to suggest that your ideas are vague, only that my own notions of a connection between causal structure and measure were rather ill-defined...there'd be a lot more math I'd need to learn if I wanted to seriously try to develop these ideas, or to really understand the details of your own. By the way, do you have a bibliography somewhere of books someone could use to teach themselves enough math to understand the details of your AUDA argument?