On 23 May 2009, at 09:35, Kelly Harmon wrote:
>
> Okay, below are three passages that I think give a good sense of what
> I mean by "information" when I say that "consciousness is
> information". The first is from David Chalmers' "Facing up to the
> Problem of Consciousness." The second is from the SEP article on
> "Semantic Conceptions of Information", and the third is from "Symbol
> Grounding and Meaning: A comparison of High-Dimensional and Embodied
> Theories of Meaning", by Arthur Glenberg and David Robertson.
>
> So I'm looking at these largely from a static, timeless, platonic
> view.
We agree, then. Assuming comp, we have no choice in the matter here.
> In my view, there are ungrounded abstract symbols that acquire
> meaning via constraints placed on them by their relationships to other
> symbols.
Absolutely so.
> The only "grounding" comes from the conscious experience
> that is intrinsic to a particular set of relationships.
Exactly.
> To repeat my
> earlier Chalmers quote, "Experience is information from the inside;
> physics is information from the outside." It is this subjective
> experience of information that provides meaning to the otherwise
> completely abstract "platonic" symbols.
I have insisted on this since well before Chalmers. We are agreeing
on this. But note that you then associate consciousness with the
experience of information. This is what I told you: I can understand
the relation between consciousness and information content.
>
>
> So I think that something like David Lewis' "modal realism" is true by
> virtue of the fact that all possible sets of relationships are
> realized in Platonia.
We agree. This is explained in detail in "Conscience et Mécanisme".
Comp forces modal realism; AUDA just gives the precise modal logics,
extracted from the theory of the self-referentially correct machine.
>
>
> Note that I don't have Bruno's fear of white rabbits.
Then you disagree with all readers of David Lewis, including Lewis
himself, who recognized this inflation of too many realities as a
weakness of his modal realism. My point is that the comp constraints
lead to a solution of that problem, indeed a solution close to
Everett's quantum solution. But the existence of white rabbits, and
thus the correctness of comp, remains to be tested.
> Assuming that
> we are typical observers is fine as a starting point, and is a good
> way to choose between otherwise equivalent explanations, but I don't
> think it should hold a unilateral veto over our final conclusions. If
> the most reasonable explanation says that our observations aren't
> especially typical, then so be it. Not everyone can be typical.
It is just a question of testing a theory. You seem to be saying
something like: "if the theory predicts that water over a fire will
typically boil, and experience does not confirm that typicality (the
water regularly freezes), then it means we are just very unlucky".
But then every theory is correct.
>
>
> I think the final passage from Glenberg and Robertson (from a paper
> that actually argues against what's being described) gives the best
> sense of what I have in mind, though obviously I'm extrapolating out
> quite a bit from the ideas presented.
>
> Okay, so the passages of interest:
>
> --
>
> David Chalmers:
>
> The basic principle that I suggest centrally involves the notion of
> information. I understand information in more or less the sense of
> Shannon (1948). Where there is information, there are information
> states embedded in an information space. An information space has a
> basic structure of difference relations between its elements,
> characterizing the ways in which different elements in a space are
> similar or different, possibly in complex ways. An information space
> is an abstract object, but following Shannon we can see information as
> physically embodied when there is a space of distinct physical states,
> the differences between which can be transmitted down some causal
> pathway. The states that are transmitted can be seen as themselves
> constituting an information space. To borrow a phrase from Bateson
> (1972), physical information is a difference that makes a difference.
>
> The double-aspect principle stems from the observation that there is a
> direct isomorphism between certain physically embodied information
> spaces and certain phenomenal (or experiential) information spaces.
This can be shown false in quantum theory without collapse, and more
easily with the comp assumption.
No problem if you tell me that you reject both Everett and comp.
Chalmers seems in some places to accept both Everett and comp, indeed.
He explained to me that he stops at step 3. He believes that after a
duplication you would feel yourself to be simultaneously in both
places, even assuming comp. I think, and can argue, that this is
nonsense. Nobody on the list defends this. Are you defending an idea
like that?
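As an aside, the Shannon-style notion invoked above, "a difference
that makes a difference", can be made concrete in a few lines. Here
is a toy sketch in Python; the state names and the channel mapping
are invented purely for illustration. The point is only that the
information carried down a causal pathway corresponds to how many
upstream differences remain distinguishable downstream.

from math import log2

# Toy "causal pathway": a map from physical states to downstream effects.
# Differences that survive the map are "differences that make a difference".
channel = {
    "state_a": "effect_1",
    "state_b": "effect_1",  # a and b collapse: their difference makes none
    "state_c": "effect_2",
    "state_d": "effect_3",
}

distinct_effects = set(channel.values())

# With equiprobable states, the information transmissible down the
# pathway is log2 of the number of downstream-distinguishable classes.
bits = log2(len(distinct_effects))
print(f"{len(distinct_effects)} distinguishable classes -> {bits:.2f} bits")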
>
> From the same sort of observations that went into the principle of
> structural coherence, we can note that the differences between
> phenomenal states have a structure that corresponds directly to the
> differences embedded in physical processes; in particular, to those
> differences that make a difference down certain causal pathways
> implicated in global availability and control. That is, we can find
> the same abstract information space embedded in physical processing
> and in conscious experience.
Assuming comp, the expression "physical processing" cannot be taken
for granted. It has to be explained.
>
>
> --
>
> SEP:
>
> Information cannot be dataless but, in the simplest case, it can
> consist of a single datum. A datum is reducible to just a lack of
> uniformity (diaphora is the Greek word for “difference”), so a general
> definition of a datum is:
>
> The Diaphoric Definition of Data (DDD):
>
> A datum is a putative fact regarding some difference or lack of
> uniformity within some context. [In particular data as diaphora de
> dicto, that is, lack of uniformity between two symbols, for example
> the letters A and B in the Latin alphabet.]
No problem with that.
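Indeed, the DDD can even be stated operationally in one line. A
minimal sketch in Python (the symbols are arbitrary, exactly as the
definition requires):

# Diaphoric Definition of Data: a datum is a lack of uniformity
# between two symbols within some context.
def datum(x, y):
    return x != y

print(datum("A", "B"))  # True: the difference between A and B is a datum
print(datum("A", "A"))  # False: uniformity, hence no datum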
>
>
> --
>
> Glenberg and Robertson:
>
> Meaning arises from the syntactic combination of abstract, amodal
> symbols that are arbitrarily related to what they signify. A new form
> of the abstract symbol approach to meaning affords the opportunity to
> examine its adequacy as a psychological theory of meaning. This form
> is represented by two theories of linguistic meaning (that is, the
> meaning of words, sentences, and discourses), both of which take
> advantage of the mathematics of high-dimensional spaces. The
> Hyperspace Analogue to Language (HAL; Burgess & Lund, 1997) posits
> that the meaning of a word is its vector representation in a space
> based on 140,000 word–word co-occurrences. Latent Semantic Analysis
> (LSA; Landauer & Dumais, 1997) posits that the meaning of a word is
> its vector representation in a space with approximately 300 dimensions
> derived from a space with many more dimensions. The vector elements
> found in both theories are just the sort of abstract features that are
> prototypical in the cognitive psychology of meaning.
>
> Landauer and Dumais also apply LSA to sentence and discourse
> understanding. A sentence is represented as the average of the vectors
> of the words it contains, and the coherence between sentences is
> predicted by the cosine of the angle (in multidimensional space)
> between the vectors corresponding to successive sentences. They claim
> that LSA averaged vectors capture “the central meaning” of passages
> (p. 231).
Perhaps. I don't see the relevance. It is quite coherent with comp
that some form of meaning can be approached in this or similar ways.
Assuming comp, what is lacking is the self-reference of the universal
machine involved in the attribution of meaning.
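To make the mechanics quoted above concrete, here is a hedged sketch
of the LSA-style pipeline in Python: a word-by-context count matrix,
reduced by SVD to a low-dimensional space, sentence vectors formed as
the average of word vectors, and inter-sentence coherence as the
cosine between those averages. The corpus, the number of dimensions,
and the absence of entropy weighting are toy simplifications, not the
actual HAL or LSA implementations.

import numpy as np

# Toy corpus: each "document" is one context column, as in LSA.
docs = [
    "the rabbit is a furry fast animal",
    "the dog is a furry cute animal",
    "the theorem is an abstract mathematical object",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# Word-by-document count matrix.
counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[index[w], j] += 1

# SVD-based reduction: LSA keeps roughly 300 dimensions; the toy keeps 2.
U, S, _ = np.linalg.svd(counts, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]  # each row is one word's reduced vector

def sentence_vec(sentence):
    # A sentence vector is the average of its word vectors.
    return np.mean([word_vecs[index[w]] for w in sentence.split()], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Coherence between successive sentences, as Landauer and Dumais
# predict it: the cosine of the angle between the averaged vectors.
s1 = sentence_vec("the rabbit is furry")
s2 = sentence_vec("the dog is cute")
print(f"coherence: {cosine(s1, s2):.3f}")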
>
>
> Consider a thought experiment (adapted from Harnad, 1990, and related
> to the Chinese Room Argument) that suggests that something critical is
> missing from HAL and LSA. Imagine that you just landed at an airport
> in a foreign country and that you do not speak the local language. As
> you disembark, you notice a sign printed in the foreign language
> (whose words are arbitrary abstract symbols to you). Your only
> resource is a dictionary printed in that language; that is, the
> dictionary consists of other arbitrary abstract symbols. You use the
> dictionary to look up the first word in the sign, but you don’t know
> the meaning of any of the words in the definition. So, you look up
> the first word in the definition, but you don’t know the meaning of
> the words in that definition, and so on. Obviously, no matter how many
> words you look up, that is, no matter how many structural relations
> you determine among the arbitrary abstract symbols, you will never
> figure out the meaning of any of the words.
?
How do you think a computer works? (Well, I guess I am asking Harnad
here.)
> This is the symbol
> grounding problem (Harnad, 1990): To know the meaning of an abstract
> symbol such as an LSA vector or an English word, the symbol has to be
> grounded in something other than more abstract symbols.
This has been a recurrent criticism of mechanism and Platonism. But
short of introducing a substantial soul and dualism (and
non-computationalism), I don't see how such an approach can work.
The grounding problem is what the notion of the universal machine
explains best, at least if you agree that arithmetical reality (not
the formalism!) is independent of you (this is needed even to have a
theory of information).
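Harnad's dictionary regress can itself be run as a program, which is
perhaps the clearest way to see the thought experiment. A toy sketch
in Python (the mini-dictionary is invented for illustration):
following definitions from symbol to symbol only ever reaches more
symbols, and the closure of the lookup never leaves the symbol
system. Whether grounding must then come from outside computation, as
Harnad holds, or from the self-reference of the universal machine, is
exactly the point at issue.

# A purely symbolic dictionary: every word is defined by other words.
dictionary = {
    "gavagai": ["lepus", "animalis"],
    "lepus": ["gavagai", "cursor"],
    "animalis": ["cursor", "lepus"],
    "cursor": ["animalis", "gavagai"],
}

def lookup_closure(word):
    # Follow definitions exhaustively; collect every symbol reached.
    seen, frontier = set(), [word]
    while frontier:
        w = frontier.pop()
        if w not in seen:
            seen.add(w)
            frontier.extend(dictionary.get(w, []))
    return seen

reached = lookup_closure("gavagai")
print(reached)                     # only ever other symbols
print(reached <= set(dictionary))  # True: we never left the system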
>
>
> Landauer and Dumais summarize the symbol grounding problem by noting,
> “But still, to be more than an abstract system like mathematics words
> must touch reality at least occasionally” (p. 227).
Touch reality? Or touch physical reality? It is ambiguous in our
context.
> Their proposed
> solution is to encode, along with the word stream, the streams from
> other sensory modalities.
With comp, those other "sensory modalities" are already coded before
being processed by the brain, or by whatever universal machine is
under consideration.
> “Because, purely at the word–word level,
> rabbit has been indirectly preestablished to be something like dog,
> animal, object, furry, cute, fast, ears, etc., it is much less
> mysterious that a few contiguous pairings of the word with scenes
> including the thing itself can teach the proper correspondences.
> Indeed, if one judiciously added numerous pictures of scenes with and
> without rabbits to the context columns in the encyclopedia corpus
> matrix, and filled in a handful of appropriate cells in the rabbit and
> hare word rows, LSA could easily learn that the words rabbit and hare
> go with pictures containing rabbits and not to ones without, and so
> forth” (p. 227). Burgess and Lund (1997) offer a similar solution, “We
> do think a HAL-like model that was sensitive to the same
> co-occurrences in the natural environment as a human language learner
> (not just the language stream) would be able to capitalize on this
> additional information and construct more meaningful representations”
>
> (p. 29).
This could be of interest for criticizing some implementations of
artificial intelligence, but it is not relevant to our fundamental
description, because both the word "rabbit" and the picture of the
rabbit have to be encoded in the universal machine. Lewis Carroll
himself was aware of the fun you can have with a dictionary-based
theory of meaning.
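Landauer and Dumais' proposed fix is also easy to sketch. A toy
continuation of the earlier example (the scene columns and all the
numbers are invented): append perceptual "scene" columns to the same
word-by-context matrix and fill in a handful of cells, and "rabbit"
is pulled toward "hare" by their shared percept.

import numpy as np

# Rows: rabbit, hare, dog, theorem. The first three columns are text
# contexts; the last two are a scene with a rabbit and a scene without.
rows = {
    "rabbit":  np.array([2.0, 0.0, 1.0, 1.0, 0.0]),
    "hare":    np.array([1.0, 1.0, 0.0, 1.0, 0.0]),
    "dog":     np.array([0.0, 2.0, 1.0, 0.0, 1.0]),
    "theorem": np.array([0.0, 0.0, 2.0, 0.0, 0.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The shared rabbit-scene column raises the rabbit/hare similarity
# relative to rabbit/theorem: 0.707 versus 0.408 with these numbers.
print(f"rabbit~hare:    {cosine(rows['rabbit'], rows['hare']):.3f}")
print(f"rabbit~theorem: {cosine(rows['rabbit'], rows['theorem']):.3f}")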
Kelly, the question is: do we disagree? I criticize your statement
"consciousness = information" for vagueness, but only BECAUSE you
have opposed it to the computationalist hypothesis (and this despite
the fact that you seem to appreciate its Platonist idealist
consequences). It is a bit weird. Now, I am not even sure you are
criticizing the computationalist hypothesis.
Neither you nor I can accept Chalmers' dualism, which relies on both
comp and primitive matter; I show these to be epistemologically
incompatible. But this goes in your direction. Where is the problem?
Bruno
http://iridia.ulb.ac.be/~marchal/