Doug Porpora wrote:
> I claimed, though, that the reductionist thesis is in trouble, and you
> asked why. There is a huge literature on emergentism and reductionism,
> but let me just stick with the so-called mind-body issue that Hal also
> alluded to.
>
> There have been two main reductionist strategies to deal with mental
> states, and they both -- to say the least -- have stalled. The two
> strategies are:
>
> 1. Eliminative materialism
> 2. Identity theory
>
> Eliminative materialism argued that human behavior could be explained
> scientifically without reference to the mental states of folk
> psychology. S-R behaviorism -- as in Skinner -- was the great effort
> here, and it is now largely judged a failure. We seem to need mental
> states to explain human -- and even lower animal -- behavior.
>
> So then there is the identity theory, the attempt to show that each
> mental state is identical to some (or finitely many) physical states.
> Well, this has not panned out either. At worst, we may be in for some
> many-to-many correspondence between mental states and physical states,
> which spells doom for identity theory and reductionism.
>
> I probably have not said enough to convince anyone here. This is a big
> issue, and much more could be said. I am just trying to summarize the
> current status of the mind-body debate. At the very least, the
> reductionist argument has been stalemated -- and there are good
> reasons, having to do with the role of language, for thinking it is
> false.
The CURRENT state of the art? Hold on a second. I haven't seen any
reference in what you describe to a computational hypothesis about the
mind-body problem, i.e. that a mind is software+firmware running on a
brain+body which is, amongst other things, computing hardware. The
explanatory power of this hypothesis is such that it is hard to refute
once you look into it closely enough. Skinnerian behaviorism, by
contrast, is kind of like saying a computer computes about as well as a
bulldozer does: it describes the system purely in terms of external
pushes and responses and ignores the information processing going on
inside.
Mental-state/physical-state identity is some out-of-touch philosopher's
way of saying "I don't understand how (high-level) information states
in software and computer data can take on notions of informational
locality and difference that are not 1-to-1 isomorphic, in any simple
sense, with, say, the physical layout and voltage states (1/0 states)
of my RAM chips." In other words, the philosopher, who never took a
programming course, is saying "I don't understand how object
definitions, class definitions, procedure and function definitions,
etc. in high-level programming languages could ever be expressed, nor
how software could operate on those defined objects with those defined
functions." Nor does the philosopher understand how the programmer
might tie those informational objects and their states, represented
both in a high-level software language and inside the computer in some
kind of (who cares what kind of) memory and processor architecture,
back to the real world via such things as user interfaces, I/O devices,
servos, sensors, etc. Advice to the philosopher: TAKE THE PROGRAMMING
COURSE. IN FACT, TAKE A GOOD COMPSCI DEGREE, THEN GET BACK TO US.
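
To make that concrete, here is a minimal sketch in Python (the example
and the values in it are mine, purely illustrative, not anything from
the thread): one and the same high-level information state is realized
by several distinct physical objects at distinct memory addresses, so
the mapping from logical state to bytes in RAM is many-to-one, not a
simple 1-to-1 isomorphism.

    # One high-level information state, built three different ways...
    a = {1, 2, 3}
    b = {3, 2, 1}
    c = set([2, 1, 3])

    # ...logically identical at the software level...
    assert a == b == c

    # ...yet realized by three distinct physical objects at three
    # distinct memory addresses.
    print(id(a), id(b), id(c))
    assert len({id(a), id(b), id(c)}) == 3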
>
> 1. How do you even individuate thoughts so as to count them or
> correlate them with physical states? Is the belief that Mark Twain
> wrote Huckleberry Finn the same as or different from the belief that
> Samuel Clemens wrote Huckleberry Finn? Would that be one physical
> state you would seek to correlate with it or two? There are lots of
> well-discussed conceptual problems here.
Thoughts are (sets of) information-states. The nature of their
physical-state realization (while there must be one) is only incidental
to their properties as information states. You count them as you would
count information-states: by enumerating the distinct ones, and by
stating things like how much mutual information one information-state
(or set of them) conveys about another, and how much
information-theoretic entropy there is in one information-state (or set
of information-states) versus another, etc. One can also talk about
properties of them, such as whether an information-state (or part of
it, or a set of them) can be said to be isomorphic to, corresponding
to, or abstractly representative of some physical state or states.
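
Here is a hedged sketch, in Python, of what that kind of counting could
look like (the sample data is invented solely for illustration): treat
two state variables as discrete random variables over observed samples,
then compute Shannon entropy and mutual information.

    import math
    from collections import Counter

    # Invented toy samples: pairs of (mental-state label, behavior label).
    samples = [("hungry", "fridge"), ("hungry", "fridge"),
               ("tired", "bed"), ("tired", "bed"),
               ("hungry", "bed"), ("tired", "fridge")]

    def entropy(values):
        """Shannon entropy H(X) in bits, estimated from observed values."""
        counts = Counter(values)
        n = len(values)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def mutual_information(pairs):
        """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from joint samples."""
        xs = [x for x, _ in pairs]
        ys = [y for _, y in pairs]
        return entropy(xs) + entropy(ys) - entropy(pairs)

    print(entropy([x for x, _ in samples]))   # entropy of the state variable
    print(mutual_information(samples))        # info one variable carries about the other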
Most precisely, "having a thought occurring to me" is equivalent to
focussing the program-counters of my attention-centre
brain hardware on certain sequences of information-states that are
represented in my brain exactly analogously to how information
states are represented in RAM memory or disk in a computer system. And
the thing that focuses the program-counters
of my attention-centre brain-hardware on particular sequences of
information-states is my "pay-attention" software/firmware
subroutines and my "explore associated information" software/firmware
subroutines running in those attention-centre hardware regions
of my brain, and my "form hypotheses" software/firmware and my
"form-and-test action-plans" software/firmware
and my "commit important info to long-term memory with emotional
emphasis tags" software/firmware. etc etc.
>
> 2. The mind-brain relation has sometimes been compared to the relation
> between software and hardware in computers. A certain software
> function might be endlessly realizable by different physical
> (hardware) configurations in different computers. Similarly, I
> suppose, the same hardware configuration might realize different
> software functions in different computers. The analogy might break
> down, but this is the idea.
>
Yes. That's the hypothesis I just elaborated above, and it is far more
credible these days than simple, naive, early-twentieth-century lab
psychology or wanking, sloppy, blue-sky, pre-computational-theory
philosophizing.
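
The other direction of the analogy is just as easy to demonstrate. A
small sketch (mine, purely illustrative): one and the same low-level
hardware state, four raw bytes, realizes different high-level contents
depending on which software reads it.

    import struct

    raw = b"\x42\x48\x41\x4c"            # one fixed low-level state

    print(struct.unpack("<I", raw)[0])   # read as a little-endian 32-bit integer
    print(struct.unpack("<f", raw)[0])   # read as a 32-bit float
    print(raw.decode("ascii"))           # read as text: 'BHAL'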
> 3. The denial of reductionism does not necessarily entail belief in
> what is called "a ghost in the machine," i.e., a soul or other
> mystical something. The denial of reductionism may instead imply that
> not only is there no ghost, there also is no machine (i.e., we don't
> behave in machine-like ways). (This is a point made by Searle.)
Searle is a bone-head. So is Dennett, come to think of it. I'd go along
with them just far enough to say that there's little computation going
on in THEIR heads. :-)
>
> John, I am not sure I understand everything you said. One thing I
> would say along lines I think you suggest: Determinism suggests a
> closed system. If you don't have a closed system, you don't get
> deterministic predictiveness.
First of all, determinism DOES NOT imply predictiveness. It is a basic
result of theoretical computer science that there are deterministic
computations whose results CANNOT BE PREDICTED FASTER (BY ANY LESS
COMPLEX PROCESS) THAN BY RUNNING EXACTLY THE PROGRAM THAT IS COMPUTING
THE RESULT ANYWAY!!!
So even if a person's mind, or the universe, is deterministic (and
being computed), it does NOT follow that its results can be predicted
in any way other than by just "running that program" and seeing what it
does, whenever it gets around to doing it.
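
A concrete instance, in the spirit of what is usually called
computational irreducibility (the example is mine, not from the
thread): Rule 30 is a fully deterministic cellular automaton, yet no
known shortcut yields its state at step N faster than running all N
steps.

    def rule30_step(cells):
        """One deterministic update of the elementary Rule 30 automaton:
        new cell = left XOR (center OR right), on a ring."""
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                 # a single seed cell
    for _ in range(16):           # to learn the state at step 16...
        cells = rule30_step(cells)
    print(cells)                  # ...we had to run all 16 steps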