More lengthy mind arguments

From: Eric Hawthorne <egh.domain.name.hidden>
Date: Sun, 29 Dec 2002 12:36:21 -0800

Lengthy response follows: Ignore if not interested.

John M wrote some criticisms which I am making a genuine effort to
understand, independent of the attacks on style (which I may have
started; sorry).



A few factors:

1. BRAINS AND BRAIN SOFTWARE ARE HIGHLY SPECIALIZED AND OPTIMIZED

One thing I didn't state clearly: while I'm proposing a model of mind
as software and brain as hardware, this only tells us what a mind is
capable of computing in principle, namely the computable functions.
Every half-decent computer, and every high-level programming language,
is Turing equivalent, but each design differs in what it can compute
conveniently or quickly. Turing equivalence says nothing about
performance, and performance on certain types of computations is key
to how an animal's brain works. So a brain is a computer optimized for
certain types of computations, and the software of a brain would be
quite specialized, in its detailed form, to do those computations
efficiently on that type of hardware.
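
To make the equivalence-versus-performance point concrete, here is a
toy sketch in Python (my own illustration, nothing to do with real
brains): both functions compute exactly the same mathematical function,
so they are "equivalent" in the Turing sense, yet one is hopeless in
practice and the other is fast. Hardware and representation choices
matter in just this way.

    # Two programs that compute the same function (the nth Fibonacci
    # number): equivalent in WHAT they can compute, wildly different
    # in how efficiently they compute it.

    def fib_naive(n):
        # Exponential time: re-derives the same subproblems over and over.
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    def fib_fast(n):
        # Linear time: keeps only the two previous values around.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    assert fib_naive(20) == fib_fast(20)   # identical answers...
    # ...but fib_naive(40) takes ages, while fib_fast(40) is instant.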

Also, if you are uncomfortable thinking of software as having to be
loaded onto a brain to get it to function, then think of the software
as being already in the brain, by analogy with software implemented as
ROM firmware or processor microcode (or pick whatever analogy is more
comfortable). Just because it's already "built-in" doesn't mean it
isn't essentially software. I would define the essence of software as
"information-processing procedures".

2. THE HUMAN MYSTIQUE - LET'S GET OVER OURSELVES, PEOPLE

One of the biggest bugbears that would-be AI researchers face is the
"human mystique" attitude: the attitude that we are so fricking amazing
that no one could possibly understand us using our puny science. Well,
I think AI researchers would agree that humans (and other animals) are
pretty amazing indeed, but that doesn't stop an attempt to make inroads
into understanding how human minds work (or how a generalization, or an
interestingly similar variation, of our minds works).

The AI approach is to try to tease out general insights about
cognition, knowledge, and intelligence by putting theories of them to
the test of implementation on a computer.

That is, if you could create an alternative implementation of a process
that seemed to be perceiving, thinking, and acting (say, conversing
about many domains, including new ones) with similar effect to a human
perceiving, thinking, and acting, then you would have learned at least
something about perception and thinking processes in general.

Maybe you have just learned more and more about what is NOT ESSENTIAL to
those processes, but if that's the case, at least you will have eliminated
a lot of the cloudiness of considering it all to just be unfathomable
levels of complexity.


3. THE QUALIA OF CONSCIOUSNESS ARE NOT EXPLAINED BY AI, BUT A LOT IS

I didn't claim that AI yet gives us any adequate insight into the
"qualia of consciousness." But at least AI research can eliminate as
mysteries a number of behaviours closely tied to those qualia. To wit,
it proposes that reflective cognition (a well-understood process) on
the relation of a self-symbol to environment symbols may have something
to do with some of the behaviours we associate with consciousness. And
it proposes that the shifts of "primary attention" that seem to be a
notable aspect of "the qualia of consciousness" can be explained as a
few of the many cognitive agent processes supervising cognitive effort
at the highest level coming to the fore, in response to the different
primary-drive-related priorities in play at various times, and to how
well those agents have succeeded at coming up with a relevant answer to
something.
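
To make that less abstract, here is a minimal Python sketch (my own toy
construction, not any established AI architecture): several "cognitive
agents" compete for primary attention, and an agent's salience depends
on the priority of the drive it serves plus a boost when it currently
has a relevant answer ready. Attention simply shifts to whichever agent
is most salient at the moment.

    import random

    class Agent:
        def __init__(self, name, drive_priority):
            self.name = name
            self.drive_priority = drive_priority
            self.has_answer = False

        def work(self):
            # Each cycle the agent may or may not come up with something
            # relevant to its drive.
            self.has_answer = random.random() < 0.3

        def salience(self):
            # Drive-related priority, boosted when an answer is ready.
            return self.drive_priority + (2.0 if self.has_answer else 0.0)

    agents = [Agent("hunger", 1.5), Agent("curiosity", 1.0),
              Agent("danger-watch", 2.0)]

    for step in range(5):
        for a in agents:
            a.work()
        focus = max(agents, key=lambda a: a.salience())
        print(f"step {step}: primary attention on {focus.name}")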

I would say that if we are to be able to theorize cogently about how the
qualia of consciousness come about, at the least we have to be able to
eliminate the above factors from consideration, and claim that "there is
still something else, separate from all that, and it is THIS." <-- still a
mystery (to me anyway).

4. GIVE ME HINTS OF A BETTER THEORY OF MIND THAN INFORMATION PROCESSING

If you don't like a theory of brain as hardware, mind as software, it
is your responsibility, in the scientific tradition, to come up with a
better theory of mind. I am truly interested (no sarcasm).


5. DON'T JUST SAY "IT'S VERY COMPLEX". BE MORE SPECIFIC

Appealing to "complexity" or "the unfathomable complexity of the whole"
(paraphrased) seems to me to be a mystic's cop-out, similar to the
religious arguments of yore. Maybe that's not what you're doing and I'm
just not reading carefully enough.

6. REDUCTIONISM ISN'T EVERYTHING, BUT IT IS DAMNED USEFUL

It seems to me that it is the whole reductionist approach that you are
attacking. I would counter that while reductionism is certainly never
the whole answer, it does at least produce some simpler questions
(about subsets of reality) to which it is manageable to try to find
real answers.
If you don't like reductionism at all, please stop using all those nasty
products of it, like any technology more advanced than a rock to throw. I
hope you're not just proposing that we give up the entire scientific
project and sit and say "ommmm". Don't get me wrong, I respect people who
do that and would like to be able to myself, but I find that the
scientific method (which requires reductionism as one of its techniques)
is also useful.

7. EMERGENCE OF COMPLEX SYSTEMS IS WAY COOL. AI ALREADY USES THAT IDEA

There is nothing incompatible between AI (i.e. intelligence-as-
information-processing) research and the realization that there is
profundity in the emergence of complex systems with emergent structure and
behaviour. The two ideas go hand in hand.
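
A standard concrete example of that kind of emergence (a classic one,
not original to this post) is an elementary cellular automaton such as
Wolfram's rule 30, sketched below in Python: each cell follows a
trivial local rule, yet complex global structure emerges that is
nowhere stated in the rule itself.

    RULE = 30  # Wolfram's rule 30

    def step(cells):
        # New state of each cell is the bit of RULE indexed by the
        # 3-bit neighbourhood (left, centre, right), with wrap-around.
        n = len(cells)
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 +
                          cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 31
    row[15] = 1  # a single "on" cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = step(row)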

8. DON'T USE QUANTUM MECHANICS AS A CRUTCH

Don't use QM as a crutch to obtain the needed air of mystery surrounding
human cognition. QM may very well be involved somehow, but a lot
can be explained without resorting to it.
For example, a convincing illusion of free will could be generated
simply by the operation of an extremely complex, layered, classical
information-processing machine whose behaviour is determined, in a
fixed but maximally complex way, by its information inputs and its
algorithms, and whose algorithms are themselves altered in complex ways
over time by those inputs.
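
As a hedged toy sketch of "determined but input-altered" (again my own
illustrative construction in Python, nothing more): a classical machine
whose output is a fixed function of its input and its current rules,
where the rules themselves drift deterministically as a function of the
input history. Nothing non-classical is needed for its behaviour to
look unpredictable to an observer who doesn't know that history.

    class AdaptiveMachine:
        def __init__(self):
            self.weights = [3, 1, 4, 1, 5]   # initial "algorithm"

        def act(self, stimulus):
            # Output is a fixed function of the input and current rules.
            score = sum(w * ((stimulus >> i) & 1)
                        for i, w in enumerate(self.weights))
            choice = "approach" if score % 2 == 0 else "avoid"
            # The input also deterministically rewrites the rules.
            self.weights = [(w + stimulus + i) % 7
                            for i, w in enumerate(self.weights)]
            return choice

    m = AdaptiveMachine()
    # Same stimulus every time, yet the responses differ, and every one
    # of them is fully determined by the machine's input history.
    print([m.act(5) for _ in range(4)])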