On 3 Sep, 01:26, David Nyman <david.ny....domain.name.hidden> wrote:
> 2009/9/2 Flammarion <peterdjo....domain.name.hidden>:
>
>
>
> >> and is thus not any particular physical
> >> object. A specific physical implementation is a token of that
> >> computational type, and is indeed a physical object, albeit one whose
> >> physical details can be of any variety so long as they continue to
> >> instantiate the relevant computational invariance. Hence it is hard
> >> to see how a specific (invariant) example of an experiential state
> >> could be justified as being token-identical with all the different
> >> physical implementations of a computation.
>
> > I was right.
>
> > A mental type can be associated with a computational
> > type.
>
> > Any token of a mental type can be associated with a token
> > of the corresponding computational type.
>
> But what difference is that supposed to make? The type association is
> implicit in what I was saying. All you've said above is that it makes
> no difference whether one talks in terms of the mental type or the
> associated computational type because their equivalence is a posit of
> CTM. And whether it is plausible that the physical tokens so picked
> out possess the causal efficacy presupposed by CTM is precisely what I
> was questioning.
Question it, then. What's the problem?
> >> But even on this basis it still doesn't seem possible to establish any
> >> consistent identity between the physical variety of the tokens thus
> >> distinguished and a putatively unique experiential state.
>
> > The variety of the physical implementations is reduced by grouping
> > them
> > as equivalent computational types. Computation is abstract.
> > Abstraction is
> > ignoring irrelevant details. Ignoring irrelevant details establishes a
> > many-to-one relationship : many possible implementations of one mental
> > state.
>
> Again, that's not an argument - you're just reciting the *assumptions*
> of CTM, not arguing for their plausibility.
You're not arguing against its plausibility.
> The justification of the
> supposed irrelevance of particular physical details is that they are
> required to be ignored for the supposed efficacy of the type-token
> relation to be plausible. That doesn't make it so.
Why not? We already know they can be ignored to establish computational
equivalence.
> >> On the
> >> contrary, any unbiased a priori prediction would be of experiential
> >> variance on the basis of physical variance.
>
> > Yes. The substance of the CTM claim is that physical
> > differences do not make a mental difference unless they
> > make a computational difference. That is to say, switching from
> > one token of a type of computation to another cannot make
> > a difference in mentation. That is not to be expected on an
> > "unbiased" basis, just because it is a substantive claim.
>
> Yes it's precisely the claim whose plausibility I've been questioning.
You haven't said anything specific about what is wrong with it at all.
> > The variety of the physical implementations is reduced by grouping
> > them
> > as equivalent computational types. Computation is abstract.
> > Abstraction is
> > ignoring irrelevant details. Ignoring irrelevant details establishes a
> > many-to-one relationship : many possible implementations of one mental
> > state.
>
> Yes thanks, this is indeed the hypothesis. But simply recapitulating
> the assumptions isn't exactly an uncommitted assessment of their
> plausibility is it?
Saying it is not necessarily correct is not a critique.
>That can only immunise it from criticism. There
> is no whiff in CTM of why it should be considered plausible on
> physical grounds alone.
> Hence counter arguments can legitimately
> question the consistency of its claims as a physical theory in the
> absence of its type-token presuppositions.
If you mean you can criticise the CTM as offering nothing specific
to resolve the HP, you are correct. But I *thought* we were
discussing the MG/Olympia style of argument, which purportedly
still applies even if you restrict yourself to cognition and forget
about experience/qualia.
Are we?
> Look, let me turn this round. You've said before that you're not a
> diehard partisan of CTM. What in your view would be persuasive
> grounds for doubting it?
I'll explain below. But the claim I am interested in is that CTM
somehow disproves materialism (Maudlin, BTW, takes it the other way
around -- materialism disproves CTM). I have heard not a word in support of
*that* claim.
Must an Artificial Intelligence be a Computer?
An AI is not necessarily a computer. Not everything is a computer or
computer-emulable. It just needs to be artificial and intelligent! The
extra ingredient a conscious system has need not be anything other
than the physics (chemistry, biology) of its hardware -- there is no
forced choice between ghosts and machines.
A physical system can never be exactly emulated with different
hardware -- the difference has to show up somewhere. It can be hidden
by only dealing with a subset of a system's abilities relevant to the
job in hand; a brass key can open a door as well as an iron key, but
brass cannot be substituted for iron where magnetism is relevant.
Physical differences can also be evaded by taking an abstract view of
their functioning; two digital circuits might be considered equivalent
at the "ones and zeros" level of description even though they
physically work at different voltages.
Thus computer-emulability is not a property of physical systems as
such. Even if all physical laws are computable, that does not mean
that any physical systems can be fully simulated. The reason is that
the level of simulation matters. A simulated plane does not actually
fly; a simulated game of chess really is chess. There seems to be a
distinction between things like chess, which can survive being
simulated at a higher level of abstraction, and planes, which can't.
Moreover, it seems that chess-like things are in a minority, and that
they can be turned into an abstract programme and adequately simulated
because they are already abstract.
Consciousness might depend on specific properties of hardware, of
matter. This does not imply parochialism, the attitude that denies
consciousness to poor Mr Data just because he is made out of silicon,
not protoplasm. We know our own brains are conscious; most of us
intuit that rocks and dumb Chinese Rooms are not; all other cases are
debatable.
Of course all current research in AI is based on computation in one
way or another. If the Searlian idea that consciousness is rooted in
physics, strongly emergent, and non-computable is correct, then
current AI can only achieve consciousness accidentally. A Searlian
research project would understand how brains generate consciousness in
the first place -- the aptly-named Hard Problem -- before moving on to
possible artificial reproductions, which would have to have the right
kind of physics and internal causal activity -- although not
necessarily the same kind as humans.
"When I say that the brain is a biological organ and consciousness
a biological process, I do not, of course, say or imply that it would
be impossible to produce an artificial brain out of nonbiological
materials that could also cause and sustain consciousness...There is
no reason, in principle, why we could not similarly make an artificial
brain that causes consciousness. The point that needs to be emphasized
is that any such artificial brain would have to duplicate the actual
causes of human and animal brains to produce inner, qualitative,
subjective states of consciousness. Just producing similar output
behavior would not by itself be enough."
[Searle, MLS, p. 53]
"Is the Brain A Machine?"
John Searle thinks so.
The brain is indeed a machine, an organic machine; and its
processes, such as neuron firings, are organic machine processes.
(The Mystery of Consciousness, page 17.) Is he right? To give a
typically philosophical answer, that depends on what you mean by
'machine'. If 'machine' means an artificial construct, then the answer
is obviously 'no'. However, Searle also thinks that the body is a
machine, by which he seems to mean that it can be understood in
scientific terms: we can explain biology in terms of chemistry
and chemistry in terms of physics. Is the brain a machine by this
definition? It is being granted that the job of the brain is to
implement a conscious mind, just as the job of the stomach is to
digest. The problem then is that although our 'mechanical'
understanding of the stomach does allow us to understand digestion, we
do not, according to Searle himself, understand how the brain produces
consciousness. He does think that the problem of consciousness is
scientifically explicable, so yet another definition of 'machine' is
needed, namely 'scientifically explained or scientifically explicable'
-- with the brain being explicable rather than explained. The problem
with this stretch-to-fit approach to the meaning of the word 'machine'
is that every time the definition of 'machine' is broadened, the claim is
weakened, made less impactful.
PDJ 03/02/03
The Chinese Room
The Chinese Room and Consciousness
According to the proponents of Artificial Intelligence, a system is
intelligent if it can convince a human interlocutor that it is. This
is the famous Turing Test. It focuses on external behaviour and is
mute about how that behaviour is produced. A rival idea is that of the
Chinese Room, due to John Searle. Searle places himself in the room,
manually executing a computer algorithm that implements intelligent-
seeming behaviour, in this case getting questions written in Chinese
and mechanically producing answers, without himself understanding
Chinese. He thereby focuses attention on how the supposedly
intelligent behaviour is produced. Although Searle's original idea was
aimed at semantics, my variation is going to focus on consciousness.
Likewise, although Searle's original specification has him
implementing complex rules, I am going to take it that the Chinese
Room is implemented as a conceptually simple system -- for instance, a
Giant Look-Up Table -- in line with the theorem of Computer Science
which has it that any computer can be emulated by a Turing Machine.
If you think a Chinese Room implemented with a simplistic, "dumb"
algorithm can still be conscious, you are probably a behaviourist; you
only care that external stimuli get translated into the
appropriate responses, not how this happens, let alone what it feels
like to the system in question.
If you think this dumb Chinese Room is not conscious, but a smart one
would be, you need to explain why. There are two explanatory routes:
one that says consciousness is inessential, and another that says that
hardware counts as well as software.
Any smart AI can be implemented as a dumb TM, so the more complex
inner workings which supposedly implement consciousness could be
added or subtracted without making any detectable difference. Given
the assumption that the computational differences are what matter,
this would add up to epiphenomenalism, the view that consciousness
exists but is a bystander that doesn't cause anything, since there is
not any computational difference between the simple implementation and
the complex one.
On the other hand, if it is assumed that epiphenomenalism is false,
then it follows that implementational differences must matter, since
the difference between the complex and the dumb systems is not in
their computational properties. That in turn means computationalism is
false. The Chinese Room argument then succeeds, but only when
interpreted fairly strictly as an argument about the ability of
algorithms to implement consciousness. Any actual computational
system, or artificial intelligence construct, will be more than just
an algorithm; it will be the concrete implementation of an algorithm.
Since it is the implementation that makes the difference between a
fully successful AI and a "zombie" (functional enough to pass a Turing
test, but lacking real consciousness), and since every AI would have
some sort of implementation, the possibility of an actual system's
being conscious is far from ruled out. The CR argument only shows that
it is not conscious purely by virtue of implementing an algorithm. It
is a successful argument up to that point: the point that, while AI may
be possible, it will not be purely due to running the right algorithm.
While the success of an AI programme is not ruled out, it is not
guaranteed either. It is not clear which implementations are the right
ones. A system running the right algorithm on the wrong hardware may
well be able to pass a Turing Test, but if the hardware is relevant to
consciousness as well, a system with the wrong hardware will be an
artificial zombie. It will be cognitively competent, but lacking in
genuine phenomenal consciousness. (This is in line with the way robots
and the like are often portrayed in science fiction. A further wrinkle
is that an exact computational emulation of a real person -- a real
person who believes in qualia anyway -- would assert its possession of
qualia while quite possibly not possessing any qualia to boast about).
Thus the success of the CR argument against a software-only approach
to AI has the implication that the TT is not adequate to detect the
success of a strong AI (artificial consciousness) project. (Of course,
all this rests on behaviourism being false; if behaviourism is true
there is no problem with a TT, since it is a test of behaviour). We
need to peek inside the box; in order to know whether an AI device has
full phenomenal consciousness, we would need a successful theory
linking consciousness to physics. Such a theory would be nothing less
than an answer to the Hard Problem. So a further implication of the
partial success of Searlian arguments is that we cannot bypass the
problem of explaining consciousness by some research programme of
building AIs. The HP is logically prior. Except for behaviourists.
Peter D Jones 8/6/05
Syntax and Semantics. The Circularity Argument as an Alternative
Chinese Room
The CR concludes that syntax, an abstract set of rules, is insufficient
for semantics. This conclusion is also needed as a premise for
Searle's syllogistic argument:
1. Syntax is not sufficient for semantics.
2. Computer programs are entirely defined by their formal, or
syntactical, structure.
3. Minds have mental contents; specifically, they have semantic
contents.
4. Therefore, no computer program by itself is sufficient to give a
system a mind. Programs, in short, are not minds, and they are not by
themselves sufficient for having minds.
Premise 1 is the most contentious of the four. The Chinese Room
Argument, which Searle puts forward to support it, is itself highly
contentious. We will put forward a different argument to support it.
An objection to the CR argument goes: "But there must be some kind of
information processing structure that implements meaning in our heads.
Surely that could be turned into rules for the operator of the Chinese
Room".
A response, the Circularity Argument, goes: a system of syntactic
processes can only transform one symbol-string into another; it does not
have the power to relate the symbols to anything outside the system.
It is a circular, closed system. However, to be meaningful a symbol
must stand for something other than itself (the symbol must be
grounded). Therefore such a system must fail to have any real semantics.
It is plausible that any given term can be given an abstract
definition that doesn't depend on direct experience. A dictionary is
a collection of such definitions. It is much less plausible that every
term can be defined that way. Such a system would be circular in the
same way as:
"present: gift"
"gift: present"
...but on a larger scale.
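To make the circularity concrete, here is a minimal sketch in Python (a toy dictionary and resolver invented purely for illustration, not anyone's published construction): chasing purely symbol-to-symbol definitions either loops or hits an undefined symbol; it never leads outside the symbol system.

# Toy illustration of the Circularity Argument: following definitions
# inside a closed symbol system either loops or runs out of entries;
# it never reaches anything outside the system.
toy_dictionary = {
    "present": ["gift"],
    "gift": ["present"],
}

def chase_definition(word, dictionary, seen=None):
    """Follow definitions until we either loop or run out of entries."""
    if seen is None:
        seen = []
    if word in seen:
        return "circular: " + " -> ".join(seen + [word])
    if word not in dictionary:
        return "undefined symbol: " + word
    seen.append(word)
    # Naively follow the first word of the definition.
    return chase_definition(dictionary[word][0], dictionary, seen)

print(chase_definition("present", toy_dictionary))
# circular: present -> gift -> present

A real dictionary is vastly larger, but on this view it has the same shape: every path through it stays inside the word-to-word graph.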
A dictionary relates words to each other in a static way. It does not
directly have the power to relate words to anything outside itself. We
can understand dictionary definitions because we have already grasped
the meanings of some words. A better analogy for the Symbol Grounding
problem is that of trying to learn an entirely unknown language from a
dictionary. (I have switched from talking about syntactical
manipulation processes to static dictionaries; Searle's arguments that
syntax cannot lead to semantics have been criticised for dealing with
"syntax" considered as abstract rules, whereas the computational
processes they are aimed at are concrete, physical and dynamic. The
Circularity Argument does not have that problem. Both abstract syntax
and symbol-manipulation processes can be considered as circular.)
If the Circularity Argument is correct, the practice of giving
abstract definitions, like "equine quadruped", only works because
somewhere in the chain of definitions are words that have been defined
directly; direct reference has been merely deferred, not avoided
altogether.
The objection continues: "But the information-processing structure in
our heads has a concrete connection to the real world: so do AIs
(although those of the Chinese Room are minimal)." Call this the
Portability Assumption.
But they are not the same concrete connections. The portability of
abstract rules is guaranteed by the fact that they are abstract. But
concrete causal connections are not abstract. They are unlikely to be
portable -- how can you explain colour to an alien whose senses do not
include anything like vision?
Copying the syntactic rules from one hardware platform to another will
not copy the semantics. Therefore, semantics is more than syntax.
If the Portability Assumption is correct, an AI (particularly a
robotic one) could be expected to have some semantics, but there is no
reason it should have human semantics. As Wittgenstein said: "if a
lion could talk, we could not understand it".
Peter D Jones 13/11/05
The Chinese Room and Computability
I casually remarked that mental behaviour 'may not be computable'.
This will shock some AI proponents, for whom the Church-Turing thesis
proves that everything is computable. More precisely, everything that
is mathematically computable is computable by a relatively dumb
computer, a Turing Machine. Secondly, that something can be simulated
doesn't mean the simulation has all the relevant properties of the
original: flight simulators don't take off. Thirdly, the mathematical
sense of 'computable' doesn't fit well with the idea of
computer-simulating fundamental physics. A real number is said to be
mathematically computable if the algorithm that churns it out keeps on
churning out extra digits of accuracy indefinitely. Since such an
algorithm will never finish churning out a single real-number physical
value, it is difficult to see how it could simulate an entire universe.
(Yes, I am assuming the universe is fundamentally made of real numbers.
If it is instead, for instance, finite, fundamental physics might be
more readily computable, but the computability of physics still depends
very much on physics and not just on computer science.)
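As a minimal sketch of 'computable' in this mathematical sense (standard Python only; nothing here is specific to any physical theory), here is an algorithm that keeps churning out further decimal digits of sqrt(2) and, by its nature, never arrives at a last digit:

from math import isqrt

def sqrt2_digits():
    """Yield the decimal digits of sqrt(2) -- 1, 4, 1, 4, 2, ... -- one at a time, forever."""
    num = 2
    scale = 1
    emitted = 0
    while True:
        scale *= 100                 # two more decimal places of working precision
        root = isqrt(num * scale)    # floor(sqrt(2) * 10^k); earlier digits never change
        digits = str(root)
        for d in digits[emitted:]:   # yield only the digits not produced yet
            yield int(d)
        emitted = len(digits)

gen = sqrt2_digits()
print([next(gen) for _ in range(10)])   # [1, 4, 1, 4, 2, 1, 3, 5, 6, 2]

The generator can be left running for as long as you like, which is exactly why such an algorithm never finishes delivering even a single real-valued physical quantity.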
The Systems Response and Emergence
By far the most common response to the CR argument is that, while the
room's operator, Searle himself, does not understand Chinese, the room
as a whole does. According to one form of the objection, individual
neurons do not understand Chinese either; but this is not a fair
comparison. If you were to take a very simple brain and gradually add
more neurons to it, the increase in information-processing capacity
would keep in line with an increase in causal activity. However, the
equivalent procedure of gradually beefing up a CR would basically
consist of adding more and more rules to the rule book while the
single "neuron" -- the single causally active constituent, the operator
of the room -- did all the work. It is hard to attribute understanding to
a passive rulebook, and hard to attribute it to an operator performing
simple rote actions. It is also hard to see how the whole can be more
than the sum of the parts. It is very much a characteristic of a
computer, or other mechanism, that there is no mysterious emergence
going on; the behaviour of the whole is always explicable in terms of
the behaviour of the parts. There is no mystery, by contrast, in more
neurons being able to do more work. Searle doesn't think you can put
two dumbs together and get a smart. That is no barrier to putting 100
billion dumbs together to get a smart. Or to putting two almost-smarts
together to get a smart.
The Chinese Room and Speed
Of course, if we burden the room's operator with more and more rules,
he will go slower and slower. Dennett thinks a slow Chinese room would
not count as conscious at all. Nature, he notes, requires conscious
beings to react within a certain timescale in order to survive. That
is true, but it does not suggest any absolute speed requirement.
Nature accommodates the tortoise and the mayfly alike. The idea that a
uselessly slow consciousness would not actually be a consciousness
at all is also rather idiosyncratic. We generally credit a useless
vestigial limb with being a limb, at least.
Anyway, Dennett's speed objection is designed to lead into one of his
favourite ideas: the need for massive parallelism. One Searle might
lack conscious semantics, but a million might do the trick. Or so he
says. But what would parallelism bring us except speed?
The Chinese Room and complexity.
The Dennettians make two claims; that zombies are impossible, and that
the problem with the Chinese room is that it is too simple. We will
show that both claims cannot be true.
What kind of complexity does the Chinese Room lack? By hypothesis it
can pass a Turing test: it has that much complexity in the sense of
outward performance. There is another way of thinking about
complexity: complexity of implementation. So would the Chinese Room
be more convincing if it had a more complex algorithm? The problem
here is that there is a well-founded principle of computer science
according to which a computer programme of any complexity can be emulated
by a particular type of essentially simple machine called a Turing
Machine. As it happens, the Chinese Room scenario matches a Turing
Machine pretty well. A Turing Machine has a simple active element, the
read-write head and a complex instruction table. In the Chinese Room
the sole active element is the operator, performing instruction by
rote; any further complexity is in the rulebooks. Since there is no
stated limit to the "hardware" of the Chinese Room -- the size of the
rulebook, the speed of the operator -- the CR could be modified to
implement more complex algorithms without changing any of the
essential features.
Of course differences in implementation could make all sorts of non-
computational differences. Dennett might think no amount of
computation will make a flight simulator fly. He might think that the
Chinese Room lacks sensors and effectors to interact with its
environment, and that such interactions are needed to solve the symbol-
grounding problem. He might think that implementational complexity,
hardware over software, is what makes the difference between real
consciousness and zombiehood. And Searle might well agree with him on
all those points: he may not be a computationalist, but he is a
naturalist. The dichotomy is this: Dennett's appeal to complexity is
either based on software, in which case it is implausible, being
undermined by Turing equivalence; or it is based in hardware, in which
case it is no disproof of Searle. Rather, Searle's argument can be
seen as a successful disproof of computationalism (i.e. the only-software-
matters approach) and Dennett's theory of consciousness is a proposal
for a non-computationalistic, hardware-based, robotic approach of the
kind Searle favours.
Some Dennettians think a particular kind of hardware issue matters:
parallelism. The Chinese room is "too simple" in that it is a serial
processor. Parallel processors cannot in fact compute anything --
cannot solve any problem -- that single processors can't. So parallel
processing is a difference in implementation, not computation. What
parallel-processing hardware can do that serial hardware cannot is
perform operations simultaneously. Whatever "extra factor" is added
by genuine simultaneity is not computational. Presumably that means it
would not show up in a Turing test -- it would be undetectable from
the outside. So the extra factor added by simultaneity is something
that works just like phenomenality. It is indiscernible from the
outside, and it is capable of going missing while external
functionality is preserved. (We could switch a parallel processor off
during a TT and replace it with a computationally equivalent serial
one. According to the parallel processing claim, any genuine
consciousness would vanish, although the external examiner performing
the TT would be none the wiser). In short, simultaneity implies
zombies.
The Chinese Room and Abstraction
Consider the argument that computer programmes are too abstract to
cause consciousness. Consider the counter-argument that a running
computer programme is a physical process and therefore not abstract at
all.
1. Computationalism in general associates consciousness with a
specific computer programme -- programme C, let's say.
2. Let us combine that with the further claim that programme C
causes consciousness, somehow leveraging the physical causality of the
hardware it is running on.
3. A corollary of that is that running programme C will always
cause the same effect.
4. Running a programme on hardware is a physical process with
physical effects.
5. It is in the nature of causality that the same kind of cause
produces the same kind of effect -- that is, causality attaches to
types, not tokens.
6. Running a programme on hardware will cause physical effects, and
these will be determined by the kind of physical hardware. (Valve
computers will generate heat, cogwheel computers will generate noise,
etc).
7. Therefore, running programme C on different kinds of hardware
will not produce a uniform effect as required by 1.
8. Programmes do not have a physical typology: they are not natural
kinds. In that sense they are abstract. (Arguably, that is not as
abstract as the square root of two, since they still have physical
tokens. There may be more than one kind or level of abstraction).
9. Conclusion: even running programmes are not apt to cause
consciousness. They are still too abstract.
Computational Zombies
This argument explores the consequences of two assumptions:
1. We agree that Searle is right in his claim that software alone
is not able to bring about genuine intelligence,
2. But continue to insist that AI research should nonetheless be
pursued with computers.
In other words, we expect the success or failure of our AI to be
dependent on the choice of software in combination with the choice of
hardware.
The external behaviour of a computational system -- software and
hardware taken together -- is basically determined by the software it
is running; that is to say, while running a programme on different
hardware will make some kind of external differences, they tend to be
irrelevant and uninteresting differences such as the amount of heat
and noise generated. Behaviouristic tests like the Turing Test are
specifically designed to filter out such differences (so that the
examiner's prejudices about what kind of system could be conscious are
excluded). The questions and responses in a TT are just the inputs and
outputs of the software.
Abandoning the software-only approach for a combined software-and-
hardware approach has a peculiar consequence: that it is entirely
possible that out of two identically programmed systems running on
different hardware, one will be genuinely intelligent (or have genuine
consciousness, or genuine semantic comprehension, etc) and the other
will not. Yet, as we have seen above, these differences will be --
must be -- indiscernible in a Turing Test. Thus, if hardware is
involved in the implementation of AI in computers, the Turing Test
must be unreliable. There is a high probability that it will give
"false positives", telling us that unconscious AIs are actually
conscious -- a probability that rises with the number of different
systems tested.
To expand on the last point: suppose you get a positive TT result for
one system, A. Then suppose you duplicate the software onto a whole
bunch of different hardware platforms, B, C, D....
(Obviously, they are all assumed to be capable of running the software
in the first place). They must give the same results in the TT as A,
since they all run the same software, and since the software
determines the responses to a TT, as we established above, they must
give positive results. But eventually you will hit the wrong hardware
-- it would be too unlikely to always hit on the right hardware by
sheer chance, like throwing an endless series of heads. When you do
hit the wrong hardware, you get a false positive. (Actually you don't
know you got a true positive with A in the first place...)
Thus, some AIs would be "zombies" in a restricted sense of "zombie".
Whereas a zombie is normally thought of as a physical duplicate lacking
consciousness, these are software duplicates lacking appropriate
hardware.
This peculiar situation comes about because of the separability of
software and hardware in a computational approach, and the further
separation of relevant and irrelevant behaviour in the Turing Test.
(The separability of software simply means the ability to run the same
software on different hardware). Physical systems in general --
non-computers, not susceptible to separate descriptions of hardware and
software -- do not have that separability. Their total behaviour is
determined by their total physical makeup. A kind of Artificial
Intelligence that was basically non-computational would not be subject
to the Computational Zombie problem. Searle is therefore correct to
maintain, as he does, that AI is broadly possible.
Neuron-silicon replacement scenarios
Chalmers claims that replacing neurons with silicon will preserve
qualia so long as it preserves function -- by which he means not just
outward, behavioural function but also the internal organisation that
produces it. Obviously, he has to make that stipulation because it is
possible to think of cases, such as Searle's Chinese Room, where
outward behaviour is generated by a very simplistic mechanism, such as
a lookup table. In fact, if one takes the idea that consciousness
supervenes on the functional to the extreme, it becomes practically
tautologous. The most fine-grained possible functional description
just is a physical description (assuming physics does not deliver
intrinsic properties, only structural/behavioural ones) , and the
mental supervenes in some sense on the physical, so consciousness can
hardly fail to supervene on an ultimately fine-grained functional
simulation. So the interesting question is what happens between these
two extremes at, say, the neuronal level.
One could imagine a variation of the thought-experiment where one's
brain is first replaced at the fine-grained level, and then replaced
again with a coarser-grained version, and so on, finishing in a Giant
Look Up Table. Since hardly anyone thinks a GLUT would have phenomenal
properties, phenomenality would presumably fade out. So there is no
rigid rule that phenomenality is preserved where functionality is
preserved.
It is natural to suppose that one's functional dispositions are in line
with one's qualia. One claims to see red because one is actually
seeing red. But an intuition that is founded on naturalness cannot be
readily carried across to the very unnatural situation of having
one's brain gradually replaced.
What is it like to have one's qualia fade away ? If one had ever been
a qualiaphile, one would continue to claim to have qualia, without
actually doing so. That is, one would be under an increasing series of
delusions. It is not difficult to imagine thought-experiments where
the victim's true beliefs are changed into false ones. For instance,
the Mad Scientist could transport the victim from their bedroom to a
"Truman Show" replica while they slept. Thus the victim's belief that
they were still in their own bedroom would be falsified. Since beliefs
refer to states of affairs outside the head, you don't even need to
change anything about someone's psychology to change the truth of
their beliefs. So there is no great problem with the idea that
rummaging in someone's head does change their beliefs -- any such
process must change beliefs relating to what is physically inside the
victim's head. Since the victim is functionally identical, they must
carry on believing they have neural tissue in their head, even after
it has all been replaced. It doesn't follow from this that replacing a
brain with silicon must destroy qualia, but there is definitely a
precedent for having false beliefs about one's own qualia after one's
brain has been tampered with.
A GLUT of Turings
An old programmer's trick is to store "potted" results rather than
calculating them afresh each time. This saves time at the expense of
using up memory. Earlier, we used the idea of a "Giant Look-Up Table"
to implement, in an essentially dumb way, the whole of an extremely
complicated system, such as a human brain.
Can a (Giant) Look-Up Table emulate any Turing Machine (and therefore
any computer, and therefore, if computationalism is true, any brain)?
The usual objection to LUTs is that they are stateless. But that is
easy to get round. Add a timestamp as an additional input.
Or include with the fresh input each time a record of all previous
conversations it has had, with the total table size limiting the
"lifespan" of the machine.
The feedback of the old conversation gives the machine a state memory,
very straightforwardly if voluminously encoded.
What is the LUT for a sorting algorithm?
It is a table which matches lists of unsorted numbers against sorted
numbers. It doesn't even need to be stateful. And, yes, if it is
finite it will only sort lists of up to some size limit. But then any
algorithm has to run for a finite length of time, and will not be able
to sort some lists in the time allowed. So time limits are just being
traded for space limits.
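A minimal sketch of that sorting LUT, assuming (purely for illustration) lists of at most three numbers drawn from {0, 1, 2}:

from itertools import product

# Build the whole table up front by brute force; look-up then does no
# computation at all -- time has been traded for space.
MAX_LEN = 3
ALPHABET = (0, 1, 2)

SORT_LUT = {
    unsorted: tuple(sorted(unsorted))
    for n in range(MAX_LEN + 1)
    for unsorted in product(ALPHABET, repeat=n)
}

def lut_sort(xs):
    """'Sort' by pure table look-up; fails beyond the table's size limit."""
    return list(SORT_LUT[tuple(xs)])

print(lut_sort([2, 0, 1]))   # [0, 1, 2]
print(len(SORT_LUT))         # 40 entries even for this tiny alphabet and length

The table grows exponentially with the size limit, which is exactly the space-for-time trade described above.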
If you want to pass a Turing test with a GLUT, you only need a coarse-
grained (but still huge) GLUT that matches verbal responses to verbal
inputs. (A GLUT that always produced the same response to the same
query would be quickly detected as a machine, so it would need the
statefulness trick, making it even larger...). However, it is
counterintuitive that such a GLUT would simulate thought, since nothing
goes on between stimulus and response. Well, it is counterintuitive
that any GLUT would think or feel anything. Daryl McCullough and Dave
Chalmers chew the issue over in this extract from a Newsgroup
discussion.
Computationalism
Computationalism is the claim that the human mind is essentially a
computer. It can be picturesquely expressed in the "yes, doctor"
hypothesis -- the idea that, faced with a terminal disease, you would
consent to having your consciousness downloaded to a computer.
There are two ambiguities in "computationalism" -- consciousness vs.
cognition, process vs programme -- leading to a total of four possible
meanings.
Most people would not say "yes doctor" to a process that recorded
their brain on a tape and left it in a filing cabinet. Yet, that is all
you can get out of the timeless world of Plato's heaven (programme vs
process).
That intuition is, I think, rather stronger than the intuition that
Maudlin's argument relies on: that consciousness supervenes only on
brain activity, not on counterfactuals.
But the other ambiguity in computationalism offers another way out. If
only cognition supervenes on computational (and hence counterfactual)
activity, then consciousness could supervene on non-counterfactual
activity -- i.e they could both supervene on physical processes, but
in different ways.
Artificial intelligence and emotion
AI enthusiasts are much taken with the analogy between the brain's
(electro) chemical activity and the electrical nature of most current
computers. But brains are not entirely electrical. Neurons sit in a
bath of chemicals which affects their behaviour too. Adrenaline, sex
hormones, recreational drugs all affect the brain. Why are AI
proponents so unconcerned about brain chemistry? Is it because they
are so enamoured with the electrical analogy? Or because they just
aren't that interested in emotion?
Platonic computationalism -- are computers numbers?
Any computer programme (in a particular computer) is a long sequence
of 1's and 0's, and therefore, a long number. According to Platonism,
numbers exist immaterially in "Plato's Heaven". If programmes are
numbers, does that mean Plato's heaven is populated with computer
programmes?
The problem, as we shall see, is the "in a particular computer"
clause.
As Bruno Marchal states the claim in a more formal language:
"Of course I can [identify programmes with numbers ]. This is a key
point, and it is not obvious. But I can, and the main reason is Church
Thesis (CT). Fix any universal machine, then, by CT, all partial
computable function can be arranged in a recursively enumerable list
F1, F2, F3, F4, F5, etc. "
Of course you can count or enumerate machines or algorithms, i.e.
attach unique numerical labels to them. The problem is in your "Fix
any universal machine". Given a string of 1's and 0's without a universal
machine, you have no idea which algorithm (non-universal
machine) it is. Two things are only identical if they have all their
properties in common (Leibniz's law). But none of the properties of the
"machine" are detectable in the number itself.
(You can also count the even numbers off against the odd numbers , but
that hardly means that even numbers are identical to odd numbers!)
"In computer science, a fixed universal machine plays the role of a
coordinate system in geometry. That's all. With Church Thesis, we
don't even have to name the particular universal machine, it could be
a universal cellular automaton (like the game of life), or Python,
Robinson Arithmetic, Matiyasevich Diophantine universal polynomial,
Java, ... rational complex unitary matrices, universal recursive group
or ring, billiard ball, whatever."
Ye-e-es. But if all this is taking place in Platonia, the only thing
it can be is a number. But that number can't be associated with a
computation by another machine, or you get an infinite regress.
Is the computationalist claim trivial -- are all systems computers?
It can be argued that any physical theory involving real numbers poses
problems (and all major theories do, at the time of writing). Known
physics is held to be computable, but that statement needs to be
qualified in various ways. A number -- thinking particularly of a real
number, one with an infinite number of digits -- is said to be
computable if a Turing machine will continue to spit out digits
endlessly. In other words, there is no question of getting to the
"last digit". But this sits uncomfortably with the idea of simulating
physics in real time (or any plausible kind of time). Known physical
laws (including those of quantum mechanics) are very much infused with
real numbers and continua.
"So ordinary computational descriptions do not have a cardinality
of states and state space trajectories that is sufficient for them to
map onto ordinary mathematical descriptions of natural systems. Thus,
from the point of view of strict mathematical description, the thesis
that everything is a computing system in this second sense cannot be
supported"
Moreover, the universe seems to be able to decide on their values on a
moment-by-moment basis. As Richard Feynman put it: "It always bothers
me that, according to the laws as we understand them today, it takes a
computing machine an infinite number of logical operations to figure
out what goes on in no matter how tiny a region of space, and no
matter how tiny a region of time. How can all that be going on in that
tiny space? Why should it take an infinite amount of logic to figure
out what one tiny piece of space/time is going to do?"
However, he went on to say:
"So I have often made the hypothesis that ultimately physics will not
require a mathematical statement, that in the end the machinery will
be revealed, and the laws will turn out to be simple, like the chequer
board with all its apparent complexities. But this speculation is of
the same nature as those other people make -- 'I like it', 'I don't
like it' -- and it is not good to be prejudiced about these things."
Is no physical system a computer, except in the eye of the beholder?
Consider the claim that "computation" may not correctly be ascribed to
the physics per se. Maybe it can be ascribed as an heuristic device, as
physical explanation has an algorithmic component, as Wolfram suggests.
Whether everything physical is computational or whether specific
physical systems are computational are two quite different questions.
As far as I can see, a NAND gate's being a NAND gate is just as
objective as a square thing's being square.
Are the computations themselves part of the purely physical story of
what is going on inside a computer?
Seen mathematically, they have to be part of the physical story. They
are not some non-physical aura hanging over it. A computer doing
something semantic like word-processing needs external interpretation
in the way anything semantic does: there is nothing intrinsic and
objective about a mark that makes it a sign standing for something.
But that is down to semantics, not computation. Whilst we don't expect
the sign "dog" to be understood universally, we regard mathematics as
a universal language, so we put things like
| || ||| ||||
on space probes, expecting them to be understood. But an entity that
can understand a basic numeric sequence could understand a basic
mathematical function. So taking our best guesses about
intersubjective comprehensibility to stand for objectivity,
mathematical computation is objective.
Is hypercomputation a testable hypothesis? We can decide between non-
computable physics (CM) and computable physics (QM). What the question
hinges on is the different kinds and levels of proof used in empirical
science and maths/logic.
Is Reality real? Nick Bostrom's Simulation Argument
The Simulation Argument seeks to show that it is not just possible
that we are living inside a simulation, but likely.
1 You cannot simulate a world of X complexity inside a world of X
complexity. (The quart-into-a-pint-pot problem.)
2 Therefore, if we are in a simulation the 'real' world outside the
simulation is much more complex and quite possibly completely
different to the simulated world.
3 In which case, we cannot make sound inferences from the world we
appear to be in to the alleged real world in which the simulation is
running.
4 Therefore we cannot appeal to an argumentative apparatus of advanced
races, simulations etc, since all those concepts are derived from the
world as we see it -- which, by hypothesis, is a mere simulation.
5 Therefore, the simulation argument pulls the metaphysical rug from
under its epistemological feet.
The counterargument does not show that we are not living in a
simulation, but if we are, we have no way of knowing whether it is
likely or not. Even if it seems likely that we will go on to create
(sub) simulations, that does not mean we are living in a simulation
that is likely for the same reasons, since our simulation might be
rare and peculiar. In particular, it might have the peculiarity that
sub-simulations are easy to create in it. For all we know our
simulators had extreme difficulty in creating our universe. In this
case, the fact that it is easy to create sub-simulations within our
(supposed) simulation does not mean it is easy to create simulations
per se.
Computational counterfactuals, and the Computational-Platonic Argument
for Immaterial Minds
For one, there is the argument that: A computer programme is just a
long number, a string of 1's and 0's.
(All) numbers exist Platonically (according to Platonism)
Therefore, all programmes exist Platonically.
A mind is a special kind of programme (according to computationalism)
All programmes exist Platonically (previous argument)
Therefore, all possible minds exist Platonically
Therefore, a physical universe is unnecessary -- our minds exist
already in the Platonic realm
The argument has a number of problems even allowing the assumptions of
Platonism and computationalism.
A programme is not the same thing as a process.
Computationalism refers to real, physical processes running on
material computers. Proponents of the argument need to show that the
causality and dynamism are inessential (that there is no relevant
difference between process and programme) before you can have
consciousness implemented Platonically.
To exist Platonically is to exist eternally and necessarily. There is
no time or change in Plato's heaven. Therefore, to "gain entry", a
computational mind will have to be translated from a running process
into something static and acausal.
One route is to replace the process with a programme. Let's call this
the Programme approach. After all, the programme does specify all the
possible counterfactual behaviour, and it is basically a string of 1's
and 0's, and therefore a suitable occupant of Plato's heaven. But a
specification of counterfactual behaviour is not actual counterfactual
behaviour. The information is the same, but they are not the same
thing.
No-one would believe that a brain-scan, however detailed, is
conscious, so no computationalist, however ardent, is required to
believe that a programme on a disk, gathering dust on a shelf, is
sentient, however good a piece of AI code it may be!
Another route is to "record" the actual behaviour of a process, under
some circumstances, into a stream of data (ultimately, a
string of numbers, and therefore something already in Plato's heaven).
Let's call this the Movie approach. This route loses the conditional
structure, the counterfactuals that are vital to computer programmes
and therefore to computationalism.
Computer programmes contain conditional (if-then) statements. A given
run of the programme will in general not explore every branch. Yet the
unexplored branches are part of the programme. A branch of an if-then
statement that is not executed on a particular run of a programme will
constitute a counterfactual, a situation that could have happened but
didn't. Without counterfactuals you cannot tell which programme
(algorithm) a process is implementing because two algorithms could
have the same execution path but different unexecuted branches.
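A minimal sketch (two toy functions, invented for illustration) of how two algorithms can share an execution path while differing only in the branch that was not taken:

def algorithm_one(x):
    if x >= 0:
        return x * 2        # the branch actually executed when x = 3
    else:
        return x + 1        # unexecuted counterfactual branch

def algorithm_two(x):
    if x >= 0:
        return x * 2        # identical executed branch
    else:
        return x - 100      # a different unexecuted branch

# For the run actually performed, the two are indistinguishable...
print(algorithm_one(3), algorithm_two(3))     # 6 6
# ...and only a counterfactual input reveals which algorithm was implemented.
print(algorithm_one(-1), algorithm_two(-1))   # 0 -101

A recording of the x = 3 run alone therefore cannot tell us which of the two algorithms was being executed.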
Since a "recording" is not computation as such, the computationalist
need not attribute mentality to it -- it need not have a mind of its
own, any more than the characters in a movie.
(Another way of looking at this is via the Turing Test; a mere
recording would never pass a TT since it has no conditional/
counterfactual behaviour and therefore cannot answer unexpected
questions).
A third approach is to make a movie of all possible computational
histories, and not just one. Let's call this the Many-Movie approach.
In this case a computation would have to be associated with all
related branches in order to bring all the counterfactuals (or rather
conditionals) into a single computation.
(IOW treating branches individually would fall back into the problems
of the Movie approach)
If a computation is associated with all branches, consciousness will
also be, according to computationalism. That will bring on a White
Rabbit problem with a vengeance.
However, it is not the case that computation cannot be associated with
counterfactuals in single-universe theories -- it can be, in the form of
unrealised possibilities, dispositions and so on. If consciousness
supervenes on computation, then it supervenes on such counterfactuals
too; this amounts to the response to Maudlin's argument in which the
physicalist abandons the claim that consciousness supervenes on
activity.
Of course, unactualised possibilities in a single universe are never
going to lead to any White Rabbits!
Turing and Other Machines
Turing machines are the classical model of computation, but it is
doubtful whether they are the best model for human (or other organic)
intelligence. Turing machines take a fixed input, take as much time as
necessary to calculate a result, and produce a perfect result (in some
cases, they will carry on refining a result forever). Biological
survival is all about coming up with good-enough answers to a tight
timescale. Mistaking a shadow for a sabre-tooth tiger is a mistake,
but it is more acceptable than standing stock still calculating the
perfect interpretation of your visual information, only to get eaten.
This doesn't put natural cognition beyond the bounds of computation,
but it does mean that the Turing Machine is not the ideal model.
Biological systems are more like real time systems, which have to
"keep up" with external events, at the expense of doing some things
imperfectly.
Quantum and Classical Computers
(Regarding David Deutsch's FoR)
To simulate a general quantum system with a classical computer you
need a number of bits that scales exponentially with the number of
qubits in the system. For a universal quantum computer the number of
qubits needed to simulate a system scales linearly with the number of
qubits in the system. So simulating quantum systems classically is
intractable, simulating quantum systems with a universal quantum
computer is tractable.
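A minimal sketch of that scaling claim (the figures below are just the counting argument, with a hypothetical constant overhead factor for the quantum case):

def classical_amplitudes(n_qubits):
    """Complex amplitudes a classical state-vector simulator must store."""
    return 2 ** n_qubits

def quantum_qubits(n_qubits, overhead=1):
    """Qubits needed by a universal quantum simulator (assumed linear overhead)."""
    return overhead * n_qubits

for n in (10, 20, 40):
    print(n, classical_amplitudes(n), quantum_qubits(n))
# 10 1024 10
# 20 1048576 20
# 40 1099511627776 40

At 40 qubits the classical simulation already needs on the order of a trillion amplitudes, which is the sense in which it is intractable while the quantum simulation stays linear.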
Time and Causality in Physics and Computation
The sum total of all the positions of particles of matter specifies
a (classical) physical state, but not how the state evolves. Thus it
seems that the universe cannot be built out of 0-width (in temporal
terms) slices alone. Physics needs to appeal to something else.
There are one dualistic and two monistic solutions to this.
The dualistic solution is that the universe consists (separately) of
states+the laws of universe. It is like a computer, where the data
(state) evolves according to the programme (laws).
One of the monistic solutions is to put more information into states.
Physics has an age-old "cheat" of "instantaneous velocities". This
gives more information about how the state will evolve. But the state
is no longer 0-width; it is infinitesimal.
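A minimal sketch (with an assumed constant acceleration standing in for "the laws") of the two ingredients just described: a bare position does not determine what happens next, but a state enriched with instantaneous velocity, evolved by a law, does.

G = -9.8   # assumed constant acceleration, playing the role of "the laws"

def evolve(state, dt):
    """One Euler step: the law maps (position, velocity) to the next (position, velocity)."""
    x, v = state
    return (x + v * dt, v + G * dt)

# Two states with the same position but different velocities evolve differently,
# which is why position alone cannot be the whole state.
print(evolve((100.0, 0.0), 0.1))   # approximately (100.0, -0.98)
print(evolve((100.0, 5.0), 0.1))   # approximately (100.5, 4.02)

In the computer analogy above, the (position, velocity) pair plays the role of the data and the evolve rule plays the role of the programme.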
Another example of states-without-laws is Julian Barbour's Platonia.
Full Newtonian mechanics cannot be recovered from his "Machian"
approach, but he thinks that what is lost (universes with overall
rotation and movement) is no loss.
The other monistic solution is the opposite of the second: laws-
without-states. For instance, Stephen Hawking's No Boundary Conditions
proposal.
Maudlin's Argument and Counterfactuals
We have already mentioned a parallel with computation. There is also
relevance to Tim Maudlin's claim that computationalism is incompatible
with physicalism. His argument hinges on separating the activity of a
computational system from its causal dispositions. Consciousness, says
Maudlin, supervenes on activity alone. Parts of an AI mechanism that
are not triggered into activity can be disabled without changing
consciousness. However, such disabling changes the computation being
performed, because programmes contain if-then statements only one
branch of which can be executed at a time. The other branch is a
"counterfactual", as situation that could have happened but didn't.
Nonetheless, these counterfactuals are part of the algorithm. If
changing the algorithm doesn't change the conscious state (because it
only supervenes on the active parts of the process, not the unrealised
counterfactuals), consciousness does not supervene on computation.
However, if causal dispositions are inextricably part of a physical
state, you can't separate activity from counterfactuals. Maudlin's
argument would then have to rely on disabling counterfactuals of a
specifically computational sort.
We earlier stated that the dualistic solution is like the separation
between programme and data in a (conventional) computer programme.
However, AI-type programmes are typified by the fact that there is not
a barrier between code and data -- AI software is self-modifying,
so it is its own data. Just as it is not physically necessary that
there is a clear distinction between states and laws (and thus a
separability of physical counterfactuals), so it isn't necessarily the
case that there is a clear distinction between programme and data, and
thus a separability of computational counterfactuals. PDJ 19/8/06
Chalmers on GLUTS
Daryl McCullough writes:
I made the split to satisfy *you*, Dave. In our discussion about
the table lookup program, your main argument against the table lookup
being conscious was the "lack of richness" of its thinking process.
And this lack of richness was revealed by the fact that it took zero
time to "think" about its inputs before it made its outputs. So I have
patched up this discrepancy by allowing "silent" transitions where
there is thinking, but no inputs. However, as I thought my example
showed, this silent, internal thinking can be perfectly trivial; as
simple as counting. It is therefore not clear to me in what sense
there can be more "richness" in some FSA's than there is in a table
lookup.
Dave Chalmers writes:
I made it abundantly clear that the problem with the lookup table
is not the mere lack of silent transitions -- see my response to your
message about the brain that beeps upon every step. Rather, the
objection is that (a) a lot of conscious experience goes on between
any two statements I make in a conversation; and (b) it's very
implausible that a single state-transition could be responsible for
all that conscious experience.
Like the beeping brain, ordinary FSAs with null inputs and outputs
aren't vulnerable to this argument, as in those cases the richness of
such conscious experience need not result from a single state-
transition, but from a combination of many.
DM:
If you allow a "null input" to be a possible input, then the
humongous table lookup program becomes functionally equivalent to a
human brain. To see this, note that the states of the table lookup
program are essentially sequences of inputs [i_1,i_2,i_3,...,i_n]. We
use the mapping M([]) = the initial state, M([i_1,i_2, ..., i_n,i_{n
+1}]) = I(M([i_1,i_2, ..., i_n]),i_{n+1}). The output for state
[i_1,i_2, ..., i_n] is whatever the lookup table has for that sequence
of inputs, which is correct by the assumption that the table lookup
program gets the behavior right.
DC:
You made essentially this argument before, and I responded in a
message of Feb 28. Here's the relevant material:
Your complaint about clocks, that they don't support
counterfactuals, is I think, easily corrected: for example, consider a
machine M with a state determined by a pair: the time, and the list of
all inputs ever made (with the times they were made). If
"implementation" simply means the existence of a mapping from the
physical system to the FSA, then it seems that such a system M would
simultaneously implement *every* FSA. Counterfactuals would be
covered, too.
This is an interesting example, which also came up in an e-mail
discussion recently. One trouble with the way you've phrased it is
that it doesn't support outputs (our FSAs have outputs as well as
inputs, potentially throughout their operation); but this can be fixed
by the usual "humongous lookup table" method. So what's to stop us
saying that a humongous lookup table doesn't implement any FSA to
which it's I/O equivalent? (You can think of the table as the
"unrolled" FSA, with new branches being created for each input. To map
FSA states to (big disjunctions of) table states, simply take the
image of any FSA state under the unrolling process.) This is a tricky
question. Perhaps the best answer is that it really doesn't have the
right state-transitional structure, as it can be in a given state
without producing the right output and transiting into the appropriate
next state, namely when it's at the end of the table. Of course this
won't work for the implementation of halting FSAs (i.e. ones that must
halt eventually, for any inputs), but one could argue that the FSA
which describes a human at a given time isn't a halting FSA (the human
itself might be halting, but that's because of extraneous influences
on the FSA). Your example above doesn't have the problem at the end of
the table; it just goes on building up its inputs forever, but at the
cost of being able to produce the right outputs.
Not that I don't think lookup-tables pose some problems for
functionalism -- see my long response to Calvin Ostrum. But in any
case this is far from Putnam's pan-implementationalism.
DM:
The conclusion, whether you have silent transitions or not, is
that functional equivalence doesn't impose any significant constraints
on a system above and beyond those imposed by behavioral equivalence.
DC:
Even if your argument above were valid, this certainly wouldn't
follow -- the requirement that a system contains a humongous lookup
table is certainly a significant constraint! I also note that you've
made no response to my observation that your original example, even
with the silent transitions, is vastly constrained, about as
constrained as we'd expect an implementation to be.