A useful model of computation is the Turing Machine. A TM has a tape
with symbols on it; a head which moves along the tape and which can read
and write symbols; and a state machine with a fixed number of states
that controls head movement and symbol writing based on the current
state and the symbol under the head. It has been shown that this
relatively simple model is able to do anything that more sophisticated
models of computation can do.
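As an illustration, here is a minimal TM simulator sketch in Python.
The rule format, the blank symbol "_", and the unary-increment machine
are all invented for this example, not taken from any standard encoding.

```python
# A minimal Turing machine sketch. rules maps (state, symbol) to
# (new_state, write_symbol, move), where move is -1 (left), +1 (right),
# or 0; reaching the state "halt" stops the machine.
def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape; "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape.get(i, "_")
                           for i in range(min(tape), max(tape) + 1))
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    raise RuntimeError("step limit reached")

# Example machine: scan right past a unary string and append one more 1.
rules = {
    ("start", "1"): ("start", "1", +1),   # move right over the 1s
    ("start", "_"): ("halt",  "1",  0),   # write one more 1 and halt
}
print(run_tm(rules, "111"))               # -> 1111
```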
We can consider the "state" of a TM to be made up of the conjunction of
three things: the current state of the tape (i.e. the string of symbols
written there); the position of the head; and the state of the internal
state machine. Maybe it would be best to call this the superstate
because normally the "state" of the TM just refers to its internal state
machine state. The TM can then be said to advance from superstate to
superstate according to its internal rules and the contents of the tape.
If a TM ever gets into the same superstate twice, it is in an infinite
loop. This is because the TM is fully deterministic and so it will
always go into the same successor superstate from a given superstate.
Halting TMs never get into the same superstate twice. Therefore halting
TMs go through a unique succession of superstates, from the first to
the last.
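This observation can be turned into a simple (bounded) loop detector:
record every superstate seen and flag a repeat. The sketch below reuses
the same invented rule format as before; the one-rule looping machine
is made up for illustration.

```python
# Detect non-halting runs by recording superstates. The superstate is
# (internal state, head position, tape contents); since the TM is fully
# deterministic, revisiting any superstate proves an infinite loop.
def runs_forever(rules, tape, state="start", head=0, max_steps=10_000):
    tape = dict(enumerate(tape))
    seen = set()
    for _ in range(max_steps):
        superstate = (state, head, tuple(sorted(tape.items())))
        if superstate in seen:
            return True                   # same superstate twice: loop
        seen.add(superstate)
        if state == "halt":
            return False                  # halted without repeating
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return None                           # undecided within the budget

# A machine that rewrites the blank under the head forever loops at once:
loop_rules = {("start", "_"): ("start", "_", 0)}
print(runs_forever(loop_rules, ""))       # -> True
```

Of course the step budget is essential: with an unbounded tape the set
of reachable superstates can be infinite, which is why halting is
undecidable in general.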
We can map or label a TM's superstates with successive integers,
corresponding to the order in which it goes through the superstates of
a computation. In this mapping, the only difference between two
different computations is their length. If two computations had the
same length N, they would both go through superstates labeled
0, 1, 2, ..., N.
What is a computation? A TM computation has two parts. One is the
initial conditions: the initial contents of the tape and the initial
head position. The other is the set of rules used, the internal state
machine that controls the machine. Together these two parts define a
trajectory of the TM through a sequence of superstates.
We often think of the internal state machine as being like the "program"
and the initial contents of the tape as being the "data". However, as
Turing was the first to recognize, this distinction is not always useful,
and sometimes it makes more sense to think of at least part of the tape
contents as being program rather than data. In particular, the Universal
TM treats part of the tape as a specification of some other particular
TM that it will emulate, and the remainder of the tape is then the input
to that TM.
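The UTM idea can be sketched concretely: the tape's prefix encodes
another machine's rule table, and the rest of the tape is that
machine's input. The encoding below ("|" and ";" delimiters, "_" for
blank, "s"/"h" for start and halt states) is invented for this sketch.

```python
# Sketch of a universal machine: decode a rule table from the tape
# prefix, then emulate the encoded machine on the rest of the tape.
def universal(tape, max_steps=1000):
    prefix, data = tape.split("|", 1)     # prefix = program, rest = data
    rules = {}
    for entry in prefix.split(";"):       # "state,symbol,new,write,move"
        st, sym, nst, wr, mv = entry.split(",")
        rules[(st, sym)] = (nst, wr, int(mv))
    cells, state, head = dict(enumerate(data)), "s", 0
    for _ in range(max_steps):
        if state == "h":                  # halt: return the final tape
            return "".join(cells.get(i, "_")
                           for i in range(min(cells), max(cells) + 1))
        state, wr, mv = rules[(state, cells.get(head, "_"))]
        cells[head] = wr
        head += mv
    raise RuntimeError("step limit reached")

# The "append a 1" machine written as a tape prefix, applied to 111:
print(universal("s,1,s,1,1;s,_,h,1,0|111"))   # -> 1111
```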
Generally, when we think of counterfactuals in a TM computation we mean
to change the data, not the program. We don't mean to ask what would
happen if you ran a different program on the same data; rather, we
mean what would happen if you ran the same program on different data.
We want to say that two computations are equivalent only if they have
the same counterfactual behavior - that is, if the programs would behave
the same on all data.
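This equivalence criterion can at least be approximated by brute force
over all tapes up to a small length. The rule encoding, the alphabet,
and the two example machines below are invented for illustration; true
counterfactual equivalence would quantify over all inputs, and is
undecidable in general.

```python
# Sketch: compare two machines' "counterfactual behavior" by running
# both on every initial tape up to a small length.
from itertools import product

def run(rules, tape, max_steps=200):
    cells, state, head = dict(enumerate(tape)), "s", 0
    for _ in range(max_steps):
        if state == "h":                      # halt: return final tape
            return tuple(sorted(cells.items()))
        key = (state, cells.get(head, "_"))
        if key not in rules:
            return None                       # stuck machine
        state, wr, mv = rules[key]
        cells[head] = wr
        head += mv
    return None                               # step budget exhausted

def equivalent(rules_a, rules_b, alphabet="01", max_len=3):
    # Same final tape on every initial tape up to length max_len.
    return all(run(rules_a, t) == run(rules_b, t)
               for n in range(max_len + 1)
               for t in product(alphabet, repeat=n))

# Two machines that both overwrite the first cell with "1" and halt;
# the second takes a detour through an extra state but behaves the same.
a = {("s", c): ("h", "1", 0) for c in "01_"}
b = {("s", c): ("t", c, 0) for c in "01_"}
b.update({("t", c): ("h", "1", 0) for c in "01_"})
print(equivalent(a, b))                       # -> True
```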
One problem with this is noted above, that we cannot always cleanly
distinguish program and data. In the case of the UTM, is the prefix part
of the tape, that defines the particular TM to emulate, program or data?
If it is program, we would not try to vary it in considering whether
two computations are equivalent. If it is data, we should consider
such variations. In general, I don't think we can always distinguish
these cases cleanly. UTMs can be nested to any desired degree. What is
program to one is data to another. More complex UTM computations may be
aided by certain patterns on the tape which will disrupt the computation
if they are changed.
Another problem is that a more complex mapping can be set up between
two different computations even if we take counterfactuals to range
over all possible initial tape configurations. We could make the
mapping be a function of the superstate as defined above. Two
computations with
different initial tapes will start in different superstates, hence the
mapping is still unique. And it will be robust over all possible inputs
and hence all possible counterfactual computations.
On these considerations, it seems to me that there are problems
with basing the distinction between computations on support for
counterfactuals. TMs make the very notion of counterfactuals rather
fuzzy, and still admit the possibility of mappings between computations
that remain robust even in the face of counterfactuals.
My preferred view is to focus on the algorithmic complexity of the
mapping between two computations, and to ask whether the information
needed to specify the mapping is less than the information needed to
write down the computation from scratch. If not - if the mapping is
substantially bigger than the computation it purports to describe -
then the correspondence is an illusion and is not real.
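One rough operational sketch of this criterion uses compressed size as
a crude stand-in for algorithmic (Kolmogorov) complexity, which is not
computable. The strict-inequality threshold and the string encodings of
"mapping" and "computation" are assumptions of this sketch.

```python
# Compare the description length of a proposed mapping against the
# description length of the computation itself, using zlib compression
# as a (very) rough proxy for algorithmic complexity.
import zlib

def description_size(s: str) -> int:
    return len(zlib.compress(s.encode()))

def mapping_is_real(mapping_spec: str, computation_trace: str) -> bool:
    # The correspondence counts as real only if specifying the mapping
    # takes less information than writing the computation from scratch.
    return description_size(mapping_spec) < description_size(computation_trace)

# A short identity mapping against a long superstate trace:
trace = " ".join(f"state-{i}" for i in range(500))
print(mapping_is_real("map step i of A to step i of B", trace))  # -> True
```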
Hal Finney
You received this message because you are subscribed to the Google Groups "Everything List" group.
Received on Wed Aug 02 2006 - 20:20:05 PDT