Re: computationalism and supervenience

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Fri, 01 Sep 2006 23:49:21 -0700

Stathis Papaioannou wrote:
> Peter Jones writes:
>
>
>> Stathis Papaioannou wrote:
>>
>>> Peter Jones writes:
>>>
>>>
>>>>> I'm not necessarily talking about every possible computation being
>>>>> implemented by every physical system, just (at least) the subset of finite
>>>>> computations implemented by a physical computer or brain. I think this is
>>>>> another way of saying that a recording, or a single trace of a computation
>>>>> branching in the multiverse, can be conscious. To prevent a recording
>>>>> being conscious you can insist on counterfactual behaviour, but that seems
>>>>> an ad hoc requirement introduced simply to prevent the "trivial" case of a
>>>>> recording or any physical system implementing a computation.
>>>>
>>>> The requirement that computations require counterfactuals isn't ad hoc, it
>>>> comes from the observation that computer programmes include if-then
>>>> statements.
>>>>
>>>> The idea that everything is conscious unless there is a good reason it isn't
>>>> -- *that* is ad hoc!
>>>
>>> No, it follows from the idea that anything can be a computation. I think this
>>> is trivially obvious, like saying any string of apparently random characters
>>> is a translation of any English sentence of similar or shorter length, and if
>>> you have the correct dictionary, you can find out what that English sentence
>>> is.
>>
>> But that is actually quite a dubious idea. For one thing, there is an objective
>> basis for claiming that one meaning is the "real" meaning, and that is the
>> meaning intended by the writer.
>
>
> There might have been a particular meaning intended by the writer, but remember
> materialism: all you have really is ink on paper, and neither the ink nor the
> paper knows anything about where it came from or what it means. Suppose a stream
> of gibberish is created today by the proverbial monkeys typing away randomly, and
> just by chance it turns out that this makes sense as a novel in a language that
> will be used one thousand years from now. Is it correct to say that the monkeys'
> manuscript has a certain meaning today? Or is it meaningless today, but meaningful
> in a thousand years? If the latter, does it suddenly become meaningful when the
> new language is defined, or when someone who understands the new language actually
> reads it? What if the manuscript never comes to light, or if it comes to light and
> is read but after another thousand years every trace of the language has
> disappeared?
>
> I don't think it makes sense to say that the manuscript has intrinsic meaning;
> rather, it has meaning in the mind of an observer. Similarly, with a computation
> implemented on a computer, I don't think it makes sense to say that it has meaning
> except in its interaction with the environment or in the mind of an observer.

But then, as you've noted before, you can regard the environment+computer as a bigger
computer with no external interaction.

You've used this argument as a reductio ad absurdum against the idea that a manuscript
or any arbitrary object has a meaning. Yet you seem to accept the similar argument
that any object implements a computation - given the right
"dictionary/interpretation/manual".
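The "dictionary" move can be made concrete with a toy sketch (entirely illustrative; none of these names come from the thread): take an arbitrary "physical system" whose states are nothing but a ticking counter, and let a dictionary map each state to the next digit of pi. Under that interpretation the counter "computes" pi, and all the content plainly lives in the dictionary, not the system.

```python
# A toy Putnam-style mapping: an arbitrary "physical system" whose states
# are just a counter, plus a dictionary that interprets each state as the
# next digit of pi. (Illustrative only; all names here are hypothetical.)

PI_DIGITS = "31415926535"

# The "physical system": a trivial clock ticking through states 0, 1, 2, ...
physical_states = list(range(len(PI_DIGITS)))

# The "dictionary/interpretation/manual": maps each state to a pi digit.
dictionary = {state: PI_DIGITS[state] for state in physical_states}

# Under this interpretation, the clock "computes" pi.
decoded = "".join(dictionary[s] for s in physical_states)
print(decoded)  # 31415926535
```

The same clock, paired with a different dictionary, "computes" e or anything else, which is exactly why the interpretation is doing all the work.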

>Any
> string of characters or any physical process can be seen as implementing a
> language or a computation, if you have the right "dictionary". There is a very
> interesting special case of this if we allow that some computations can be
> self-aware, in the absence of any environmental interaction or external observer:
> by definition, they are their own observer and thus they bootstrap themselves into
> consciousness.
>
>
>> For another, your translations would have to be complex and arbitrary, which
>> goes against the usual modus operandi of seeking simple and consistent
>> explanations.
>
>
> It may be inefficient, but that does not mean it is invalid.
>
>
>>> This is analogous to finding an alien computer which, when power is applied,
>>> is set into motion like an inscrutable Rube Goldberg machine. If you get your
>>> hands on the computer manual, you might be able to decipher the machine's
>>> activity as calculating pi.
>>
>> You might not need the manual. Numbers don't have arbitrary semantics in the
>> same way words do. That's why SETI uses mathematical transmissions.
>
>
> Mathematical truths are eternal and observer-independent, but mathematical
> notation certainly is not. SETI assumes that there will likely be greater
> similarities in how different species express mathematical statements than in
> their non-mathematical communication. There is nothing to stop the aliens using a
> mathematical notation that varies according to the moods of their emperor or
> something, making their broadcasts of mathematical theorems seem completely random
> to us. Maybe that's why we haven't recognised them yet.

Of course, if you have a very low-noise channel, the greatest communication efficiency
is realized by sending data-compressed messages - which look random.
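That point is easy to check empirically. The sketch below (illustrative only; the sample text is arbitrary) compares the byte-level Shannon entropy of a repetitive plaintext with its zlib-compressed form: the compressed bytes sit much closer to the 8 bits/byte of uniform noise.

```python
# Sketch: a well-compressed message is statistically close to random noise.
# We compare the byte-level Shannon entropy of a repetitive plaintext with
# its zlib-compressed form. (zlib is Python's stdlib DEFLATE wrapper.)
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 would be uniform random bytes)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

plaintext = b"the quick brown fox jumps over the lazy dog " * 500
compressed = zlib.compress(plaintext, level=9)

# The compressed stream is both far shorter and far closer to random.
print(f"plain:      {byte_entropy(plaintext):.2f} bits/byte")
print(f"compressed: {byte_entropy(compressed):.2f} bits/byte")
```

So a maximally efficient alien broadcast would fail exactly the statistical tests we might use to pick it out of the noise.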

>
>
>> It is also something Everythingist arguments rely on. You can't exist as a
>> computation in a numbers-only universe if computations require external
>> interpretation.
>
>
> The computation is a mathematical object that exists in Platonia. The
> implementation of a computation on a physical computer so that we can observe it
> is something else. It is like the difference between the number 3 and a collection
> of 3 oranges.
>
>
>>> Moreover, you might be able to reach inside and shift a few gears or discharge
>>> a few capacitors and make it calculate e instead, utilising the fact that the
>>> laws of physics determine that if the inputs change, the outputs will change
>>> (which, I trust you will agree, is the actual physical basis of the if-then
>>> statements).
>>
>>
>>
>>> Now, in human languages as in machine design, there are certain regularities
>>> to make things easier for the user. It might be possible, albeit difficult, to
>>> decipher a foreign language or figure out what an alien computer is computing
>>> by looking for these regularities. However, it is not necessary that there be
>>> any pattern at all: the characters in the unknown language may change in
>>> meaning every time they appear in the string in accordance with a random
>>> number generator, a cryptographic method called a "one-time pad". Similarly,
>>> the meaning of the physical states of the alien computer could change with
>>> each clock cycle according to some random number sequence, so that if you had
>>> the key you could figure out that the computer was calculating pi, but if you
>>> did not its activity would seem random.
>>
>> Assuming that computational states have an external semantics like words.
>
>
> Of course they do. Does Intel or Microsoft follow some universal rule of computer
> design? Any computer can be emulated on a UTM, but that doesn't mean the computer
> can't be based on outrageously bizarre and unpredictable rules, inscrutable to
> anyone not in the know.
>
>
>>> I don't think it would be reasonable to say that the computer is only
>>> calculating pi when you have the manual at hand ready to refer to, even though
>>> without the manual the computer is completely useless to you if you want to
>>> calculate the area of a circle, for example.
>>>
>>> Remember, even the apparently random computer handles counterfactuals, in that
>>> if a gear or a semiconductor junction were changed, the whole subsequent
>>> activity of the machine would change, and the manual would tell you how the
>>> computation had changed.
>>>
>>> You could dismiss the computations of random physical systems as trivial or
>>> useless, but what if you believe that some computations can be conscious? It
>>> would be no easier for us to observe or interact with these computations than
>>> it would be for us to observe or use the pi calculation, but by definition the
>>> conscious computations *themselves* would be self-aware.
>>
>>
>> The difficulty is artificial. It comes from your willingness to put baroque
>> interpretations on things.
>
>
> I am talking about what is theoretically possible, not what is sensible or
> efficient.

Suppose some computation, such as what's happening in your brain, implements
consciousness. How much could it be changed and still be conscious? Could we slice
it up into segments and rearrange them? How long a segment? Is there "something it
is like" to be conscious and insane? I think if we can answer this, and then limit
our discussion to sane consciousness, some of these theoretical possibilities go
away.

>
>
>> There is an established method of finding the simplest and most consistent
>> mathematical structure that maps a physical system, and that is physics.
>>
>>
>>> We might say in the above cases that the burden of the computation shifts from
>>> the physical activity of the computer to the information in the manual. The
>>> significance of this is that the manual is static, and need not even be
>>> instantiated if we don't care about interacting with the computer: it is a
>>> mathematical object residing in Platonia.

But the manual - or a look-up table - is timeless, while computation is a process. It
depends on being presented with a sequence of inputs. I see no reason to give up this
distinction between computation and mathematical object. To equate them seems to me
to beg the question of whether computation can be a mathematical object in Platonia.
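The distinction can be put in miniature (an illustrative sketch only): a look-up table is a static object whose whole input/output relation exists "all at once", whereas a computation is a process whose state unfolds as inputs are presented to it in order.

```python
# Timeless object vs. process. (Illustrative only.)

# A look-up table: the entire input/output relation exists "all at once",
# like a mathematical object. Nothing happens when nobody consults it.
square_table = {n: n * n for n in range(10)}

# A process: a running accumulator whose state at each step depends on the
# sequence of inputs presented to it so far.
def running_sum(inputs):
    total = 0
    for x in inputs:        # each step depends on what has happened so far
        total += x
        yield total

print(square_table[7])                  # 49: one timeless lookup
print(list(running_sum([1, 2, 3, 4])))  # [1, 3, 6, 10]: unfolds step by step
```

The table can of course record what the process would do, but consulting it is not the same thing as the process running, which is the distinction at issue.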

Brent Meeker


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list.domain.name.hidden
To unsubscribe from this group, send email to everything-list-unsubscribe.domain.name.hidden
For more options, visit this group at http://groups.google.com/group/everything-list
-~----------~----~----~----~------~----~------~--~---
Received on Sat Sep 02 2006 - 02:51:14 PDT

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:12 PST