Re: Numbers, Machine and Father Ted

From: Brent Meeker <meekerdb.domain.name.hidden>
Date: Mon, 30 Oct 2006 10:01:37 -0800

Stathis Papaioannou wrote:
>
> David Nyman writes:
>
>> I think we're in agreement, Stathis, but I'm trying to focus on a
>> problem, and what I think is a non-trivial aspect of evolved brain
>> functionality that would be required to overcome it. Of course, I agree
>> with you that each aspect of the experience '.....falls perfectly into
>> position in each case by virtue of its content alone' - it's precisely
>> what I've been arguing. But there's a subtler point here also, I think,
>> that leads to the problem. Let's take the 'cat sat on the mat': now
>> 'cat' starts at t1 and 'mat' ends at t2. Let's subdivide t1t2 into
>> occasions o1-o1000, and let teleportation occur between each. Each
>> occasion o1-o1000 is as informationally closed as OMt1t2 (the
>> 'teleportation' is of course inserted precisely to make this point),
>> but now it has become implausible to believe that any individual
>> occasion, say o492, is of sufficient extent to recover any coherent
>> component whatsoever of the conscious thought 'the cat sat on the mat'.
>> And yet, we know that we *are* in fact able to routinely recover such
>> components, corresponding loosely to a 'specious present' of some 1.5
>> seconds extent.
>>
>> Now comes the problem: how do we account for our manifest ability to do
>> this without invoking some form of illicit 'continuity' between
>> informationally separated occasions of arbitrarily fine granularity? No
>> individual occasion apparently contains all the necessary information,
>> and it seems that we almost can't stop ourselves imaginatively invoking
>> some sort of continuity over multiple occasions, in order that coherent
>> experiences can somehow be recovered by summing over the sequence.
>>
>> I think, if true, this would be a real problem in reconciling our
>> experience with the facts, and I think therefore that it requires a
>> real solution (actually an aspect of Barbour's time capsule theory
>> which I'm extrapolating a bit further). Simply, if what I'm arguing is
>> valid, it must follow that my assumption about individual occasions
>> 'not containing the necessary information' *must be wrong*.
>> Consequently, sufficient information to recover 'speciously present'
>> dynamic experiences *must* in fact be *simultaneously* represented by
>> the brain - be present on one occasion - and this simultaneous
>> 'dynamic' presentation must be the engine that renders both the
>> duration and the dynamism of the experience. And, to complete the
>> (evolutionary) circularity, this would be precisely *why* the brain
>> would possess this capability - because without it, extended, dynamic
>> environmental presentations would simply be *unavailable* to the
>> organism.
>>
>> Does this make sense?
>
> I think I see what you mean, but it's as much a problem for the intact and
> normally functioning brain as it is for teleportation experiments, isn't it? For that
> matter, it's as much a problem for a computer that gets teleported around in the
> course of its calculations. If the teleportation time slices are of femtosecond
> duration, then there is nothing within a particular slice to mark it as part of the
> calculation 5464*2342. Yet a computer strobing in and out of existence like this,
> technical problems aside, will still come up with the right answer. Indeed, if the
> computer only materialised in the final femtosecond, it would have the right answer
> and, if a log were kept, evidence of how it arrived at the answer. Do you believe
> that there must be some super-computation information in each femtosecond slice
> that binds them all together?

I think that's possible. The fact that the computation is realized in a physical system which, to the best of our knowledge, is continuous may mean that there is something more than the computational process conceived as discrete and finite (in the information sense).
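
To make the quoted scenario concrete, here is a toy sketch - my own construction, with made-up names, not anything anyone in the thread has proposed - of a computation whose entire state is serialized and restored ("teleported") between every step. Nothing in any single snapshot marks it as belonging to the calculation 5464*2342; each one is just a frozen state, yet the chain of restored snapshots still ends with the right answer, 12796688:

    # Toy illustration only: compute 5464 * 2342 by repeated addition,
    # dumping and reloading the complete machine state between steps.
    import json

    def step(state):
        # Advance the computation by one tick of repeated addition.
        if state["remaining"] > 0:
            state["total"] += state["a"]
            state["remaining"] -= 1
        return state

    state = {"a": 5464, "remaining": 2342, "total": 0}
    log = []
    while state["remaining"] > 0:
        snapshot = json.dumps(state)          # "destroy" the running machine...
        log.append(snapshot)                  # ...keep a record of this slice...
        state = step(json.loads(snapshot))    # ...and reconstitute it elsewhere
    print(state["total"])                     # 12796688 == 5464 * 2342

The only thing tying one slice to the next is the stored state itself; whether that already counts as the "super-computation information" Stathis asks about is exactly what's at issue.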

On the other hand, our theories of physics tell us that physical processes, including those that realize the computation, can also be approximated by discrete processes - except that the time and space variables are kept as implicitly continuous. By this I mean that when simulating such a process on a digital computer (I'm old enough to remember when we did it on analog computers), we make the steps smaller and smaller, and we're only satisfied when making the step smaller no longer changes the answer. I think this is going to be the case for any closed physical system. But for an open system, you+universe, I'm not so sure.
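
For what it's worth, the closed-system step-refinement I'm describing is just the following sketch; the system (a unit harmonic oscillator), the integrator (plain explicit Euler) and the tolerance are all arbitrary illustrative choices of mine:

    # Halve the time step until the simulated answer stops changing.
    def simulate(dt, t_end=1.0):
        n = int(round(t_end / dt))          # number of Euler steps to reach t_end
        x, v = 1.0, 0.0                     # unit harmonic oscillator, x'' = -x
        for _ in range(n):
            x, v = x + v * dt, v - x * dt   # explicit Euler step
        return x

    dt, prev, tol = 0.1, None, 1e-4
    while True:
        result = simulate(dt)
        if prev is not None and abs(result - prev) < tol:
            break                           # refining dt no longer changes the answer
        prev, dt = result, dt / 2

    print("x(1) ~", result, "with dt =", dt)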

Brent Meeker

Received on Mon Oct 30 2006 - 13:54:47 PST
