RE: Am I a token or a type?
Lennart wrote:
>
> If what you say means that what is possible for a sentient being, given
> sufficiently advanced technique, to perceive is what can possibly exist,
> including the feeling we have of what it is like being us, I'm with you.
> The description and the described belong to the same dimension. The border
> between them becomes relative, just like the Now in relativity theory.
> So what is the uppermost level for AI?
>
> Lennart
>
In answer to Lennart, and as commentary to Bruno et al.....
This issue resonated a bit with my current thinking in various areas. I have
taken the time to ‘dump it’ to paper, so that I may get on to more practical
matters – to let it leave me alone! The following meandering word dump is
what happened, as copied from my design/journal/book/whatever the hell it
is. It’s rather long but, I hope, worth a read.
>Am I a token or a type?
>How can an abstraction be felt?
>What is the uppermost level for an AI?
In our culture the 'observer' is automatically taken as 'outside' the
observed system. We characterise the observations with cognition itself
assumed and tacit. What I am trying to get people to do is to treat the
observer as part of the system - indeed created _of_ the same system. We
observe from within. Our symbols and mathematical idealisations become the
means of communicating observations from one observer to another, using the
common perception mechanisms of a given ‘type’ of observer. This
mathematical description in no way communicates 'what it is like' to be an
observer and never will, just as any number of words describing flowers will
never capture what it is truly like to be a flower or even a human in the
presence of a flower.
=====================================
Imagine, {and with this opening word I have already fallen into the
traditional deception of the thought experiment!} that the
‘everything-listers’ rest on their backs on a grassy slope, musing on the
clouds. Let’s say in summer, near Heidelberg, Germany. The beer halls and
many Rhine valley wines beckon. (We may as well make this fun!).
Collectively we see the following in the sky:
cloud cloud cloud
cloud cloud cloud
cloud
cloud
cloud cloud
cloud cloud
cloud
cloud
cloud cloud cloud
cloud cloud cloud
The universe has randomly contrived to arrange a cloud that we all recognise
as the letter E. The fact of the resonance in the minds of us all – the
‘meaning’ of the shape – is just that: a resonance in the minds of us all as
computational entities – observers - and no more. The E shape is a natural
spontaneous occurrence, the result of what we like to call the output of a
computational entity we call the universe. It has no meaning to the universe
that created it (see *** below) – it is, of itself, no more meaningful than
any other cloud shape. The presence of the cloud itself is where the
computational expression ends, for the universe at the spatial/temporal
scale of the everything-list observers.
In my previous post I proposed that we shift the whole viewpoint of
abstraction by recognising our true place as observers. “Throw a net around
any chunk of any universe and then consider the observer – the computational
entity – thus created”. The boundary of a computational entity can be
found by growing it until the computational causality-modelling capacity
ceases to increase (unfortunately you need another observer to do this, and
so on – let’s just say there is one!). From that boundary you then continue
to grow it outward to some arbitrary level, which becomes the sensory
interface to the rest of the universe. If we are talking about the
construction of a human observer then the boundary of the computational
component is the brain and the sensory boundary is the external boundary of
a human (no tools or proprioception just yet, please). The only reason for
this boundary is that there are multiple observers (tokens) of ‘type’ people
and they all interact (communicate) at that level, i.e. communications have
been calibrated for an entity with that boundary (an example being the
cloud E previously described: an arbitrary shape mutually agreed as
representing something, for as-yet-undefined purposes).
In this way we ‘are’ a chunk of the universe we observe. The same
computational rules that created the clouds created us. What we call
reality, when viewed like this, becomes the ‘thoughts’ of the universe.
Subscribers to the many flavours of QM, MWI, multiverses will say that the
universe has many ‘friends’ (‘tokens’ of ‘type’ universe) to ‘talk to’. For
this discussion let us consider there is only one – our position as
observers places us there even if we are the sporadic output of the UD
beasty (Thanks Hal for spelling it out!). *** (from above) We have it that
because the universe has no-one to talk to, its ‘thoughts’ need have no
calibration or standardisation. Within the universe the ‘thoughts’ that are
humans have a uniformity – a standardisation level – that engenders our
survival, as it is that very similarity and the communication between us
(from genes up) that allows the continued existence of ‘us’ tokens of type
‘people’.
Clouds are just clouds: billowing fractals – communicated between ourselves
in symbols collectively known to us observers as the ‘abstraction’ called
flow dynamics – scudding across a summer sky. As computational entities they
are comparatively shallow and any notional perception is very ‘dim’.
This is the practical reality for us and, with apologies, my selfish focus
of interest as an ‘AI constructor’.
We are now able to, I think, get a little resolution on the ‘type’/’token’
distinction. What I have so far discussed is the agreement between a group
of observers (everything-listers on the grassy slope) that they are, to a
high degree of certainty, in the presence of ‘E’-ness. What exactly is
happening here?
Light from the E-cloud hits the senses of each computational entity.
A-priori training – which occurs as a result of being a chunk of universe
with an appropriate level of causality modelling, and of co-existing
with other like entities – has created a causality link to an internal
symbolic representation of E, or E-ness. The awareness of E may then be
conveyed to other like entities via verbal communication: a mapping through
the air.
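To put a toy shape on that last step, here is a purely illustrative Python
sketch of my own (the 5x3 grid, the stored template and the one-cell
tolerance are assumptions made for the example, not claims about how brains
actually do it): the ‘a-priori trained causality link’ behaves roughly like
a stored pattern that, when matched, emits the shared symbol.

# Illustrative only: a stored "E-ness" template stands in for the trained
# causal link; matching incoming sense data against it emits the symbol 'E'.
E_TEMPLATE = [
    [1, 1, 1],   # the learned "essential features of E-ness"
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
]

def perceive(sky):
    """Return the internal symbol 'E' if the sensed 5x3 pattern is close
    enough to the trained template, otherwise nothing at all."""
    cells = [(r, c) for r in range(5) for c in range(3)]
    mismatches = sum(sky[r][c] != E_TEMPLATE[r][c] for r, c in cells)
    return "E" if mismatches <= 1 else None   # tolerate one ragged cloud

cloud = [[1, 1, 1], [1, 0, 0], [1, 1, 0], [1, 0, 0], [1, 1, 1]]
print(perceive(cloud))   # -> 'E': a symbol now shareable with other observers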
Fine – but what exactly is the causality modeller? Conceptually is it any
different to the cloud? Whereas the cloud is a constellation of water
droplets that just happens to have taken the E shape, in the mind we have
contrived – learned – a constellation of neural firings that has the same
significance – a significance calibrated for the very purposes of
standardised communication and behaviour. We have also learned of (modelled)
the essential features of E-ness that have to be present – the pattern of
neural firings that connects us to the awareness of the presence of
E-ness. The abstraction E – a thing created by mutual agreement between
computational entities – then controls/determines behaviour. E does not
exist. The matter that has been configured to represent E-ness exists.
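A programmer's analogy may help here (my own loose gloss, not part of the
argument above): a ‘type’ is like a class definition, which of itself
configures no matter and does nothing, while a ‘token’ is an instance – some
actual stuff arranged to carry the pattern.

# The type is the class (an agreed description); the tokens are instances
# (configured matter/memory). Only the instances exist and can take part in
# causation; the class on its own never does anything.
class LetterE:                       # 'type': a mutually agreed abstraction
    def __init__(self, medium):
        self.medium = medium         # the stuff configured to carry E-ness

cloud_token = LetterE("water droplets over Heidelberg")
neural_token = LetterE("a pattern of neural firings")
pixel_token = LetterE("pixels in an email")

for token in (cloud_token, neural_token, pixel_token):
    print(type(token).__name__, "instantiated in:", token.medium)
# Three tokens, one type - and none of them *is* the abstraction 'E'.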
So here is where my assertion has surfaced. E-ness is an abstraction. It has
no reality of its own. The efficacy of this arrangement is profound. Large
scale and far reaching understanding of causality can be contained within a
small symbolic (abstracting) computational entity of sufficient
sophistication. The causality model that ‘is’ the mind, structured of the
same matter as the rest of the universe, has created a personalised,
customised set of relationships that, if characterised using symbolic
mathematics, would represent a set of ‘laws of the mind’ that are NOT laws of
physics. These laws operate to create our internal thoughts – the equivalent
of the universe’s laws of physics, which allow the universe to ‘think’
clouds. A byproduct of this is that we can think ‘outside’ the system –
imagine arbitrary instances of things that do not obey what we understand to
be normal causality in our universe.
Some say that the universe is a massive cellular automaton. Regardless of the
accuracy of this, we can say with some certainty that a configuration of much
larger-scale cellular automata (neurons) runs a customised set of adaptive
symbol-manipulation ‘equations’ that is ‘mind’, including what we describe as
human-level cognitive capacity. In the absence of any other processor
architecture, an AI with a near-human ‘what it is like’ experience would
clearly be based optimally on cellular automata.
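For anyone who has not played with one, a cellular automaton is just a
lattice of cells, each updated by a purely local rule. The sketch below is a
minimal one-dimensional example (Wolfram's rule 110, chosen only because
that rule is known to be computationally universal – nothing here is
specific to brains or to the argument above); it shows how simple local
rules give rich global behaviour.

# Elementary cellular automaton, rule 110: each cell's next state depends
# only on itself and its two neighbours, yet the global behaviour is rich
# enough to be computationally universal.
RULE = 110

def step(cells):
    out = []
    for i in range(len(cells)):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        neighbourhood = (left << 2) | (centre << 1) | right
        out.append((RULE >> neighbourhood) & 1)   # look up the rule bit
    return out

row = [0] * 40 + [1] + [0] * 40            # start from a single live cell
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)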
So where does this lead? The single most significant thing is that if you
want to build an AI that displays human level cognition you do NOT create an
artificial processor manipulating the abstract symbols of the human mind.
This is what the computer scientist does when code-crashing ‘AI’. This is
like taking the E-cloud above and encoding it, such as I have done in my
email as a bunch of pixels, and then manipulating the pixels! What real
understanding of our universe can an AI gain from this? The AI is modelling
NOT our universe, but the causality between artificial symbols we have
created to communicate our understanding to each other in the real universe.
It’s a whole level of indirection removed from reality. The absolute best
that can be obtained from this is clearly a simulation. A sufficiently
sophisticated AI of this type will sense and feel not our universe, but
whatever ‘it is like’ to embody causal relationships between a set of
human-generated artificial symbols!
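To caricature the point (a deliberately crude sketch of my own, not a
description of how any particular AI system is built), a purely symbolic
processor can chain our tokens together perfectly well while never touching
the things the tokens were calibrated against.

# Ungrounded symbol manipulation: rules relate symbols only to other symbols.
# Nothing in here ever senses a cloud, a sky or a summer afternoon.
RULES = {
    ("CLOUD", "SHAPE_E"): "LETTER_E",
    ("LETTER_E", "OBSERVERS_AGREE"): "MEANINGFUL_SYMBOL",
}

def infer(facts):
    """Chain symbol-to-symbol rules until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, b), result in RULES.items():
            if a in derived and b in derived and result not in derived:
                derived.add(result)
                changed = True
    return derived

print(infer({"CLOUD", "SHAPE_E", "OBSERVERS_AGREE"}))
# It "concludes" LETTER_E and MEANINGFUL_SYMBOL, yet it has modelled only the
# relations between our symbols, never the universe those symbols name.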
Role play what it would be like to be two of these simulations. Would they
be able to communicate to each other realistically about 'what it is like to
be human'? Absolutely not, except as notional 'race-callers'. They would
connect with each other at an intimate level about 'what it is like' to be them.
This is something from which humans are excluded, just as 'what it is like
to be a dog or Deep-Blue' is excluded from our cognitive envelope.
Here is an "AI Rule #1": An AI with human level cognition of our universe
has to model the universe directly, through its own senses as an observer at
the same spatio-temporal level .ie. with a processing created –of- the
universe it is supposed to model.
This means that the only potential strong-AI will come from
a) those AI workers starting from a robotics standpoint – where the sensory
environment is implicitly part of the AI and where the AI is as physically
similar to a human as is practical.
b) a cellular-automata approach.
This means that strong-AI will not come from
c) AI without sophisticated sensory/actuation connectivity to our universe.
d) Von-Neumann sequential symbolic processing. Whilst such systems may create
simulations of cognition that behave ‘as if’ they were in our universe (to
‘us’ observers in that universe), they simply cannot have the ‘what it is
like’ experience that we have of our universe, or any deep understanding of a
human’s place in our universe. They will have their own ‘what it is like’
experience that we will never see or comprehend. An AI created like this
will be completely unable to comprehend the human context, as we will be
unable to comprehend the AI’s context. It seems that this avenue would
indeed create a version of the legendary ‘zombie’.
What about a hybrid? For example: a cellular-automata-based AI where each
cell is a mini-Von-Neumann architecture. This would get a little closer to
human equivalence.
When you get to this level of consideration you have to look at what you are
trying to achieve. It becomes clearer that the only computational entity
with the same ‘what it is like’ description as a human will be something
constructed identically to a human. A human! The question becomes more: What
cognitive capacity do you want to achieve, whilst making a useful
computational entity?
For safety reasons – the closer to human the better.
With this in mind a set of working hypotheses are as follows:
a) Observers are computational entities constructed from the system being
observed, with cognitive sophistication commensurate with computational
sophistication, defined as the capacity of a computational entity to model
the rest of the universe.
b) That morphology matters: The more similarities there are between any two
computational entities, the easier it will be for each to cognitively handle
the other entity, i.e. control behaviour to suit the needs of each
computational entity.
c) That the sensory/actuation feeds that connect any two computational
entities allow communication and are also constructed from the system.
d) ‘Typing’ – the descriptions (symbols, including mathematical
abstractions) that flow between observers are only that; they in no way convey
'what it is like to be' the phenomenon thus described, and finally, in no
way does an abstraction (a ‘typing’) instantiate or run an abstraction.
Answers……
If you take the above on board, I hold that you are in a better position to
understand the role of 'mind' and 'subjectivity' in the scheme of things.
Answers lead somewhere:
Am I a token or a type?
If you can ask the question you are instantiated and you answer ‘Token’ to
yourself. Another instantiated entity of like type would answer ‘Token’ (for
you) as well, if asked. (An abstraction, by definition, only exists in the
mind of an observer as symbology without instantiation; it is not possible to ‘be’ a
type and view a token. Indeed this is the very reason the distinction is
useful.)
How can an abstraction be felt?
An abstraction cannot be felt. It is a communication between observers
transmitted as symbols.
What is the uppermost level for an AI?
There is no ‘uppermost level’, except that which we configure for our own
utility. If we need an AI to be as useful as a human then it needs to be at
least as computationally sophisticated as a human, and as physically similar
(in terms of perception/actuation) to a human as possible.
Is existence ‘created’ in the mind of the observer?
No. The result of an observation is created in the mind of an observer.
My daughter ate a teddy bear biscuit. The teddy bear had a smile. Do you
think she tasted the smile?
No.
Will ‘AI’ like CYC or ALICEBOT ever have any real human cognition?
No.
A lovely example of this is in “Gödel, Escher, Bach” and the anteater’s “Aunt
Hillary”. “Aunt Hillary” is an ant colony with which the anteater has a
primitive cognitive relationship. The ants are the cellular automata and
when a ‘net’ is thrown over the colony a definite functional computational
entity can be delineated, courtesy of the biologists. Hofstadter nicely explores the
way the communication occurs and the 'symbols' generated during the process.
Lastly I have realised just what a tough job philosophers have always
had in describing the mind. All they have to go by is language – a set of
abstract symbols remapping brain states and thus losing all their ability to
communicate or indeed define the ‘what it is like’ subjective experience, a
job of description which was, after all, doomed from the start!
=======================================================
stop stop, no more, you say.......
I hope you made it this far.....
:-)
Colin
Received on Wed Aug 07 2002 - 17:38:28 PDT