Re: computer pain

From: 1Z <peterdjones.domain.name.hidden>
Date: Sun, 17 Dec 2006 07:32:28 -0800

Colin Geoffrey Hales wrote:
> Stathis wrote:
> I can understand that, for example, a computer simulation of a storm is
> not a storm, because only a storm is a storm and will get you wet. But
> perhaps counterintuitively, a model of a brain can be closer to the real
> thing than a model of a storm. We don't normally see inside a person's
> head, we just observe his behaviour. There could be anything in there - a
> brain, a computer, the Wizard of Oz - and as long as it pulled the
> person's strings so that he behaved like any other person, up to and
> including doing scientific research, we would never know the difference.
>
> Now, we know that living brains can pull the strings to produce normal
> human behaviour (and consciousness in the process, but let's look at the
> external behaviour for now). We also know that brains follow the laws of
> physics: chemistry, Maxwell's equations, and so on. Maybe we don't
> *understand* electrical fields in the sense that it may feel like
> something to be an electrical field, or in some other as yet unspecified
> sense, but we understand them well enough to predict their physical effect
> on matter. Hence, although it would be an enormous task to gather the
> relevant information and crunch the numbers in real time, it should be
> possible to predict the electrical impulses that come out of the skull to
> travel down the spinal cord and cranial nerves and ultimately pull the
> strings that make a person behave like a person. If we can do that, it
> should be possible to place the machinery which does the predicting inside
> the skull interfaced with the periphery so as to take the brain's place,
> and no-one would know the difference because it would behave just like the
> original.
>
> At which step above have I made a mistake?
>
> Stathis Papaioannou
>
> -----------------------
> I'd say it's here...
>
> "and no-one would know the difference because it would behave just like
> the original"
>
> But for a subtle reason.
>
> The artefact has to be able to cope with exquisite novelty like we do.
> Models cannot do this because as a designer you have been forced to define
> a model that constrains all possible novelty to be that which fits your
> model for _learning_.

If the model has been reverse-engineered from how
the nervous system works (i.e., transparent box, not black box), it will
have the learning abilities of the nervous system -- even if we don't
know what they are.

> Therein lies the fundamental flaw. Yes... at a given
> level of knowledge you can define how to learn new things within that
> knowledge framework. But when it comes to something exquisitely novel, all
> that will happen is that it'll be interpreted into the parameters of how
> you told it to learn things... and this will have an impact the artefact
> cannot handle. It will behave differently and probably poorly.
>
> It's the zombie thing all over again.
>
> It's not _knowledge_ that matters, it's _learning_ new knowledge. That's
> what functionalism fails to handle. Being grounded in a phenomenal
> representation of the world outside is the only way to handle arbitrary
> levels of novelty.

That remains to be seen.

> No phenomenal representation? Then you are "model-bound"
> and grounded, in effect, in the phenomenal representation of your
> model-builders, who are forced to predefine all novelty handling in an "I
> don't know that" functional module. Something you cannot do without
> knowing everything a priori! If you already know all that, you are god,
> so why are you bothering?

So long as you can peek into a system, you can functionally duplicate it
without knowing how it behaves under all circumstances. I can rewrite
the C code

#include <math.h>

double f(double x, double y)
{
   return 4.2 + sin(x) - pow(cos(y), 9.7);
}

in Pascal, although I couldn't tell you offhand
what the output is for x=0.77, y=0.33.
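
For instance, a line-for-line transcription might look something like this
(a sketch only, assuming a Free Pascal / Delphi style Math unit for Power):

uses Math;

function f(x, y: Double): Double;
begin
   { same expression as the C version, transcribed term for term }
   f := 4.2 + Sin(x) - Power(Cos(y), 9.7);
end;

The transcription preserves the function's behaviour even though I never
had to work out what that behaviour actually is.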


> Say you bring an artefact X into existence. X may behave exactly like a
> human Y in all the problem domains you used to define your model. Then you
> expose both to novelty nobody has seen, including you.... and that is
> where the two will differ. The human Y will do better every time. You
> can't program qualia. You have to have them and you can't do without them
> in a 'general intelligence' context.
>
> Here I am on a Saturday morning... proving I have no life, yet again! :-)
>
> Colin Hales

