On 20 May 2006, at 01:17, Hal Finney wrote:
>
> Bruno writes:
>> Meanwhile just a few questions to help me. They are hints for the
>> problem too. Are you familiar with the following "recursive" program
>> for computing the factorial function?
>>
>> fact(0) = 1
>> fact(n) = n * fact(n - 1)
>>
>> Could you compute "fact 5" from that program? Could you find a
>> similar recursive definition (program) for multiplication (assuming
>> your machine already knows how to add)?
>> Could you define exponentiation from multiplication in a similar way?
>>
>> Could you find a function which would grow more quickly than
>> exponentiation and which would be defined from exponentiation like
>> exponentiation is defined from multiplication? Could you generalize
>> all this and define a sequence of more and more quickly growing
>> functions? Could you then diagonalise effectively (= writing a
>> program which does the diagonalization) that sequence of growing
>> functions so as to get a function which grows more quickly than any
>> one in the preceding sequence?
>
> Here's what I think you are getting at with the fairy problem. The
> point
> is not to write down the last natural number, because of course there
> is no such number.
Right!
> Rather, you want to write a program which represents
> (i.e. would compute) some specific large number, and you want to come
> up
> with the best program you can for this, i.e. the program that produces
> the largest number from among all the programs you can think of.
... and write on a reasonable amount of paper provided by the fairy. Of
course it must be a finite description.
>
> If we start with factorial, we could define a function func0 as:
>
> func0(n) = fact(n)
>
> Now this gets big pretty fast. func0(100) is already enormous, it's
> like
> a 150 digit number. However we can "stack" this function by calling
> it on itself. func0(func0(100)) is beyond comprehension. And we can
> generalize, to call it on itself as many times as we want, like n
> times:
>
> func1(n) = func0(func0(func0( ... (n))) ... )))
>
> where we have nested calls of func0 on itself n times.
All right. You provide a sequence of growing functions: func0(n),
func0(func0(n)), etc. And then you get func1(n) by an n-fold iteration
of func0.
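To make this "stacking" concrete, here is a minimal sketch in Python (the names func0 and func1 follow Hal's, the rest is mine); it just restates the factorial definition above and iterates it:

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

def func0(n):              # Hal's starting point: the factorial
    return fact(n)

def func1(n):              # n nested calls of func0 on itself
    result = n
    for _ in range(n):
        result = func0(result)
    return result

# func1(3) = func0(func0(func0(3))) = fact(fact(fact(3))) = fact(720), already huge.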
> This really gets
> bigger fast, much faster than func0.
>
> Then we can nest func1:
>
> func2(n) = func1(func1(func1( ... (n))) ... )))
>
> where again we have nested calls of func1 on itself n times. We know
> that func1(n) gets bigger so fast, func1(func1(n)) will get bigger
> amazingly faster, and of course with n of them it is that much faster
> yet.
>
> This clearly generalizes to func3, func4, ....
>
> Now we can step up a level and define hfunc1(n) = func_n(n), the nth
> function along the path from func1, func2, func3, .... Wow, imagine
> how fast that gets bigger. hfunc is for "hyperfunc".
>
> Then we can stack the hfuncs, and go to an ifunc, a jfunc, etc. Well,
> my terminology is getting in the way since I used letters instead of
> numbers here. But if I were more careful I think it would be possible
> to continue this process more or less indefinitely. You'd have program
> P1 which continues this process of stacking and generalizing, stacking
> and generalizing. Then you could define program P2 which runs P1
> through
> n stack-and-generalize sequences. Then we stack-and-generalize P2,
> etc.
> It never ends. But it's not clear to me how to describe the process
> formally.
All right. I will do this a little bit more formally, but you got the
right idea. Eventually we will see that there is no systematic way to
get the biggest number by such a procedure, so it is not so important
which one we choose, except that it is better to choose one such that
we can easily describe a big part of that sequence of sequences of
sequences ... of growing functions.
Your "stack and generalize" seems to correspond to diagonalizations.
>
> So we have this ongoing process where we define a series of functions
> that get big faster and faster than the ones before. I'm not sure how
> we
> use it. Maybe at some point we just tell the fairy, okay, let me live
> P1000(1000) years.
Yes. Well, if you still have some room left on the paper, you can write
P1000(P1000(1000)).
> That's a number so big that from our perspective it
> seems like it's practically infinite.
Absolutely! We should ask the fairy if she provides the "growing
brain" needed for living such a long time. If the brain is not growing,
then, given that its "dimensions" are very small compared to such a big
number, it will cycle!
> But of course from the infinite
> perspective it seems like it's practically zero.
Right, but then this will be the case with any fairy capable of
providing only a finite (but long) life. In a sense, all numbers are big,
and even *very big*, and "very very big", etc. ... except for a finite
number of them.
Computer scientists say "almost all numbers have property P" when all
numbers have property P except for a finite number of exceptions. For
example, imagine an infinite row of standing dominoes, one for each
number, and such that if the first one falls, it falls on the second
one, and so on. Then we have this curious fact: for any domino,
there is a time at which it will fall (all dominoes will indeed
fall!), but, at *any* time, almost all dominoes are still standing.
Let me give a sequence of growing functions which generalizes the best
known ones, i.e. addition, multiplication, and exponentiation.
Let me first explain (for the benefit of others who could have missed
this) why the following:
fact(0) = 1
fact(n) = n * fact(n-1)
can be seen as instructions for computing the factorial function.
Suppose you want to compute fact(5). "fact(5)" does not match the
first equation, so we try the second, and we see that "fact(5)" indeed
matches the left part of the second equation, "fact(n)", by substituting
the variable n by the number 5; so we arrive at:
fact(5) = 5 * fact(4)
Reasoning the same way for fact(4), we get:
fact(5) = 5 * (4 * fact(3)), so that eventually
fact(5) = 5 * (4 * (3 * (2 * (1 * fact(0))))), and here fact(0) matches
the "terminating condition", that is, the first equation above, which
says that fact(0) = 1. So we get:
fact(5) = 5 * (4 * (3 * (2 * (1 * 1)))), and, assuming we have already
defined multiplication, this will give successively:
5 * (4 * (3 * (2 * (1 * 1))))
5 * (4 * (3 * (2 * 1)))
5 * (4 * (3 * 2))
5 * (4 * 6)
5 * 24
120 (bingo!).
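For what it is worth, the two equations translate almost verbatim into a short Python function (just a sketch):

def fact(n):
    if n == 0:                   # terminating condition: fact(0) = 1
        return 1
    return n * fact(n - 1)       # fact(n) = n * fact(n - 1)

print(fact(5))                   # 120 (bingo!)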
To make things concrete, I will suppose the fairy can only compute the
successor function, that is, the one which gives n + 1 when you enter
n as input, and the predecessor function (which gives n - 1 from n).
And I will suppose the fairy can understand recursive definitions like
the one given for the factorial (fact) function.
Addition is just the iteration, in Hal Finney's sense, of the successor
function, except that it is a function of two variables, and we give a
definition by recursion on the second variable. Also I will keep the
notation x + y in place of the more formal add(x, y). x represents a
natural number, and n represents a natural number (the one on which we
define by recursion). Don't be afraid of the jargon; the following
should be self-explanatory:
x + 0 = x
x + n = (x + (n - 1)) + 1
Here too we have transformed the problem of adding x + n into adding one
to the solution of the simpler problem "x + (n - 1)".
Let us compute 4 + 5.
"4 + 5" does not match the first equation, but matches the second one:
we get:
4 + 5 = (4 + 4) + 1.
We must compute 4 + 4, and this gives (again by the second equation) (4
+ 3) + 1, so we get, successively:
4 + 5 =
(4 + 4) + 1
((4 + 3) + 1) + 1
(((4 + 2) + 1) + 1) + 1
((((4 + 1) + 1) + 1) + 1) + 1
(((((4 + 0) + 1) + 1) + 1) + 1) + 1; now 4 + 0 matches x + 0, and this
gives x, by the first equation; we get (remember the fairy can compute
the successor function sending x to x + 1):
((((4 + 1) + 1) + 1) + 1) + 1
(((5 + 1) + 1) + 1) + 1
((6 + 1) + 1) + 1
(7 + 1) + 1
8 + 1
9 (bingo!)
I am not saying this is an efficient way to add numbers, but who cares?
We just want to give a description of a big number, not actually compute
it (something which would necessarily take a long time).
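If you want to see the same definition at work mechanically, here is a minimal sketch in Python, where succ and pred stand for the two primitives I am granting the fairy:

def succ(n): return n + 1         # the fairy's successor function
def pred(n): return n - 1         # the fairy's predecessor function

def add(x, n):
    if n == 0:                    # x + 0 = x
        return x
    return succ(add(x, pred(n)))  # x + n = (x + (n - 1)) + 1

print(add(4, 5))                  # 9 (bingo!)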
Let us define multiplication in a similar recursive way, iterating the
addition which we have just defined:
x * 1 = x
x * n = x + (x * (n - 1))
Again we have reduced the problem "x * n" to the simpler problem x + (x
* (n - 1)).
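A sketch of this definition in Python, reusing the succ, pred and add of the previous sketch (repeated here so the fragment stands on its own):

def succ(n): return n + 1
def pred(n): return n - 1

def add(x, n):
    return x if n == 0 else succ(add(x, pred(n)))

def mult(x, n):
    if n == 1:                        # x * 1 = x
        return x
    return add(x, mult(x, pred(n)))   # x * n = x + (x * (n - 1))

print(mult(4, 5))                     # 20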
Gosh, my Saturday students are arriving already - I must leave.
Exercises (one possible shape of the answers is sketched after the list):
- Compute 4 * 5 from the recursive definition above.
- Define exponentiation recursively (iterating multiplication).
- Define the next growing function, iterating exponentiation.
- Define the whole sequence of growing functions.
- Define a function growing more quickly than any function from the
preceding sequence.
- Define a function growing more quickly than the preceding one.
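For readers who want to check the shape of their answers, here is one possible sketch in Python (the names hyper, growing and diagonal are only illustrative, and this is certainly not the only way to do it):

def hyper(level):
    # hyper(1) is addition, hyper(2) multiplication, hyper(3) exponentiation,
    # and each further level iterates the previous one in the same pattern.
    if level == 1:
        return lambda x, n: x + n
    prev = hyper(level - 1)
    def f(x, n):
        if n == 1:                     # base case: x <level> 1 = x
            return x
        return prev(x, f(x, n - 1))    # iterate the previous level
    return f

def growing(k):
    # The k-th growing function of the sequence: n |-> hyper(k)(n, n).
    return lambda n: hyper(k)(n, n)

def diagonal(n):
    # Eventually dominates every fixed growing(k).
    return growing(n)(n)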
Bruno
http://iridia.ulb.ac.be/~marchal/