Re: is induction unformalizable?

From: Wei Dai <weidai.domain.name.hidden>
Date: Wed, 13 Jul 2005 22:04:41 -0700

>> Correct me if wrong, but isn't the halting problem only undecidable
>> when the length of the program is unbounded? Wouldn't the AI assign
>> non-zero probability to a machine that solved the halting problem
>> for programs up to size S? (S is the number of stars in the sky,
>> grains of sand, atoms in the universe, etc.) As an aside, this would
>> actually be my best guess as to what was really going on if I were
>> presented with such a box (and I'm not even programmed with UD+ASSA,
>> AFAIK). Any sufficiently advanced technology is indistinguishable
>> from magic (but not actual magic) and all that ;->...
>>
>> Moshe

The AI would assign a probability of approximately 2^-S to this hypothesis. A human being would intuitively assign a significantly greater a priori probability, especially for large values of S. As S goes to infinity, the AI's probability would converge to 0, whereas the human's would converge to some positive constant.
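To make the contrast concrete, here's a rough numerical sketch in Python. The "human" curve is made up purely for illustration; any function that levels off at a positive constant instead of decaying to zero would do:

    def ai_prior(S):
        # Universal-prior weight for a hypothesis whose shortest
        # description is about S bits long.
        return 2.0 ** -S

    def human_prior(S, floor=0.01):
        # Made-up curve: decays at first, but converges to floor > 0.
        return floor + (0.5 - floor) * 2.0 ** -S

    for S in (10, 100, 1000):
        print(S, ai_prior(S), human_prior(S))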

Why 2^-S? Being able to solve the halting problem for programs up to size S is equivalent to knowing the first S bits of the halting probability (Chaitin's Omega). Since Omega is incompressible by a Turing machine, the shortest algorithmic description of the first S bits of Omega has length S, up to an additive constant, and the universal prior gives a hypothesis with an S-bit shortest description a weight of about 2^-S. See http://www.umcs.maine.edu/~chaitin/xxx.pdf.
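For intuition on one direction of the equivalence, here's a self-contained toy in Python. It is not Chaitin's construction; the "machine", its halting rule, and its running times are all made up for illustration. But it shows the dovetailing argument: given the first S bits of Omega, you can decide halting for every program of length up to S.

    import math
    from fractions import Fraction

    # Toy prefix-free "machine": program n is '1'*(n-1) + '0'.
    MAX_LEN = 8
    programs = ['1' * (n - 1) + '0' for n in range(1, MAX_LEN + 1)]

    def halts(p):
        # Made-up rule: odd-length programs halt, even-length ones don't.
        return len(p) % 2 == 1

    def steps_to_halt(p):
        # Made-up running time for the programs that do halt.
        return len(p)

    # The toy machine's halting probability, computed exactly. (For a
    # real universal machine Omega is uncomputable; we compute it here
    # only to produce its first S bits, and use nothing else below.)
    omega = sum(Fraction(1, 2 ** len(p)) for p in programs if halts(p))
    S = 6
    target = Fraction(math.floor(omega * 2 ** S), 2 ** S)  # first S bits

    # Dovetail: run all programs in parallel, one step at a time; each
    # program p seen to halt adds 2^-|p| to a lower bound on Omega.
    # (The calls to halts/steps_to_halt simulate "run one more step".)
    halted, bound, t = set(), Fraction(0), 0
    while bound < target:
        t += 1
        for p in programs:
            if p not in halted and halts(p) and steps_to_halt(p) <= t:
                halted.add(p)
                bound += Fraction(1, 2 ** len(p))

    # Once the bound reaches Omega's first S bits, any still-pending
    # program of length <= S that eventually halted would add at least
    # 2^-S more, overshooting Omega itself. So everything of length
    # <= S not yet halted must diverge.
    for p in programs:
        if len(p) <= S:
            print(p, 'halts' if p in halted else 'never halts')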

Here's another way to see why the AI's method of induction does not capture our intuitive notion. Suppose we've determined empirically that the black box can solve the halting problem for programs up to some size S. No matter how large S is, the AI would still assign a probability of only 2^-100 to the black box being able to solve the halting problem for programs up to size S+100.
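The arithmetic behind that 2^-100, as a sketch in Python (assuming, per the Omega argument above, that the universal prior gives the hypothesis "solves halting up to size N" a weight of about 2^-N):

    from fractions import Fraction

    def prior_solves_up_to(N):
        # Universal-prior weight of "the box solves halting for all
        # programs up to size N", taken to be exactly 2^-N here.
        return Fraction(1, 2 ** N)

    def conditional(S, extra=100):
        # "Solves up to S+extra" implies "solves up to S", so the
        # conditional probability is just the ratio of the two priors.
        return prior_solves_up_to(S + extra) / prior_solves_up_to(S)

    for S in (10, 1000, 100000):
        assert conditional(S) == Fraction(1, 2 ** 100)  # same for every S

However large S grows, the accumulated evidence never budges the AI's conditional probability for the next hundred bits.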
Received on Thu Jul 14 2005 - 01:05:22 PDT
