
From: Eric Hawthorne <egh.domain.name.hidden>

Date: Sat, 24 Jan 2004 19:31:50 -0800



Wei Dai wrote:

> On Sat, Jan 24, 2004 at 12:21:40PM -0800, Eric Hawthorne wrote:
> > Can you explain briefly why the choice of measure is subjective? I haven't
> > read any of the books you mentioned (will try to get to them) but am
> > familiar with computability theory and decision theory.
>
> Since you do not mention that you're familiar with the theory of algorithmic
> complexity, I suggest that you read the first book on that list ASAP. The
> following response might not make sense until you do.

I took some small smattering of that stuff in comp sci undergrad, but essentially what it lets me understand is that some algorithms are O(1), O(n), O(n log n), O(n^2), O(e^n), etc.
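As an aside, those growth rates can be made concrete by counting basic operations rather than measuring wall-clock time; a small sketch (the function names are just illustrative, not from any particular library):

```python
# Rough illustration of growth rates by counting basic operations.

def linear_search_steps(xs, target):
    """O(n): scans every element in the worst case."""
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(n):
    """O(log n): halves the search range on each step."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(linear_search_steps(list(range(1024)), -1))  # 1024 steps (target absent)
print(binary_search_steps(1024))                   # 10 steps, since 2**10 == 1024
```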

I'm also generally familiar with Turing Machine concepts, but I'm rusty on the details.

I'm a bit confused as to what is meant by a string having a lower algorithmic complexity. Does that mean that the shortest program that could produce a symbol string of that form is shorter than the shortest program that could compute some other string? And what are these strings anyway? Symbol strings which are a finite subpart of the Turing machine's tape, conceptually?
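For what it's worth, that reading is the standard one: the algorithmic (Kolmogorov) complexity of a finite string is the length of the shortest program that outputs it on a fixed universal machine. The true quantity is uncomputable, but compressed length gives an upper-bound flavor of it; a minimal sketch using zlib as a stand-in (a crude proxy, not the real thing):

```python
import os
import zlib

def compressed_len(s: bytes) -> int:
    """Length of zlib-compressed s: a crude upper-bound proxy for
    algorithmic complexity (the true quantity is uncomputable)."""
    return len(zlib.compress(s, 9))

patterned = b"ab" * 500        # highly regular: has a short description
random_ish = os.urandom(1000)  # incompressible with high probability

print(compressed_len(patterned))   # small: the pattern is easy to describe
print(compressed_len(random_ish))  # about 1000: no shorter description found
```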

A question that arises with that definition of "the algorithmic complexity of a string" is: the shortest algorithm that could generate that string starting with what as its input? Surely if the input were a string that differed from the output string in, say, just one value at one tape position, then any output string could be computed by a trivial Turing machine program (one step or so) from that special input. So how do you define what the input is when assessing "the algorithmic complexity of a string"?
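If I understand the standard answer, the definition fixes the input: plain complexity K(x) is the shortest program run on an empty input, while conditional complexity K(x|y) is the shortest program that produces x when given y. The one-position-different observation is exactly why K(x|y) can be tiny even when K(x) is large. A toy illustration, again using compressed length as a proxy (the encoding of the "edit" is made up for this sketch):

```python
import os
import zlib

x = bytearray(os.urandom(1000))  # a (very likely) incompressible string
y = bytes(x)
x[500] ^= 0xFF                   # x and y now differ in exactly one position

# Proxy for K(x): compress x on its own.
k_x = len(zlib.compress(bytes(x), 9))

# Proxy for K(x | y): only the edit turning y into x needs describing.
edit = b"%d:%d" % (500, x[500])  # "position:new-value" description
k_x_given_y = len(edit)

print(k_x)          # about 1000: x has no short standalone description
print(k_x_given_y)  # a few bytes: given y, producing x is trivial
```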

Or is the string a sequence of instructions and data-store positions comprising the Turing machine program itself, so that we're discussing the inherent computational complexity of that particular program for any (or average, or whatever) input?

I guess I have more trouble mapping directly in my head from Turing machine programs to multiverse states than I do mapping raw bitstrings to multiverse states.

The general question I asked above would seem to come down to "isn't the complexity of getting to

some subsequent information state determined by what the previous information state is?"

--------

Second terminology thing: when you say "each universal Turing machine", I again get confused.

Isn't "a Turing machine" just the abstraction consisting of the movable read/write head and a tape? Isn't the correct terminology "each Turing machine PROGRAM which is NP-complete" or which is "universal"? How can we have different machines themselves? Or is it conventional to say that "a Turing machine" is the movable head, plus its current position, plus a particular set of values on a tape (i.e., a particular program)? In normal computing terminology, the machine is the machine, the software program is the software program, and the data is the data.
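For what it's worth, in the standard formalism the "machine" is the finite control, i.e. the transition table: two different tables are genuinely different machines, and a "universal" machine is one particular table that reads an encoding of any other machine from its tape and simulates it. The machine-vs-data split can be sketched like this (a minimal simulator, assuming a simple dict encoding of the table):

```python
# A Turing "machine" is its transition table (the finite control);
# the tape is the data. Two tables = two machines.

def run(table, tape, state="start", head=0, max_steps=1000):
    """table maps (state, symbol) -> (new_state, write_symbol, move)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")  # "_" is the blank symbol
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# One particular machine: flips every bit until the first blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run(flipper, "0110"))  # -> "1001_"
```

A universal machine, in these terms, would be one fixed `table` whose tape holds an encoding of some other machine's table plus that machine's input.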

If you can just help me a little with these terminology stumbling blocks, I'm sure I (and other

computational-complexity-theory-tourists on the list) can understand the concepts.

Wei Dai wrote:

> Basically, all of the sensible proposed measures are based on the universal
> distribution, which assigns larger probabilities to strings that have lower
> algorithmic complexities. However, there's actually an infinite class of
> universal distributions, one for each universal Turing machine, and there's
> no objective criterion for determining which one should be used. Another
> problem is that using the universal distribution forces you to assume that
> non-computable universes do not exist. If one does not want to make this
> assumption, then a more dominant measure needs to be used (for example, one
> based on a TM with an oracle for the halting problem, or on the complexity
> of a string's logical definition), but then there are even more measures to
> choose from (how high up the computability hierarchy do you go? how high up
> the set-theoretic hierarchy?). Now suppose that two people, Alice and Bob,
> somehow agree that a measure M is the objectively correct measure, but Bob
> insists on using measure M' in making decisions. He says, "So what if
> universe A has a bigger measure than universe B according to M? I just care
> more about what happens in universe B than in universe A, so I'll use M',
> which assigns a bigger measure to universe B." What can Alice say to Bob to
> convince him that he is not being rational? I don't see what the answer
> could be.

Received on Sun Jan 25 2004 - 00:17:33 PST
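The machine-dependence Wei describes can be seen with a toy version of the universal distribution, m_U(x) = Σ 2^(-|p|) over programs p with U(p) = x: different machines assign different measures to the same string. A sketch with two made-up "machines" (deliberately trivial, not real universal machines) over bitstring programs:

```python
from fractions import Fraction

def universal_measure(machine, x, max_len=12):
    """Toy m_U(x): sum of 2**-len(p) over bitstring programs p (up to
    max_len bits) with machine(p) == x. A real UTM makes this a semimeasure."""
    total = Fraction(0)
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            p = format(i, "0%db" % n)
            if machine(p) == x:
                total += Fraction(1, 2 ** n)
    return total

# Two toy "machines" mapping programs to output strings.
def machine_a(p):  # output = program with a leading "1" stripped
    return p[1:] if p.startswith("1") else None

def machine_b(p):  # output = program with a leading "01" stripped
    return p[2:] if p.startswith("01") else None

x = "101"
print(universal_measure(machine_a, x))  # 1/16: the sole program is "1101"
print(universal_measure(machine_b, x))  # 1/32: the sole program is "01101"
```

Same string, different measure, purely because the reference machine changed — which is the subjectivity being debated.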

This archive was generated by hypermail 2.3.0 : Fri Feb 16 2018 - 13:20:09 PST