
Wikipedia defines it as follows:

An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm, i.e., $T(n) = O(n^k)$ for some constant $k$.

The algorithm runs in strongly polynomial time if

  • the number of operations in the arithmetic model of computation is bounded by a polynomial in the number of integers in the input instance; and

  • the space used by the algorithm is bounded by a polynomial in the size of the input.
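(To make the two size measures concrete with a small illustration of my own: an instance consisting of $N$ integers of $b$ bits each has "number of integers" $N$, but input size roughly $Nb$ bits. The first bullet therefore bounds the operation count by a polynomial in $N$ alone, so it may not grow as $b$ grows, whereas an ordinary polynomial-time bound is allowed to grow with $b$ as well.)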

In Bernhard Korte, Jens Vygen, Combinatorial Optimization:

Definition 1.4.

An algorithm with rational input is said to run in polynomial time if

  • there is an integer $k$ such that it runs in $O(n^k)$ time, where $n$ is the input size, and
  • all numbers in intermediate computations can be stored with $O(n^k)$ bits.

An algorithm with arbitrary input is said to run in strongly polynomial time if

  • there is an integer $k$ such that it runs in $O(n^k)$ time for any input consisting of $n$ numbers and
  • it runs in polynomial time for rational input.

Please correct me if I am wrong. The following are the literal differences I noticed:

  • For polynomial time algorithms, Korte and Vygen's definition is Wikipedia's definition plus polynomial storage space.

  • For strongly polynomial time algorithms, both definitions require time polynomial in the input storage size, but Korte and Vygen's additionally requires time polynomial in the number of numbers in any input, while Wikipedia's additionally requires storage space polynomial in the input size.

So are Korte and Vygen's and Wikipedia's definitions of these two concepts respectively equivalent? What other differences and relations exist between them?

Thanks and regards!

  • The Wikipedia section right after the definition has a pretty good explanation; isn't it clear enough? It has to do with how many bits represent numbers, and very large numbers can affect complexity measurements "upward". I think the definitions in K&V are probably "near" equivalent. As for rational inputs, one needs a proof that arithmetic on rationals does not increase their size in a large way; I think this can be shown by finding the LCD of all inputs and doing all arithmetic in integers with respect to that LCD.
    – vzn
    Commented Dec 19, 2012 at 16:32
  • 1
    @vzn: The explanation in Wikipedia (1) is decent on arithmetic vs. Turing machine models but is quite shallow regarding the purpose and definition of strong polynomiality, and (2) was completely wrong about what GCD exemplifies.
    – alexei
    Commented Dec 22, 2012 at 3:01

1 Answer


Before the formal definitions, consider what the "strongly/weakly" classification aims to distinguish.

First, consider running a strongly polynomial and a weakly polynomial algorithm on a Turing machine. Both will run in a number of steps polynomial in the length of the binary-encoded input; consequently, the number of arithmetic operations performed by both must be polynomial in that length. Hence, for both, the Turing machine running time grows polynomially as the number of input values or their magnitudes grow. To emphasize the latter, note that even the strong one takes more TM steps on inputs of larger magnitude (it must at least read the extra bits). Under no circumstance does either become exponential (as happens with the unrelated notion of pseudo-polynomial time). With a Turing machine alone, a fundamental difference is not detectable.

Now consider running each on an arithmetic machine, where every operation is an arithmetic operation on whole numbers and each arithmetic operation costs $O(1)$. As you increase the magnitudes of the input numbers, the length of the binary-encoded input grows, and the running time of the weak algorithm grows with it; the running time of the strong algorithm, however, does not change, because it is bounded by a polynomial in the number of input numbers, which you are not changing (e.g., matrix multiplication vs. Euclid's GCD, as sketched below).
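Here is a small Python sketch of that contrast (all function names are mine; Python is used only for its built-in big integers). It counts arithmetic operations: Euclid's GCD performs more of them as the input magnitudes grow, while naive matrix multiplication's count depends only on the matrix dimension, i.e., on how many numbers the input contains:

```python
def gcd_op_count(a, b):
    """Euclid's algorithm; returns (gcd, number of mod operations)."""
    ops = 0
    while b:
        a, b = b, a % b
        ops += 1
    return a, ops

def matmul_op_count(A, B):
    """Naive n x n matrix product; returns (C, arithmetic-op count)."""
    n = len(A)
    ops = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                ops += 2  # one multiplication, one addition
    return C, ops

def fib(k):
    """k-th Fibonacci number; consecutive pairs are Euclid's worst case."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

# Weak: Euclid's op count grows with the magnitude (bit length) of the input.
for k in (10, 100, 1000):
    a, b = fib(k + 1), fib(k)
    print(f"gcd on {a.bit_length()}-bit inputs: {gcd_op_count(a, b)[1]} ops")

# Strong: the matmul op count is fixed by n, the number of input numbers,
# no matter how many bits the entries have.
for bits in (8, 64, 512):
    A = [[(1 << bits) - 1] * 4 for _ in range(4)]
    print(f"4x4 matmul, {bits}-bit entries: {matmul_op_count(A, A)[1]} ops")
```

Running this, the GCD counts climb with the bit length of the inputs, while the matrix counts stay at $2 \cdot 4^3$ regardless of entry size.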

The set of algorithms whose number of arithmetic operations is polynomial in the number of input numbers is well-defined, but it overlaps with the class of algorithms that take a number of TM steps exponential in the length of the binary-encoded input (repeated squaring, sketched below, is the standard example). Hence, for this set, the properties in the second paragraph would not hold. To exclude the unwanted intersection, we add a condition requiring polynomial TM space [*].
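For concreteness, here is a minimal sketch (names mine) of the repeated-squaring example referred to in the footnote below: given an input of $n$ numbers, $n$ squarings are only $n$ arithmetic operations, linear in the number of input numbers, yet the intermediate value $2^{2^n}$ occupies $2^n + 1$ bits, so a Turing machine needs time and space exponential in the input length merely to write it down.

```python
def repeated_square(nums):
    """Square an accumulator once per input number; return final bit length."""
    x = 2
    for _ in nums:
        x = x * x          # one arithmetic operation per input number
    return x.bit_length()  # 2^n + 1 bits after n squarings

for n in (5, 10, 20):
    print(f"{n} inputs, {n} squarings -> {repeated_square([1] * n)} bits")
```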

In [1] this is stated in two ways:

  • The algorithm runs in strongly polynomial time if the algorithm is a polynomial space algorithm and performs a number of elementary arithmetic operations which is bounded by a polynomial in the number of input numbers.
  • A polynomial algorithm is a polynomial space algorithm (in our standard Turing machine model) and a polynomial time algorithm in the arithmetic model (see this question for a clarification).

This sub-classification of what are already "efficient" algorithms might be useful since physical computers resemble arithmetic machines more than Turing machines; the sub-class can thus identify the algorithm that would run faster on a physical machine when the polynomial bounds in the length of the binary-encoded input are otherwise comparable (which is not the case for, e.g., a strong $O(n^3)$ vs. a weak $O(n^2)$).

[*] The second condition is stated everywhere as polynomial space, whereas requiring polynomial time makes more sense to me. The former is more inclusive, but strange. Are there strongly-polynomial algorithms that take more than polynomial time? Note that the repeated-squaring example takes neither polynomial time nor polynomial space.

[1] Martin Grötschel, László Lovász, Alexander Schrijver (1988). "Complexity, Oracles, and Numerical Computation". Geometric Algorithms and Combinatorial Optimization. Springer. ISBN 0-387-13624-X.
