
Below, assume we're working with an infinite-tape Turing machine.

When explaining the notion of time complexity to someone, and why it is measured relative to the input size of an instance, I stumbled across the following claim:

[..] For example, it's natural that you'd need more steps to multiply two integers with 100000 bits than, say, to multiply two integers with 3 bits.

The claim is convincing, but somewhat hand-waving. In all the algorithms I have come across, the larger the input size, the more steps are needed. In more precise words, the time complexity is a monotonically increasing function of the input size.

Is it the case that time complexity is always an increasing function of the input size? If so, why is it the case? Is there a proof for that beyond hand-waving?

  • "Directly proportional" has a specific mathematical meaning that means, essentially, linear time. In other words, if your input has size $n$ and the time is directly proportional, the algorithm runs in time $cn$. I'd imagine that's not what you mean, as many algorithms do not run in linear time, e.g. sorting. Can you explain further?
    – SamM
    Commented Aug 13, 2012 at 23:12
  • So you're asking about an algorithm that runs in $o(1)$ time? $O(1)$ means the algorithm runs in the same time regardless of input size; $o(1)$ means it runs faster as the input gets larger. I can't think of one that runs in that time off the top of my head, but the notation is fairly common because an algorithm will often run in something like $O(n^2) + o(1)$ time; in other words, it takes $O(n^2)$ time, and there are some other terms that grow smaller as the input gets larger.
    – SamM
    Commented Aug 13, 2012 at 23:28
  • Good question. What about the counter-example of computing the prime factors of $c / n$ for some large $c$ (this is only an increasing function for $n \geq c$)? @Sam Note that an increasing function says that the time must be decreasing at some point along the real line (i.e. $f(b) < f(a), a < b$). Commented Aug 13, 2012 at 23:32
  • @Darthfett I'm afraid I don't follow. Not all increasing functions are decreasing at some point along the real line.
    – SamM
    Commented Aug 13, 2012 at 23:42
  • @Jennifer Yes, I understand, that makes sense. I'd recommend using the term $o(1)$ as it has the meaning you're looking for. And I'd like to reemphasize that direct proportionality implies linearity; I see what you're getting at but it may be confusing to those who are reading the question for the first time.
    – SamM
    Commented Aug 13, 2012 at 23:43

3 Answers


Is it the case that time complexity is always an increasing function of the input size? If so, why is it the case?

No. Consider a Turing machine that halts after $n$ steps when the input size $n$ is even, and halts after $n^2$ steps when $n$ is odd.
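A small sketch of what such behaviour looks like outside the Turing-machine picture (hypothetical Python with the step counting made explicit; none of these names come from the question or the answer):

```python
# Hypothetical illustration: the amount of work depends on the parity of the
# input size, so the step count is not a monotone function of the size.
def parity_dependent_work(bits: str) -> int:
    n = len(bits)
    steps = 0
    if n % 2 == 0:
        for _ in bits:          # even size: one linear pass, about n steps
            steps += 1
    else:
        for _ in bits:          # odd size: nested passes, about n^2 steps
            for _ in bits:
                steps += 1
    return steps

# parity_dependent_work("0" * 10) == 10, while parity_dependent_work("0" * 9) == 81:
# the larger input of size 10 needs fewer steps than the smaller input of size 9.
```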

If you mean the complexity of a problem, the answer is still no. The complexity of primality testing is much smaller for even numbers than for odd numbers.
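To make that second point concrete, here is a hedged sketch (trial division is only a stand-in for a real primality test, not the algorithm the answer has in mind): an even input is rejected after looking at a single bit, while an odd input requires genuine work.

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False            # even inputs: decided in O(1) from the lowest bit
    d = 3                       # odd inputs: an actual test is needed; trial
    while d * d <= n:           # division is just a simple placeholder here
        if n % d == 0:
            return False
        d += 2
    return True
```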


Is it the case that time complexity is always an increasing function of the input size? If so, why is it the case? Is there a proof for that beyond hand-waving?

Let $n$ denote the input size. A Turing machine already needs $n$ steps just to read the entire input. So if you assume that an algorithm has to read its entire input (or $n/c$ of it, for some constant $c$), you will always end up with at least linear run time.
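A toy illustration of that lower bound (only a sketch, counting nothing but read operations): any procedure that has to look at every input cell has already spent $n$ steps before it computes anything.

```python
def count_reads(tape: str) -> int:
    """Count read operations only: even a do-nothing scan costs len(tape) steps."""
    reads = 0
    for _symbol in tape:
        reads += 1              # one step per input cell inspected
    return reads

# count_reads("1" * 100000) == 100000, so any algorithm that must see its whole
# input has run time at least linear in the input size.
```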


The problem with defining algorithms with a "monotonically decreasing run time function" is that you have to define the run time for $n = 1$ somehow, and you have to set it to some finite value. But there are infinitely many possible values for $n > 1$, so you end up with a function that is constant for infinitely many of them.
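One way to make this precise (a short sketch of the argument, not something stated in the original answer): if the run time $T \colon \mathbb{N} \to \mathbb{N}$ is non-increasing, then $$T(1) \ge T(2) \ge T(3) \ge \dots \ge 0,$$ and since every strict decrease lowers the value by at least $1$, at most $T(1)$ of these inequalities can be strict. Hence there is some $n_0$ with $T(n) = T(n_0)$ for all $n \ge n_0$: the run time is eventually constant, and in particular a strictly decreasing run time function is impossible.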


Sublinear algorithms, which do not read the entire input, may be what you are interested in. See for example http://www.dcs.warwick.ac.uk/~czumaj/PUBLICATIONS/DRAFTS/Sublinear-time-Survey-BEATCS.pdf.
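As one concrete example of this kind (binary search, in the RAM model rather than on a Turing machine; this sketch is mine, not taken from the linked survey): a membership query on a sorted array takes $O(\log n)$ probes and never reads most of the input.

```python
from typing import List

def binary_search(sorted_values: List[int], target: int) -> bool:
    """Membership query on a sorted array with O(log n) probes (RAM model);
    most of the array cells are never read at all."""
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return True
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False
```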

  • There exist sublinear algorithms. For example, see people.csail.mit.edu/ronitt/sublinear.html. It's a reasonably new field but it's very interesting. There are other counterexamples to this. Finding an element given a sorted list takes $O(\log n)$ time in the RAM model. I agree with the idea behind your post. It doesn't make sense to have an algorithm take less time as the input gets larger because it doesn't have time to read all of the input (how does it know to take less time?). But I don't know how to prove that they don't exist, and that a trick couldn't make it $o(1)$.
    – SamM
    Commented Aug 14, 2012 at 2:00
  • @Sam: Sorry, I did not see your comment before my edit (adding sublinear algorithms). Commented Aug 14, 2012 at 2:06
  • Quite the opposite; I didn't see your edit before adding my comment. I would delete it, but the second half still applies and an additional link can't hurt the OP.
    – SamM
    Commented Aug 14, 2012 at 2:15
  • A counterexample: a constant function like $f(x)=0$. What you describe works for functions that need to read their input.
    – Kaveh
    Commented Aug 14, 2012 at 2:31

The relation $(\mathbb{N},\leq)$ is well-founded, i.e. there are no infinite strictly decreasing sequences of natural numbers. Since (worst-case) runtime functions map to the naturals, all runtime functions have to be in $\Omega(1)$; that is, all runtime functions are (in the limit) non-decreasing.

That said, average runtimes can contain oscillating components, for example that of Mergesort.

  • I don't see how this answer is related to the question.
    – A.Schulz
    Commented Aug 14, 2012 at 7:37
  • @A.Schulz It gives a proof for the main question "Is it the case that time complexity is always an increasing function of the input size?", reading "increasing" as "non-decreasing", i.e. not necessarily strictly increasing.
    – Raphael
    Commented Aug 14, 2012 at 12:34
  • Well, "not increasing" doesn't necessarily have to mean "decreasing". Or to put it the other way around, non-decreasing $\not=$ increasing.
    – A.Schulz
    Commented Aug 14, 2012 at 15:51
  • @A.Schulz: Still, my interpretation seems to be what Jennifer is interested in.
    – Raphael
    Commented Aug 14, 2012 at 20:32