0
$\begingroup$

I've been struggling with the commonly accepted notion in computer science that exponential algorithms are inefficient. The standard explanation is that they "grow exponentially in the size of the input", but that explanation isn't very satisfying on its own. It does not fully capture why exponential growth specifically makes these algorithms impractical in our physical world.

For instance, consider a hypothetical thought experiment where an intelligent alien from another universe - where exponential algorithms can be run efficiently - asks why these algorithms are inefficient here. Saying "because they grow exponentially" would not clarify much. The alien might still be confused, because exponential growth alone isn't inherently a problem; it's the limitations of our physical world that make it so.

I am seeking references to scientific papers, textbooks, or even answers that provide a thorough examination of this topic, including data or theoretical reasoning that supports why exponential algorithms are deemed inefficient compared to polynomial ones. Most computer science textbooks and resources I've found restrict their focus to polynomial-time algorithms without delving into the deeper reasoning or evidence behind these classifications. How should we answer the alien in this hypothetical scenario?

Additionally, where do we draw the line between efficient and inefficient algorithms? Is any algorithm that takes longer than polynomial time considered inefficient?

I have already looked at similar questions: [1], [2]

$\endgroup$
2
  • $\begingroup$ It might help if you think of it as "inefficient" being shorthand for "asymptotically exponential (or worse) in terms of at least one input that is likely to exceed some small constant" (an algorithm that's O(n^2 + 2^m) will get called O(n^2) if we know m is never going to exceed 5), rather than "efficient" being a formal concept that we discovered to have its boundary at O(2^n). There are plenty of polynomial-time algorithms that are too inefficient to be practical, as well. It's just that nearly all exponential algorithms are, so that's where the (somewhat arbitrary) line gets drawn. $\endgroup$
    – Ray
    Commented 2 days ago
  • 2
    $\begingroup$ Frame challenge: In algorithms, "efficient" (without a definition specified) typically means something is more efficient than (relative to) previous methods, not a specific claim about polynomial time. $\endgroup$
    – pcpthm
    Commented 2 days ago

4 Answers

8
$\begingroup$

Any algorithm that is infeasible to execute within our lifetime (or within the lifetime of the solar system, etc.) is hard to consider "efficient".

An exponential-time algorithm is infeasible to execute within our lifetime for even fairly modest-sized problem instances. For instance, reasonable physical models suggest an upper bound of perhaps $10^{100}$ steps of computation within your lifetime, even if you convert all the mass of the universe into a massively parallel computer. Now if you have an algorithm whose running time is $2^n$ steps of computation, then once $n\ge 340$, the algorithm will not terminate within your lifetime.
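
As a quick sanity check, here is a minimal Python sketch that takes the $10^{100}$ figure above as given (it is a rough modelling assumption, not a precise physical constant) and finds where $2^n$ crosses that budget:

```python
# Hedged sanity check: assume, as above, a total budget of roughly 10**100
# elementary steps for all the computation we could ever hope to perform.
BUDGET = 10 ** 100

n = 0
while 2 ** n <= BUDGET:
    n += 1
print(f"2**{n} already exceeds a budget of 10**100 steps")  # n = 333

# A polynomial algorithm, say n**3 steps, stays within the same budget even
# for astronomically large inputs: (10**33)**3 = 10**99 steps.
print(f"n**3 at n = 10**33 is {(10**33) ** 3:.0e} steps, still within budget")
```

The exact crossover is at $n = 333$; the $n \ge 340$ above is just a comfortably rounded figure.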

So, generally speaking, exponential-time algorithms can only be considered "efficient" if you need to run them on very small problem instances.

Of course, there is no "computation police" who enforces a strict definition of the word "efficient". You are welcome to set any criteria you wish for what you consider efficient enough for your purposes. Nonetheless, I hope this illustrates fundamental challenges with using exponential-time algorithms on problems of any non-trivial size.

$\endgroup$
1
  • $\begingroup$ Is there a reference for the physical models that you mentioned? $\endgroup$
    – Josh
    Commented 4 hours ago
8
$\begingroup$

An algorithm or implementation is “inefficient” if it takes longer than necessary. Some problems are simply hard: there are no fast algorithms because the problem itself is so hard. An algorithm that takes $2^n$ steps is slow, yet it may be the most efficient algorithm available due to the difficulty of the problem. But another algorithm taking $n!$ steps takes much longer than needed and is therefore inefficient.

The word you are looking for is “infeasible”. For reasons of physics (the minimum amount of energy needed for any state change due to quantum physics, and the total energy in the universe), it is physically impossible to have any computation that takes more than $2^{256}$ steps. You get a much smaller number if you calculate the maximum number of steps that can be performed using all the resources of Earth in the lifetime of my grandchildren. If an algorithm cannot run within these limits, it is infeasible.
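
To put a number on that "much smaller number", here is a hedged back-of-envelope in Python. The Landauer limit is standard physics; the 100-year horizon and the choice of "all sunlight hitting Earth" as the energy budget are assumptions made purely for illustration:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23                    # Boltzmann constant, J/K
T = 300.0                             # assume room-temperature computing, K
landauer_J = k_B * T * math.log(2)    # ~2.9e-21 J per irreversible bit operation

# Assumed energy budget: all sunlight intercepted by Earth for 100 years.
solar_power_W = 1.7e17                # approximate sunlight hitting Earth, W
seconds = 100 * 365.25 * 24 * 3600    # a 100-year horizon, s

max_ops = (solar_power_W * seconds) / landauer_J   # ~2e47 operations
print(f"roughly 2^{math.log2(max_ops):.0f} bit operations")  # ~2^157
```

With those (generous) assumptions you get on the order of $2^{157}$ operations, well below the $2^{256}$ ceiling mentioned above, which is the point being made here.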

Aliens should be quite capable of translating the number of operations into required resources, and their limits for feasibility will not be much higher than ours.

$\endgroup$
5
$\begingroup$

One of the key ideas of algorithm analysis is to be able to assess how the running time changes as the size of the input changes.

Consider a list of $n$ elements.

  1. Saying an algorithm runs in linear time means that, when you add one more element to the list, the running time will increase by, say, 1 ms. If you add yet another element, the running time will again increase by 1 ms. If you double the number of elements, it will take twice as much time. If you triple the number of elements, it will take three times as much time. And so on.

  2. Saying an algorithm runs in exponential time (let's say $2^n$) means that, when you add one more element to the list, the running time will double. If you add yet another element, the running time will quadruple relative to the initial list. If you double the length of the list, there isn't even a way to succinctly say what happens. (See the sketch right after this list.)
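
A minimal sketch in Python of the two items above, using the 1 ms per unit of work from item 1 purely for concreteness:

```python
# Compare how the running time grows with n for the two cases above,
# assuming (only for illustration) 1 ms per unit of work.
UNIT_MS = 1.0

for n in (10, 20, 30, 40, 50, 60):
    linear = n * UNIT_MS          # item 1: +1 ms per extra element
    expo = (2 ** n) * UNIT_MS     # item 2: doubles with every extra element
    print(f"n={n:2d}   linear: {linear:5.0f} ms   exponential: {expo / 1000:.3g} s")

# At n = 60 the exponential column already corresponds to roughly
# 36 million years of 1 ms steps, while the linear column is 60 ms.
```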

To understand why this is "inefficient" you should reflect on two things:

  • there is not much room for doubling: our current understanding of physics has the "Planck time" ($5.39 \times 10^{-44}$ seconds) as the smallest meaningful unit of time; our current understanding of the Universe places its age at 13.787 billion years. If you start from one Planck unit of time and keep doubling it, you will reach the age of the Universe after only about 200 doublings (the sketch after this list checks this count). From the smallest to the largest unit of time, you can only double a couple hundred times. And keep in mind that computers' smallest unit of time is much, much larger than a Planck unit, and the timespan in which we want to run algorithms and get results is much, much smaller than the age of the Universe.

  • we usually deal with tens, hundreds, or thousands of things, and often many more: we apply algorithms to solve real-world problems. Even toy examples used for teaching usually involve tens or hundreds of objects. In the real world, our lists might be a company's clients or the books of a library; our graphs may model city maps or pages on the web. Millions of objects or more.
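
A quick check of the "about 200 doublings" figure, using only the two constants quoted in the first bullet:

```python
import math

planck_time_s = 5.39e-44                        # seconds
age_universe_s = 13.787e9 * 365.25 * 24 * 3600  # ~4.35e17 seconds

doublings = math.log2(age_universe_s / planck_time_s)
print(f"about {doublings:.0f} doublings")       # ~202
```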

Combining these two observations, it follows that, while you can run your exponential-time algorithm on an input of $n=2$ elements, there is not much room for that $n$ to grow; and chances are that interesting real-world instances will have $n$ in the hundreds, thousands, or millions.

$\endgroup$
0
$\begingroup$

As far as references go, you're looking for big-O notation. It groups the "efficiency" of algorithms into classes like $O(\log n)$, $O(n^2)$, $O(n^5)$ and $O(2^n)$ (that last one being exponential time). Each class is worse than the one before it, and exponential is way, way out there. The crazy thing is there's no limit to how much worse. $O(n^2)$ will be 10 times slower than $O(n \log n)$ for a smallish input, then 100 times slower for a larger input, up to a million times slower and so on as the input size increases. The full explanation is a bit tricky, but "big-O notation" is the search term you were asking for. It's covered in many second-year algorithms courses.
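
A small sketch of that widening gap (ignoring constant factors, as big-O itself does): the ratio between $n^2$ and $n \log n$ is $n / \log n$, which grows without bound.

```python
import math

# How many times more work n**2 is than n*log2(n), for growing n
# (constant factors ignored, purely illustrative).
for n in (100, 10_000, 1_000_000, 10 ** 9):
    ratio = n / math.log2(n)
    print(f"n = {n:>13,}   n^2 costs ~{ratio:,.0f}x as much as n log n")
```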

As for the alien, suppose we find out what's different in their universe. Maybe their PCs can spawn an infinite number of pocket dimensions, each of which can check a different possible result to see which one is correct. When we tell them that can't be done in this universe, they might say "wow, so I guess solving exponential problems is very inefficient for you?"

Now back to references. Computer science actually has that idea of gradually spawning infinite parallel copies, which "solves" many exponential problems. Brute-force chess is an example: merely create one pocket dimension for each legal move; each pocket dimension will naturally repeat that for the next move, and so on. There will be gazillions of them, but each only runs for a few hundred moves (which is nothing). This theoretical ability is formalized as nondeterminism, as in a nondeterministic Turing machine; the finite-state version of the same idea is the NFA (non-deterministic finite automaton). These machines are just thought experiments for use in certain proofs, but -- and this may not be current -- there was talk that a quantum computer might be able to simulate that kind of nondeterminism (using quantum states?) and thus make a class of exponential problems easily solvable in this universe.

$\endgroup$
