
The overarching theme of computer science seems to be that a polynomial bound on time (or space, or some other resource) counts as success for an algorithm, while an exponential one counts as failure. The definitions of P and NP revolve around this divide, and it seems to hold in online mistake-bound learning (OMBL) as well.
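To make the contrast concrete, here is a minimal sketch (Python, purely illustrative; the specific bounds n^3 and 2^n are my own example choices) of how quickly the two regimes diverge:

```python
# Illustrative only: compare a typical polynomial step count with a
# typical exponential one at modest input sizes.
for n in (10, 50, 100):
    poly = n ** 3   # polynomial growth
    expo = 2 ** n   # exponential growth
    print(f"n={n:3d}  n^3={poly:<12,d}  2^n={expo:,d}")
```

At n = 100 the polynomial count is about a million steps, while 2^100 is roughly 10^30, far beyond any feasible computation, which is the usual informal argument for drawing the line where it is.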

On the other hand, I have heard from a respected machine-learning expert that in big-data processing, O(n) algorithms (such as k-gram counting) are really the only ones that work, because n is enormous for modern industrial corpora in tech, finance, etc.
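For instance, here is a minimal sketch (Python; `count_kgrams` and the character-level formulation are my own assumptions, not something from a particular library) of why k-gram counting is a single linear pass: each of the n − k + 1 positions is visited once, so for fixed small k the total cost is O(n).

```python
from collections import Counter

def count_kgrams(text: str, k: int) -> Counter:
    """Count character-level k-grams in one pass over text.

    Each position contributes exactly one k-gram, so for fixed k
    the running time is linear in len(text).
    """
    counts = Counter()
    for i in range(len(text) - k + 1):
        counts[text[i:i + k]] += 1
    return counts

# Example: trigrams of a short string.
print(count_kgrams("big data", 3))
```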

Are there fundamental theoretical reasons for the divide between polynomial and exponential, or is it motivated entirely by what can run on a real computer in a reasonable amount of time?
