
I'm having difficulty understanding the following passage from the CLRS book, where the Counting Sort algorithm is introduced:

Counting sort assumes that each of the elements is an integer in the range $1$ to $k$, for some integer $k$. When $k = \mathcal{O}(n)$, the Counting-Sort runs in $\mathcal{O}(n)$ time.

Remark: $n$ is the length of the input array

I don't understand the mathematical background of the requirement $k = \mathcal{O}(n)$. If I'm not mistaken, the claim $k = \mathcal{O}(n)$ always holds, because setting $C := k$ gives $k \leq C \cdot n = k \cdot n$. But the fact that the quoted paragraph stresses this requirement makes me think that I am overlooking something.

I'd be thankful if you could help me with this.

$\endgroup$
    Why assume $k$ is a constant? Counting sort in general runs in $O(n+k)$ time, so it can be slow and memory-intensive if $k$ is very large, and it will not run in $O(n)$ time if $k$ grows faster than $n$.
    – Henry
    Commented Feb 1, 2018 at 21:33
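To see where the $O(n+k)$ bound in the comment comes from, here is a minimal counting sort sketch (my own illustration in Python, not CLRS's pseudocode): one pass over the $n$ input elements, plus one pass over the $k$ possible values.

```python
def counting_sort(a, k):
    """Sort a list of integers in the range 1..k."""
    counts = [0] * (k + 1)      # counts[v] = number of occurrences of v
    for x in a:                 # O(n): tally each element
        counts[x] += 1
    out = []
    for v in range(1, k + 1):   # O(k): emit each value count[v] times
        out.extend([v] * counts[v])
    return out
```

The `counts` array has $k+1$ entries, so both the time and the extra memory grow with $k$; only when $k = O(n)$ does the total $O(n+k)$ collapse to $O(n)$.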

1 Answer


If we have an array whose maximum element is $k = n^2$ (say $a(i) = i^2$ for $i = 1, \dots, n$), then $k$ is not $O(n)$. The point is that the constant $C$ in the definition of $\mathcal{O}(n)$ must be independent of $n$; choosing $C := k$ is not valid when $k$ grows with the input size.

