I'm having difficulty understanding the following passage from the CLRS book, where the counting sort algorithm is introduced:
> Counting sort assumes that each of the elements is an integer in the range $1$ to $k$, for some integer $k$. When $k = \mathcal{O}(n)$, the Counting-Sort runs in $\mathcal{O}(n)$ time.
Remark: $n$ is the length of the input array
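For context, here is my own rough Python sketch of the algorithm as I understand it (the names `counting_sort`, `a`, and `k` are mine, not CLRS's pseudocode); the comments mark where the $\mathcal{O}(n + k)$ cost comes from, which is $\mathcal{O}(n)$ precisely when $k = \mathcal{O}(n)$:

```python
def counting_sort(a, k):
    """Sort a list of integers, each in the range 1..k (my own sketch)."""
    count = [0] * (k + 1)          # O(k): one counter per possible value
    for x in a:                    # O(n): tally occurrences of each value
        count[x] += 1
    out = []
    for value in range(1, k + 1):  # O(n + k): emit each value count[value] times
        out.extend([value] * count[value])
    return out
```

For example, `counting_sort([3, 1, 2, 3, 1], 3)` returns `[1, 1, 2, 3, 3]`.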
I don't understand the mathematical background of the requirement $k = \mathcal{O}(n)$. If I'm not mistaken, the claim $k = \mathcal{O}(n)$ always holds, because with $C := k$ we have $k \leq C \cdot n$ for all $n \geq 1$. But the fact that the quoted paragraph stresses this requirement makes me think that I'm overlooking something.
I'd be thankful if you could help me with this.