In the paper "PRIMES is in P", the following is said on page 1:
Let PRIMES denote the set of all prime numbers. The definition of prime numbers already gives a way of determining if a number $n$ is in PRIMES: try dividing $n$ by every number $m \leq \sqrt{n}$ — if any $m$ divides $n$ then it is composite, otherwise it is prime. This test was known since the time of the ancient Greeks — it is a specialization of the Sieve of Eratosthenes (ca. 240 BC) that generates all primes less than $n$. The test, however, is inefficient: it takes $\Omega(\sqrt{n})$ steps to determine if $n$ is prime. An efficient test should need only a polynomial (in the size of the input = $\lceil \log n \rceil$) number of steps.
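For concreteness, here is a minimal sketch of the trial-division test the quoted passage describes (the function name `is_prime` is my own, not from the paper):

```python
import math

def is_prime(n: int) -> bool:
    """Trial division: try dividing n by every m <= sqrt(n)."""
    if n < 2:
        return False
    for m in range(2, math.isqrt(n) + 1):
        if n % m == 0:
            return False  # some m divides n, so n is composite
    return True  # no m <= sqrt(n) divides n, so n is prime
```

The loop performs up to about $\sqrt{n}$ trial divisions, which is where the $\Omega(\sqrt{n})$ bound in the quote comes from.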
Firstly, isn't a complexity of $\sqrt{n}$ already polynomial?
Secondly, why would a number of steps polynomial in $\lceil \log n \rceil$ represent an efficient algorithm? This seems somewhat arbitrary to me.