$O(\lg n)$ in the worst case implies $O(\lg n)$ amortized, but the converse does not hold.
Basically, given two data structures supporting the same operation, one in $O(\lg n)$ worst-case time and the other in $O(\lg n)$ amortized time, the first is considered asymptotically superior: an $O(\lg n)$ worst-case bound guarantees that every single call to the operation runs within that time, whereas an $O(\lg n)$ amortized bound allows some (very few) calls to take up to $O(n)$ time, as long as the average cost over any sequence of calls stays in $O(\lg n)$. Usually, amortized analysis is used for data structures whose amortized complexity is better than their worst-case complexity.
As an illustration, consider a data structure for integers that stores each integer $x$ as a string of bits (e.g. $x=8$ represented by $(1,0,0,0)$), together with an operation $x.inc()$ that increments $x$ by $1$.
- In the worst case (e.g. on $x=7$, represented by $(1,1,1)$), the operation $x.inc()$ performs $\lg(x+1)+1$ bit writes (e.g. writing $(1,0,0,0)$, representing $8$, takes $4$ writes: the three trailing $1$s become $0$ and a leading $1$ is written).
- In the best case (e.g. on $x=8$), the operation $x.inc()$ performs exactly one bit write (e.g. changing the last bit of $(1,0,0,0)$ to $1$ gives $(1,0,0,1)$, representing $9$).
In a sequence of increments of the same integer object (e.g. enumerating all integers from $0$), the best case above occurs half of the time, while the worst case (writing all $\lg n + 1$ bits) occurs only when the counter reaches a power of two (i.e. on the $1$st, $2$nd, $4$th, $8$th, $16$th, ... increment). More generally, an increment writing $i$ bits occurs on a $1/2^i$ fraction of the calls. Summing these costs gives an amortized cost of $\sum_i i/2^i \in O(1)$. Hence the operation $inc()$ has a worst-case complexity of $O(\lg n)$ but an amortized complexity of $O(1)$.
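You can check this amortized bound empirically. Below is a minimal sketch (the class name `BinaryCounter` and the write-counting field are my own, for illustration): it implements the binary counter with `inc()` and counts individual bit writes; over $n$ increments the total never exceeds $2n$, i.e. amortized $O(1)$ per call.

```python
class BinaryCounter:
    """Integer stored as a list of bits, least-significant bit first.

    Every assignment to a bit position is counted as one "bit write",
    matching the cost model described above.
    """

    def __init__(self):
        self.bits = [0]   # least-significant bit first
        self.writes = 0   # total number of bit writes so far

    def inc(self):
        i = 0
        # Carry propagation: flip the trailing 1s to 0.
        while i < len(self.bits) and self.bits[i] == 1:
            self.bits[i] = 0
            self.writes += 1
            i += 1
        # Write the final 1 (growing the representation if needed).
        if i == len(self.bits):
            self.bits.append(1)
        else:
            self.bits[i] = 1
        self.writes += 1


c = BinaryCounter()
n = 1 << 10
for _ in range(n):
    c.inc()
print(c.writes)  # total bit writes over n increments stays below 2n
```

Intuitively, each increment writes exactly one $1$, and every $0$ it writes cancels a $1$ written by an earlier increment, so the total over $n$ increments is at most $2n$.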
The notion of amortized analysis is well explained on Wikipedia; you might want to see the page on the Potential method for more details.
Hope it helps!