I measure the probability of getting heads on a coin flip, $p$, from $n$ trials. As I understand it (source), the variance of this measurement is $\sigma^2 = np(1-p)$.
The part I'm struggling to understand is how this applies to a sample with 0% or 100% heads. In those cases, plugging $p = 0$ or $p = 1$ into the formula gives $\sigma^2 = 0$, which doesn't make sense as a representation of my confidence in the measurement.
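To make the collapse concrete, here is a quick sanity check in Python (the helper name `binom_var` is just my own label for the plug-in formula):

```python
def binom_var(n, p):
    # plug-in variance of the number of heads: n * p * (1 - p)
    return n * p * (1 - p)

# with an all-tails sample (0% heads) or an all-heads sample (100% heads),
# the plug-in estimate collapses to zero regardless of n
print(binom_var(10, 0.0))  # 0.0
print(binom_var(10, 1.0))  # 0.0
```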
One possibility that occurred to me is that in the derivation of this formula, $p$ is the population expected value rather than the sample proportion. For a fair coin ($p = 0.5$) that would give $\sigma^2 = 0.25n$. But if that were the case, how would one apply the formula when the expected value is unknown?
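A small simulation (my own sketch, not from any source) supports the fair-coin reading: the variance of the head count across repeated experiments of $n$ flips comes out close to $0.25n$.

```python
import random

random.seed(0)
n = 100           # flips per experiment
trials = 20000    # number of repeated experiments

# head count for each simulated experiment with a fair coin (p = 0.5)
counts = [sum(random.random() < 0.5 for _ in range(n)) for _ in range(trials)]

mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
print(var)        # should be close to 0.25 * n = 25
```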
Edit: As pointed out in this answer, this formula for the variance does make sense as a description of the spread of my sample. How, then, would I represent a level of confidence in my measurement?