If I were asked in an interview (i.e. verbally rather than on paper), where I'd expect the focus to be on demonstrating a ready grasp of facts that give a quick approximate answer, I'd respond as follows:
Since the sample correlation has an asymptotic standard error of $1/\sqrt{n}$ when the population correlation is $0$ (and should be asymptotically normal), a correlation of $0.01$ corresponds to an approximate $Z$ value of about $0.1$ at $n=100$, about $\sqrt{1/10}\approx 0.32$ at $n=1000$, and $1$ at $n=10000$. Neither the distinction between this asymptotic $Z$ and regression's $t$-value, nor the accuracy of the asymptotic approximation at, say, $n=100$, will make enough difference to matter here.
If the correlation were twice as large it would be significant at $n=10000$; if it were a bit over six times as large it would be significant at $n=1000$; and it would need to be about $20$ times as large (i.e. about $0.2$) to be significant at $n=100$.
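A quick numeric check of these back-of-envelope figures (a sketch only; the cutoff $1.96$ is the usual two-sided 5% normal critical value, and the function name is mine):

```python
import math

# Under rho = 0 the sample correlation r is approximately N(0, 1/n),
# so the approximate Z statistic is r * sqrt(n).
def approx_z(r, n):
    return r * math.sqrt(n)

for n in (100, 1000, 10000):
    z = approx_z(0.01, n)
    # smallest |r| reaching the two-sided 5% cutoff of about 1.96
    r_needed = 1.96 / math.sqrt(n)
    print(f"n={n:>5}: Z for r=0.01 is {z:.3f}; need |r| >= {r_needed:.3f}")
```

At $n=100$ the required correlation comes out as $1.96/10 \approx 0.2$, matching the "about 20 times as large" figure above.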
Additional accuracy in that calculation is unimportant, and we don't need an approximation that works when the correlation isn't zero; we only need the sampling distribution at $\rho=0$, and really only the asymptotic $1/\sqrt{n}$ fact.
If I were solving it with pen and paper and had a few minutes to pull the details up out of what's left of my memory (or to try to derive them), I'd consider discussing the relationship of the correlation to the $t$-test in regression -- but it would have no impact on the conclusions.
I'd also point out that they mean "at the 5% level" not "at the 95% level" (politely, of course).
If asked to demonstrate the $1/\sqrt{n}$ fact, it's pretty straightforward: deriving $\operatorname{Var}(XY)$ for independent zero-mean random variables isn't too hard, and that's the main ingredient. (Things are a bit easier if you assume both variances are $1$, and since we're passing to a correlation the scale doesn't matter.)
You can argue the asymptotic distribution of the correlation coefficient by using Slutsky's theorem to focus on the numerator, which is an average, and then appeal to the CLT.
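A small simulation can confirm the $1/\sqrt{n}$ standard error at $\rho=0$ (a sketch using numpy; the sample sizes, seed, and replication count are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_sd(n, reps=2000):
    """Empirical SD of the sample correlation of two independent normal samples."""
    rs = np.empty(reps)
    for i in range(reps):
        x = rng.standard_normal(n)
        y = rng.standard_normal(n)
        rs[i] = np.corrcoef(x, y)[0, 1]
    return rs.std()

for n in (100, 1000):
    print(f"n={n}: empirical SD {corr_sd(n):.4f} vs 1/sqrt(n) = {1/np.sqrt(n):.4f}")
```

The empirical standard deviations land very close to $0.1$ and $0.032$ respectively, as the asymptotic argument predicts.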
Basic facts like the asymptotic standard error of a sample correlation when the population correlation is zero (which is what's often used to judge an autocorrelation or a partial autocorrelation, for example) are just the kind of thing I'd hope an aspiring statistician would have in their head. It's interesting how often you can tell what will be significant and what won't from a few simple facts.
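For instance, the same fact gives the familiar $\pm 1.96/\sqrt{n}$ reference band drawn on correlograms. A sketch with simulated white noise (the series length, seed, and helper function are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.standard_normal(n)   # white noise: true autocorrelations are all 0

def sample_acf(x, lag):
    """Lag-k sample autocorrelation (denominator uses the full-series variance)."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

band = 1.96 / np.sqrt(n)     # approximate two-sided 5% band under the null
for k in (1, 2, 3):
    r = sample_acf(x, k)
    flag = "outside" if abs(r) > band else "inside"
    print(f"lag {k}: r = {r:+.3f} ({flag} the +/-{band:.3f} band)")
```

For white noise, each sample autocorrelation should fall outside the band only about 5% of the time.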