Here are some graphs that you may find amusing:
As we have discussed, the pure linear model $n=\frac m{1-\tau m}$ can be improved by the corrected Ludlum-type model $n=\frac m{1-\tau m- F\tau^2 m^2}$ with $F=f-f^2/2$ (the optimal choice for the Taylor expansion; if you try to correct in the other direction, under the suggested physical model, you just move further away from the truth).
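To make "the optimal choice for Taylor" concrete, here is a quick sketch of the matching (my own check, using the expansion $E=1+n\tau+(f-f^2/2)(n\tau)^2+O\big((n\tau)^3\big)$ of the first semi-paralyzable model derived in the edit below). The Ludlum-type formula expands as

$$
n\tau=\frac{m\tau}{1-\tau m-F\tau^2 m^2}=m\tau+(m\tau)^2+(1+F)(m\tau)^3+O\big((m\tau)^4\big),
$$

while inverting the exact relation $m=n/E$ gives $n\tau=m\tau+(m\tau)^2+\big(1+f-f^2/2\big)(m\tau)^3+O\big((m\tau)^4\big)$, so matching the cubic terms forces exactly $F=f-f^2/2$.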
I dare to suggest a simple four-step iteration with the same parameter $F$ that starts at $m$ and ends at $n$, with much higher precision than both at the high end of the radiation range without sacrificing anything at the low end. The algorithm is as follows:
```
mu = tau*m;
mu /= 1 - mu;                  // linear (non-paralyzable) step
mu /= 1 - F*mu*mu;             // quadratic (Ludlum-type) correction
mu /= 1 - (F*F/2)*mu*mu*mu;    // two higher-order corrections
mu /= 1 - F*F*mu*mu*mu*mu;
n = mu/tau;
```
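For reference, here is the same iteration as a short, self-contained Python sketch (function names are mine). As a sanity check, it inverts the exact relation $m=n/\big(e^{fn\tau}(1+(1-f)n\tau)\big)$ derived for the first semi-paralyzable model in the edit below; the recovered rate agrees with the true one to a few parts in $10^4$ over the range shown in the pictures.

```python
import math

def invert_rate(m, tau, f):
    """Four-step iteration: recover the true rate n from the observed rate m."""
    F = f - f * f / 2            # same parameter as in the Ludlum-type correction
    mu = tau * m
    mu /= 1 - mu                 # linear (non-paralyzable) step
    mu /= 1 - F * mu**2          # quadratic (Ludlum-type) correction
    mu /= 1 - F**2 / 2 * mu**3   # two higher-order corrections
    mu /= 1 - F**2 * mu**4
    return mu / tau

def observed_rate(n, tau, f):
    """Exact observed rate m = n/E with E = e^{f n tau} (1 + (1-f) n tau)."""
    x = n * tau
    return n / (math.exp(f * x) * (1 + (1 - f) * x))
```

For example, at $f=0.05$, $\tau=1$ and true rate $n=2$ (so $m\approx 0.624$), the iteration returns $n$ to within about $10^{-4}$.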
The linear model for $f=0.05$ (your 5%) is depicted in orange, the Ludlum correction in green, and the suggested iterative computation in magenta. Blue is the actual dependence in the semi-paralyzable model; $n$ is horizontal, $m$ is vertical. I trace $n$ until the curve changes direction, after which point the counter becomes useless. The next three pictures are for other values of $f$: $f=0.2$, $f=0.5$ and $f=0.8$ respectively (just to convince you that the improvement is there uniformly over the whole range).

The code is easy to execute on any built-in arithmetic processor using only elementary arithmetic (I suspect that modern counters have more computing capabilities, but I did not count on that when designing this simple scheme). The scheme can be improved by making the coefficients more precise, but then you'd have fancy dependencies on the independent parameter $f$, and one misprint in the manual would ruin the whole idea. So I kept it as simple as possible.

You could also force the currently used Ludlum correction to bend the green line more, but then it starts to deviate noticeably from the blue line in the middle range before it gets anywhere near the magenta line in the high range, and the parameter loses any clear physical meaning. So I still believe that my approach is advantageous. Of course, it is only as good as the physical model against which I compare everything. Let me know what you think. :-)
$f=0.05$
![f=0.05](https://cdn.statically.io/img/i.sstatic.net/YIWmv.png)
$f=0.2$
![f=0.2](https://cdn.statically.io/img/i.sstatic.net/7ghlP.png)
$f=0.5$
![f=0.5](https://cdn.statically.io/img/i.sstatic.net/qQunP.png)
$f=0.8$
![f=0.8](https://cdn.statically.io/img/i.sstatic.net/rrqBn.png)
**Edit:** the derivation of the relation between $m$ and $n$ for the different models.
It is always pretty much the same: one has to figure out how many events really occur on average between the moment the detector gets paralyzed in the active state and the moment it wakes up. If that quantity is $E$, then the relation is $m=n/E$. Below I assume that $\tau=1$ (it is just a time scaling; when scaling back, the relation I obtain between $n$ and $m$ will be just the relation between $n\tau$ and $m\tau$). I assume the Poisson model of intensity $n$ for the event process.
- Non-paralyzable model (recovery time 1)
Here $E$ is just the initial event that paralyzed the detector plus the expected number of events during the sleep time, i.e., $n$. Thus, we get our $m=\frac n{1+n}$ (before scaling back) and $\tau m=\frac{\tau n}{1+\tau n}$, i.e., $m=\frac{n}{1+\tau n}$ after scaling back.
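This relation is easy to confirm by a quick Monte Carlo check (a sketch with my own names and parameters): feed a Poisson stream into a non-paralyzable detector and count only the events arriving after the dead time has expired.

```python
import random

def simulate_nonparalyzable(n, tau, total_time, seed=1):
    """Observed rate of a non-paralyzable detector hit by a Poisson stream of rate n."""
    rng = random.Random(seed)
    t, wake, detected = 0.0, 0.0, 0
    while True:
        t += rng.expovariate(n)      # next Poisson event
        if t > total_time:
            break
        if t >= wake:                # detector is live: count it, go dead for tau
            detected += 1
            wake = t + tau
    return detected / total_time     # events arriving while dead are simply lost
```

For $n=2$, $\tau=1$ the simulated rate hovers around $\frac{n}{1+\tau n}=\frac23$.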
- Fully paralyzable model (every event during the sleep time resets the clock)
Here we have an equation
$$
E=1+(1-e^{-n})E
$$
(the initial event, plus the probability that the clock is reset times the very same expectation: the whole recovery process just starts over).
Hence $E=e^n$ and the relation is $m=ne^{-n}$ (before scaling back) and $m=ne^{-n\tau}$ after scaling back.
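The same kind of Monte Carlo check works here (again a sketch with my own names): in the fully paralyzable model an event is recorded exactly when no event occurred during the preceding dead time $\tau$.

```python
import random

def simulate_paralyzable(n, tau, total_time, seed=1):
    """Observed rate of a fully paralyzable detector hit by a Poisson stream of rate n."""
    rng = random.Random(seed)
    t, prev, detected = 0.0, float('-inf'), 0
    while True:
        t += rng.expovariate(n)
        if t > total_time:
            break
        if t - prev >= tau:          # quiet for a full dead time: event is recorded
            detected += 1
        prev = t                     # every event, recorded or not, resets the clock
    return detected / total_time
```

For $n=2$, $\tau=1$ this should hover around $n e^{-n\tau}=2e^{-2}\approx 0.2707$.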
Now the more interesting case of semi-paralyzable models. The answer here depends on how you model that partial paralyzation.
- Let's start with the model where an event goes undetected and does not reset the clock if it occurs before time $1-f$, and goes undetected but resets the clock if it occurs between times $1-f$ and $1$ (assuming the initial event that paralyzed the detector occurred at time $0$).
Then we have
$$
E=1+n(1-f)+(1-e^{-nf})E
$$
(the initial event plus the expected number of events during the "safe" sleeping time from $0$ to $1-f$ plus the probability that the clock is reset during the "dangerous" sleeping time times the same expectation).
From here we get $e^{-nf}E=1+n(1-f)$, so $E=e^{nf}(1+n(1-f))$ and, after scaling back, we recover the formula in the OP, though not the model described in the OP.
- Now let's turn to the model in the OP, which is that during the sleep time each event has probability $f$ of resetting the clock, independently of the other events. Due to the basic properties of the Poisson process (random selection just splits the process into two independent ones with the corresponding intensities), we can reformulate this model as one with two sorts of events: "killer" events that have intensity $fn$ and always reset the clock, and "safe" events that have intensity $\bar fn=(1-f)n$ and don't reset the clock (both do paralyze an active detector, however).
The equation then becomes
$$
E=1+e^{-fn}\bar f n+ \int_0^1(\bar f n t)d(1-e^{-fnt})+(1-e^{-fn})E
$$
(the initial event, plus the probability that the killer does not arrive times the expected number of safe events during the sleep time $1$, plus the expected number of safe events by the time of the killer's arrival, plus the probability that the killer arrives times the same expectation). After putting all the $E$'s on the LHS, integrating by parts, and clearing up the resulting mess, one arrives at
$$
e^{-nf}E=1+\bar f\frac{1-e^{-nf}}f=e^{-nf}+(1+\tfrac{\bar f}f)(1-e^{-nf})
=e^{-nf}(1+\frac{e^{nf}-1}f)
$$
and
$$
E=1+\frac{e^{nf}-1}f,\qquad m=\frac{n}{1+\frac{e^{nf}-1}f}
$$
before scaling back and
$$
m=\frac{n}{1+\frac{e^{nf\tau}-1}f}
$$
after scaling back, which is different from the formula in the original post but agrees with the model described there.
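This last relation can be checked by direct simulation of the model just described (a sketch with my own names: while the detector sleeps, each arriving event independently resets the clock with probability $f$).

```python
import random

def simulate_semiparalyzable(n, tau, f, total_time, seed=1):
    """Observed rate when each event during the dead time resets the clock with probability f."""
    rng = random.Random(seed)
    t, wake, detected = 0.0, 0.0, 0
    while True:
        t += rng.expovariate(n)
        if t > total_time:
            break
        if t >= wake:                # live detector: record the event, start dead time
            detected += 1
            wake = t + tau
        elif rng.random() < f:       # sleeping: a "killer" event restarts the dead time
            wake = t + tau
    return detected / total_time
```

For $n=\tau=1$, $f=0.05$ the formula gives $m=\frac{1}{1+(e^{0.05}-1)/0.05}\approx 0.4937$, and the simulation agrees within statistical noise; setting $f=0$ or $f=1$ recovers the non-paralyzable and fully paralyzable models respectively.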
Fortunately, my four-step iteration is equally efficient for both semi-paralyzable models; for the second one (the model actually described in the OP) the meaning of the parameter $F$ is $f/2$ rather than $f-f^2/2$. However, I don't think $f$ can be determined with high precision directly, so one would have to calibrate the parameters anyway, in which case the best fit with my magenta curve still beats the best fit with the Ludlum-type green curve (below is the "best fit" for $f=0.05$ and the OP equation for green and magenta in the "high range") :-)
![best fits](https://cdn.statically.io/img/i.sstatic.net/xZb7w.png)