$\begingroup$

This is kind of a real-world question, in that it comes from the work I do, but I'm just pursuing it for my own edification.

When a radiation detector detects an event, it is insensitive to further events for a certain amount of time until it has recovered: the deadtime. The two standard models are paralyzable and non-paralyzable. If the detector is paralyzable, a new event arriving within the deadtime period resets the clock, so to speak, extending the period of insensitivity. That is modeled as $$m=n e^{-n \tau}$$ where $m$ is the measured count rate, $n$ the true count rate, and $\tau$ the deadtime. For a Geiger-Müller counter $\tau$ is around $100\ \mu s$; for a scintillator, around $10\ \mu s$. So, given a true count rate, the model predicts a measured count rate. The non-paralyzable model assumes that new events during the deadtime are simply lost, and is modeled as $$n = \frac{m}{1-m \tau}$$ That one, at least, is easy to express in terms of either $m$ or $n$.

No detector is really one or the other, and a better model is semi-paralyzable, with a $\tau_P$ for the paralyzable and a $\tau_N$ for the non-paralyzable deadtime, or equivalently a single $\tau$ and a fraction of paralysis $f$ (for a GM counter $f$ is around 5%): $$m = \frac{n e^{-n \tau f}}{1 + n \tau (1-f)}$$ This is just the two models above put together. Again, the true (but unknown) count rate is the input.

Meanwhile, Ludlum Measurements, having sensible engineers, includes in some of their instruments a deadtime correction that estimates the true count rate $n$ from the measured count rate $m$ by adding a quadratic term to the denominator of the non-paralyzable model and fitting two deadtimes, $\tau_1$ and $\tau_2$: $$n = \frac{m}{1 - m \tau_1 + m^2\tau_2}$$

What I would like to do is use the improved deadtime model to find an estimate of the true count rate, so that $n$ is on the left-hand side and $m$ is the input, and then compare the paralysis time $\tau f$ to Ludlum's deadtime 2, $\tau_2$, which, since it multiplies $m^2$, actually has to be a time squared.
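For concreteness, here is how the three forward models and the Ludlum-style correction evaluate numerically (a minimal Python sketch; the parameter values are just the illustrative GM-counter figures from above):

```python
import math

def m_paralyzable(n, tau):
    """Measured rate for a fully paralyzable detector: m = n e^{-n tau}."""
    return n * math.exp(-n * tau)

def m_nonparalyzable(n, tau):
    """Measured rate for a non-paralyzable detector: m = n / (1 + n tau)."""
    return n / (1 + n * tau)

def m_semiparalyzable(n, tau, f):
    """Hybrid model: a fraction f of the deadtime behaves paralyzably."""
    return n * math.exp(-n * tau * f) / (1 + n * tau * (1 - f))

def n_ludlum(m, tau1, tau2):
    """Ludlum-style correction: estimated true rate from the measured rate."""
    return m / (1 - m * tau1 + m * m * tau2)

# GM-counter ballpark: tau ~ 100 us, f ~ 5%, at a true rate of 1000 cps
n, tau, f = 1000.0, 100e-6, 0.05
print(m_paralyzable(n, tau))       # ~ 905 cps
print(m_nonparalyzable(n, tau))    # ~ 909 cps
print(m_semiparalyzable(n, tau, f))
# With tau2 = 0 the correction exactly inverts the non-paralyzable model:
print(n_ludlum(m_nonparalyzable(n, tau), tau, 0.0))   # ~ 1000 cps
```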

I thought at first, it's just some algebra and a few terms of Taylor expansion, how hard can it be? But all I seem to get is a mess, and I don't know how to compare the semi-paralyzable model with Ludlum's.

Edit: I brute-forced it with a spreadsheet. This time $m$, the measured rate, is on the horizontal axis and $n$, the estimated true count rate, on the vertical. It isn't pretty; I'm not very good with that software. Blue squares are a linear response for comparison, green triangles the hybrid model (which I take to be essentially correct), orange diamonds the standard non-paralyzable model, and yellow triangles Ludlum's correction in the Model 3000. I used an 80-microsecond deadtime and a 5% paralysis fraction (thinking of a 44-9 probe), and chose a deadtime 2 that illustrates the trend, although at 2e-9 it's pretty small. The takeaway is that Ludlum's correction pulls you down below the true count rate, even below the imperfect non-paralyzable model.

[plot: comparison of the four responses]

$\endgroup$
  • $\begingroup$ Not sure what you are asking. Do you want to find a $\tau_2$ that will give an approximate inversion of semi-paralyzable formula? $\endgroup$
    – user619894
    Commented Dec 5, 2022 at 9:19
  • $\begingroup$ From what I wrote in the end of my answer, I suspect that Ludlum is actually with $-m^2\tau_2$ in the denominator, but, unfortunately, my attempt to google the equation resulted in "Shop for ...". If it is, indeed, with $-$, it would make more sense to call it a deadtime (squared) $\endgroup$
    – fedja
    Commented Dec 6, 2022 at 4:04
  • $\begingroup$ That formula is taken from the Model 3000 manual on Ludlum's web site, and the plus sign is correct. If the standard non-paralyzable model is rearranged it gives m=n/(1+n*tau), which goes to 1/tau as n goes to infinity. That's what is happening in the n= version when the denominator goes to zero. In the real world the needle on the meter can actually dip back down as the exposure gets very high, because it's all deadtime, giving the dangerous impression of a low field. So two m's for every n. $\endgroup$
    – Greg
    Commented Dec 6, 2022 at 17:52
  • $\begingroup$ @Greg Then the models cannot be put into a decent agreement without negative parameters. I wonder whether it means that the semiparalyzable model is not reflecting reality, or whether the Ludlum engineers are less sensible than you said, or both :lol: (at least, I see no error in my algebra) $\endgroup$
    – fedja
    Commented Dec 6, 2022 at 17:57
  • $\begingroup$ I'm still sitting here with pen and paper, working through what you just did. Looks good to me so far. The approximations help. $\endgroup$
    – Greg
    Commented Dec 6, 2022 at 18:05

2 Answers

$\begingroup$

I presume that the question is

How should $(\tau,f)$ and $(\tau_1,\tau_2)$ be related so that the two relations agree for small $m$ to the highest possible power of $m$? Here $m$ is assumed to be reasonably small (so you don't drop your counter into a nuclear reactor or onto the Sun).

Then it is, indeed, Taylor, but the difficulty is that the first equation readily expands in $n$ while the second one expands in $m$.

So, from the second one we have $$n=m(1+\tau_1 m+(\tau_1^2-\tau_2) m^2+\dots)=m(1+\tau_1 m+Tm^2+\dots)$$

Meanwhile, the first one reads $$ m=n(1-n\tau f+\tfrac 12 n^2\tau^2f^2+\dots)(1-n\tau(1-f)+n^2\tau^2(1-f)^2+\dots) \\ =n(1-\tau n+\tau^2[\tfrac 12 f^2+f(1-f)+(1-f)^2] n^2+\dots) \\ =n(1-\tau n+\tau^2 Fn^2+\dots) $$

Hence we must have $$ (1+\tau_1 m+Tm^2)(1-\tau n+\tau^2 F n^2+\dots)=1\,. $$

Thus (since to first order $m=n$) $\tau=\tau_1$, which is not a big surprise since both give the linear correction. The second order is, indeed, messier: $$ Tm^2+\tau^2 Fn^2-\tau^2 mn+\tau(m-n)=0\,. $$ Since $m^2\approx mn\approx n^2$ and $m-n\approx -\tau n^2$, this yields the relation $$ T+\tau^2(F-2)=0\,, $$ i.e., $$ -\tau_2+\tau^2(\tfrac 12f^2-f)=0\,, $$ so to make the formulae agree to second order, you have to use a negative $\tau_2$, and, since $0\le f\le 1$, a backward agreement is possible only if $\tau_2\in[-\tau^2/2,0]$.

I hope I haven't made a stupid algebraic mistake anywhere.
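A quick numerical sanity check of the matching (a sketch: with $\tau_1=\tau$ and $\tau_2=\tau^2(\tfrac 12 f^2-f)$ as derived above, the Ludlum-type inversion should track the semi-paralyzable relation to one order better than the purely linear inversion with $\tau_2=0$):

```python
import math

def m_hybrid(n, tau, f):
    """The semi-paralyzable relation from the question."""
    return n * math.exp(-n * tau * f) / (1 + n * tau * (1 - f))

def n_ludlum(m, tau1, tau2):
    return m / (1 - m * tau1 + m * m * tau2)

tau, f = 100e-6, 0.05
tau2 = tau**2 * (f**2 / 2 - f)      # the negative tau_2 derived above

n_true = 100.0                      # n*tau = 0.01: well inside the small-m regime
m = m_hybrid(n_true, tau, f)

err_matched = abs(n_ludlum(m, tau, tau2) - n_true) / n_true
err_linear = abs(n_ludlum(m, tau, 0.0) - n_true) / n_true
print(err_matched, err_linear)      # matched error is ~100x smaller
```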

$\endgroup$
  • $\begingroup$ What I had tried to do is render both the Ludlum equation and the hybrid model as n=m(1+m*tau)*(corrections). But the hybrid model is written as it's written, and I couldn't get anything to match up. $\endgroup$
    – Greg
    Commented Dec 6, 2022 at 18:11
  • $\begingroup$ @Greg Nah, that gives the first order only (i.e., $\tau=\tau_1$). You need to keep the quadratic part too to agree $f$ and $\tau_2$. $\endgroup$
    – fedja
    Commented Dec 6, 2022 at 18:35
  • $\begingroup$ Well, $\tau = \tau_1$, ignoring the possibility that when a $\tau_2$ is put in there the other will have to be adjusted to make a best fit. And if I just solve for $\tau_2$ I get a complicated expression with m and n, and not ratios that can drop out in the approximation. You probably did as much as could be done. And I haven't seen any indication -- and I have looked for it in the literature I could find -- that the physicists ever intended to turn their hybrid model into a working correction factor for a survey meter in the field. Something about theory versus practice. I suppose. $\endgroup$
    – Greg
    Commented Dec 6, 2022 at 18:51
  • $\begingroup$ @Greg OK, I'll try to do the Ludlum-type inversion with minus sign and show you the graphs for some $f$. My point is that under the semi-paralyzable model it just never makes sense to correct the fractional linear inversion down as Ludlum does and may make a lot of sense to correct it up. Wait until I report to you in the evening. Now I have some other fish to fry :) $\endgroup$
    – fedja
    Commented Dec 6, 2022 at 19:16
  • $\begingroup$ To be honest, I knew what it looks like in the real world, and the formula was supposed to make an improved correction, so I didn't try very hard to think it through? But now that I'm thinking, I wonder if it's a typo. I don't routinely have access to a Model 3000, but next time I get my hands on one maybe I'll put it on the pulser and play around with that parameter, and see if the measured pulse rate goes up or down. $\endgroup$
    – Greg
    Commented Dec 6, 2022 at 21:04
$\begingroup$

Here are some graphs that you may find funny:

As we have discussed, the pure linear model $n=\frac m{1-\tau m}$ can be improved to the corrected Ludlum-type model $n=\frac m{1-\tau m- F\tau^2 m^2}$ with $F=f-f^2/2$ (the optimal choice per the Taylor matching; if you try to correct in the other direction, then, under the suggested physical model, you just move further away from the truth).

I dare to suggest a simple four-step iteration with the same parameter $F$ that starts at $m$ and ends at $n$ with much higher precision than either of the above at the high end of the radiation range, without sacrificing anything at the low end. The algorithm is as follows:

mu=tau*m;
mu/=1-mu;
mu/=1-F*mu^2;
mu/=1-F^2/2*mu^3;
mu/=1-F^2*mu^4;  
n=mu/tau;
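Transcribed into Python (a sketch; `^` in the pseudocode above is exponentiation, and $F=f-f^2/2$ as for the corrected Ludlum-type model), together with a round-trip check against the forward semi-paralyzable model:

```python
import math

def n_iterated(m, tau, f):
    """The four-step iteration above: start from the measured rate m,
    end with an estimate of the true rate n."""
    F = f - f**2 / 2
    mu = tau * m
    mu /= 1 - mu
    mu /= 1 - F * mu**2
    mu /= 1 - (F**2 / 2) * mu**3
    mu /= 1 - F**2 * mu**4
    return mu / tau

def m_hybrid(n, tau, f):
    """Forward semi-paralyzable model, for checking the inversion."""
    return n * math.exp(-n * tau * f) / (1 + n * tau * (1 - f))

# Round-trip check at a fairly high rate: n * tau = 1
tau, f = 100e-6, 0.05
n_true = 10000.0
m = m_hybrid(n_true, tau, f)
print(n_iterated(m, tau, f))   # close to 10000
```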

The linear model for $f=0.05$ (your 5%) is depicted in orange, the Ludlum correction in green, and the suggested iterative computation in magenta. Blue is the actual dependence in the semi-paralyzable model. $n$ is horizontal, $m$ is vertical. I trace $n$ until the curve changes direction, after which point the counter becomes useless. The next three pictures are for other values of $f$: $f=0.2$, $f=0.5$ and $f=0.8$, respectively (just to convince you that the improvement is there uniformly over the whole range).

The code is easy to execute on any built-in arithmetic processor with elementary arithmetic (I suspect that modern counters have more computing capabilities, but I did not count on that when designing this simple scheme). The scheme can be improved by making the coefficients more precise, but then you'll have fancy dependencies on the independent parameter $f$, and one misprint in the manual will ruin the whole idea. So I kept it as simple as possible.

You can also force the currently used Ludlum correction to bend the green line more, but it starts to noticeably deviate from the blue line in the middle range before it gets anywhere near the magenta line in the high range, and the parameter loses any clear physical meaning. So I still believe that my approach is advantageous. Of course, it is only as good as the physical model with which I compare everything. Let me know what you think. :-)

$f=0.05$

$f=0.2$

$f=0.5$

$f=0.8$

Edit: the derivation of the relation between $m$ and $n$ for the different models.

It is always pretty much the same: one has to figure out how many events really occur on average between the moment the detector gets paralyzed in the active state and the moment it wakes up. If that quantity is $E$, then the relation is $m=n/E$. Below I assume that $\tau=1$ (it is just a time scaling; when scaling back, the relation I obtain between $n$ and $m$ will be just the relation between $n\tau$ and $m\tau$). I assume the Poisson model of intensity $n$ for the event process.

  1. Non-paralyzable model (recovery time 1)

Here $E$ is just the initial event that paralyzed the detector plus the expected number of events during the sleep time, i.e., $n$. Thus, we get our $m=\frac n{1+n}$ (before scaling back) and $\tau m=\frac{\tau n}{1+\tau n}$, i.e., $m=\frac{n}{1+\tau n}$ after scaling back.
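This is easy to confirm by simulation (a Monte Carlo sketch, assuming Poisson arrivals: an event is recorded only if at least $\tau$ has passed since the last *recorded* event):

```python
import random

def simulate_nonparalyzable(n, tau, t_end, seed=1):
    """Count recorded events for a non-paralyzable detector watching a
    Poisson process of intensity n for t_end seconds."""
    rng = random.Random(seed)
    t, last_recorded, counted = 0.0, -float("inf"), 0
    while True:
        t += rng.expovariate(n)          # next arrival
        if t > t_end:
            break
        if t - last_recorded >= tau:     # detector has recovered
            counted += 1
            last_recorded = t
    return counted / t_end               # measured rate m

n, tau = 1000.0, 100e-6
m_sim = simulate_nonparalyzable(n, tau, t_end=500.0)
print(m_sim, n / (1 + n * tau))          # both ~909 counts/s
```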

  2. Fully paralyzable model (every event during the sleep time resets the clock)

Here we have an equation $$ E=1+(1-e^{-n})E $$ (the initial event plus the probability that the clock is reset times the same very expectation: the whole recovery process just starts all over).

Hence $E=e^n$ and the relation is $m=ne^{-n}$ (before scaling back) and $m=ne^{-n\tau}$ after scaling back.
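A Monte Carlo sketch again confirms this: in the paralyzable model every arrival restarts the deadtime, so an arrival is recorded exactly when the previous arrival (recorded or not) is more than $\tau$ in the past:

```python
import math
import random

def simulate_paralyzable(n, tau, t_end, seed=2):
    """Count recorded events for a paralyzable detector watching a Poisson
    process of intensity n: an arrival is recorded iff the gap from the
    previous arrival exceeds tau."""
    rng = random.Random(seed)
    t, counted = 0.0, 0
    while True:
        gap = rng.expovariate(n)   # exponential interarrival time
        t += gap
        if t > t_end:
            break
        if gap >= tau:             # detector had time to recover
            counted += 1
    return counted / t_end

n, tau = 5000.0, 100e-6
m_sim = simulate_paralyzable(n, tau, t_end=200.0)
print(m_sim, n * math.exp(-n * tau))     # both ~3033 counts/s
```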

Now the more interesting case of semi-paralyzable models. The answer here depends on how you model that partial paralyzation.

  3. Let's start with the model where the event goes undetected and does not reset the clock if it occurs before time $1-f$ and goes undetected and resets the clock if it occurs between times $1-f$ and $1$ (assuming the initial event that paralyzed the detector occurred at time $0$).

Then we have $$ E=1+n(1-f)+(1-e^{-nf})E $$
(the initial event plus the expected number of events during the "safe" sleeping time from $0$ to $1-f$ plus the probability that the clock is reset during the "dangerous" sleeping time times the same expectation).

From here we get $e^{-nf}E=1+n(1-f)$, so $E=e^{nf}(1+n(1-f))$ and, after scaling back, we recover the formula in the OP, though not the model described in the OP.

  4. Now let's turn to the model in the OP, which is that during the sleep time each event has probability $f$ of resetting the clock, independently of the other events. Due to the basic properties of the Poisson distribution (random selection just splits the process into two independent ones with the corresponding intensities), we can reformulate this model as one having two sorts of events: "killer" events of intensity $fn$ that always reset the clock, and "safe" events of intensity $\bar fn=(1-f)n$ that don't reset the clock (both do paralyze an active detector, however).

The equation then becomes $$ E=1+e^{-fn}\bar f n+ \int_0^1(\bar f n t)\,d(1-e^{-fnt})+(1-e^{-fn})E $$ (the initial event, plus the probability that a killer does not arrive times the expected number of safe events during the sleep time $1$, plus the expected number of safe events by the time of the killer's arrival, plus the probability that a killer arrives times the same expectation). After putting all the $E$'s on the LHS, integrating by parts, and clearing the resulting mess, one arrives at $$ e^{-nf}E=1+\bar f\frac{1-e^{-nf}}f=e^{-nf}+(1+\tfrac{\bar f}f)(1-e^{-nf}) =e^{-nf}\Big(1+\frac{e^{nf}-1}f\Big) $$ and $$ E=1+\frac{e^{nf}-1}f,\qquad m=\frac{n}{1+\frac{e^{nf}-1}f} $$ before scaling back, and $$ m=\frac{n}{1+\frac{e^{nf\tau}-1}f} $$ after scaling back, which is different from the formula in the original post but agrees with the model described there.
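As a sanity check on this relation (a sketch): for $f\to 0$ the term $(e^{nf\tau}-1)/f$ tends to $n\tau$, recovering the non-paralyzable formula, and for $f=1$ the denominator becomes $e^{n\tau}$, recovering the fully paralyzable one:

```python
import math

def m_semipar(n, tau, f):
    """m = n / (1 + (e^{n f tau} - 1)/f), the relation derived above.
    math.expm1 keeps full precision when n*f*tau is tiny."""
    return n / (1 + math.expm1(n * f * tau) / f)

n, tau = 2000.0, 100e-6
# f -> 0 limit: non-paralyzable m = n / (1 + n tau)
print(m_semipar(n, tau, 1e-9), n / (1 + n * tau))
# f = 1: fully paralyzable m = n e^{-n tau}
print(m_semipar(n, tau, 1.0), n * math.exp(-n * tau))
```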

Fortunately, my 4-step iteration is equally efficient for 3) and 4); in case 3) the meaning of the parameter $F$ is just $f/2$ instead of $f-f^2/2$. However, I don't think $f$ can be determined with high precision directly, so one would have to calibrate the parameters anyway, in which case the best fit with my magenta curve beats the best fit with the Ludlum-type green curve (below is the "best fit" for $f=0.05$ and the OP equation, for green and magenta, in the "high range"):

best fits

$\endgroup$
  • $\begingroup$ Wow. Well, I'll tell you what I was thinking. I was thinking I'm going to brute-force it in a spreadsheet and make graphs a lot like what you did there, and try to understand on my own terms how the different approaches compare, how they diverge. Next time I get my hands on an M3000 I'm going to see if that's really a plus sign in the denominator. Depending on how that goes, I might give Bob Bray at Ludlum a call and tell him what I think. And I'll tell him that fedja helped me. $\endgroup$
    – Greg
    Commented Dec 7, 2022 at 17:28
  • $\begingroup$ @Greg Yeah, by all means do your own investigation. We can try to find some good and simple correcting procedure together once we agree upon what the exact requirements are and whether the physical model you suggested is close enough to the truth. It may be a fun project, so just keep me updated. And yeah, they are probably, using $\tau_2$ well above the one predicted by the Taylor expansion to take care of the "high end" where the real deviation is. I'm pretty sure that $\tau_1=\tau$ though because any deviation there is immediately felt at the low end. $\endgroup$
    – fedja
    Commented Dec 7, 2022 at 19:13
  • $\begingroup$ Unless... maybe it's not meant to increase the range. As a tube goes bad it can start multiple-counting and, for reasons I haven't figured out yet, seems to do that more strongly with higher exposure. So it goes non-linear, and you would WANT to pull the count rate down. $\endgroup$
    – Greg
    Commented Dec 7, 2022 at 21:11
  • $\begingroup$ @Greg Yep, it is a question of how good the model is. BTW, your description of the model doesn't match your formula unless I misunderstand something. I presume that the model was that we have a Poisson process of intensity $n$. Once in the active state, the detector counts the event and goes into a passive state with sleep clock set to $\tau$. If an event occurs at the sleep time, with probability $f$ the sleep clock is reset to $\tau$; otherwise nothing happens. This description with $\tau=1$ gives $m=\frac {nf}{f+e^{fn}-1}$, which is close to but not the same as $\frac{ne^{-fn}}{1+(1-f)n}$ $\endgroup$
    – fedja
    Commented Dec 7, 2022 at 22:49
  • $\begingroup$ @Greg Anyway, we have to figure out what is a decent physical description here. I wonder if you can provide some real data of $m$ versus $n$. $\endgroup$
    – fedja
    Commented Dec 7, 2022 at 22:53
