
Just for some background context:

A digital signal has jitter present on it. In this situation there is only random jitter (RJ), which is unbounded and normally distributed. When the digital signal is sent to a receiver, there is a bit error rate (BER). Let's say the BER is 1E-12, meaning that, on average, 1 in every 1E12 bits is received in error.

I was told that when measuring bit errors, there is a difference between sampling for errors 10 times for 1 minute each, and once for 10 minutes. I have trouble understanding why this would be the case. My thinking is that a bit error occurs when the instantaneous jitter is large enough to move a data edge past the sampling instant, i.e. there is a transition in the middle of the eye opening.

I ran some MATLAB code to try to see this, but my results seem to agree with my intuition.

clc
clear

% compare the number of normally distributed (mu = 0, sigma = 1)
% random values with magnitude greater than 3 for two cases:
%
%   case 1: 100 runs of 1000000 samples
%   case 2: 1 run of 100000000 samples
%
%

test_count = 100;
sample_count = 1000000;


counter_1 = 0;
for test_num = 1:test_count
    samples = normrnd(0, 1, 1, sample_count);

    for ind = 1:length(samples)
        if abs(samples(ind)) > 3 
            counter_1 = counter_1 + 1; 
        end
    end
end

counter_2 = 0;
samples = normrnd(0, 1, 1, sample_count*test_count);

for ind = 1:length(samples)
    if abs(samples(ind)) > 3 
        counter_2 = counter_2 + 1; 
    end
end

disp("Case 1: " + sample_count + " samples " + test_count + " times: " + counter_1)
disp("Case 2: " + (sample_count * test_count) + " samples " + 1 + " times: " + counter_2)

The results are:

Case 1: 1000000 samples 100 times: 270540

Case 2: 100000000 samples 1 times: 271171

According to a z-score table, there is about a 0.27% chance of getting a sample more than 3 sigma from the mean, which my test agrees with. So unless I'm missing something, shouldn't 1 minute of BER testing, ten times, be equivalent to 10 minutes once?
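For reference, the 0.27% figure can be computed directly from the normal CDF and turned into an expected count for the simulation above. A minimal sketch (it assumes normcdf from the same Statistics and Machine Learning Toolbox that supplies normrnd):

% Theoretical two-sided tail probability beyond 3 sigma
p_tail = 2 * (1 - normcdf(3));            % about 0.0027, i.e. ~0.27%

% Expected number of exceedances in 1e8 samples: the same whether they
% are drawn as 100 runs of 1e6 or as one run of 1e8
n_total  = 1e8;
expected = p_tail * n_total;              % roughly 2.7e5

disp("Expected exceedances: " + expected)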

  • Who claimed there would be a difference and what was their reasoning?
    – The Photon
    Commented Apr 23, 2022 at 3:56
  • @ThePhoton My co-worker, and the claim was that the Gaussian "tail" gets longer and longer as time goes on, and because the one trial is longer, its tail will grow longer than the shorter trials'. But I disagree with that notion, and was wondering whether it was true or not. Stats are unintuitive sometimes, so I wanted to see if there was some better justification. Commented Apr 23, 2022 at 4:30
  • Your co-worker doesn't know what they're talking about.
    – The Photon
    Commented Apr 23, 2022 at 5:29
  • @Michael the coworker is wrong. They'd only be right if the expectation of the jitter was not 0 – but in your case, it is zero. (If it wasn't zero, we would decompose the timing error into two parts: a zero-expectation jitter and a sample/symbol clock offset that contains the expectation.) Commented Apr 23, 2022 at 11:13

3 Answers


It makes no difference. Assuming the jitter is uncorrelated from bit to bit, the number of bit errors observed in a given interval of time (1 minute or 10 minutes) is governed by the Poisson distribution.

In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event. [Wikipedia]

The probability mass function for the Poisson distribution is given by

$$\!f(k; \lambda)= \Pr(X{=}k)= \frac{\lambda^k e^{-\lambda}}{k!}$$

where \$\lambda\$ is a parameter that is both the mean and the variance of the distribution. If \$r\$ is the rate at which the event occurs (\$10^{-12}\times B\$ if we're counting errors in a bit stream with bit rate \$B\$ and BER of \$10^{-12}\$), then for observation over a time interval \$T\$, \$\lambda=rT\$.
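For example (the bit rate here is picked purely for illustration), a 10 Gb/s stream with a BER of \$10^{-12}\$ has \$r = 10^{-12}\times 10^{10} = 0.01\$ errors per second, so observing for \$T = 600\$ s (10 minutes) gives \$\lambda = 6\$, whether that time is spent as one 10-minute interval or as ten 1-minute intervals with \$\lambda = 0.6\$ each.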

In order to show that the probability of some number \$n\$ of errors occurring in a given interval is the same as a total of \$n\$ occurring in two intervals of half the duration, we have to consider the possibility that 0 errors occur in the first interval and \$n\$ occur in the second, that 1 error occurs in the first interval and \$n-1\$ occur in the second, and so on. So we have to show

$$\sum_{k=0}^n f(k;\lambda/2) f(n-k;\lambda/2) = f(n; \lambda)$$

or

$$\sum_{k=0}^n \frac{(\frac{\lambda}{2})^k e^{-\lambda/2}}{k!}\frac{(\frac{\lambda}{2})^{n-k} e^{-\lambda/2}}{(n-k)!} = \frac{\lambda^n e^{-\lambda}}{n!}$$

The left hand side can be rewritten as

$$(\frac{\lambda}{2})^n e^{-\lambda}\sum_{k=0}^n\frac{1}{k!(n-k)!}$$

or, from the definition of the binomial coefficient \$\binom{n}{k}\$,

$$(\frac{\lambda}{2})^n e^{-\lambda}\sum_{k=0}^n\binom{n}{k}\frac{1}{n!}$$

Moving the \$\frac{1}{n!}\$ term outside the sum and applying the binomial theorem, this is

$$(\frac{\lambda}{2})^n e^{-\lambda}\frac{1}{n!}(1+1)^n$$

Cancelling the \$2^n\$ from \$(1+1)^n\$ against the \$2^n\$ in the denominator, we have

$$\frac{\lambda^n e^{-\lambda}}{n!}$$

which is just what we were trying to get to.

From this we can generalize to the case of breaking up our test interval into any number of smaller intervals that total to the same time, and we will get the same probability distribution for the number of errors observed.
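If you prefer to check this numerically rather than algebraically, here is a minimal MATLAB sketch (it assumes poissrnd from the Statistics and Machine Learning Toolbox, and the 10 Gb/s, 10-minute numbers are only illustrative) comparing one long observation against ten shorter ones:

% Illustrative numbers: 10 Gb/s stream, BER = 1e-12  =>  r = 0.01 errors/s
r = 0.01;                  % error rate, errors per second
T = 600;                   % total observation time, seconds (10 minutes)
N = 1e5;                   % number of simulated experiments

% One 10-minute measurement per experiment
errs_long  = poissrnd(r*T, N, 1);

% Ten 1-minute measurements per experiment, summed
errs_short = sum(poissrnd(r*T/10, N, 10), 2);

% Both should have mean and variance close to lambda = r*T = 6
disp("one long run:    mean " + mean(errs_long)  + ", var " + var(errs_long))
disp("ten short runs:  mean " + mean(errs_short) + ", var " + var(errs_short))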

  • Something like this is exactly what I was looking for. @all, thank you for your answers. Commented Apr 23, 2022 at 22:16

There can be many causes for differences between BER tests run in different ways. They are largely due to unstated assumptions about the topology, synchronization, overhead, sync pattern, detection method (sample vs. integrate), data pattern, preamble length, clock jitter (which may vary at start-of-sync, over the short and long term, and with framing and data-pattern dependency), and test method.

There can be many other causes, beyond Gaussian noise interference, that influence deviations in statistical curve fitting, but we can ignore those special cases.

If one is well-trained in design and BER-curve analysis, one can detect possible causes of discrepancies: asymptote offsets caused by detector asymmetry with symmetrical vs. random data patterns, ISI caused by the transmitter, channel, or receiver, periodic noise, Rician fading losses, adjacent-channel crosstalk, frequency- and pattern-dependent return-loss issues, etc.

I used to be, but I'm probably pretty rusty now.

shouldn't 1 minute of BER testing ten times be equivalent to 10 minutes once?

In theory, yes; in practice, often not, as there is more overhead. But that depends on the block size for 10 messages per minute. (Note also that your simulation splits the data into 100 runs vs. 1, which is 10 times finer than the 10-vs-1 comparison you were told about.)

there is a difference between sampling for errors 10 times for 1 minute each time, and 1 time for 10 minutes

In a well-designed system, there is no significant difference. But in your co-worker's case, there may be rational but sub-optimal reasons.


To add to The Photon's answer:

The coworker is wrong. They'd only be right if the expectation of the jitter was not 0 – but in your case, it is zero.

If it wasn't zero, we would decompose the timing error into two parts: a zero-expectation jitter and a sample/symbol clock offset that contains the expectation.

Of course, if there's such a clock offset, then the timing error grows linearly with time. Thus, if you synced once and observed for a longer time, that would be worse than syncing 10 times for segments 1/10 as long. (By the way, that's not a fair comparison at all: syncing takes a lot of time. So while, in this scenario (which is not what you simulated), you get better BER the "short-snippet" way, you'd also waste 10× the time syncing the receiver instead of gathering useful data. Not a free lunch at all, which is why you practically always give high-speed receivers the ability to adjust their sample clocks based on properties of the signal while they're still running (which doesn't require periods with zero useful data transmission!), and never do "sync once, has to be good forever" in high-rate streaming systems.)
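To make this concrete, here is a hypothetical MATLAB sketch (all numbers and the error criterion are invented for illustration, not taken from the question) that models the sampling-time error as zero-mean jitter plus a constant clock-frequency drift, and compares syncing once against re-syncing ten times:

% Hypothetical model: sampling-time error = zero-mean RJ + clock drift.
% An "error" here is simply a timing error larger than half a unit
% interval (UI); real bit-error mechanisms are more involved.
rng(1);
n_bits   = 1e6;       % bits per full-length measurement
sigma_rj = 0.05;      % RMS random jitter, in UI
drift    = 4e-7;      % clock-frequency offset: drift per bit, in UI

jitter = sigma_rj * randn(1, n_bits);

% Sync once at the start, then let the offset accumulate
err_once    = jitter + drift * (1:n_bits);
errors_once = sum(abs(err_once) > 0.5);

% Re-sync 10 times: the accumulated offset resets at each segment start
seg           = n_bits / 10;
drift_resync  = drift * repmat(1:seg, 1, 10);
errors_resync = sum(abs(jitter + drift_resync) > 0.5);

disp("sync once:   " + errors_once   + " timing errors")
disp("re-sync 10x: " + errors_resync + " timing errors")

With a drift this large, almost all the counted errors in the sync-once case come from the tail end of the run, where the accumulated offset has eaten most of the eye margin; the re-synced case never accumulates enough offset to matter.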

