Just for some background context:
A digital signal has jitter on it; in this situation there is only random jitter (RJ), which is unbounded and normally distributed. When the digital signal is sent to a receiver, there is a bit error rate (BER). If the BER is 1E-12, that means that, on average, 1 in every 1E12 bits is received in error.
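To put a number on what that rate means in practice, here is a rough sketch of the expected error count for a given gating time (the 10 Gb/s line rate is an assumed value, picked purely for illustration):

% rough sketch: expected error count for a given BER and gating time
% (the 10 Gb/s line rate is an assumed, illustrative number)
rate = 10e9;                       % line rate in bits/s (assumed)
ber  = 1e-12;                      % bit error rate from above
t    = 60;                         % gating time in seconds (1 minute)
expected_errors = rate * t * ber   % about 0.6 expected errors per minute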
I was told that when measuring bit errors, there is a difference between sampling for errors 10 times for 1 minute each, and once for 10 minutes. I have trouble understanding why that would be the case. My mental model is that a bit error occurs when the instantaneous jitter is large enough to move a data edge past the sampling instant, i.e., a transition lands in the middle of the eye opening.
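Under that model, the BER contribution from RJ alone is just the Gaussian tail probability of an edge reaching the sampling point. A minimal sketch, assuming (purely for illustration) a 100 ps unit interval, a 7 ps RJ sigma, a sampler sitting half a UI from the nominal edge, and a transition on every bit:

% minimal sketch: Gaussian tail probability that RJ moves a data edge
% past the sampling instant (UI and sigma values are assumed)
ui       = 100e-12;                 % unit interval: 100 ps (assumed)
sigma_rj = 7e-12;                   % RJ standard deviation (assumed)
Q        = (ui/2) / sigma_rj;       % sampling-point distance in sigmas
ber_rj   = 0.5 * erfc(Q / sqrt(2)); % one-sided Gaussian tail probability
fprintf("Q = %.2f sigma -> BER ~ %.1e\n", Q, ber_rj)

With those numbers the tail comes out around 5E-13, i.e., in the right ballpark for a 1E-12 link.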
I ran some MATLAB code to try to see this, but my results seem to agree with my intuition that the two should be equivalent.
clc
clear

% Compare the number of normally distributed (mu = 0, sigma = 1) random
% values with magnitude greater than 3 for two cases:
%
%   case 1: 100 runs of 1,000,000 samples each
%   case 2: 1 run of 100,000,000 samples

test_count = 100;
sample_count = 1000000;

% case 1: accumulate 3-sigma excursions across 100 separate runs
counter_1 = 0;
for test_num = 1:test_count
    samples = normrnd(0, 1, 1, sample_count);
    counter_1 = counter_1 + sum(abs(samples) > 3);
end

% case 2: count 3-sigma excursions in one long run
samples = normrnd(0, 1, 1, sample_count*test_count);
counter_2 = sum(abs(samples) > 3);

disp("Case 1: " + sample_count + " samples " + test_count + " times: " + counter_1)
disp("Case 2: " + (sample_count * test_count) + " samples " + 1 + " times: " + counter_2)
The results are:
Case 1: 1000000 samples 100 times: 270540
Case 2: 100000000 samples 1 times: 271171
According to a z-score table, there is a 0.27% chance of a sample falling more than 3 sigma from the mean, and 0.27% of 1E8 is about 270,000, which both of my counts agree with. So unless I'm missing something, shouldn't ten 1-minute BER tests be equivalent to one 10-minute test?
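For reference, the 0.27% figure and the expected count can be reproduced directly (normcdf comes from the same toolbox as the normrnd used above):

% expected number of 3-sigma excursions in 1E8 standard-normal samples
p3 = 2 * (1 - normcdf(3));    % two-sided 3-sigma tail, ~0.27%
n  = 1e8;                     % total samples in each case above
fprintf("p = %.4f%%, expected count = %.0f\n", 100*p3, p3*n)

This gives an expected count of about 270,000, consistent with both totals.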