Jason's and Nibot's answers differ in one important aspect: Jason's method calculates the std dev and mean for the whole signal (since y = 0), while Nibot's is a "running" calculation, i.e. it weights recent samples more strongly than samples from the distant past.

Since the application seems to require std dev and mean as a function of time, Nibot's method is probably the more appropriate one (for this specific application). However, the really tricky part is getting the time weighting right. Nibot's example uses a simple single-pole filter.

The proper way to describe this is that we get an estimate of $E[x]$ by filtering $x[n]$ and an estimate of $E[x^2]$ by filtering $x[n]^2$. Estimation filters are typically lowpass filters. These filters should be scaled to have 0 dB gain at 0 Hz; otherwise there is a constant gain error.
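
For example, a single-pole exponential smoother can serve as such an estimation filter; with the numerator scaled to $1-a$ its gain at 0 Hz is exactly 1 (0 dB). A minimal sketch (the pole value and test signal are just assumptions for illustration):

%% sketch: running mean with a single-pole filter normalized to 0 dB at DC
a = 0.95;                        % example pole; closer to 1 means more smoothing, slower response
b = 1 - a;                       % scale numerator so H(z=1) = sum(b)/sum([1 -a]) = 1
x = randn(1000,1) + 1;           % test signal: white noise with unity mean
xmeanEst = filter(b, [1 -a], x); % running estimate of E[x], settles near 1
plot(xmeanEst); grid on;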

The choice of lowpass filter can be guided by what you know about your signal and the time resolution you need for your estimation. Lower cutoff frequencies and higher filter orders give better accuracy but slower response times.
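
To illustrate the trade-off with two assumed example designs (only the cutoff is varied here, but a higher order behaves similarly), compare the step responses of a 5 Hz and a 50 Hz estimation filter:

%% sketch: cutoff frequency vs. response time of the estimation filter
fs = 44100;                            % sample rate
[bSlow, aSlow] = butter(2, 5*2/fs);    % 5 Hz cutoff: smoother estimate, slower to settle
[bFast, aFast] = butter(2, 50*2/fs);   % 50 Hz cutoff: noisier estimate, faster to settle
u = ones(fs/2, 1);                     % unit step input, half a second long
plot([filter(bSlow, aSlow, u), filter(bFast, aFast, u)]); grid on;
legend('5 Hz cutoff', '50 Hz cutoff');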

To complicate things further, one filter is applied in the linear domain and another in the squared domain. Squaring significantly changes the spectral content of the signal, so you may want to use a different filter in the squared domain.
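
A quick way to see why (a small sketch, the tone frequency is just an assumed example): squaring a sinusoid at $f_0$ produces a DC term plus a component at $2 f_0$, because $\sin^2(2\pi f_0 t) = \tfrac{1}{2} - \tfrac{1}{2}\cos(2\pi \cdot 2 f_0 t)$.

%% sketch: squaring moves a tone from f0 to 2*f0 (plus a DC offset)
fs = 1000; t = (0:fs-1)'/fs;   % 1 second at 1 kHz, assumed example values
f0 = 5;                        % tone frequency in Hz
x  = sin(2*pi*f0*t);           % tone at 5 Hz
x2 = x.^2;                     % equals 0.5 - 0.5*cos(2*pi*2*f0*t), i.e. DC plus 10 Hz
plot(t, [x x2]); grid on;
legend('x (5 Hz)', 'x^2 (DC + 10 Hz)');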

Here is an example of how to estimate the mean, RMS, and std dev as a function of time.

%% example
fs = 44100; n = fs; % 44.1 kHz sample rate, 1 second
% signal: white noise plus a low frequency drift at 5 Hz
x = randn(n,1) + sin(2*pi*(0:n-1)'*5/fs);
% mean estimation filter: since we are looking for effects in the 5 Hz range we use maybe a
% 25 Hz filter, 2nd order so it's not too sluggish
[b,a] = butter(2,25*2/fs);
xmeanEst = filter(b,a,x);
% now we estimate x^2; since most of the frequency content doubles we use twice the bandwidth
[b,a] = butter(2,50*2/fs);
x2Est = filter(b,a,x.^2);
% std deviation estimate: sqrt(E[x^2] - E[x]^2), clipped at 0 to guard against filter transients
xstd = sqrt(max(x2Est - xmeanEst.^2, 0));
% and plot it
h = plot([x, xmeanEst, sqrt(x2Est), xstd]);
grid on;
legend('x','E<x>','sqrt(E<x^2>)','Std dev');
set(h(2:4),'LineWidth',2);
