nibot

What would be the ideal way to find the mean and standard deviation of a signal for a real-time application? I'd like to be able to trigger a controller when a signal is more than 3 standard deviations off of the mean for a certain amount of time.

The right approach in situations like this is typically to compute an exponentially weighted running average and standard deviation. In the exponentially weighted average, the estimates of the mean and variance are biased towards the most recent samples, giving you estimates of the mean and variance over roughly the last $\tau$ seconds, which is probably what you want, rather than the usual arithmetic average over all samples ever seen.

In the frequency domain, an "exponentially weighted running average" is simply a single real pole, and it is easy to implement in the time domain.

Time domain implementation

Let mean and meansq be the current estimates of the mean and the mean square of the signal. On every cycle, update these estimates with the new sample x:

% update the estimates of the mean and the mean square:
mean = (1-a)*mean + a*x;
meansq = (1-a)*meansq + a*(x^2);

% calculate the estimate of the variance:
var = meansq - mean^2;

% and, if you want the standard deviation:
std = sqrt(var);

Here $0 < a < 1$ is a constant that determines the effective length of the running average. How to choose $a$ is described below under "Analysis".

What is expressed above as an imperative program may also be depicted as a signal-flow diagram:

(Signal-flow diagram of the running mean and variance computation.)
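The update rule above can also be packaged as a small stateful object. Here is a minimal Python sketch (the class and method names are illustrative, not from the original answer):

```python
class RunningStats:
    """Exponentially weighted running mean and standard deviation (sketch)."""

    def __init__(self, a):
        # a is the averaging constant, 0 < a < 1
        assert 0 < a < 1
        self.a = a
        self.mean = 0.0
        self.meansq = 0.0

    def update(self, x):
        # bias both estimates towards the newest sample x
        a = self.a
        self.mean = (1 - a) * self.mean + a * x
        self.meansq = (1 - a) * self.meansq + a * x * x

    def std(self):
        # clamp at zero: floating-point rounding can make the
        # variance estimate meansq - mean^2 very slightly negative
        return max(self.meansq - self.mean ** 2, 0.0) ** 0.5


rs = RunningStats(a=0.1)
for _ in range(2000):
    rs.update(5.0)          # feed a constant signal
# the mean estimate converges to 5 and the deviation to 0
```

The trigger the question asks about would then be a check like `abs(x - rs.mean) > 3 * rs.std()` held for the required duration.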

Analysis

The above algorithm computes $y_i = a x_i + (1-a) y_{i-1}$ where $x_i$ is the input at sample $i$, and $y_i$ is the output (i.e. estimate of the mean). This is a simple, single-pole IIR filter. Taking the $z$ transform, we find the transfer function $$H(z) = \frac{a}{1-(1-a)z^{-1}}$$.
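The transfer function can be sanity-checked numerically: the inverse $z$-transform of $H(z)$ is the impulse response $h[n] = a(1-a)^n$, so running the recurrence on a unit impulse should reproduce exactly that sequence. A short Python check:

```python
# Feed a unit impulse through y[n] = a*x[n] + (1-a)*y[n-1] and compare
# against h[n] = a*(1-a)**n, the impulse response implied by
# H(z) = a / (1 - (1-a) z^{-1}).
a = 0.25
x = [1.0] + [0.0] * 19      # unit impulse, 20 samples
y, prev = [], 0.0
for xn in x:
    prev = a * xn + (1 - a) * prev
    y.append(prev)

expected = [a * (1 - a) ** n for n in range(20)]
assert all(abs(u - v) < 1e-12 for u, v in zip(y, expected))
```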

Condensing the IIR filters into their own blocks, the diagram now looks like this:

(Block diagram with each IIR filter condensed into a single block.)

To go to the continuous domain, we make the substitution $z = e^{s T}$ where $T$ is the sample time and $f_s = 1/T$ is the sample rate. Solving $1-(1-a)e^{-sT}=0$, we find that the continuous system has a pole at $s = \frac{1}{T} \log (1-a)$.

To obtain an effective averaging time of $\tau$ seconds, place the pole at $s = -2\pi/\tau$ (i.e. a cutoff frequency of $1/\tau$ Hz). Solving $\frac{1}{T} \log (1-a) = -2\pi/\tau$ for $a$ gives $$ a = 1 - \exp \left\{-2\pi\frac{T}{\tau}\right\}$$
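A quick numeric check of this choice of $a$ (note the negative exponent, which keeps $0 < a < 1$); the sample rate and $\tau$ below are made-up illustrative values:

```python
import math

fs = 1000.0              # sample rate in Hz (illustrative value)
T = 1.0 / fs             # sample time
tau = 0.5                # desired averaging time in seconds (illustrative)

a = 1 - math.exp(-2 * math.pi * T / tau)

# Mapping the discrete pole (1 - a) back to the continuous domain,
# s = log(1 - a) / T, should land exactly on -2*pi/tau:
s = math.log(1 - a) / T
assert abs(s + 2 * math.pi / tau) < 1e-9
```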
