12
$\begingroup$

In experimental physics, we often make measurements of linear transfer functions; these are complex-valued functions of frequency. If the underlying system is causal, then the transfer function must be analytic, satisfying the Kramers-Kronig relations. Our measurements, however, are corrupted by random (and perhaps systematic) errors.

Is it possible to improve a measurement of a linear transfer function of a causal system in the presence of noise by applying some kind of constraints derived from the Kramers-Kronig relations?

$\endgroup$
2
  • 1
    $\begingroup$ I'm sure the answer is "yes", but I'm very interested in how you would actually make such an improvement. Good question +1 $\endgroup$ Commented Mar 10, 2011 at 21:46
  • 1
    $\begingroup$ I wonder whether this would be equivalent to least-squares fitting an analytic model (trying various numbers of poles and zeroes) à la VECTFIT? $\endgroup$
    – nibot
    Commented Mar 10, 2011 at 22:40

3 Answers

8
$\begingroup$

The problem you describe is (mathematically) similar to blind deconvolution. Given a signal which is the result of blurring an image (a linear operation) and adding noise, blind deconvolution tries to estimate the blur and the image.

As described here, the blind deconvolution process consists roughly of:

  1. Guess the blurring function (transfer function)
  2. Construct an image consistent with your signal and your guess of the transfer function
  3. Apply physically reasonable constraints to the constructed image. (For example, non-negativity).
  4. Modify your guess of the blurring function to better satisfy these constraints
  5. Goto 2.

It sounds like your idea would apply to step 3. I've never seen the K-K relations used this way, but I imagine they'd work just fine.
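One concrete way to "apply" the constraint in step 3 is to work in the time domain: a causal impulse response vanishes for $t < 0$, which is equivalent to the K-K relations holding in the frequency domain. Here is a minimal sketch of such a causality projection, assuming the transfer function is sampled on a uniform, full FFT frequency grid (the function name and the one-pole example are mine, not part of any standard recipe):

```python
import numpy as np

def project_causal(H):
    """Project a sampled transfer function onto the set of causal ones.

    H is assumed sampled on a full FFT frequency grid (the layout produced
    by np.fft.fftfreq), so indices n//2..n-1 of the inverse FFT hold the
    negative-time part of the impulse response."""
    n = len(H)
    h = np.fft.ifft(H)      # impulse response (complex in general)
    h[n // 2:] = 0.0        # causality: kill the negative-time samples
    return np.fft.fft(h)

# Example: a (roughly causal) one-pole lowpass response.
n = 1024
w = 2 * np.pi * np.fft.fftfreq(n)
H = 1.0 / (1.0 + 1j * w)
H_proj = project_causal(H)  # projecting a second time changes nothing
```

The projection is idempotent, so it can be dropped into an iterative scheme like the one above without drifting.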

$\endgroup$
6
  • $\begingroup$ Thanks for the link; that looks like a good lead! I like the idea of an iterative technique, but I wonder whether it is really so straightforward to simply "apply" the K-K relations (i.e. integral relations on sampled data). $\endgroup$
    – nibot
    Commented Mar 10, 2011 at 22:38
  • 2
    $\begingroup$ Possibly naive, but: Fourier transform your transfer function to give an impulse response. Truncate the part of the impulse response that is acausal, inverse transform, and you have your 'constrained' transfer function. $\endgroup$
    – Andrew
    Commented Mar 10, 2011 at 22:41
  • 1
    $\begingroup$ Interesting! Sounds very much like the Fienup iterative phase retrieval algorithm. But I would also like to introduce some additional information: I have estimates of the standard error on each of the measurement points (via a coherence measurement). Unclear how to mix this in. $\endgroup$
    – nibot
    Commented Mar 10, 2011 at 22:48
  • $\begingroup$ Maybe this iteration: 1. In time domain, truncate acausal portion; 2. In freq domain, nudge towards measured data, with nudging weighted by error estimate, 3. repeat. ? $\endgroup$
    – nibot
    Commented Mar 10, 2011 at 22:51
  • $\begingroup$ Also brings up the problem of how to inverse Fourier transform an irregularly sampled spectrum (Lomb-Scargle?). $\endgroup$
    – nibot
    Commented Mar 10, 2011 at 22:52
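The iteration sketched in these comments (truncate the acausal part in the time domain, then nudge back toward the measured data with weights derived from the error estimates) can be prototyped in a few lines. This is a hedged sketch under my own assumptions, not a tested algorithm: it assumes uniform sampling on a full FFT grid, and the weighting rule is one arbitrary choice among many.

```python
import numpy as np

def kk_denoise(H_meas, sigma, n_iter=50, step=0.5):
    """Alternate a causality constraint with an error-weighted data nudge.

    H_meas : complex transfer-function samples on a full FFT frequency grid.
    sigma  : per-point standard-error estimates (e.g. from coherence).
    Points with small sigma are pulled harder back toward the data.
    """
    n = len(H_meas)
    w = step * (sigma.min() / sigma) ** 2   # nudging weight in (0, step]
    H = H_meas.copy()
    for _ in range(n_iter):
        h = np.fft.ifft(H)
        h[n // 2:] = 0.0                    # 1. truncate the acausal part
        H = np.fft.fft(h)
        H = H + w * (H_meas - H)            # 2. nudge toward measurement
    return H

# Synthetic check: noisy measurement of a causal one-sided exponential.
rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
h_true = np.where(t < n // 2, np.exp(-0.1 * t), 0.0)
H_true = np.fft.fft(h_true)
sigma = np.full(n, 0.3)                     # uniform standard errors
H_meas = H_true + rng.normal(0, 0.3, n) + 1j * rng.normal(0, 0.3, n)
H_den = kk_denoise(H_meas, sigma)
```

With uniform errors this removes roughly the acausal half of the noise power; irregular sampling (the Lomb-Scargle point above) would need a different transform pair.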
3
$\begingroup$

In spectroscopy it is common to measure the real or imaginary part of the response function and use K-K to infer the other. I must say, however, that I'm slightly horrified by some of the answers above as they relate to causality, which is easy to violate in the Fourier domain by simple signal or image processing. You can also introduce unphysical artifacts. If you don't want to fall into this trap you might want to have a look at this tutorial:

Causality and linear response in classical electrodynamics. Alex J Yuffa and John A Scales. Eur. J. Phys. 33 no. 6, 1635 (2012).

These ideas apply to all linear response problems, not just EM.

$\endgroup$
1
$\begingroup$

I can think of a method that follows from a numerical Hilbert transformer I came up with for estimating the phase of Raman gains, a problem much like many handled in spectroscopy. I've never tried it quite in the way you want, so I can't give any guarantees. First, for background, see:

https://mathoverflow.net/a/69924/14510

In summary, this method maps a complex half plane onto the unit disk and so gets around the numerical headache of handling Cauchy principal values.

Your situation is slightly different. You want to find a function whose Laplace transform is holomorphic in the open right half plane: inverse Laplace transforms yield causal functions by definition, and holomorphy in the right half plane means they are stable (i.e. bounded for all time, either decaying or boundedly oscillating as $t\rightarrow\infty$).

So, in your situation, what my method would do is map $\mathbb{P} = \left\{\left. z \in \mathbb{C} \right|\; {\rm Re}(z) \geq 0\right\}$ to the closed unit disk $\mathbb{D} = \left\{\left. z \in \mathbb{C} \right|\; |z| \leq 1\right\}$, and the mapping you need is:

$M : \mathbb{P} \rightarrow \mathbb{D};\; M(s) =\frac{\omega_0 - s}{\omega_0 + s}$

What my method relies upon is that every function holomorphic on the closed unit disk has a convergent Taylor series there, i.e. $f(z) = \sum_{n=0}^\infty f_n \,z^n$. So this is the general form of the function you want. Transforming the domain of this Taylor series with the inverse Möbius transformation, we find that the general form of your function has Laplace transform:

$F(s) = \sum\limits_{n=0}^\infty f_n \,\left(\frac{\omega_0 - s}{\omega_0 + s}\right)^n$

i.e. your frequency response is of the form

$F(i\,\omega) = \sum\limits_{n=0}^\infty f_n \,\left(\frac{\omega_0 - i \omega}{\omega_0 + i \omega}\right)^n$

This is now in a form that you can least squares best fit to a dataset with linear regression: you assume a finite number of terms and then do a linear regression on the $f_n$.
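A sketch of that regression follows (the function names are mine, and $\omega_0$ is deliberately chosen to match a one-pole test response whose expansion is known in closed form): build the Vandermonde matrix in the Möbius variable and solve the complex least-squares problem directly.

```python
import numpy as np

def fit_moebius_basis(omega, H_meas, omega0, N):
    """Fit H(i w) ~= sum_{n=0}^{N} f_n ((w0 - i w)/(w0 + i w))^n.

    omega may be irregularly sampled; returns the complex coefficients f_n.
    """
    z = (omega0 - 1j * omega) / (omega0 + 1j * omega)  # on the unit circle
    A = np.vander(z, N + 1, increasing=True)           # columns z^0 .. z^N
    f, *_ = np.linalg.lstsq(A, H_meas, rcond=None)
    return f

def eval_moebius_basis(omega, omega0, f):
    z = (omega0 - 1j * omega) / (omega0 + 1j * omega)
    return np.vander(z, len(f), increasing=True) @ f

# One-pole check: 1/(1 + i w / w0) = (1 + z)/2 exactly in this basis,
# so the fit should recover f = [1/2, 1/2, 0, ...].
omega = np.linspace(-10.0, 10.0, 201)
H = 1.0 / (1.0 + 1j * omega / 2.0)
f = fit_moebius_basis(omega, H, omega0=2.0, N=3)
```

Because the basis is linear in the $f_n$, noisy data just turns the exact recovery into an ordinary least-squares estimate; per-point error bars could be folded in by row-weighting `A` and `H_meas` with $1/\sigma_k$.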

You will need to experiment with this heaps! The "black art" of this numerical brew is to choose the real parameter $\omega_0$ so that the "interesting bits" of the function you want to transform land on a "reasonable" portion of the unit circle. Suppose that your function can be thought of as having compact support - some finite frequency interval centred on nought; within this support it does its interesting stuff, and outside it is roughly nought. Then it is no good choosing $\omega_0$ so large that the image of this compact support is a few degrees of the unit circle centred on $z = 1$; likewise you may run into hardship if you choose $\omega_0$ so small that the support's complement is mapped to some tiny part of the unit circle centred on $z = -1$: if you make this mistake the algorithm may not recognise a zero at infinity.
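This trade-off can be quantified: on the imaginary axis the map gives $M(i\omega) = e^{-2i\arctan(\omega/\omega_0)}$, so a band $|\omega| \leq W$ covers the fraction $(2/\pi)\arctan(W/\omega_0)$ of the unit circle, and $\omega_0 \approx W$ splits the circle evenly between the band and its complement. A small check of this (the helper function is my own):

```python
import numpy as np

def arc_fraction(W, omega0):
    """Fraction of the unit circle covered by the image of the band
    |omega| <= W under M(s) = (omega0 - s)/(omega0 + s), using
    M(i w) = exp(-2j * arctan(w / omega0))."""
    return 2.0 * np.arctan(W / omega0) / np.pi

# omega0 = W maps the band of interest onto exactly half the circle;
# omega0 far from W crowds either the band (near z = +1) or its
# complement (near z = -1) into a tiny arc.
```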

Let's look at what this response would be like in the time domain. Inverting the Laplace transform above, we get a function of the form:

$f(t) = e^{-\omega_0\,t} \left(a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n\right) U(t)$

where $U$ is the Heaviside unit step function. So this is simply polynomial fitting, with the exponential factor making things "behave well" as $t\rightarrow\infty$. Polynomial fitting can become numerically ill-conditioned, especially with too many terms, so something else you might want to try is fitting a function of the form:

$f(t) = e^{-\omega_0\,t} \left(a_0 + a_1 T_1\left(\frac{t}{\tau_m}\right) + a_2 T_2\left(\frac{t}{\tau_m}\right) + \cdots + a_n T_n\left(\frac{t}{\tau_m}\right)\right) U(t)$

where now $T_n$ is the $n^{\rm th}$ Chebyshev polynomial and $\tau_m$ is some cutoff time (after which the time response is effectively nought). So in the frequency domain you would fit:

$F(i\,\omega) = \sum\limits_{n=0}^N f_n \,\left.\mathcal{L}\left(U(t)\,e^{-\omega_0\,t} T_n\left(\frac{t}{\tau_m}\right)\right)\right|_{s = i \omega}$
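Those Laplace transforms have a closed form: expanding $T_n(t/\tau_m)$ into monomials and using $\mathcal{L}\left\{t^k e^{-\omega_0 t} U(t)\right\} = k!/(s+\omega_0)^{k+1}$ gives each basis column directly. A sketch (the helper is my own; the resulting matrix feeds the same linear regression on the $f_n$ described above):

```python
import numpy as np
from numpy.polynomial import chebyshev
from math import factorial

def cheb_basis_laplace(omega, omega0, tau_m, N):
    """Columns n = 0..N: Laplace transforms of U(t) e^{-omega0 t} T_n(t/tau_m)
    evaluated at s = i omega, via L{t^k e^{-a t} U(t)} = k!/(s + a)^(k+1)."""
    s = 1j * omega
    A = np.empty((len(omega), N + 1), dtype=complex)
    for n in range(N + 1):
        c = chebyshev.cheb2poly([0] * n + [1])   # monomial coeffs of T_n
        col = np.zeros(len(omega), dtype=complex)
        for k, ck in enumerate(c):
            col += ck * factorial(k) / (tau_m ** k * (s + omega0) ** (k + 1))
        A[:, n] = col
    return A

# The f_n then come from the same complex least-squares fit as before:
# np.linalg.lstsq(cheb_basis_laplace(...), H_meas, rcond=None).
omega = np.linspace(-5.0, 5.0, 11)
A = cheb_basis_laplace(omega, omega0=1.0, tau_m=2.0, N=2)
```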

More generally, any rational function of frequency $F(i\,\omega)$ such that $F(s)$ has no poles in the right half plane will do (it defines a causal, stable function). This knowledge might get you a good fit - especially if your function has obvious resonances. However, fitting the coefficients to your data is now a nonlinear regression, and moreover you will need to check that the poles stay in the left half plane. The linear regression on basis functions above doesn't have this problem - the poles of the fitted function do not shift.

As you are an experimental physicist, i.e. in my own definition someone who lives and breathes the concept of signal-to-noise ratio, I'm making this comment more for other readers: sometimes you can be too clever processing data, and you need to tread with care. Methods like the above, which seemingly should work by dint of some simple theory, can worsen rather than improve the quality of your data, because they try to draw meaning from something that may be pure noise and really ought to be disregarded. This is especially true of any kind of deconvolution (mentioned in other answers), which boosts the noise power at frequencies where the system's response is weak (e.g. trying to offset a lowpass filter by adding gain at high frequencies).

As for my original method: as far as I know it is still used by a major commercial optical-systems simulation package and hasn't attracted too many complaints.

$\endgroup$
