
This probably walks a fine line between Cross Validated and this site, but I think the practical application of it lies more on the signal processing side.

I have had occasion to record some EEG data for an experiment, and I computed the magnitude-squared coherence (MSC) between each pair of leads:

$$ C_{xy} = \frac{\lvert{G_{xy}}\rvert^2}{G_{xx}G_{yy}} $$

(I also performed an event-related coherence calculation within certain frequency bands, which gives me a power value, but doesn't buy me anything further in the analysis of significance)
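
For concreteness, here is a minimal sketch of how one band-averaged MSC value per electrode pair might be computed with Welch averaging; the sampling rate, segment length, frequency band, and random stand-in signals are placeholders for illustration, not details of my actual recording:

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
site1 = rng.standard_normal(10 * int(fs))   # stand-ins for two recorded leads
site2 = rng.standard_normal(10 * int(fs))

# Welch-averaged magnitude-squared coherence, |Gxy|^2 / (Gxx * Gyy), per frequency bin
f, Cxy = coherence(site1, site2, fs=fs, nperseg=256)

# Reduce to one number per electrode pair by averaging over a band of interest
band = (f >= 8) & (f <= 12)                 # e.g. an 8-12 Hz band
msc_site1_site2 = Cxy[band].mean()
print(msc_site1_site2)
```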

What I end up with is a list of values for the pairwise interaction of the electrodes:

Site1-Site2 = 0.24
Site2-Site3 = 0.13
...
Site7-Site45 = 0.37

So the challenge is to determine, within this sample of magnitude-squared coherence values, whether the MSC of the Site1-Site2 interaction is significantly different from that of the Site7-Site45 interaction under a specific set of experimental conditions (e.g., the subject is tapping their foot or something). Of course, comparing between subjects is ultimately necessary, but that would follow from being able to compare the results within one subject, and it is probably more an issue of statistics.

Knowing that the data are not normally distributed, my question is: what is an appropriate method for determining whether these values are significantly different from one another?

I am familiar with 3 approaches that may be used:

  • Take everything above 0.5 as significant. This is one that people could probably argue about over in stats land, but even to my untrained eye, that seems like it's fudging it a bit

  • Use surrogate data to construct a null distribution; the signal processing piece of it would be deciding what a realistic set of surrogate data looks like (a rough sketch of this and the bootstrap idea follows this list)

  • Use the bootstrap, i.e., basically derive the distribution from the data itself. I get the idea of this method, but I don't fully understand whether it is any more realistic than using surrogate data, or whether it would be SOP in signal processing
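
Here is a rough sketch of what I have in mind for the second and third options: a phase-randomization surrogate null and a segment-level bootstrap. Every parameter (sampling rate, band, segment length, number of resamples) and the random stand-in signals are illustrative assumptions, not my actual pipeline:

```python
import numpy as np
from scipy.signal import coherence

fs, nperseg = 250.0, 256                     # assumed sampling rate and segment length
rng = np.random.default_rng(0)
x = rng.standard_normal(10 * int(fs))        # stand-ins for two recorded leads
y = rng.standard_normal(10 * int(fs))

def band_msc(a, b, band=(8, 12)):
    """Band-averaged magnitude-squared coherence between two signals."""
    f, Cab = coherence(a, b, fs=fs, nperseg=nperseg)
    sel = (f >= band[0]) & (f <= band[1])
    return Cab[sel].mean()

observed = band_msc(x, y)

# Option 2: phase-randomized surrogates keep each channel's power spectrum but
# destroy any consistent cross-channel phase relation, giving a null distribution
# for the band-averaged MSC.
def phase_randomize(a):
    A = np.fft.rfft(a)
    ph = rng.uniform(0.0, 2.0 * np.pi, size=A.shape)
    ph[0] = 0.0                              # keep the DC bin real
    ph[-1] = 0.0                             # and the Nyquist bin (even-length signal)
    return np.fft.irfft(np.abs(A) * np.exp(1j * ph), n=len(a))

null = np.array([band_msc(phase_randomize(x), phase_randomize(y))
                 for _ in range(200)])
p_null = (np.sum(null >= observed) + 1) / (len(null) + 1)

# Option 3: a segment-level bootstrap resamples whole epochs with replacement and
# recomputes the MSC each time, giving a confidence interval for the estimate.
n_epochs = 10
epochs_x = x.reshape(n_epochs, -1)
epochs_y = y.reshape(n_epochs, -1)
boot = []
for _ in range(200):
    idx = rng.integers(0, n_epochs, size=n_epochs)
    boot.append(band_msc(np.concatenate(epochs_x[idx]),
                         np.concatenate(epochs_y[idx])))
ci = np.percentile(boot, [2.5, 97.5])

print(observed, p_null, ci)
```

As I understand it, the surrogate p-value addresses whether a given pairwise coherence is above chance, while the bootstrap interval is what I would use to ask whether values like 0.37 and 0.24 are distinguishable.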

Would any of these 3 methods be appropriate for this type of analysis? Is there another method that I've missed?

I haven't searched the literature on this in a while, but I never found a consensus anyway. Are there standard tests of this kind that you would perform on signal processing data?

  • It just may be a nomenclature gap, but I'm not really sure what you're trying to get at. I'm assuming this is a standard problem in the specific subfield that you're working in. I don't know specifically what to ask, but more detail on what you have and what you're trying to determine could help.
    – Jason R
    Commented Oct 28, 2011 at 2:03
  • @JasonR Let me know if that's closer to what you need.
    – jonsca
    Commented Oct 28, 2011 at 3:42
  • I think your edits have made your question clearer, but I'm not sure what to suggest; what you're doing is outside of my typical area of operation. If I understand you correctly, it sounds like you're really looking to do outlier detection, right? You don't have a priori knowledge of any distribution that you can compare the values to, so you are looking for samples that don't "fit in" with the others. Unless my interpretation is wrong, maybe you can find something using those keywords.
    – Jason R
    Commented Nov 10, 2011 at 1:40
  • @JasonR I appreciate you taking a look at it. In some sense, yes, it would be nice to know if a coherence between two points was n standard deviations off of the mean (which would come in handy for online processing). It's the actual distribution that I'm looking for, though. Analogous to something like this statistic, I'd like to be able to gauge whether 0.37 is statistically different from 0.24 in my sample set above with 90% or 99% certainty.
    – jonsca
    Commented Nov 10, 2011 at 4:34
  • Even knowing how other people compare things like correlations between signals would be helpful, as it paves the way for an analogy.
    – jonsca
    Commented Nov 10, 2011 at 4:34

1 Answer


One approach might be an older method from "An approximation to the cumulative distribution function of the magnitude-squared coherence estimate" by Nuttall and Carter (1981), who basically apply a nonlinear distortion to transform the MSC estimate into a near-Gaussian random variable; you could then apply standard methods.

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1163657
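
As a rough illustration of that "transform, then apply standard tests" idea (not the specific approximation Nuttall and Carter derive), one common variance-stabilizing choice is the Fisher z-transform of the coherence magnitude, $\operatorname{arctanh}(\sqrt{C})$, whose variance for a Welch estimate with $n_d$ roughly independent segments is approximately $1/(2 n_d)$. Something along these lines could then compare two band MSC values; the MSC values and segment count below are placeholders:

```python
import numpy as np
from scipy.stats import norm

def compare_band_msc(c1, c2, n_segments):
    """Two-sided p-value for the difference of two independent band MSC estimates,
    using the Fisher z-transform arctanh(sqrt(C)) as an approximate Gaussianizer."""
    z1, z2 = np.arctanh(np.sqrt(c1)), np.arctanh(np.sqrt(c2))
    var = 1.0 / (2.0 * n_segments)          # approx. variance of each transformed value
    z = (z1 - z2) / np.sqrt(2.0 * var)      # both estimates contribute variance
    return 2.0 * norm.sf(abs(z))

# e.g. Site7-Site45 (0.37) vs. Site1-Site2 (0.24), assuming 32 segments per estimate
print(compare_band_msc(0.37, 0.24, n_segments=32))
```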

A more recent approach used overlapping segments to create an (approximate) PDF and CDF for the MSC: "Approximation of statistical distribution of magnitude squared coherence estimated with segment overlapping" by Bortel and Sovka (2006):

http://www.sciencedirect.com/science/article/pii/S0165168406003471

They give a procedure for the comparison of two MSC estimates.

There may be more refined methods, but I'm not aware of them.

  • Great references. I think you pasted in the same link for both, though. The thing that's tough to know is which is better, but this gives me a better starting point (I think that I'd read the Bortel et al. work at one point, but it's good to refresh).
    – jonsca
    Commented Nov 25, 2011 at 15:52
  • Oops, right you are, I've updated the link.
    – tdc
    Commented Nov 25, 2011 at 22:15
  • And I would generally follow Occam's razor in situations like this ... all things being equal, the simplest solution is most likely the best!
    – tdc
    Commented Nov 25, 2011 at 22:16
