
All of the calculations for signal-to-noise ratio (SNR) of sources in astronomical images include the relevant noise terms: read noise, quantization noise, and shot noise from the background, the dark current, and the source itself. If I'm understanding correctly, though, this SNR calculation gives the SNR of the photometric measurement of the source; that is, the precision with which you measured its brightness.

If, however, all I care about is whether a purported source is likely to be real rather than caused by noise, then it seems to me that the SNR calculation should compare the signal with only the noise that would be present if the purported source were absent: that is, only the detector and background noise terms, not the source shot noise. This measures the signal strength against the noise terms that could produce a spurious signal. The shot noise of a signal itself does not factor into whether that signal should be regarded as spurious.
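To make the distinction concrete, in the usual CCD-equation notation (with $S$ the total source counts, $B$ and $D$ the background and dark counts per pixel, $R$ the read noise per pixel, $n_{\mathrm{pix}}$ the number of pixels in the aperture, and quantization noise ignored), I mean the difference between

$$\mathrm{SNR}_{\mathrm{phot}} = \frac{S}{\sqrt{S + n_{\mathrm{pix}}\,(B + D + R^2)}} \qquad \text{and} \qquad \mathrm{significance} = \frac{S}{\sqrt{n_{\mathrm{pix}}\,(B + D + R^2)}},$$

where only the second seems relevant to asking whether the detection is spurious.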

Is it correct to regard SNR calculations for photometric measurements and for raw detections as separate things, and to exclude the source shot noise from the latter? If so, are there any good references for this, in case I have to convince someone else?

  • Basically, you can calculate system SNR for each possible value of the source signal during your sampling period. But for a given sample, you "know nothing" about the source's variability, so the SNR for that sample depends only on the noise sources present. (Commented May 12, 2022 at 12:25)

1 Answer


Assuming you know the position of the source:

You collect a certain number of counts within an aperture (which could be modelled on a point spread function). You can then test the hypothesis that this many counts arose from a source with zero strength, i.e. from background sources alone.

A crude way to do this is to assume all the noise sources have normal distributions around their mean values:

- the sky contribution is the average sky signal $\pm$ its square root (assuming everything has been converted to detected electrons via the gain);
- the read noise has a mean of zero, with a standard deviation you know from the bias frames;
- the dark contribution and its noise come from a bias-subtracted dark frame with the same exposure time as your observation;
- the source strength is of course zero.

You then compare the measured counts in the aperture with the expected sky-plus-dark signal, ask how many error bars apart the two are, and convert that into a significance of detection.

Shot noise still enters, but only through the noise expected from the sky and dark signals, not from the source.
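As an equally crude illustration, here is a minimal Python sketch of that comparison. The function and variable names are my own, all quantities are assumed to already be in detected electrons, and the noise terms are treated as Gaussian as described above:

```python
import numpy as np

def detection_significance(counts_aperture, sky_per_pix, dark_per_pix,
                           read_noise_per_pix, n_pix):
    """How many sigma the aperture counts lie above the background-only
    expectation (sky + dark), using Gaussian approximations to the noise."""
    expected_background = n_pix * (sky_per_pix + dark_per_pix)
    # Variance under the null hypothesis (source strength = 0):
    # shot noise of sky and dark plus read noise, summed over the aperture.
    var_background = n_pix * (sky_per_pix + dark_per_pix + read_noise_per_pix**2)
    excess = counts_aperture - expected_background
    return excess / np.sqrt(var_background)

# Illustrative numbers: 5000 e- in a 50-pixel aperture, sky 80 e-/pix,
# dark 2 e-/pix, read noise 5 e-/pix.
print(detection_significance(5000.0, 80.0, 2.0, 5.0, 50))
```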

This is crude because the sky noise is unlikely to be well-represented by shot noise. The sky background is variable and of course contains other (randomly distributed) astrophysical sources. Any (important) test for significance ought to estimate the variance in the sky a bit more carefully.
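One simple way to do that, assuming you can find source-free regions of the image, is to measure the scatter of summed counts in many blank regions rather than assuming pure shot noise. The routine below is only a sketch of that idea, with square boxes standing in for a real aperture:

```python
import numpy as np

def empirical_background_sigma(image, n_pix, n_trials=500, seed=None):
    """Standard deviation of summed counts in randomly placed blank boxes,
    each containing roughly n_pix pixels."""
    rng = np.random.default_rng(seed)
    side = int(np.sqrt(n_pix))
    ny, nx = image.shape
    sums = []
    for _ in range(n_trials):
        y = rng.integers(0, ny - side)
        x = rng.integers(0, nx - side)
        sums.append(image[y:y + side, x:x + side].sum())
    return np.std(sums)
```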

You are correct that significance of detection is not the same as SNR, although the difference may be very small if you are dealing with large sky backgrounds. However, a scenario from X-ray astronomy illustrates this: suppose you detect 4 counts but zero were expected from the very low background. The SNR is only $\sim 2$ but the source is hugely significant.
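To put hypothetical numbers on that X-ray scenario (the assumed background of 0.01 expected counts is purely illustrative), the Poisson probability of the background alone producing four or more counts can be computed directly:

```python
from scipy.stats import poisson

mu_background = 0.01   # illustrative expected background counts
n_observed = 4

# P(N >= 4 | background only); sf(k, mu) gives P(N > k).
p_value = poisson.sf(n_observed - 1, mu_background)

print(f"SNR ~ {n_observed / n_observed**0.5:.1f}")          # ~2
print(f"Chance of a background fluctuation: {p_value:.1e}")  # ~4e-10
```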
