$\begingroup$

The supernova SN 2006cm has a redshift of 0.0153, which translates into a recession speed of about 4600 km/s.

It has a distance modulus of 34.71, which translates into a luminosity distance of about 87 Mpc.

This gives a value of 53 km/s/Mpc for the Hubble constant.
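
(For reference, a quick check of the arithmetic, using only the quoted numbers and the low-redshift approximation $v \approx cz$:)

```
c = 299792.458        # speed of light in km/s

z  = 0.0153           # quoted redshift of SN 2006cm
mu = 34.71            # quoted distance modulus

v   = c * z                        # recession speed -> ~4590 km/s
d_L = 10 ** ((mu + 5) / 5) / 1e6   # luminosity distance -> ~87.5 Mpc

H0 = v / d_L                       # naive Hubble constant -> ~52, close to the 53 quoted above
print(f"v = {v:.0f} km/s, d_L = {d_L:.1f} Mpc, H0 = {H0:.1f} km/s/Mpc")
```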

Why doesn't this anomalous observation cast sufficient doubt on the accepted value of the Hubble constant?


$\endgroup$
  • $\begingroup$ Can you add where you got your numbers from, and also give an uncertainty on the distance modulus/distance, since it is meaningless to compare quantities without uncertainties. $\endgroup$
    – ProfRob
    Commented Dec 8, 2019 at 15:31

2 Answers

$\begingroup$

At a distance of $d = 87\,\mathrm{Mpc}$ and with a Hubble constant of roughly $H_0 = 70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$, cosmological expansion should make the host galaxy UGC 11723 recede at $v = H_0\,d \simeq 6100\,\mathrm{km}\,\mathrm{s}^{-1}$.

However, galaxies also move through space with peculiar velocities, typically from several hundred $\mathrm{km}\,\mathrm{s}^{-1}$ in galaxy groups (e.g. Carlberg et al. 2000) up to some $1000\,\mathrm{km}\,\mathrm{s}^{-1}$ in rich clusters (e.g. Girardi et al. 1993; Karachentsev et al. 2006).

The observed velocity of $4900\,\mathrm{km}\,\mathrm{s}^{-1}$ (Falco et al. 2006) is hence $\sim1200\,\mathrm{km}\,\mathrm{s}^{-1}$ smaller than the expected Hubble flow velocity, but quite consistent with a typical peculiar velocity.

This is why individual supernovae at such small distances cannot be used to deduce the Hubble constant, unless a very large number of them is observed so that the peculiar velocities average out.
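
A rough numerical sketch of that point, assuming an illustrative peculiar-velocity scatter of $500\,\mathrm{km}\,\mathrm{s}^{-1}$ (the exact number is not important):

```
import numpy as np

rng = np.random.default_rng(42)

H0_true   = 70.0     # km/s/Mpc, assumed "true" value for the toy model
d         = 87.0     # Mpc, roughly the distance of UGC 11723
sigma_pec = 500.0    # km/s, illustrative peculiar-velocity scatter

# A single supernova: observed velocity = Hubble flow + peculiar velocity
v_one = H0_true * d + rng.normal(0.0, sigma_pec)
print("single-SN estimate:", round(v_one / d, 1), "km/s/Mpc")

# Many supernovae at similar distances: peculiar velocities average out roughly as 1/sqrt(N)
N = 1000
v_many = H0_true * d + rng.normal(0.0, sigma_pec, size=N)
print("mean of", N, "SNe:", round((v_many / d).mean(), 2), "km/s/Mpc",
      "(scatter of the mean ~", round(sigma_pec / d / np.sqrt(N), 2), "km/s/Mpc)")
```

A single object at this distance can easily be off by $\sim5\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$ or more, while the mean of a large sample is not.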

$\endgroup$
  • $\begingroup$ I wonder why this was downvoted? 🤷‍♂️ $\endgroup$
    – pela
    Commented Aug 7, 2022 at 9:01
$\begingroup$

The big-picture reason is that it is not really clear whether it is scientifically sound to combine data obtained from different sources and methods into a single measurement.

Astronomical measurements are very difficult, with huge numbers of confounding factors involved. Distance measurements are notoriously hard because we cannot measure such large distances directly; we have to infer them. The inferences rest on assumptions that certain objects behave in extremely consistent ways, yielding "standard candles". But some research casts doubt on how standard and consistent they really are, or on whether the methodology of a given study properly accounted for all the variables. In some cases you can only observe certain things under convenient circumstances (below-average dust extinction; bright enough to be detectable by your instruments; things literally aligning the right way), and it becomes entirely possible that your results are biased as a result. Once you have doubts about whether the methods and assumptions behind the measurements are valid, it is no longer clear which data you should include and which you should exclude. If you add in data that is simply incorrect, you are unlikely to end up with a good measure of reality.
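
As a toy illustration of that selection point (the magnitude scatter, distance range, and detection limit below are all invented for the example), a flux limit preferentially selects the intrinsically bright objects, which skews anything you infer from them as "standard candles":

```
import numpy as np

rng = np.random.default_rng(0)

# Toy "standard candles" with intrinsic scatter in absolute magnitude
N = 100_000
M = rng.normal(-19.3, 0.4, N)            # absolute magnitudes (invented scatter)
d = rng.uniform(50.0, 150.0, N)          # true distances in Mpc
m = M + 5 * np.log10(d * 1e6) - 5        # apparent magnitudes

detected = m < 16.0                      # illustrative detection limit

# Near the limit only the intrinsically bright ones make the cut, so assuming
# the population-mean M for every detected object biases the inferred distances.
print("mean M, all objects:", round(M.mean(), 3))
print("mean M, detected:   ", round(M[detected].mean(), 3))
```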

But the specific reason here is that a single data point (one supernova's measurements) is statistically insignificant. Measurement errors, or random chance (this supernova won the velocity lottery through entirely normal, non-expansion means), could explain a single data point away. What you would really want are dozens, hundreds, or better still thousands of "anomalous" measurements before you could claim doubts that can be scientifically substantiated. Even those would still be subject to scrutiny as people pore over them looking for inadequacies and alternative hypotheses, and perform follow-up observations with different methods. Moreover, it is difficult to combine data sets obtained from different methods or even different experiments, because they are unlikely to be statistically independent, and if you do not account for their dependencies properly you cannot construct a combined error bar correctly.
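
And as a toy illustration of that last point (the error bars and correlation below are invented, and a simple two-measurement Gaussian model is assumed), ignoring a shared systematic makes the combined error bar look smaller than it really is:

```
import numpy as np

# Two hypothetical measurements of the same quantity that share calibration systematics
sigma1, sigma2 = 1.5, 2.0   # individual 1-sigma errors (invented numbers)
rho = 0.7                   # assumed correlation from the shared systematics

# Naive combination, treating them as independent (inverse-variance weighting)
sigma_indep = 1.0 / np.sqrt(1.0 / sigma1**2 + 1.0 / sigma2**2)

# Combination that accounts for the correlation (generalized least squares)
cov = np.array([[sigma1**2,             rho * sigma1 * sigma2],
                [rho * sigma1 * sigma2, sigma2**2            ]])
sigma_corr = 1.0 / np.sqrt(np.ones(2) @ np.linalg.inv(cov) @ np.ones(2))

print(f"treated as independent: +/- {sigma_indep:.2f}")   # ~1.2
print(f"with the correlation:   +/- {sigma_corr:.2f}")    # ~1.5
```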

$\endgroup$
  • $\begingroup$ I completely agree. However, shouldn't the measurement enlarge the error bars of the Hubble constant, at least the one measured using SN Ia data? $\endgroup$ Commented Dec 8, 2019 at 7:49
  • $\begingroup$ @RiteshSingh Only if you throw it into the data, and that has all the potential issues I just described. Plus, in a statistical sense, it is hard to combine disparate measurements, because it is unlikely that they are independent (added that to the answer). This would be more of a "this is curious, maybe we should study things like this more and see what we find" kind of thing. $\endgroup$ Commented Dec 8, 2019 at 12:05
