
The Wikipedia article on the $\Lambda$-CDM model says that the model has six "independent parameters". It also says that the model has several "fixed" parameters and several "calculated values" from those parameters. WMAP and Planck also say that the $\Lambda$-CDM model has six independent parameters.

In my mind, an "independent parameter" of a model is a number whose "real-world" value is determined experimentally, while a "fixed parameter" is a (dimensionless) number whose numerical value is specified directly within the model. Of course, whether a parameter is "independent" or "fixed" depends on exactly how you choose to scope the model.

It seems to me that there's a natural way to distinguish these two cases. In some cases, changing the value of a "fixed parameter" - usually an integer - would change the model so qualitatively that the new model would naturally be described as a different physical theory. An example would be the number of generations of fermions in the Standard Model. In other cases, changing the value of a "fixed parameter" - usually a real number - would yield a model qualitatively close enough to the original to naturally be considered a different instantiation of the same basic theory. An example would be the fine-structure constant in QED; QED would clearly be "morally" the same theory if the fine-structure constant happened to be measured as ~1/136 instead of ~1/137. Of course, the line between these two cases could be blurry in practice.

The Wikipedia page describes the total density parameter, the equation of state of dark energy, the tensor-to-scalar ratio, and the running of the spectral index as "fixed" parameters of the $\Lambda$-CDM model. This makes sense to me, as they are all integers, and the nature of the model would qualitatively change if their values were different. But it also describes the sum of the three neutrino masses and the effective number of relativistic degrees of freedom as "fixed" parameters, even though they are non-integers (and indeed the former parameter is dimensionful, although that could probably be changed by re-expressing it in terms of the independent parameters).

Why are these latter two parameters considered "fixed"? They need to be determined experimentally. Is it just that they can be calculated from the Standard Model, which is considered a "separate, exogenous" physical theory? That seems pretty arbitrary to me. I would say that the $\Lambda$-CDM model has eight independent parameters.

1 Answer


I don't fully understand your distinction between "fixed" and "independent" parameters. In particular, the parameters that the table shows with integer values aren't *really* necessarily integers and could vary. Going through the parameters a bit...

  • It is highly unlikely that $\Omega_t = 1$ exactly, since this would mean that the universe is precisely flat. There is no mechanism I've heard of that would generate a universe like that; inflation is thought to flatten the universe substantially (exponentially suppressing the curvature), but it would not drive the curvature to exactly zero. In terms of implementing models, it is very convenient numerically to make the universe flat, since various integral expressions become much simpler (see the sketch after this list).

  • The equation of state of dark energy is fixed to $w=-1$, which corresponds to a cosmological constant: the mathematical construct Einstein invented to keep the universe static, back before we knew the universe was expanding. Many quantum theories that could explain dark energy predict some deviation from $w=-1$.

  • The tensor-to-scalar ratio is fixed to $r=0$, which corresponds to no tensor perturbations in the early universe. As far as I know, there is no known way to generate density perturbations in the early universe without also generating tensor perturbations. They might be suppressed by some exotic mechanism, but they won't be identically zero. Finding $r \neq 0$ is in fact a major goal of modern cosmology, as it would give insight into what caused the original density perturbations (likely something like inflation).

  • Similarly, running of the scalar spectral index is predicted by many (all?) inflation theories, but its value can be very small.
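
To make this concrete, here is a minimal sketch (my own illustration, assuming the astropy package is available) of how these "fixed" parameters appear as explicit, adjustable inputs in a standard cosmology library. The parameter values are rough Planck 2018 numbers, chosen only for illustration:

```python
# Baseline Lambda-CDM vs. two "extensions": nonzero curvature and w != -1.
from astropy.cosmology import FlatLambdaCDM, LambdaCDM, FlatwCDM

# Flat Lambda-CDM: curvature fixed to zero, w = -1 implied by "Lambda".
base = FlatLambdaCDM(H0=67.4, Om0=0.315)

# "Unfix" the curvature: Ode0 is no longer forced to equal 1 - Om0.
curved = LambdaCDM(H0=67.4, Om0=0.315, Ode0=0.70)

# "Unfix" the dark-energy equation of state: w = -0.95 instead of -1.
quint = FlatwCDM(H0=67.4, Om0=0.315, w0=-0.95)

# The distance integrals are simplest (and cheapest) in the flat w = -1
# case, but all three are perfectly well-defined cosmologies.
z = 2.0
for cosmo in (base, curved, quint):
    print(type(cosmo).__name__, cosmo.comoving_distance(z))
```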

Now onto the two parameters of interest, the neutrino masses and the effective number of relativistic degrees of freedom. In this case, we do set the parameters to less "convenient" values, since there is experimental evidence for their existence/mass from other physical probes (neutrino oscillations). As with the tensor-to-scalar ratio, constraining these parameters is a major goal of cosmology. However, as of right now there is no "detection" of the neutrino mass by any cosmological probe. Doing so at high significance would certainly be a Nobel-prize-winning achievement, and lots of teams are "rushing" to be the first.

Similarly, there is a calculation of the effective number of neutrinos from standard cosmology, $N_{\rm eff} \approx 3.046$ (it is slightly larger than 3 due to certain thermal-coupling physics around neutrino decoupling that isn't important for this answer).
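
For a sense of scale, here is a rough sketch (my own numbers, not part of the original argument) of the standard relation $\Omega_\nu h^2 \simeq \sum m_\nu / 93.14\,\mathrm{eV}$, together with how a cosmology library exposes $N_{\rm eff}$ and the neutrino masses as ordinary inputs that default to the "fixed" values but can be varied like any other parameter:

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Energy density today from the minimal mass sum allowed by oscillations.
sum_mnu = 0.06  # eV
h = 0.674
print("Omega_nu h^2 ~", sum_mnu / 93.14)         # ~6.4e-4
print("Omega_nu     ~", sum_mnu / 93.14 / h**2)  # ~1.4e-3

# N_eff = 3.046 and sum m_nu = 0.06 eV are the conventional "fixed"
# choices; both are keyword arguments that an extended fit could vary.
cosmo = FlatLambdaCDM(H0=67.4, Om0=0.315, Tcmb0=2.725 * u.K,
                      Neff=3.046, m_nu=[0.0, 0.0, 0.06] * u.eV)
print("Onu0 from astropy:", cosmo.Onu0)
```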

So in terms of definitions, I don't think there is anything fundamentally different between the integer-valued fixed parameters and the neutrino-related fixed parameters, other than that we have a well-founded prior for the neutrino ones, which is why they are fixed at particular non-integer values.

In the end, however, cosmologists do run fits in which various combinations of these "fixed parameters" are promoted to "independent parameters", under the heading of "extensions" to $\Lambda$CDM. The Planck paper you cited has many of these fits in Section 7, where each parameter is added and fitted in turn. In all cases there is no detection of any of these extended models. You can find literally hundreds (maybe thousands?) of papers where people throw in all sorts of exotic physics and try to constrain it using data.

You might ask why we don't just fit everything at once to begin with. There are two answers. The first is that it is quite numerically taxing to run the sampling algorithms that produce these constraints, and this cost scales quadratically with the number of parameters being tested. This is a bit of a "cheat" answer, since by this point in 2024 there is enough computing power out there that you can run everything at once if you want (and some people do, without any exciting results). The more nuanced answer is that it is very hard to gain physical insight when you have so many extensions going at once; it is hard enough to think in 7 dimensions (6+1), let alone 12. This gets even more complex when you add in the systematics that might affect these measurements. Once we start getting an actual detection, people will probably dive even deeper into the degeneracies between these extension models.
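
As a toy illustration of that cost (a sketch of mine, with a Gaussian standing in for a real cosmological likelihood), here is how one might compare an emcee run in 6 dimensions against one in 12:

```python
import numpy as np
import emcee

def log_prob(theta):
    # Standard-normal log-density: a cheap stand-in for a real likelihood.
    return -0.5 * np.sum(theta**2)

for ndim in (6, 12):             # baseline vs. baseline + 6 extensions
    nwalkers = 4 * ndim          # walker count typically grows with ndim
    p0 = np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    # The autocorrelation time sets how many steps you need per independent
    # sample; it, and the total number of likelihood calls, grows with ndim.
    tau = sampler.get_autocorr_time(quiet=True)
    print(ndim, "dims:", nwalkers * 2000, "evals, tau ~", round(tau.mean(), 1))
```

With a real Boltzmann-code likelihood, each evaluation takes seconds rather than microseconds, which is where this scaling actually starts to hurt.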

  • Let's set aside my proposed distinction between "independent" and "fixed" parameters, which may not make any sense. What exactly is the distinction between the two sets of parameters that Wikipedia is drawing? Or is there no useful distinction between the two sets? – tparker, Jun 25 at 2:23
  • The distinction is just that when people say "Lambda-CDM" they mean the fixed parameters fixed at those values, while the independent parameters are constrained. "Everyone" in the field agrees that these parameters will probably turn out to be slightly different (and in many cases that would be a sign of cool new physics), but "Lambda-CDM" is the benchmark against which to test your new physical model. – Commented Jun 30 at 3:25
  • Also, the Wikipedia page is strangely/poorly formatted; I don't know why it shows the fixed parameters with error bars for the 2018 Planck values. I think it is showing the constraints from extended models beyond Lambda-CDM. – Commented Jun 30 at 3:26
