32
$\begingroup$

I am not a physicist, but I'm trying to get a little bit of an understanding of why it is hard to extend the standard model with quantum gravity (i.e. why it's hard to combine QM and GR), cf. e.g. A list of inconveniences between quantum mechanics and (general) relativity?

I have read in various places that the problem is that when one tries to quantize gravity, one obtains a "non-renormalizable theory", cf. e.g. Why is quantum gravity non-renormalizable? I don't know exactly what this means. Is it possible to explain this to a non-expert in physical terms? I am not even sure whether the issue is that the model cannot be defined, or that it can be defined but we do not have the mathematical tools to solve it.

I'm hoping there's an answer that explains this at the level of a generally math/science-literate person. Existing questions assume a bunch of background knowledge that is beyond me. I am really hoping there is an answer for a non-physicist that explains what the issue is without going into a lot of technical detail, and without assuming much QFT-specific or GR-specific background knowledge.

$\endgroup$
7
  • 4
    $\begingroup$ @JohnRennie, No because it assumes a bunch of background knowledge that is beyond me, like "Consider a quantum theory of fields 𝜙 with a hard momentum space cutoff Λ", and generally is quite technical. It in fact already assumes that the questioner knows what renormalization is or what the context is in which it pops up. Then again I am really hoping there is an answer for a non-physicist that can explain what the issue is without going into a lot of basic math (some math is ok though). I'm hoping there's an answer that explains this at a level of a generic math/science literate person. $\endgroup$
    – user56834
    Commented Jun 4 at 10:57
  • 8
    $\begingroup$ OK, I have reopened your question. But if you're asking for a "renormalisation for non-nerds" article then I'm not sure this is the best place. $\endgroup$
    – John Rennie
    Commented Jun 4 at 11:15
  • 9
    $\begingroup$ @user56834 You can't understand this issue if you don't possess the necessary background knowledge. $\endgroup$
    – AfterShave
    Commented Jun 4 at 11:22
  • 4
    $\begingroup$ Don Lincoln of FermiLab made a number of videos you might find helpful. Start with Quantum Gravity $\endgroup$
    – mmesser314
    Commented Jun 4 at 15:21
  • 6
    $\begingroup$ @user56834 (+1) you're not the only one! I have quite a bit of physics background, and statements like "Consider a quantum theory of fields 𝜙 with a hard momentum space cutoff Λ" are still too technical for me. $\endgroup$
    – Allure
    Commented Jun 5 at 2:58

8 Answers

48
$\begingroup$

“What exactly goes wrong when trying to quantise gravity?” There is no problem specific to quantum gravity! I know this isn’t the conventional way to look at it, but the physics is actually not controversial; professional physicists just jump ahead to the “real problem” without explaining that the pop-sci problem with gravity is more general.

First: “Quantum Gravity is not renormalisable”. True, but so what? You can still quantise gravity, using the standard machinery of effective field theory, and get perfectly meaningful answers — in the “low energy” limit, up to first- and second-order quantum corrections. Here’s a paper that does it!

The fact that QG is not renormalisable just means that this theory fails in the high-energy limit, so it cannot be the full truth. But in principle we could just measure those first- and second-order quantum corrections and see if experiment lines up with theory. The leading quantum correction to the gravitational potential is a fractional correction proportional to $1/r^2$.
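As a concrete illustration, the effective-field-theory calculation gives a corrected Newtonian potential of the schematic form (the exact numerical coefficients depend on conventions and have been revised between papers, so treat them as indicative):

$$V(r) = -\frac{G m_1 m_2}{r}\left[\,1 + 3\,\frac{G (m_1 + m_2)}{r c^{2}} + \frac{41}{10\pi}\,\frac{G \hbar}{r^{2} c^{3}}\,\right]$$

The middle term is a classical post-Newtonian correction; the last term is the genuine quantum one, and its relative size is $l_{\mathrm{P}}^{2}/r^{2}$, i.e. a fractional correction proportional to $1/r^2$.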

What’s unique about gravity is that we can’t test the prediction in that paper because gravity is so weak that those quantum corrections are untestably small. To test “quantum gravity”, we need to go to high energies, which is where the fact that QG is not renormalisable kills us.

To compare with quantum electrodynamics (QED): I would point out that, epistemologically, physics has exactly the same problem with QED as with QG; it's just that this doesn't cause us any trouble.

With QED, we also don’t know the high-energy behaviour of the theory, with nonsensical results at very high energy; it has a Landau pole. However, for QED, renormalisability gives us the ability to make low-energy predictions, which we can test, and which are stunningly correct and accurate. The fact that QED has a Landau pole shows that “something happens” at a higher energy, and that something is electroweak theory. Electroweak theory also has a Landau pole, so again, “something happens” at a higher energy. But that Landau pole would be at stupidly high energy.
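To give a sense of just how “stupidly high” that energy is, here is a back-of-envelope sketch: a one-loop estimate of the QED Landau pole keeping only the electron in the running of the coupling, so the precise number should not be taken seriously.

```python
import math

# Back-of-envelope location of the QED Landau pole, keeping only the
# electron in the one-loop running of the coupling.  A standard textbook
# estimate, not the full Standard Model running.
alpha = 1 / 137.035999        # fine-structure constant at low energies
m_e_eV = 0.511e6              # electron rest energy, in eV

# One-loop running: alpha(mu) = alpha / (1 - (2*alpha/(3*pi)) * ln(mu/m_e)).
# The denominator vanishes (the coupling blows up) when
# ln(mu/m_e) = 3*pi/(2*alpha):
landau_pole_eV = m_e_eV * math.exp(3 * math.pi / (2 * alpha))

planck_energy_eV = 1.22e28    # Planck energy, ~1.22e19 GeV

print(f"QED Landau pole ~ 10^{math.log10(landau_pole_eV):.0f} eV")
print(f"Planck energy   ~ 10^{math.log10(planck_energy_eV):.0f} eV")
```

The estimate puts the Landau pole around $10^{286}$ eV, absurdly far above the Planck energy of roughly $10^{28}$ eV — which is the quantitative sense in which gravity "blows up first" (next paragraph).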

The underlying reason why physicists have a beef with Quantum Gravity is that when you run the numbers, it’s actually QG that blows up first. High as it is, the Planck energy associated with gravity is so much lower than the implied energies of blowup of electroweak and strong forces that it’s the one which determines our “level of ignorance”. The assumption is that if we understood gravity fully, it would fix the high energy behaviour of all the forces, but that might not be true.

$\endgroup$
3
  • 2
    $\begingroup$ "But in principle we could just measure those first- and second-order quantum corrections and see if experiment lines up to theory." What would these experiments look like? $\endgroup$
    – Jagerber48
    Commented Jun 5 at 7:16
  • 3
    $\begingroup$ There's a lot of words here burying the lede: "[Current] theory fails in the high-energy limit". $\endgroup$
    – Xerxes
    Commented Jun 5 at 13:33
  • $\begingroup$ Excellent answer. Just one small correction to the quote "the Planck energy associated with gravity is so much lower than the implied energies of blowup of electroweak and strong forces": strong force (QCD) blows up on the low energy end, which is the QCD scale $\Lambda_{QCD}$. On the other hand, strong force behaves perfectly fine on the high energy end due to asymptotic freedom. $\endgroup$
    – MadMax
    Commented Jun 7 at 16:58
27
$\begingroup$

I think, like a lot of technical questions, you can get different answers by "zooming in" to different levels of technical detail.

Here is the highest-level, least technical, shortest explanation I can come up with:

Feynman described the process of science as (1) making a guess for a specific model, (2) computing the consequences, and (3) comparing with experiment. Experience over the past 100 odd years of quantum mechanics has given us a set of tools to do (1) -- that is, for ways of "guessing" quantum theories. However, when we apply these rules to Einstein's theory of general relativity for gravity, we run into problems in step (2). If we try to compute predictions at or above the Planck scale, we find that the predictions are nonsense. Whether that's because the theory we guess is not well defined, or whether it's because our standard tools aren't good enough to understand the theory, is debatable, although most people would probably argue for the theory being ill-defined. As an additional, although logically separate, problem, we also have problems with (3), because to date no one has found a situation where both quantum mechanics and general relativity are necessary to explain an observation.

As an aside, the fact that the problem is with step (2), and not step (1), is one of the reasons this subject is so technical and difficult to explain at a "non expert physics" level. The issue isn't some grand philosophical question about what kind of guess makes sense. We have a specific theory -- general relativity -- and we have a specific procedure to follow -- quantization. The issue is that when carrying out the procedure in detail, we don't arrive at useful predictions at high energies. On some level, the "right answer" to this is that we're doing something wrong, and the obvious way to write an "easy non-expert" answer would be to simply tell you what the wrong thing is and how to fix it, but we don't know how to fix the problem. So, that maybe gives some context for why it's so hard to give a non-technical explanation; all we can really say for sure is what goes wrong in the calculation if we try to follow the usual rules.

That may be unsatisfying, because I'm not really saying what the issue with (2) is, so I will try to go one level deeper (still only just scratching the surface though).

  • "Quantization" is a series of heuristic rules (aka, guesses) that we apply to define a quantum theory that is guaranteed to reduce to a given classical theory in the limit that quantum effects are small.
  • These rules have been very successful when applied to theories of particles with spin-0, 1/2, and 1, at relativistic energies (meaning when special relativity is relevant). The Standard Model is an example theory like this.
  • The rules can be applied to gravity, which is described by a spin-2 particle coupled to other matter. When we apply the rules, we find that we can make predictions in the sense of computing the first few terms of a Taylor series of the energy scale of the process divided by Planck scale. (Technically the Taylor series I'm referring to is called the "Wilson action" at scale $\Lambda$, and I'm supposing the cutoff scale $\Lambda$ is around the Planck scale).
  • However, for energies at or above this energy scale, it is not enough to compute a few terms. This causes several problems.
    • First, most obviously, we don't know how to compute the infinite number of terms and "resum" them into something useful.
    • Second, we need to account for effects that don't appear in Taylor series at all. This might include effects of quantum black holes, for example.
    • Additionally (I'm not saying "finally" because this subject is so complicated that I'm sure someone can come up with more problems), according to our normal rules, we expect that for each term in the series, we need to introduce a new parameter to make that term well defined. The details of how you introduce these parameters are what people call "renormalization" -- a "renormalizable" theory is one where you only need to introduce a finite number of parameters. Renormalizable theories are predictive because we can do a finite number of experiments to measure those parameters, then the results of all other experiments are predicted by the theory. Non-renormalizable theories, on the other hand, require an infinite number of parameters, so no matter how many measurements you do, you can always adjust a parameter in the theory to fit that experiment. Therefore, non-renormalizable theories are not predictive, so not useful for science. (More precisely, they are not predictive in the regime when all the terms in the Taylor series are important.)

The above explanation likely leads to many more questions (like, "what is it about a spin-2 field that is different from spin-0, 1/2, and 1 fields"), but to answer those would require "zooming in" to deeper levels, and I will not try to do that now.

However, I will try to give you a broad idea of some approaches people have tried to fix this problem (not at all comprehensive):

  • One approach is to say that the equations of general relativity really are correct quantum mechanically, and we either need to understand quantum mechanics better, or modify quantum mechanics.
    • One version of the idea that the equations of general relativity are correct, and we "just" need to understand quantum mechanics better, goes by the name of "asymptotic safety", where it is hypothesized that the series I am describing above actually is something meaningful that does not require an infinite number of parameters to define. The main problem with this approach is to show that the "something meaningful" (a conformal fixed point) actually exists, and that is very hard, and not guaranteed to work.
    • One version of the idea that we need to modify quantum mechanics to "accommodate" general relativity is loop quantum gravity, which states that if we work with the right variables, and modify the standard quantization procedure, we can get a well defined theory. One problem with this approach is that, because the standard quantization procedure is modified, it is not guaranteed to recover classical general relativity as an appropriate limit when quantum effects are small, and indeed (as far as I know) no one has been able to show this limit exists.
  • Another approach is to say that the equations of general relativity are just an approximation to something else that takes over at the Planck scale, and we can apply standard quantum mechanics to whatever this new thing is. Of course, there are a lot of new equations you could guess, and no experiments we can use to rule them out, so the trick is to find a compelling reason to consider any particular guess.
    • The classic example of this approach is string theory. The way string theory gets around the problem of "why did you guess that" is basically that string theory wasn't originally invented as a theory of quantum gravity. It started as an exercise to guess a formula with certain properties that would be useful in the context of the strong interactions, and eventually people realized that formula drops out of a theory of a relativistic string, along with a massless spin-2 particle (the graviton), 10 dimensions (if you include supersymmetry), and so on. At a fundamental level, string theory is very tightly constrained and has a lot of internal logic, which is very good for this kind of approach, because it gives a reason for why this particular set of equations is worth studying. However, a major problem with string theory is trying to construct a model that reproduces the world we live in. Among other challenges, a problem is that when you compactify ("get rid of") the extra dimensions we don't observe, you typically introduce massless particles (moduli) which would exert forces we don't observe. You can try to solve this problem, but it leads you down a rabbit hole that I would say no one has made sense of yet -- at least, there is not a compelling model that solves that issue plus reproduces the vacuum we observe.

Finally, I'd like to end by pointing out that there are some things we'd like a theory of quantum gravity to do, like explain what happens at the singularity of a black hole, or at the big bang singularity in cosmology. So far, none of the approaches have a compelling explanation for the singularities (and of course, even if they did theoretically, we are very far from being able to test the explanation empirically).

$\endgroup$
3
  • $\begingroup$ That was the uncomplex answer! $\endgroup$
    – mtyson
    Commented Jun 6 at 21:41
  • 1
    $\begingroup$ @mtyson I guess the least complex answer would be: "If you take the rules that work for the Standard Model, and apply them to general relativity, we find you get useless predictions at energies above the Planck scale." But, I think that doesn't actually go deep enough to answer the question, just kind of reframes it. To get deep enough to say what the problem is while not using math is a tricky balancing game! (And my answer can certainly be improved in terms of clarity and accuracy!) $\endgroup$
    – Andrew
    Commented Jun 6 at 21:49
  • $\begingroup$ Wow! This is a great structure and explanation! Especially dividing up the parts for different depths gives one a good break point to think through the high-level explanation before diving into the details - thank you! $\endgroup$
    – Falco
    Commented Jun 7 at 9:42
8
$\begingroup$

I'll give a brief explanation of renormalization. Suppose you have a single electron. According to electrostatics it produces a potential: $$V(r)=\frac{e}{r}$$

This formula seems to indicate that the potential increases without limit as you get closer to the electron. In reality, that doesn't happen. Rather, as the energy increases, other kinds of particles are created, and adding up their effects gives a different potential, one that gives finite answers to questions about the electron's energy depending on how you're coupling to it. If you send a low-energy electron in its direction, you'll get some finite answer if you ask what energy they both end up with. If you send a higher-energy electron, you'll get different answers, but they will still be finite. Understanding how the answers change with energy as a result of such interactions is called renormalization.
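As a sketch of what "answers changing with energy" looks like quantitatively, the standard leading-log QED formula for the effective coupling (valid for momentum transfers $Q$ well above the electron mass $m_e$) is:

$$\alpha_{\text{eff}}(Q^{2}) \approx \frac{\alpha}{1 - \dfrac{\alpha}{3\pi}\,\ln\dfrac{Q^{2}}{m_{e}^{2}}}$$

The effective coupling grows at higher energies (i.e. shorter distances), and tracking this energy dependence is exactly the renormalization described above.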

To do renormalization calculations you have to add up series of terms and if those series give a finite answer the theory is said to be renormalizable. Otherwise it is non-renormalizable. For quantum gravity the relevant series don't give a finite answer.

For a book on renormalization that is relatively easy, see "Renormalization Methods: A Guide for Beginners" by W. D. McComb, which requires knowing the sort of maths you would need for a quantum mechanics course: multivariable calculus, expansion of functions in series (Taylor series etc.), solutions of ordinary linear differential equations, simultaneous equations, determinants, eigenvalues and eigenvectors and Fourier transforms. If you're willing to make more effort than that see "Quantum Field Theory for the Gifted Amateur" by Lancaster and Blundell or "Quantum Field Theory in a Nutshell" by Zee for a good introduction to quantum field theory.

$\endgroup$
5
$\begingroup$

Trying to be very concise here:

  • Quantum Mechanics brings minimal uncertainty (fluctuations, fuzziness) to quantities.
  • According to general relativity, gravity is an effect of mass curving spacetime.
  • Using quantum mechanics or quantum field theories (such as QED, the quantized theory of electromagnetism) in any fixed geometry (Euclidean/flat as in the non-relativistic limit, but also in a fixed curvature) is straightforward.
  • It becomes troublesome when masses with an uncertain position curve spacetime: this means that the curvature of spacetime is itself uncertain! This makes it hard to even work in a fixed coordinate system, and gives complicated backactions between the spacetime and the massive particles.

Renormalisation is a very technical issue of making sense of infinities in all quantum field theories, but the above is the main conceptual difference that distinguishes gravity from the other fundamental interactions (electromagnetism and the strong and weak interaction), that are described by fluctuating fields in a fixed geometry.

$\endgroup$
4
$\begingroup$

The problem arises when trying to quantize gravity the same way as other fields are quantized. The standard way to solve such a quantum field theory is to start from an easy-to-solve (linear) model where the interaction force is absent, and write the corrections to this as a power series in the parameter that describes the strength of the interaction.

In the case of gravity, this parameter is the gravitational constant $G$. When we account for relativity and quantum mechanics, the speed of light and Planck's constant appear so that the actual parameter is $\hbar G/c^3$, which has dimensions of area and is denoted by $l_{\mathrm{P}}^2$ (the square of the Planck length).

So formally, a (dimensionless) prediction for a measurement in quantum gravity will be a power series $$b_0 + b_1 l_{\mathrm{P}}^2 + b_2 l_{\mathrm{P}}^4 + \cdots$$ where the coefficients $b_i$ require detailed calculation.
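A quick numerical sketch of how small this expansion parameter is; the constants are CODATA values, and the choice of $r = 1\ \mathrm{fm}$ is just an illustrative nuclear scale.

```python
import math

# The expansion parameter for quantum-gravity corrections is (l_P / r)^2.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)   # Planck length
r = 1e-15                               # 1 femtometre, roughly a proton radius
expansion_param = (l_planck / r) ** 2

print(f"Planck length l_P       = {l_planck:.3e} m")
print(f"(l_P / r)^2 at r = 1 fm = {expansion_param:.1e}")
```

Even at nuclear distances the correction terms are suppressed by roughly forty orders of magnitude, which is why the series is harmless at experimentally accessible energies.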

Because quantum fields have fluctuations on all length scales, it turns out that the $b_i$ are mainly dependent on the smallest length scale present in the theory, call it $a$. We don't know what this length scale should be, other than it's smaller than anything our experiments have probed so far.

In order to be dimensionally consistent, the coefficients must scale as $b_i \propto 1/a^{2i}$ (and detailed calculation confirms this). So for $i \ge 1$, the coefficients grow ("diverge toward infinity") more and more rapidly as $a$ becomes small. Since we don't know just how small $a$ is, and there are infinitely many terms that are important for small $a$, it is hopeless to get any definite predictions.
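To make the dimensional argument explicit: since the prediction is dimensionless and $b_i \propto 1/a^{2i}$, each term of the series is of order

$$b_i \, l_{\mathrm{P}}^{2i} \sim \left(\frac{l_{\mathrm{P}}}{a}\right)^{2i} .$$

For $a \gg l_{\mathrm{P}}$ successive terms shrink rapidly and only the first few matter; as $a \to l_{\mathrm{P}}$ every term is of order one and the series becomes useless.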

For certain other quantum field theories, where the series has only finitely many terms that grow as $a$ becomes small, it is possible to constrain the theory by fitting to observations and get nontrivial predictions without knowing just how small $a$ is. This is called renormalizability, and the lack of it is part of the difficulty of quantum gravity.

$\endgroup$
2
  • 1
    $\begingroup$ +1, nice answer. Just one question: "Because quantum fields have fluctuations on all length scales, it turns out that the $b_i$ are mainly dependent on the smallest length scale present in the theory, call it $a$." Why is that the case? $\endgroup$ Commented Jun 6 at 22:39
  • 1
    $\begingroup$ @AccidentalTaylorExpansion Some intuition is given by the ultraviolet catastrophe. There are a lot more degrees of freedom in a field as we go to smaller scales. In the case of blackbody radiation, the quantization of energy sets an effective smallest wavelength of radiation that can occur at a given temperature. In the higher-order corrections of quantum field theories, we deal with "virtual" radiation that is not limited in this way, so we have a similar "ultraviolet catastrophe" unless we put some other lower limit on wavelengths. $\endgroup$
    – nanoman
    Commented Jun 7 at 0:06
3
$\begingroup$

I'll take a stab at an oversimplification for a partial answer here. More or less I consider this a comment addendum to JF10356's answer.

Renormalization can be thought of as a very fancy resizing tool. It's meant to convert unusable infinities into useful quantities and put everything on the same scale. It's somewhat like a math trick: you can make infinity disappear with it as long as you follow some rules.

This allows theories to be tested and compared apples to apples.

A theory comes with its own mathematics, and the theory gets applied to a model. It is this math, inside the models and the testing against experiment, that gets renormalized.

So when a theory is "non-renormalizable" it means that the theory contains terms incompatible with this math trick.

This leaves the models open to things breaking, or terms blowing up.

When people speak of a term "blowing up" they mean that it basically becomes an unusable infinity.

$\endgroup$
0
$\begingroup$

I am adding this answer because although @Andrew and @JF10356 wrote perfect answers, I think there are a few things that need to be explained at the "non-technical" level. I would like to talk about the near (static) field, the far (radiation) field, the different quantization, and the fact that while a single photon is easily detectable, a single graviton is not.

You can see that in the case of EM and its quantization, we can talk about two types of fields:

  1. the near field: an electron has a near field around itself that is technically described by the so-called (infamous) virtual photons, and it is perfectly quantizable; we did it, and it works. But although the model is experimentally testable, we cannot really test whether quanta are being transferred one-by-one between two near fields when two electrons repel; we just see in the experiments that they do repel, and the model predicts that perfectly. When we interact with the near field, do we really check that energy is being transferred in these quanta (photons)? Not really; we just describe the field in terms of virtual photons.

  2. the far field, that is, the radiation: here we have experimentally tested our quantized models, which say that this energy is transferred in quanta, the photons; we even have detectors that can detect single photons. We do know how the atomic system absorbs and emits this energy (and how it stores it). The model works.

You can try to do this with gravity, first with the near field:

  1. To do that we need big objects (well, really a lot of stress-energy), simply because the near gravitational field of an electron (or of smaller objects) is undetectable (we do have the Cavendish experiment, but that does not help with quantization). Theoretically we could do the quantization and describe the near gravitational field in terms of virtual gravitons, but to do that we would need a working, tested, experimentally verified model of the far field first, where we can actually detect the quanta (gravitons) being transferred. We do not have a model like that, and neither can we detect single gravitons. Here comes another problem that was not mentioned: while photons do not emit photons, gravitons (theoretically) do emit gravitons. You can say this in many ways: gravity causes more gravity, spacetime curvature causes more curvature, or gravity is self-interacting.

  2. the far field, that is, the radiation field: gravitational waves. We do detect them, but we cannot detect single quanta (gravitons), and this is the real problem here. In the case of EM, we simply detect single photons, experimentally verify the model, and it works out. With gravity, we cannot do that.

(Here are two more things to mention, but this really is just an interesting aside, not a technical description, so do not take it too seriously:

  1. One more problem is that, historically, EM was quantized and tested for the radiation field first, and then we "applied" that model to the near field using virtual particles (really just a mathematical model). We cannot do that with gravity, because we can't even detect a single graviton in gravitational waves, let alone experimentally test whether this graviton description is a good one or not.

  2. Usually answers about renormalization and scales talk about small scales (mentioning the Planck scale), but it is worth looking at the other end of the scale: celestial-sized objects. What is worth mentioning at the non-technical level is infinities (and the fancy word renormalization), and the fact that gravity is the dominant force when building celestial-size objects (especially black holes). In the case of the EM force you have opposite charges, so when you start building bigger objects some parts will repel and objects start falling apart (like big nuclei; it is more complicated than that, and you can use covalent bonding, but that won't help past a certain size). The strong force is limited by its range-dependent behaviour (repulsive at very short range, attractive at medium range, no force at long range), so you cannot build really big objects using only that force either (not to mention confinement). The only force that lets the universe build extreme objects is gravity, and here comes the problem: the stress-energy of these objects can grow without limit (as in black holes), and this, combined with the fact that gravitons emit gravitons (gravity is self-interacting, or curvature creates more curvature), creates an infinity problem. It is very hard to renormalize (cut off parts of an infinite series of a mathematical model to make it usable for calculations) a theory whose gravitons are self-interacting (infinite interactions), or to apply it to bodies whose stress-energy can grow without limit, causing the effects of gravity to grow without limit (infinite curvature).)

$\endgroup$
0
$\begingroup$

Calculations in quantum field theories tend to give infinite answers to various questions. This is a problem, but it can be fixed with renormalisation. It basically works like this:

  • Measure some quantities
  • Express results of some calculations in terms of things you measured
  • Obtain final results by using the values measured in the experiments

This works for gravity, BUT the number of different infinity types is infinite. Therefore, the theory lacks any predictive power, because it can explain anything by fitting those infinitely many parameters.

$\endgroup$
