$\begingroup$

I have put my actual questions in bold, and the rest of this question clarifies what I mean to be asking, especially in terms of the depth and kinds of information I am looking for. Partial answers would still be appreciated.

Of course, if any premise of my question or my understanding leading up to the question is incorrect in some way, please let me know. I understand from tutoring math and physics, at a lower level, that lots of questions in physics flow from incorrect assumptions.

What I Understand About Parton Distribution Functions

Parton distribution functions are clearly an important component of making predictions at the LHC. As a pre-print from today explains:

The parton distribution functions (PDFs) which characterize the structure of the proton are currently one of the dominant sources of uncertainty in the predictions for most processes measured at the Large Hadron Collider (LHC).

From here.

I can understand what a graph or chart showing a PDF means, and I have a pretty good idea of how it fits into the calculations used to make predictions in high energy physics. As Wikipedia explains in the first link above:

A parton distribution function within so-called collinear factorization is defined as the probability density for finding a particle with a certain longitudinal momentum fraction $x$ at resolution scale $Q^2$. Because of the inherent non-perturbative nature of partons, which cannot be observed as free particles, parton densities cannot be calculated using perturbative QCD.

But, I'd like to understand the details of this better.
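
For reference, my understanding of the collinear factorization mentioned in that quote is the schematic statement that a hadronic cross section separates into PDFs convolved with a perturbatively calculable partonic cross section,

$$\sigma(pp \to X) \simeq \sum_{i,j} \int_0^1 dx_1 \int_0^1 dx_2\; f_i(x_1, \mu^2)\, f_j(x_2, \mu^2)\; \hat\sigma_{ij \to X}(x_1, x_2, \mu^2),$$

where $f_i(x, \mu^2)$ is the PDF for parton species $i$ and $\mu$ is the factorization scale (often taken to be of order the resolution scale $Q$). Please correct me if I have this wrong.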

How Are PDFs Measured In Practice?

I understand at a high level of generality that at the LHC and other high energy physics experiments, parton distribution functions (PDFs) are, in practice, experimentally measured somehow and then used to predict future collider experiment results based upon the Standard Model. But, my understanding of how PDFs are experimentally measured pretty much ends there.

So, my first and core question is:

How are PDFs measured in practice?

By way of example of the kinds of responses I could imagine, an answer might address the following questions (given that I don't know how it is actually done, I am not looking for any specific answer, and I don't know whether any or all of these questions would actually be useful to include in an answer):

  • Do high energy physicists trying to measure PDFs try to fit parameters of some function for a PDF, or do they determine this purely numerically and empirically, measuring all values that they can and then interpolating between measured values to reflect binning effects and the like?

  • Is the measurement process done from scratch in each round of measurement, or is it an iterative process where the scientists start with a Bayesian prior based upon previous experimental results and theoretical constraints, and then update the previous PDFs to add the new data?

  • Exactly what observables are they looking for and what kind of data is extracted from collision results to determine PDFs?

  • How do they figure out what the source particle is and from there what partons it produces in many repeated collisions?

  • How strongly does information about the partons produced need to be correlated with other data from the same collision?

  • To what extent is the measurement automated and to what extent is individualized involvement of experimenters required (either in data collection or data analysis)?

  • What are the main sources of uncertainty in the measurements used to establish PDFs?

  • Is this where jet energy scale uncertainty comes into play in measurement uncertainties?

I recognize that a good answer might not address all of these.
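
On the first bullet, functional forms like $x f(x) = A\, x^a (1-x)^b$ (possibly multiplied by polynomial corrections) appear as starting parametrizations in the PDF fitting literature. Purely as a toy sketch of the kind of parameter fitting I have in mind, with made-up pseudo-data standing in for the real extracted values:

```python
import numpy as np

# Toy sketch of the "fit a functional form" approach asked about in the first
# bullet. The ansatz x*f(x) = A * x**a * (1 - x)**b is a common starting form
# in the PDF literature; the data points below are made up and only stand in
# for values that would really come from fits to measured cross sections.
x = np.array([0.01, 0.05, 0.1, 0.2, 0.3, 0.5, 0.7])
A_true, a_true, b_true = 2.0, 0.7, 3.0
xf_data = A_true * x**a_true * (1 - x)**b_true  # noiseless pseudo-data

# After taking logs the ansatz is linear in the parameters (log A, a, b):
#   log(x f) = log A + a*log(x) + b*log(1 - x)
design = np.column_stack([np.ones_like(x), np.log(x), np.log(1 - x)])
coeffs, *_ = np.linalg.lstsq(design, np.log(xf_data), rcond=None)
A_fit, a_fit, b_fit = np.exp(coeffs[0]), coeffs[1], coeffs[2]
print(A_fit, a_fit, b_fit)  # recovers A=2.0, a=0.7, b=3.0
```

Real global fits are of course vastly more involved than this: they fit many parton species simultaneously at a starting scale, evolve them in $Q^2$, and compare against thousands of data points with full uncertainty treatment.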

How Are PDFs Determined In Theory From First Principles?

Apparently PDFs can be determined, in theory, from first principles in the Standard Model.

Despite the fact that PDFs are experimentally measured inputs (somehow or other) into the calculations used to predict what happens in collisions at high energy physics experiments like the LHC, PDFs are not considered to be physical constants of the Standard Model (which basically consist of the fundamental particle masses/Higgs Yukawas, the three Standard Model force coupling constants, and the parameters of the CKM and PMNS matrices, plus a few more general fundamental constants not really specific to the Standard Model, like Planck's constant and the speed of light).

So it follows that there ought to be some way, in principle, to determine PDFs from first principles using only the fundamental physical constants, the Standard Model Lagrangian, and any other basic assumptions of the Standard Model that don't fit neatly into a formula or a physical constant (e.g. all probabilities must add up to 100%, all observables must be real numbers, the Standard Model assumes three dimensions of space and one of time, Lorentz transformations must be applied in the appropriate circumstances, etc.).

What I Don't Know

But, I don't really understand, at even a very high conceptual level, how one would use the fundamental formulas and physical constants of the Standard Model to determine a PDF. I imagine it would have some parallels to computing decay widths with appropriate consideration of the Heisenberg Uncertainty Principle thrown in somehow, but really, I can't see how the dots get connected.

I presume that it must be computationally intense (if it weren't, first-principles calculation would presumably be the norm and the reliance on experimental measurements of PDFs would be much weaker), and apparently (per the first link in this question) "it has been found that they can be calculated directly in lattice QCD using large-momentum effective field theory." But, I would hope for some greater insight into how someone could use lattice QCD to calculate a PDF than this brief snippet provides.
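
From the little I have been able to glean about large-momentum effective theory (LaMET), the schematic idea (conventions and factors vary between papers) is this: the light-cone correlator that defines a PDF cannot be evaluated directly in Euclidean lattice simulations, so one instead computes an equal-time, purely spatial quark correlator in a proton boosted to large momentum $P_z$, a so-called quasi-PDF,

$$\tilde q(x, P_z) = \int \frac{dz}{4\pi}\, e^{i x P_z z}\, \langle P |\, \bar\psi(z)\, \gamma^z\, W(z, 0)\, \psi(0)\, | P \rangle,$$

where $W(z,0)$ is a Wilson line that makes the correlator gauge invariant, and then matches it perturbatively onto the light-cone PDF,

$$\tilde q(x, P_z) = \int \frac{dy}{|y|}\, C\!\left(\frac{x}{y}, \frac{\mu}{P_z}\right) q(y, \mu) + \mathcal{O}\!\left(\frac{\Lambda_{\rm QCD}^2}{P_z^2}, \frac{M^2}{P_z^2}\right),$$

with power corrections that vanish as $P_z \to \infty$. If this sketch is wrong or incomplete, corrections would be welcome as part of an answer.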

Honestly, even though it should be possible, I don't know whether anyone has ever actually calculated any PDF for even the simplest conceivable physical system to which it would apply (with admittedly high margins of error, owing to uncertainties in the physical constant inputs and the fact that not all theoretically relevant terms can be included in the calculation), as opposed to merely thinking about what would go into that calculation in order to gain qualitative insight into questions like how PDFs change with energy scale.

The Question

So, I am also asking:

At a very high conceptual level, what is the gist of how a PDF would be determined from first principles in the Standard Model?

A corollary question (which should have basically a yes-or-no answer and, if the answer is "yes", a description of which PDF was calculated, the research group name and year, and/or an article citation) is:

Has anyone ever actually calculated a PDF from first principles?

If the answer to the second question is "yes", a citation would give me a way to work out for myself how a PDF is determined from first principles. So an answer to the easier corollary question, via an article like that, could be just as helpful as an answer to the primary question.

Depth Of Desired Answer

I presume that these questions can be considered at many levels of depth, and that at a sufficient level of depth and detail it would take a full grad-school-level course to cover them. But an answer that hits the most important high-level highlights in a few paragraphs and/or formulas, and then provides references to open access sources that explore the question in greater depth, would be greatly appreciated.

I am looking for an answer equivalent in detail, perhaps, to the verbal description of what one is adding up in a path integral when one is calculating the propagator function for a photon or electron, or to the way the width of a particle is calculated by adding up the width of each possible decay of that particle and the factors that go into determining one individual decay width.
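
(For concreteness, by the width example I mean the level of description captured by

$$\Gamma_{\rm tot} = \sum_f \Gamma(X \to f), \qquad \Gamma(X \to f) = \frac{1}{2 m_X} \int d\Pi_f\; \overline{|\mathcal{M}(X \to f)|^2},$$

i.e. summing partial widths over the accessible final states $f$, with each partial width given by a phase-space integral over the squared matrix element, and the lifetime given by $\tau = \hbar / \Gamma_{\rm tot}$.)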

I understand, of course, that since a PDF is inherently non-perturbative in nature, the first-principles determination would probably look very different in kind from the examples of the propagator function for a photon or lepton, or a width determination for a particle in the Standard Model, which are basically in the realm of perturbative QCD.

But, I really don't have even a meaningful inkling of how one would, even in principle, go about determining a PDF using non-perturbative lattice QCD.

My hope is that understanding how a PDF could be calculated from first principles in the Standard Model will provide better insight into the way it is, in fact, measured empirically.

$\endgroup$
  • $\begingroup$ While this is a good question, I feel as though any answer would have to be cumbersome and lengthy as to suggest that it would really be easier to take a full course on QFT that goes over deep inelastic scattering. Though I may be proven wrong owing to the people who understand DIS better than I do. Perhaps maybe first ask the question about where PDFs come from and then make related posts asking the other questions so they can be linked together and answers can be more focused. One good place to start is the history of the Parton Model of QCD. $\endgroup$
    – Triatticus
    Commented May 15, 2019 at 1:00
  • $\begingroup$ David Z has written several in-depth answers about PDFs and structure functions elsewhere on the site. These are typically on what I would call a "practical basis" (meaning experiment). Last time I heard there was nothing remotely "practical" about first-principles calculations ... though admittedly that's getting to be several years ago, and progress in cQCD has been impressive in the last decade. $\endgroup$ Commented May 15, 2019 at 3:55
  • $\begingroup$ Here is a recent white paper, arxiv.org/abs/1711.07916, discussing two methods, with an enormous number of references. $\endgroup$
    – anna v
    Commented May 15, 2019 at 9:30
  • $\begingroup$ I think the Wikipedia article is over-reaching to claim that PDFs "can be calculated directly in lattice QCD". Instead, the direct calculation is of so-called "quasi-PDFs" that are related to the light-front PDFs in the infinite-momentum limit. One big obstacle for lattice calculations is that PDFs are defined from light-like-separated fields, while numerical lattice calculations are carried out in Euclidean spacetime. The nice white paper anna pointed out mentions this in section 2.2.2. $\endgroup$ Commented May 15, 2019 at 9:53
  • $\begingroup$ I love the conclusion of the White Paper's abstract: "This document represents a first step towards establishing a common language between the two communities, to foster dialogue and to further improve our knowledge of PDFs.." The "two communities" being different subsets of HEP physicists specializing in QCD calculations. The narcissism of small differences indeed. $\endgroup$
    – ohwilleke
    Commented May 15, 2019 at 12:31
