
In the collinear factorisation formula for a QCD cross-section, schematically $$ \sigma_{AB\to X} = f^A_a \otimes \hat{\sigma}_{ab\to X} \otimes f^B_b , $$ we essentially convolve the probability of producing a final state $X$ from initial-state partons $a$ and $b$ with the probability (more precisely, number) density of finding $a$ in hadron $A$ and $b$ in hadron $B$ with the requisite initial-state momenta, expressed through the parton distribution functions $f^A_a$ and $f^B_b$.
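Written out more explicitly (with the sum over parton flavours and a factorisation scale $\mu$ made visible), what I have in mind is $$ \sigma_{AB\to X} = \sum_{a,b} \int_0^1 dx_a \, dx_b \; f^A_a(x_a,\mu^2) \, f^B_b(x_b,\mu^2) \; \hat{\sigma}_{ab\to X}(x_a,x_b;\mu^2) . $$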

Is there an intuitive way of understanding why it's OK to adopt this almost-classical approach of multiplying probabilities, rather than working with probability amplitudes as is done ubiquitously elsewhere in QFT?


2 Answers


Firstly, there is the usual intuitive physical argument, which you are probably aware of and which is also explained in the post by @Ratman. Another version of this same argument is that, because of the asymptotic freedom of QCD and the high energy scale of the process, the partons inside the hadron are approximately free particles moving collinearly with the hadron, since the QCD coupling decreases at higher energy scales. The partons then have approximate momenta $\xi P$, where $\xi \in (0,1)$ and $P$ is the hadron momentum. Hence, if two hadrons collide, it is as if the partons just scatter off each other, and for the total cross section you just have to take into account the distribution of the partons inside the hadrons, given by the PDFs.
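To make this concrete with the standard textbook example (schematically, ignoring scale dependence): in the naive parton model the DIS structure function is just an incoherent, probability-weighted sum over quark flavours, $$ F_2(x,Q^2) \approx \sum_q e_q^2 \, x \, f_q(x) , $$ i.e. the hadronic cross section is the free partonic cross section weighted by the number density of each parton species.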

Now, this is a useful but of course very simplified picture. Proving QCD factorization from first principles is quite complicated. But it can be proven for many processes, with rigor at the level of physics, and there are a number of different routes to factorization theorems. The usual way is the operator product expansion; then there is also soft-collinear effective theory. But I think the most intuitive way is the perturbative QCD approach, which operates at the level of Feynman graphs, and I will try to summarize it briefly. For a really in-depth treatment, Collins' book "Foundations of Perturbative QCD" is definitely the right source.

Explaining this in a few words is not easy, so you should take what I write as a crude approximation of what is really going on. The argument is, very roughly, that there are certain regions of the loop integration of the Feynman graphs of the given process which give the leading power in $m/Q$, where $Q$ is the characteristic large scale of the process and $m$ is a small mass scale, of order $\Lambda_{\text{QCD}}$. And the cool thing is that these leading regions are precisely those where the lines of the Feynman graph close to the hadron external lines are collinear to the hadron momentum $P$, and the lines close to the interaction vertex carry large virtuality, of order $Q^2$. In those leading integration regions, we can factorise any graph into a convolution of two pieces: a factor contributing to the hard function, corresponding to a subgraph close to the interaction vertex, and a function, the PDF, corresponding to the subgraph close to the hadron external lines. The convolution integral comes from the loop momentum connecting these two subgraphs. See the following picture for DIS:

[Figure: a cut DIS diagram, with the hard subgraph attached to the virtual-photon vertex connected by collinear parton lines to the subgraph attached to the hadron, which becomes the PDF.]
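Schematically, and up to convention-dependent factors of $\xi$, the resulting factorization theorem for a DIS structure function then reads $$ F_2(x,Q^2) = \sum_a \int_x^1 \frac{d\xi}{\xi} \, C_a\!\left(\frac{x}{\xi},\frac{Q^2}{\mu^2},\alpha_s(\mu)\right) f_{a/h}(\xi,\mu) + O\!\left(\left(\frac{\Lambda_{\text{QCD}}}{Q}\right)^2\right) , $$ where the hard coefficient $C_a$ comes from the subgraph at the photon vertex and the PDF $f_{a/h}$ from the collinear subgraph.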

So indeed the Feynman diagrams give an intuitive understanding which goes beyond the level of perturbation theory. The leading contributions correspond to collinear "parton" lines in the Feynman diagrams, reproducing the classical picture. The coefficient function (or hard subgraph) is just the amplitude for the partonic scattering, corresponding to the $\hat{\sigma}$ in your equation. The PDFs can be defined as hadronic matrix elements of so-called light-ray operators, which, as you know, cannot be calculated in perturbation theory. The number-density interpretation of the PDFs can be justified using light-front quantization, also explained in Collins' book.
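For definiteness, the (bare) quark PDF is such a light-ray correlator along the light cone; schematically, up to normalization and sign conventions, $$ f_{q/h}(\xi) = \int \frac{dw^-}{2\pi} \, e^{-i\xi P^+ w^-} \, \langle P | \, \bar{\psi}(0,w^-,\mathbf{0}_T) \, \frac{\gamma^+}{2} \, W[w^-,0] \, \psi(0) \, | P \rangle , $$ where $W$ is a light-like Wilson line making the operator gauge invariant. In light-front quantization this matrix element literally counts quarks with plus-momentum fraction $\xi$.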

  • Thanks for this. I have a copy of Collins' book and that's where I've been trying to find the answer to this, but I find it rather hard to follow - and especially to convert this subgraph idea into the usual picture of PDFs convoluted with a partonic cross-section. Is there any chance that you could expand on this and/or add references to specific parts of Collins' book? What I'm really specifically looking for is the form of the 'missing'/suppressed interference term you would naively expect to contribute, and a good reason why we think it's small.
    – JCW
    Commented Feb 9, 2021 at 18:52
  • It is indeed very hard to follow. I spent months reading it and still have trouble understanding most of it. I think trying to reproduce the calculations in chapter 6, while accepting the power-counting results from chapter 5, helps a lot in understanding the main idea. If you are just looking for a proof, the OPE is surely more straightforward (for example chapter 32 in Schwartz seems good to me), but as said, the pQCD approach is more intuitive and generalizes more readily to a wide variety of processes....
    – jkb1603
    Commented Feb 9, 2021 at 20:18
  • So I am not sure what you mean by "interference term". If you mean the "interference terms" you get from squaring the scattering amplitudes to get the cross section, this is taken into account, e.g. in DIS, by the cut through the diagram (see the diagram in my post), i.e. the diagram is actually a contribution to the cross section. If you mean the power-suppressed terms, i.e. terms suppressed by powers of $m/Q$, that correct the factorization theorem, this is another story, which is not even covered in Collins' book, and I do not know how to obtain them from pQCD; you need other methods for that.
    – jkb1603
    Commented Feb 9, 2021 at 20:25
  • Yes, I mean the interference term arising from the use of probability amplitudes to represent 'or', rather than the classical use of probabilities. To clarify my thinking, classically we would perhaps write P(happens) = P(happens via quark) + P(happens via gluon). Analogously to the double slit experiment I would expect that here we should instead add amplitudes representing | p to X via quark > and | p to X via gluon > and take the modulus-squared to get a probability, which would give two interference terms. In the sum over partonic channels we're explicitly neglecting these cross-terms.
    – JCW
    Commented Feb 11, 2021 at 10:44
  • I am not sure if we are neglecting those cross terms. For example the factorization theorem for DIS applies at the level of cross sections, which are well-defined probabilities. Maybe Schwartz 32.2 will help you with this question. Below (32.35) he explicitly mentions those interference terms and takes them into account (at NLO).
    – jkb1603
    Commented Feb 11, 2021 at 11:55

I am not an expert; this is the basic idea I have found so far. Think about the DIS process ( $l(k)+h(p) \rightarrow l(k')+X(p_{X})$ ), which is conceptually analogous to hadron-hadron collisions. Assume the virtual photon probes the hadron at a scale $Q^2=-q^2=-(k-k')^2 \gg \Lambda_{QCD}^2$, so the interaction between the virtual photon $\gamma^*$ and the parton found inside the hadron is characterized by a timescale $\tau \sim 1/Q$. The dynamics inside the hadron, described by the PDF, is instead characterized by a timescale of order $\tau_{QCD} \sim 1/\Lambda_{QCD}$, so $\tau_{QCD} \gg \tau$. This separation of timescales allows us to say that the two events happen independently, because the ordinary QCD dynamics inside the hadron, given its much longer timescale, can't influence the hard interaction. Sometimes this is even explained by saying that the probe sees a snapshot of the hadron if the energy scale is large enough. So the interaction is considered to happen between free particles. Since the hard interaction is independent, you can multiply the partonic cross section and the PDFs to get the total probability of the process. This is just an intuitive approach; as far as I know there isn't a general mathematical proof of this method.
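As a rough numerical illustration of the scale separation (my own numbers, using $\hbar c \approx 0.197\ \text{GeV}\cdot\text{fm}$): for $Q = 10\ \text{GeV}$ the hard interaction lasts $\tau \sim 1/Q \approx 0.02\ \text{fm}/c$, while with $\Lambda_{QCD} \approx 0.2\ \text{GeV}$ the internal dynamics of the hadron evolves on $\tau_{QCD} \sim 1/\Lambda_{QCD} \approx 1\ \text{fm}/c$, a separation of roughly a factor of fifty.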

