
I am wondering whether there is some systematic approach to finding Feynman diagrams for the S-matrix (or, to be more precise, for $S-1$, since I am interested in the scattering amplitude). For example, in $\phi^3$ theory and its variations (e.g. $\phi^2\Phi$) there is a ridiculous number of diagrams even at the two-loop level. I am particularly interested in $\phi\phi \rightarrow \phi\phi$ or $\phi\Phi \rightarrow \phi\Phi$ scattering.

What I usually do (for this kind of scattering) is this:

  1. I draw the tree-level diagrams.
  2. One-loop diagrams are obtained from the tree-level diagrams by connecting lines together with a single additional line in every possible manner (e.g. adding a loop on an internal line, or connecting an external leg to an internal line, ...).
  3. Two-loop diagrams are obtained from the one-loop diagrams by adding a line as in the previous point. I do not add loops on external legs, since those are irrelevant (at least for the S-matrix).
  4. Some of the options generated by this algorithm result in the same diagram - I use Wick's theorem to check whether two diagrams correspond to the same contraction; if they do, the redundant ones are erased (see the sketch below).

I think that the above algorithm should work (please correct me if I am wrong), but it is very cumbersome and impractical. It also does not work for $\phi^4$ theories, since one cannot simply "connect lines" there - but this does not cause much trouble, because $\phi^4$ has pretty simple diagrams up to the two-loop level.
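To make step 4 concrete, here is a rough Python sketch of the bookkeeping (purely illustrative, labels and helper names are just examples): enumerate all Wick contractions of the external legs together with the vertex legs, and collapse those that are literally the same contraction.

    from collections import Counter

    def pairings(endpoints):
        """Yield every perfect matching (Wick contraction) of a list of leg labels."""
        if not endpoints:
            yield []
            return
        first, rest = endpoints[0], endpoints[1:]
        for i in range(len(rest)):
            for tail in pairings(rest[:i] + rest[i + 1:]):
                yield [(first, rest[i])] + tail

    # phi phi -> phi phi at order g^2 in phi^3 theory:
    # 4 external legs plus two 3-valent vertices y1 and y2.
    endpoints = ['x1', 'x2', 'x3', 'x4'] + ['y1'] * 3 + ['y2'] * 3

    distinct = Counter()
    for m in pairings(endpoints):
        # two contractions are redundant if they consist of the same multiset of pairs
        key = frozenset(Counter(tuple(sorted(p)) for p in m).items())
        distinct[key] += 1

    print(sum(distinct.values()), "contractions in total")        # 9!! = 945
    print(len(distinct), "distinct contractions after removing duplicates")
    # (disconnected pieces and vacuum bubbles are still included here
    #  and would have to be discarded at the very end)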

So my question is: is there a useful method for obtaining Feynman diagrams, at least up to the two-loop level, in scalar field theory(ies)? Keep in mind that I am a beginner in QFT.

Comments:

  • Generation of graphs with $n$ vertices is a problem whose complexity grows as $n!$, so you shouldn't expect any "good" algorithm. Your method is as good as it gets (well, it can be improved, but not much). – Commented Mar 26, 2018 at 0:20
  • I wanted to say a few more things, but this happened. Oh well... – Commented Mar 26, 2018 at 18:54
  • For example, you say "it does not work for $\phi^4$". Yes: it does work, but you must keep the disconnected diagrams as well, at least in the intermediate calculations. Only at the very end can you drop them. – Commented Mar 26, 2018 at 19:02
  • See Chap. 14 of users.physik.fu-berlin.de/~kleinert/kleiner_reb8/psfiles/… – Commented Mar 26, 2018 at 20:45

1 Answer

OP has discovered on their own a primitive application of the Schwinger-Dyson equations. Congratulations!

A very gentle introduction to the Schwinger-Dyson equations.

... or how to calculate correlation functions without Feynman diagrams, path integrals, operators, canonical quantisation, the interaction picture, field contractions, etc.

Note: we will include operators and Feynman diagrams anyway, so that the reader may compare our discussion to what they already know. The diagrams below have been generated with the LaTeX package TikZ; the source for each one is linked below it. Feel free to copy, modify, and use it yourself.

Note: we will not be careful with signs and phases. Factors of $\pm i$ may be missing here and there.

Consider an arbitrary QFT defined by an action $S$. The most important object in the theory is the partition function, $Z$. Such an object can be defined either in the path-integral formalism or in the operator formalism (cf. this PSE post): \begin{equation} Z[j]\equiv N^{-1}\int\mathrm e^{iS[\varphi]+j\cdot\varphi}\mathrm d\varphi\equiv\langle \Omega|\mathrm T\ \mathrm e^{ij\cdot\phi}|\Omega\rangle\tag1 \end{equation} where $N$ is a normalisation constant; $\Omega$ is the vacuum state; and $\mathrm T$ is the (covariant) time ordering symbol.

In either case, one can show that $Z[j]$ satisfies the functional differential equation \begin{equation} \color{red}{(iS'[\delta]-j)Z[j]\equiv 0}\tag2 \end{equation} known as the Schwinger-Dyson (SD) equation. (Here, $\delta=\frac{\delta}{\delta j}$ denotes functional differentiation with respect to $j$.)
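For orientation, here is where $(2)$ comes from in the path-integral language (with the same cavalier attitude towards factors of $\pm i$ as announced above): the integral of a total functional derivative vanishes, so \begin{equation} 0=\int\frac{\delta}{\delta\varphi(x)}\,\mathrm e^{iS[\varphi]+j\cdot\varphi}\,\mathrm d\varphi=\int\big(iS'[\varphi](x)+j(x)\big)\,\mathrm e^{iS[\varphi]+j\cdot\varphi}\,\mathrm d\varphi \end{equation} and, since every factor of $\varphi$ under the integral sign can be generated by $\delta/\delta j$ acting on $Z[j]$, this is precisely $(2)$, up to the sign of the source term, which we are not keeping track of anyway.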

A fascinating fact about the SD equation is that it can be used to introduce a third formulation of QFT, alongside the path-integral and operator formalisms. In the SD formulation, one forgets about path integrals and operators. The only object is the partition function $Z[j]$, which is defined as the solution of the SD equation. The only postulate is SD, and everything else can be derived from it.

In this answer we shall illustrate how the standard perturbative expansion of QFT is contained in SD. Intuitively speaking, the method is precisely OP's algorithm: "take the lower order, and connect any two lines in all possible ways". For completeness, we stress that SD also contains all the non-perturbative information of the theory (e.g., the Ward-Takahashi-Slavnov-Taylor identities), but we will not analyse that here.

Scalar theory.

Our main example will be so-called $\phi^4$ theory: \begin{equation} \mathcal L=\frac12(\partial\phi)^2-\frac12 m^2\phi^2-\frac{1}{4!}g\phi^4\tag3 \end{equation} where $\phi\colon\mathbb R^d\to\mathbb R$ is a real scalar field.

The SD equation for the partition function is \begin{equation} \left[\partial^2\frac{\delta}{\delta j(x)}+m^2\frac{\delta}{\delta j(x)}+\frac{1}{3!}g\frac{\delta^3}{\delta j(x)^3}-ij(x)\right]Z[j]\equiv 0\tag4 \end{equation}
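For the record, $(4)$ is nothing but $(2)$ evaluated on the action $(3)$: the functional derivative of the action is the classical equation of motion, \begin{equation} S'[\varphi](x)=-(\partial^2+m^2)\varphi(x)-\frac{1}{3!}g\,\varphi^3(x) \end{equation} and replacing every $\varphi(x)$ by $\delta/\delta j(x)$ (with the usual disregard for the overall factors of $\pm i$) reproduces the bracket in $(4)$.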

If we take a functional derivative of this equation of the form \begin{equation} \frac{\delta}{\delta j(x_1)}\frac{\delta}{\delta j(x_2)}\cdots \frac{\delta}{\delta j(x_n)}\tag5 \end{equation} and then set $j\equiv 0$, we get \begin{equation} \begin{aligned} (\partial^2+m^2)G(x,x_1,&\dots,x_n)+\frac{1}{3!}gG(x,x,x,x_1,\dots,x_n)=\\ &=i\sum_{m=1}^n\delta(x-x_m)G(x_1,\dots,\hat x_m,\dots,x_n) \end{aligned}\tag6 \end{equation} where the hat $\hat\cdot$ over an argument means that it is to be omitted. Here, $G(x_1,\dots,x_n)$ is the $n$-point function, \begin{equation} G(x_1,\dots,x_n)\equiv \langle 0|\mathrm T\ \phi(x_1)\cdots\phi(x_n)|0\rangle\tag7 \end{equation} which in the SD formalism is defined as \begin{equation} G(x_1,\dots,x_n)\equiv \frac{\delta}{\delta j(x_1)}\frac{\delta}{\delta j(x_2)}\cdots \frac{\delta}{\delta j(x_n)}Z[j]\bigg|_{j=0}\tag8 \end{equation}

We see that the SD equations are nothing but a system of partial differential equations for the correlation functions. In general, these equations are impossible to solve explicitly (essentially because they are non-linear), so we must resort to approximation methods, i.e., to perturbation theory.

Let us begin by introducing the inverse of $(\partial^2+m^2)$, the propagator: \begin{equation} \Delta(x)\equiv (\partial^2+m^2)^{-1}\delta(x)=\int\frac{\mathrm e^{ipx}}{p^2-m^2+i\epsilon}\frac{\mathrm dp}{(2\pi)^d}\tag9 \end{equation}

We may use the propagator to integrate the SD equations as follows: \begin{equation} \begin{aligned} \color{red}{G(x,x_1,\dots,x_n)}&\color{red}{=\frac{1}{3!}g\int\Delta(x-y)G(y,y,y,x_1,\dots,x_n)\,\mathrm dy+}\\ &\color{red}{+i\sum_{m=1}^n\Delta(x-x_m)G(x_1,\dots,\hat x_m,\dots,x_n)} \end{aligned}\tag{10} \end{equation} which is a system of coupled integro-differential equations of the Fredholm type, whose solutions can formally be written as a Liouville-Neumann series in powers of $g$. This is the basis of perturbation theory. Moreover, these equations are precisely the formalisation of OP's algorithm.

We stress that the whole paradigm of perturbation theory is contained in equation $(10)$. In particular, one need not introduce Feynman diagrams at all: the perturbation series can be extracted directly from $(10)$. That being said, and to let the reader compare our upcoming discussion to the standard formalism, let us introduce the following graphical notation: a four-vertex is represented by a node with four lines, and a propagator is represented by a line

tikz code: https://pastebin.com/ciYjYwAF

and the $n$-point function is represented as a disk with $n$ lines:

tikz code: https://pastebin.com/E4Wg1y8X

In graphical terms, one typically represents the SD equations $(10)$ as follows:

tikz code: https://pastebin.com/Z49jqAei

Perturbation theory is based on the (somewhat unjustified) assumption that a formal power series of the form \begin{equation}\tag{14} G\sim G^{(0)}+g G^{(1)}+g^2G^{(2)}+\cdots+\mathcal O(g^k) \end{equation} should be, in a certain sense, a good approximation to the real $G$. In practice, this series is observed to be asymptotic, so things work rather well as long as $g\ll 1$.

The first thing we notice is that, due to equation $(10)$, the term of order zero in $g$ satisfies \begin{equation}\tag{15} G^{(0)}(x,x_1,\dots,x_n)=i\sum_{m=1}^n\Delta(x-x_m)G^{(0)}(x_1,\dots,\hat x_m,\dots,x_n) \end{equation} which, by iteration, leads to \begin{equation}\tag{16} \color{red}{G^{(0)}(x_1,\dots,x_n)=\sum_\mathrm{pairings}\prod i\Delta(x_i-x_j)} \end{equation} which is usually known as Wick's theorem.
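To see the iteration at work, take four points: $(15)$ gives \begin{equation} G^{(0)}(x_1,x_2,x_3,x_4)=i\Delta(x_1-x_2)G^{(0)}(x_3,x_4)+i\Delta(x_1-x_3)G^{(0)}(x_2,x_4)+i\Delta(x_1-x_4)G^{(0)}(x_2,x_3) \end{equation} and one more application of $(15)$ to the two-point functions on the right-hand side produces the sum over pairings of $(16)$; this is the same expression we will quote in $(20)$ below.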

The higher orders satisfy \begin{equation} \begin{aligned} G^{(k)}(x,x_1,\dots,x_n)&=\frac{1}{3!}\int\Delta(x-y)G^{(k-1)}(y,y,y,x_1,\dots,x_n)\,\mathrm dy+\\ &+i\sum_{m=1}^n\Delta(x-x_m)G^{(k)}(x_1,\dots,\hat x_m,\dots,x_n) \end{aligned}\tag{17} \end{equation}

With this, we see that we may calculate any correlation function, to any order in perturbation theory, as an iterated integral over combinations of propagators. To calculate the $(n+1)$-point function $G^{(k)}(x,x_1,\dots,x_n)$ to order $k$, we need the $(n-1)$-point function to order $k$ and the $(n+3)$-point function to order $k-1$; the latter can in turn be calculated, by the same method, in terms of the corresponding correlation functions at lower $k$. When $k$ reaches zero we may use Wick's theorem, which means that the algorithm terminates after a finite number of steps. Let us see how this works in practice.
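Before working the low orders out by hand, let me mention that the recursion is short enough to be typed into a computer. The following is a minimal, purely illustrative sketch in Python (sympy is used only as a container for symbolic expressions; the helpers `D` and `G` are ad hoc names). Factors of $i$ and overall signs are dropped, in the spirit of the note at the top, and integration over the internal points $y_0,y_1,\dots$ is left implicit.

    import sympy as sp

    Delta = sp.Function('Delta')           # propagator, kept as an uninterpreted symbol
    _internal = sp.numbered_symbols('y')   # generator of fresh internal (integrated) points

    def D(a, b):
        # order the arguments so that Delta(a, b) and Delta(b, a) are the same object
        return Delta(*sorted((a, b), key=str))

    def G(points, k):
        """Coefficient of g^k in the correlation function of the given points,
        written as a sum of products of Delta's (integration over y0, y1, ... understood)."""
        points = list(points)
        if not points:                     # 0-point function: 1 at order zero, 0 otherwise
            return sp.Integer(1) if k == 0 else sp.Integer(0)
        x, rest = points[0], points[1:]
        if k == 0:                         # Wick's theorem, eq. (16)
            if len(points) % 2:
                return sp.Integer(0)
            return sum(D(x, rest[m]) * G(rest[:m] + rest[m+1:], 0)
                       for m in range(len(rest)))
        y = next(_internal)                # order k >= 1: eq. (17), with a fresh internal vertex y
        vertex = sp.Rational(1, 6) * D(x, y) * G([y, y, y] + rest, k - 1)
        contact = sum(D(x, rest[m]) * G(rest[:m] + rest[m+1:], k)
                      for m in range(len(rest)))
        return sp.expand(vertex + contact)

    x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
    print(G([x1, x2], 1))          # one term, 1/2 * Delta(x1,y)Delta(x2,y)Delta(y,y): eq. (24)
    print(G([x1, x2, x3, x4], 1))  # the seven terms of eq. (28)
    print(G([x1, x2], 2))          # eq. (31), once terms differing only by a relabelling
                                   # of the internal points are combined

Note that no symmetry factors are ever put in by hand: they come out of the $1/3!$ in $(17)$ and the sum over pairings, just as in the hand computation that follows.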

We begin with the zeroth-order approximation to the two-point function. By Wick's theorem, we see that the propagator provides a very crude approximation to the two-point function, \begin{equation}\tag{18} G^{(0)}(x_1,x_2)=i\Delta(x_1-x_2) \end{equation} which, as expected, agrees with the diagram

tikz code: https://pastebin.com/jSxt1maW

By a similar analysis (Wick's theorem), the four-point function is given, to zero order in perturbation theory, by \begin{equation} \begin{aligned} G^{(0)}(x_1,x_2,x_3,x_4)&=i\Delta(x_1-x_2)i\Delta(x_3-x_4)\\ &+i\Delta(x_1-x_3)i\Delta(x_2-x_4)\\ &+i\Delta(x_1-x_4)i\Delta(x_2-x_3) \end{aligned}\tag{20} \end{equation} which, once again, agrees with the diagrams

tikz code: https://pastebin.com/acvdQuKr

We next calculate the first order approximation to the two-point function; using $(17)$, we see that it is given by \begin{equation}\tag{22} G^{(1)}(x_1,x_2)=\frac{1}{3!}\int\Delta(x_1-y)G^{(0)}(y,y,y,x_2)\,\mathrm dy \end{equation}

We already know the value of the factor $G^{(0)}(y,y,y,x_2)$: \begin{equation}\tag{23} -G^{(0)}(y,y,y,x_2)=3\Delta(y-y)\Delta(x_2-y) \end{equation} so that \begin{equation}\tag{24} G^{(1)}(x_1,x_2)=\frac{i}{2}\int\Delta(x_1-y)\Delta(y-y)\Delta(x_2-y)\,\mathrm dy \end{equation} which is precisely what the one-loop diagram predicts:

tikz code: https://pastebin.com/VCkJwxps

We can use the same technique to compute the first-order correction to the four-point function. The reasoning is the same as before; equation $(17)$ reads \begin{equation} \begin{aligned} G^{(1)}(x_1,x_2,x_3,x_4)&=\frac{1}{3!}\int\Delta(x_1-y)G^{(0)}(y,y,y,x_2,x_3,x_4)\,\mathrm dy\\ &+i\Delta(x_1-x_2)G^{(1)}(x_3,x_4)\\ &+i\Delta(x_1-x_3)G^{(1)}(x_2,x_4)\\ &+i\Delta(x_1-x_4)G^{(1)}(x_2,x_3) \end{aligned}\tag{26} \end{equation}

From our previous calculation, we already know the value of $G^{(1)}(x_1,x_2)$; on the other hand, the term $G^{(0)}(y,y,y,x_2,x_3,x_4)$ can be efficiently computed using Wick's theorem; in particular, \begin{equation} \begin{aligned} iG^{(0)}(y,y,y,x_2,x_3,x_4)=\,&3\Delta(y-y)\Delta(y-x_2)\Delta(x_3-x_4)\\ +&3\Delta(y-y)\Delta(y-x_3)\Delta(x_2-x_4)\\ +&3\Delta(y-y)\Delta(y-x_4)\Delta(x_2-x_3)\\ +&6\Delta(y-x_2)\Delta(y-x_3)\Delta(y-x_4) \end{aligned}\tag{27} \end{equation} so that \begin{equation} \begin{aligned} -G^{(1)}(x_1,x_2,x_3,x_4)&=\frac12\Delta(x_1-x_2)\int\Delta(x_3-y)\Delta(y-y)\Delta(x_4-y)\,\mathrm dy\\ &+\frac12\Delta(x_1-x_3)\int\Delta(x_2-y)\Delta(y-y)\Delta(x_4-y)\,\mathrm dy\\ &+\frac12\Delta(x_1-x_4)\int\Delta(x_2-y)\Delta(y-y)\Delta(x_3-y)\,\mathrm dy\\ &+\frac{1}{2}\Delta(x_3-x_4)\int\Delta(x_1-y)\Delta(y-y)\Delta(y-x_2)\,\mathrm dy\\ &+\frac{1}{2}\Delta(x_2-x_4)\int\Delta(x_1-y)\Delta(y-y)\Delta(y-x_3)\,\mathrm dy\\ &+\frac{1}{2}\Delta(x_2-x_3)\int\Delta(x_1-y)\Delta(y-y)\Delta(y-x_4)\,\mathrm dy\\ &+\int\Delta(x_1-y)\Delta(y-x_2)\Delta(y-x_3)\Delta(y-x_4)\,\mathrm dy \end{aligned}\tag{28} \end{equation} which, as expected, agrees with the value of the one-loop diagrams:

tikz code: https://pastebin.com/c04ZrGqW

As a final example, let us compute the second-order correction to $G(x_1,x_2)$, to wit, \begin{equation}\tag{30} G^{(2)}(x_1,x_2)=\frac{1}{3!}\int\Delta(x_1-y)G^{(1)}(y,y,y,x_2)\,\mathrm dy \end{equation} where $G^{(1)}(y,y,y,x_2)$ is given by $(26)$. The final result is \begin{equation} \begin{aligned} G^{(2)}(x_1,x_2)&=\frac{1}{3!}\int\Delta(x_1-y)\Delta(y-z)\Delta(z-y)\Delta(z-y)\Delta(z-x_2)\,\mathrm dy\,\mathrm dz\\ &+\frac{1}{4}\int\Delta(x_1-y)\Delta(y-y)\Delta(y-z)\Delta(z-z)\Delta(z-x_2)\,\mathrm dy\,\mathrm dz\\ &+\frac{1}{4}\int\Delta(x_1-y)\Delta(y-x_2)\Delta(y-z)\Delta(z-y)\Delta(z-z)\,\mathrm dy\,\mathrm dz \end{aligned}\tag{31} \end{equation} which, once again, agrees with the value of the diagrams

tikz code: https://pastebin.com/1nXqq4Sg

Continuing this way, we may calculate any correlation function, to any order in perturbation theory, and rather efficiently at that. In particular, we didn't need to draw a single Feynman diagram (although we drew them anyway, for the sake of comparison), nor did we have to compute any symmetry factors. In fact, I have a strong suspicion that numerical computations of higher-order loop corrections use some variation of this algorithm. A simple application of this algorithm in Mathematica can be found in this Mathematica.SE post.

The reader will also note that no vacuum bubbles have been generated in the calculation of correlation functions. Recall that when working with path integrals or the Dyson series, such diagrams are generated and subsequently eliminated by noticing that they also appear in the denominator. Such graphs are divergent (both at the level of individual diagrams and at the level of summing them all up), so their cancellation is dubious. Here, the diagrams simply don't appear, which is an advantage of the formalism.

Yukawa theory.

For completeness, let us mention how this works in more general theories: those with non-scalar fields. The philosophy is exactly the same, the main obstacle being the notation: indices here and there make the analysis cumbersome.

Assume you have a field $\phi_a(x)$ which satisfies \begin{equation}\tag{33} \mathscr D\phi(x)=V'(x) \end{equation} for some matrix-valued differential operator $\mathscr D$ and some vector-valued operator $V'$. In terms of the action, $\mathscr D\phi=S_0'$ and $V'=S'_\mathrm{int}$, where $S_0$ is the quadratic part of $S$ and $S_\mathrm{int}$ is the rest.

With this, the SD equations read \begin{equation}\tag{34} \mathscr D_1 \langle\phi_1\cdots\phi_n\rangle=i\langle V'_1\phi_2\cdots\phi_n\rangle+\sum_{m=2}^n\delta_{1m}\langle \phi_2\cdots\hat\phi_m\cdots\phi_n\rangle \end{equation} where I have introduced the short-hand notation $i=(x_i,a_i)$. Also, $\delta_{ij}=\delta_{a_ia_j}\delta(x_i-x_j)$.

By analogy with our discussion above, we see that the algorithm is essentially the same, but now the propagator is $\mathscr D^{-1}$, and there is a factor of $V$ on every vertex.
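In formulas, and with the same cavalier attitude towards factors of $i$ as before, inverting $\mathscr D$ in $(34)$ gives the analogue of $(10)$: \begin{equation} \langle\phi_1\phi_2\cdots\phi_n\rangle=i\int (\mathscr D^{-1})_{1y}\,\langle V'_y\,\phi_2\cdots\phi_n\rangle\,\mathrm dy+\sum_{m=2}^n(\mathscr D^{-1})_{1m}\,\langle\phi_2\cdots\hat\phi_m\cdots\phi_n\rangle \end{equation} so each step of the iteration trades one field for a propagator that ends either on a vertex (one order higher in the coupling) or on another external point.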

Let me sketch how this works in the Yukawa theory with a scalar field $\phi$ and a Dirac field $\psi$, interacting through $V=g\phi\bar\psi\psi$. The Lagrangian reads \begin{equation}\tag{35} \mathcal L=i\bar \psi\!\!\not\!\partial\psi-m\bar\psi \psi+\frac{1}{2}(\partial \phi)^2-\frac{1}{2}M^2\phi^2-g\phi\bar\psi\psi \end{equation} with $\psi\colon\mathbb R^d\to \mathbb C_a$ and $\phi$ as before.

The equations of motion are \begin{equation} \begin{aligned} -(-i\!\!\not\!\partial+m)\psi=g\phi\psi\equiv U\\ -(\partial^2+M^2)\phi=g\bar\psi\psi\equiv V \end{aligned}\tag{36} \end{equation}

As usual, we define the propagators as \begin{equation} \begin{aligned} (-i\!\!\not\!\partial+m) S_{12}&=\delta_{12}\\ (\partial^2+M^2) \Delta_{12}&=\delta_{12} \end{aligned}\tag{37} \end{equation} that is, \begin{equation} \begin{aligned} S(p)&=\frac{1}{\!\!\not\!p-m+i\epsilon}\\ \Delta(p)&=\frac{1}{p^2-M^2+i\epsilon} \end{aligned}\tag{38} \end{equation}

We now need to introduce the correlation functions. Let me use a hybrid notation which I hope will keep things as simple as possible: \begin{equation}\tag{39} iG(1^\alpha,2_\beta,3,\dots)\equiv\langle \Omega|\mathrm T\ \psi^\alpha(x_1) \bar\psi_\beta(x_2)\phi(x_3)\cdots|\Omega\rangle \end{equation} or, in other words, every upper index corresponds to $\psi$; every lower index corresponds to $\bar\psi$; and every space-time point with no indices corresponds to $\phi$. In terms of the partition function, the correlation function is defined as \begin{equation}\tag{40} iG(1^\alpha,2_\beta,3,\dots)\equiv \left[\frac{\delta}{\delta \eta_\alpha(x_1)}\frac{\delta}{\delta \bar\eta^\beta(x_2)}\frac{\delta}{\delta j(x_3)}\cdots\right]Z[\eta,\bar\eta,j]\bigg|_{j=\eta=\bar\eta=0} \end{equation}

With this, the SD equations of the theory read \begin{equation} \begin{aligned} iG(1^\alpha,2_\beta,3,\dots)&=\int S_{1y}^\alpha{}_\gamma\langle U^\gamma(y)\bar\psi_\beta(x_2)\phi(x_3)\cdots\rangle \mathrm dy\\ &+iS^\alpha_{12\beta}\langle\phi(x_3)\cdots\rangle+\cdots\\ iG(1,2,3^\alpha,\dots)&=\int \Delta_{1y}\langle V(y)\phi(x_2)\psi^\alpha(x_3)\cdots\rangle\mathrm dy\\ &+i\Delta_{12}\langle\psi^\alpha(x_3)\cdots\rangle+\cdots \end{aligned}\tag{41} \end{equation}

More generally, given an arbitrary correlation function $G$, the corresponding SD equations are obtained by replacing any field by its propagator and vertex function, and adding all possible contact terms with the same propagator. In fact, the general structure of the SD equations is rather intuitive: it is simply given by what index placement suggests; in general there is one and only one way to match up indices on both sides of the equation so that the propagators and fields are contracted in the correct way.

The calculation of $G$ is rather similar to that of the scalar theory above. As before, we assume it makes sense to set up a power series in $g$, \begin{equation}\tag{42} G=G^{(0)}+gG^{(1)}+g^2G^{(2)}+\cdots \end{equation}

Perturbation theory is obtained by constructing $G^{(k)}$ from the known value of the correlation functions of lower order. For example, the one point function $iG(1)=\langle\phi(x_1)\rangle$ satisfies \begin{equation}\tag{43} G(1)=g\int \Delta_{1y} G(y_\alpha, y^\alpha)\mathrm dy \end{equation}

To lowest order, $G^{(0)}(1)=0$; the first correction reads \begin{equation} \begin{aligned} iG^{(1)}(1)&=i\int \Delta_{1y} G^{(0)}(y_\alpha, y^\alpha)\mathrm dy=\\ &=-i\int \Delta_{1y}\text{tr}(S_{yy})\mathrm dy \end{aligned}\tag{44} \end{equation} where the negative sign is due to the fermionic statistics of $\psi,\bar\psi$ (or, equivalently, of their corresponding sources, $\eta,\bar\eta$). In particular, $iG^{(0)}(1_\alpha, 2^\alpha)=\langle\bar\psi_\alpha(x_1)\psi^\alpha(x_2)\rangle=-\langle\psi^\alpha(x_2)\bar\psi_\alpha(x_1)\rangle=-\text{tr}(S_{21})$; more generally, we always have a negative sign associated to traces over fermionic indices.

The expression above agrees with the standard one-loop Feynman diagram, to wit

tikz code: https://pastebin.com/SFMQFcrd

where the dashed line represents a scalar propagator and the solid line a fermion propagator.

Similarly, the two point function $i G(1,2)=\langle\phi_1\phi_2\rangle$ satisfies \begin{equation}\tag{46} i G(1,2)=ig\int \Delta_{1y} G(y_\alpha, y^\alpha,2)\mathrm dy+i\Delta_{12} \end{equation}

As usual, to lowest order we have $G^{(0)}(1,2)=\Delta_{12}$; the first correction is \begin{equation}\tag{47} i G^{(1)}(1,2)=i\int \Delta_{1y} G^{(0)}(y_\alpha, y^\alpha,2)\mathrm dy=0 \end{equation} since $G^{(0)}(y_\alpha, y^\alpha,2)=0$. In order to calculate the next correction we need the three point function, which satisfies \begin{equation}\tag{48} i G(1^\alpha,2_\beta,3)=ig\int S_{1y}^\alpha{}_\gamma G(y,y^\gamma,2_\beta,3)\mathrm dy+S^\alpha_{12\beta} G(3) \end{equation} that is, \begin{equation} \begin{aligned} i G^{(1)}(1^\alpha,2_\beta,3)&=i\int S_{1y}^\alpha{}_\gamma G^{(0)}(y,y^\gamma,2_\beta,3)\mathrm dy+S^\alpha_{12\beta} G^{(1)}(3)=\\ &=-\int \Delta_{3y}(S_{1y}S_{y2})^\alpha{}_\beta -\Delta_{3y}S^\alpha_{12\beta}\text{tr}(S_{yy})\mathrm dy \end{aligned}\tag{49} \end{equation} which agrees with the one-loop diagrams

tikz code: https://pastebin.com/CQgr1XZv

With this, we now have what we need in order to compute the first non-trivial correction to the two-point function $G(1,2)$: \begin{equation} \begin{aligned} i G^{(2)}(1,2)&=i\int \Delta_{1y} G^{(1)}(y_\alpha, y^\alpha,2)\mathrm dy=\\ &=\int \Delta_{1y}\Delta_{2z}\text{tr}(S_{yz}S_{zy})-\Delta_{1y}\Delta_{z2}\text{tr}(S_{yy})\text{tr}(S_{zz})\ \mathrm dy\,\mathrm dz \end{aligned}\tag{51} \end{equation} which agrees with the diagrams

tikz code: https://pastebin.com/xDDdcj6B

Finally, the fermionic two-point function $G(1^\alpha,2_\beta)$ satisfies \begin{equation}\tag{53} i G(1^\alpha,2_\beta)=ig\int S_{1y}^\alpha{}_\gamma G(y,y^\gamma,2_\beta)\mathrm dy+iS^\alpha_{12\beta} \end{equation} which, to order zero in $g$, becomes $G^{(0)}(1^\alpha,2_\beta)=S^\alpha_{12\beta}$, as expected. The first correction is \begin{equation}\tag{54} iG^{(1)}(1^\alpha,2_\beta)=i\int S_{1y}^\alpha{}_\gamma G^{(0)}(y,y^\gamma,2_\beta)\mathrm dy=0 \end{equation} since $G^{(0)}(y,y^\gamma,2_\beta)=0$.

To second order in $g$, \begin{equation} \begin{aligned} i G^{(2)}(1^\alpha,2_\beta)&=i\int S_{1y}^\alpha{}_\gamma G^{(1)}(y,y^\gamma,2_\beta)\mathrm dy=\\ &=-\int \Delta_{yz} (S_{1y}S_{yz}S_{z2})^\alpha{}_\beta-(S_{1y}S_{y2})^\alpha{}_\beta\Delta_{yz}\text{tr}(S_{zz})\mathrm dy\,\mathrm dz \end{aligned}\tag{55} \end{equation} which, as one would expect, agrees with the one-loop Feynman diagram

tikz code: https://pastebin.com/ChMUk1sx

The calculation of higher-order correlation functions, to higher loop orders, is analogous. Hopefully, the worked-out examples above are enough to illustrate the general technique. It's a nice formalism, isn't it?

Comments:

  • Dear lord, how fast do you type? If my answers were as comprehensive as yours it would be more than a full-time job for me! (+1) – knzhou, Mar 26, 2018 at 13:43
  • @knzhou why thank you! I actually learnt about the Schwinger-Dyson equations because of your comment here, so you're to blame for this answer :-P – Commented Mar 26, 2018 at 13:48
  • To fix: we usually pick the normalisation $G_0\equiv1$, which is equivalent to $\langle0|0\rangle\equiv1$ (and effectively eliminates all bubble diagrams). Note also that perturbation theory is a purely algebraic exercise; the asymptotic series is best regarded as a formal power series, with no notion of convergence, rather than as an approximation method. – Commented Nov 17, 2018 at 3:29
  • Thank you very much! You have saved me a lot of work making these diagrams in LaTeX. There is not much information about these topics; I will include your page in my thesis bibliography. – Commented Sep 26, 2021 at 3:50