
In the past, I've come across statements along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:

  • addition/subtraction
  • multiplication/division
  • raising to powers and roots
  • trigonometric functions
  • exponential functions
  • logarithmic functions

which when differentiated gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.

What sort of techniques are used to prove statements like this? What is this branch of mathematics called?


Merged with "How to prove that some functions don't have a primitive" by Ismael:

Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or rather that it can't be expressed in terms of other simple functions.

I wonder how we can prove that kind of assertion?

  • Fwiw, I think this question would also be ok at MO. – Commented Jul 20, 2010 at 22:25
  • It sounds like it would be part of Analysis, which deals with things like differentiability, continuous functions, and the like. It's also worth noting that new operations can actually change what is representable in a closed form (for example, before logs were invented/discovered, functions involving them obviously couldn't be formulated in a closed form). – workmad3, Commented Jul 20, 2010 at 22:33
  • workmad3 – new operations do, of course, change what is representable in a closed form. For example, if we invent notation for a function whose value at $t$ is $\int_0^t x^x \, dx$, many more things become representable! But for various mathematical, historical, and computational reasons, it is of interest which functions are expressible in terms of elementary functions as they've already been defined: field operations (arithmetic), log, exp, and compositions of these. – Commented Jul 20, 2010 at 22:37
  • The branch of mathematics that studies this sort of thing is differential Galois theory. – Commented Jul 20, 2010 at 23:02
  • @Eric Not true for the functions typically considered in calculus. The theory of integration in finite terms doesn't employ any Galois theory at all. It plays a nontrivial role when one studies higher-order linear differential equations (see the comments on this answer below). – Commented Jan 8, 2019 at 15:12

7 Answers


It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, with $g$ non-constant, the antiderivative

$$\int f(x)\exp(g(x)) \, \mathrm dx$$

can be expressed in terms of elementary functions if and only if there exists some rational function $h$ satisfying

$$f = h' + hg'$$

(in which case the antiderivative is $h(x)\exp(g(x))$, up to an additive constant).

$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.
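To make the criterion concrete, here is a small SymPy sketch; the specific choices of $f$, $g$, and $h$ are my own illustrations, not from the original answer:

```python
import sympy as sp

x = sp.symbols('x')

# Liouville's criterion: the antiderivative of f(x)*exp(g(x)) is elementary
# iff some rational function h satisfies f = h' + h*g', in which case the
# antiderivative is h(x)*exp(g(x)).

# A case where the criterion succeeds: f = 2x, g = x^2, with h = 1.
f, g, h = 2*x, x**2, sp.Integer(1)
assert sp.simplify(f - (sp.diff(h, x) + h * sp.diff(g, x))) == 0

# ... and indeed d/dx [h*exp(g)] = f*exp(g):
assert sp.simplify(sp.diff(h * sp.exp(g), x) - f * sp.exp(g)) == 0

# For f = 1, g = x^2 (i.e. the integrand exp(x^2)) no rational h works,
# so SymPy must reach outside the elementary functions:
print(sp.integrate(sp.exp(x**2), x))  # sqrt(pi)*erfi(x)/2
```

The failing case is exactly why $\int e^{x^2}\,dx$ forces a new special function ($\operatorname{erfi}$) rather than an elementary expression.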

I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: https://ksda.ccny.cuny.edu/PostedPapers/liouv06.pdf

Liouville's original paper:

Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.

Michael Spivak's Calculus also has a section discussing this.

  • Do you mean Spivak's elementary Calculus (and not Calculus on Manifolds, shudders)? And if so, what page (or section)? – Commented Apr 22, 2013 at 3:15
  • You mean the $e^{x^2}$ antiderivative.... – Brethlosze, Commented Jun 22, 2017 at 1:33
  • As I recall, Liouville's theorem uses a more general definition of "elementary function": not only "powers and roots" but also general "algebraic functions" are included. – GEdgar, Commented Sep 21, 2018 at 14:20
  • So, for the case of antidifferentiating $x^x$, one would take $f(x)=1$ and $\exp g(x) = \exp\ln(x^x)$ and hence look for a solution of $1=h'+h\ln(x^x)$? – Commented Feb 15, 2019 at 19:51
  • I think it's $f = h' + hg'$. – lhf, Commented Jul 28, 2020 at 16:01

Have you ever heard of Galois theory? It is a theory that studies the solutions of equations over fields.

As it turns out, there is a special type of Galois theory called Differential Galois theory, which studies fields with a differential operator on them:

http://en.wikipedia.org/wiki/Differential_galois_theory

Using this theory, one can prove that functions like $\frac{\sin(x)}{x}$ and $x^x$ don't have an elementary antiderivative: their antiderivatives exist, but cannot be written in terms of elementary functions.
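As a quick sanity check (an illustration of mine, using SymPy's standard `integrate`), a computer algebra system reflects these impossibility results by reaching for special functions, or by giving up entirely:

```python
import sympy as sp

x = sp.symbols('x')

# sin(x)/x has an antiderivative, but not an elementary one: SymPy must
# introduce the special function Si (the sine integral) to express it.
print(sp.integrate(sp.sin(x)/x, x))  # Si(x)

# For x**x, SymPy simply returns the unevaluated integral.
print(sp.integrate(x**x, x))
```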

  • This is a good proof sketch: math.niu.edu/~rusin/known-math/97/nonelem_integr2 – Aryabhata, Commented Aug 13, 2010 at 7:58
  • In fact, the theory of integration in finite terms doesn't employ any Galois theory at all. That only comes into play when one studies higher-order linear differential equations. – Commented Jun 2, 2011 at 16:38
  • @Aryabhata Your link is dead. Here is an archived version. – lynn, Commented Sep 4, 2015 at 2:57
  • @BillDubuque is correct. The Galois group of an integral is just the additive group of the constant subfield. Why? The Galois group is the group of automorphisms that fix the base field and permute the solutions among themselves. For an integral, the solutions differ only by the constant of integration $C$, so the integral's Galois group is the additive group of the real (or complex) numbers. Actually, that's one of two cases: the case when we need to extend the original field to construct the integral. If we don't need to extend, then the Galois group is trivial. – Commented Dec 13, 2016 at 5:35

The techniques used for indefinite integration of elementary functions are actually quite simple in the transcendental (vs. algebraic) case, i.e. the case where the integrand lies in a purely transcendental extension of the field of rational functions $\rm\mathbb C(x)$. Informally this means that the integrand lies in some tower of fields $\rm\mathbb C(x) = F_0 < \cdots < F_n = \mathbb C(x,t_1,\ldots,t_n)$ which is built by adjoining an exponential or logarithm of an element of the prior field, i.e. $\rm\ t_{i+1} =\: exp(f_i)\ $ or $\rm\ t_{i+1} =\: log(f_i)\ $ for $\rm\ f_i \in F_i$, where $\rm t_{i+1}$ is transcendental over $\rm F_i\:.\ $ For example, $\rm\ exp(x)\ $ and $\rm\ log(x)\ $ are transcendental over $\rm\mathbb C(x)$, but $\rm\ exp(2\ log(x)) = x^2\ $ is not.

Now, because $\rm\ F_{i} = F_{i-1}(t_{i})$ is transcendental, it has a particularly simple structure, viz. it is isomorphic to the field of rational functions in one indeterminate $\rm\:t_i\:$ over $\rm\ F_{i-1}\ $. In particular, this means that one may employ well-known rational function integration techniques, such as expansion into partial fractions.

This, combined with a simple analysis of the effect of differentiation on the degree of polynomials $\rm\ p(t_i)$, quickly leads to the fundamental result of Liouville on the structure of antiderivatives: they must lie in the same field $\rm F$ as the integrand, except possibly for the addition of constant multiples of log's over $\rm F$. With this structure theorem in hand, the transcendental case reduces to elementary computations in rational function fields. This transcendental case of the algorithm is so simple that it may be easily comprehended by anyone who has mastered a first course in abstract algebra.
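A minimal sketch of the rational-function machinery described above, using SymPy on a toy integrand of my own choosing: partial fractions expose the simple poles, and each simple pole contributes a constant multiple of a log, exactly the shape Liouville's structure theorem predicts.

```python
import sympy as sp

x = sp.symbols('x')

# A toy rational integrand with two simple poles, at x = 1 and x = -1.
r = (3*x + 1) / (x**2 - 1)

# Partial fractions expose the poles: 2/(x - 1) + 1/(x + 1).
print(sp.apart(r, x))

# Each simple pole contributes a log term; the antiderivative is an element
# of C(x) (here 0) plus constant multiples of logs over C(x), as the
# structure theorem predicts.  Equivalent to 2*log(x - 1) + log(x + 1) + C.
F = sp.integrate(r, x)
print(F)
assert sp.simplify(sp.diff(F, x) - r) == 0
```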

On the other hand, the full-blown algebraic case of the algorithm requires nontrivial results from the theory of algebraic functions. Although there are some simple special-case algorithms for square roots and cube roots (Davenport, Trager), the general algorithm requires deep results about points of finite order on abelian varieties over finitely generated ground fields. This algebraic case of the integration algorithm was discovered by Robert Risch in 1969, who did his Berkeley Ph.D. on this topic (under Max Rosenlicht).

For a very nice introduction to the theory, see Max Rosenlicht's Monthly paper, available from JSTOR and also here. This exposition includes a complete proof of the Liouville structure theorem, along with a derivation of Liouville's classic criterion for $\rm\int f(z)\: e^{g(z)}\: dz\ $ to be elementary, for $\rm\: f(z),\: g(z)\in \mathbb C(z)$. For algorithms, see Barry Trager's 1984 MIT thesis and Manuel Bronstein's Symbolic Integration I: Transcendental Functions.

Disclaimer: I implemented the integration algorithm in Macsyma (not the much older Maxima) so, perhaps due to this experience, my judgment of simplicity might be slightly biased. However, the fact that the basic results are derived in a handful of pages in Rosenlicht's paper yields independent evidence for such claims.

  • I'm not sure I agree with "easily comprehended". There's quite a bit of differential algebra to learn to really understand what is going on, and the whole of it is pretty large and complex. So "comprehended," yes, but "easily," maybe not. – asmeurer, Commented Jun 26, 2011 at 7:02
  • @asm The amount of differential algebra required for the classical transcendental case is minimal. One could learn it in a few hours. – Commented Jun 26, 2011 at 13:13
  • I'm not sure I agree with that. There are two whole chapters in Bronstein's book (chapters 3 and 4) dedicated to the differential algebra techniques required to prove the correctness of the algorithm. And that doesn't even include the more advanced material you need to prove the structure theorems (chapter 9). I suppose you don't really need all of this unless you are interested in understanding the proofs of the theorems, so I guess it depends on how you define "comprehended". – asmeurer, Commented Jun 28, 2011 at 3:42
  • @asm But the transcendental case can be presented much more succinctly, e.g. see the expositions by Rosenlicht and his student Risch. – Commented Jun 28, 2011 at 4:05
  • @cla It's necessary if you wish to understand the algorithms. The gist is simple, e.g. a couple of pages in the linked Rosenlicht article, using nothing deeper than partial fractions (and it can be presented in more elementary language). We can only go so far with ad hoc methods. It's akin to attempting to solve linear Diophantine equations without knowing the (extended) Euclidean algorithm (or equivalent). We can use ad hoc methods for small problems, but there is no hope for less trivial instances, e.g. the Archimedes Cattle Problem (206545 digits). – Commented Jan 8, 2019 at 18:07

Surprisingly, there is a procedure for determining this, called the Risch algorithm. However, it is very complicated to implement, and furthermore it is not a true algorithm: if the input has an elementary antiderivative, the procedure finds it in a finite amount of time, but demonstrating that there is no elementary antiderivative requires solving the constant problem, which can fail to halt. (In that case, probabilistic methods can be used.)
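SymPy includes an implementation of the transcendental case of this procedure; the sketch below (my own, using `sympy.integrals.risch`) shows its distinctive feature: a `NonElementaryIntegral` result is a proof of impossibility, not a mere failure to find an answer.

```python
import sympy as sp
from sympy.integrals.risch import NonElementaryIntegral, risch_integrate

x = sp.symbols('x')

# When an elementary antiderivative exists, the algorithm constructs it:
print(risch_integrate(sp.exp(x), x))  # exp(x)

# When none exists, it returns a NonElementaryIntegral, which is a proof
# that no elementary antiderivative exists, not a "couldn't do it":
result = risch_integrate(sp.exp(x**2), x)
print(isinstance(result, NonElementaryIntegral))  # True
```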

  • It is an algorithm modulo the ability to compute effectively in the constant field. If the constant field is complicated enough, e.g. involving transcendentals such as $e$, $\pi$, etc., then this leads to difficult problems in transcendental number theory, which are unsolvable in general. However, rarely if ever do such problems arise in practice. If they did, they would not cause halting problems but instead might cause the algorithm to return an incorrect answer, e.g. declaring that no elementary closed form exists when one in fact does exist. – Commented Aug 12, 2010 at 2:00
  • @Bill Dubuque: That's exactly correct. I've implemented the transcendental version of the Risch algorithm in SymPy, and you can trick it into thinking something like exp((sin(y)**2 + cos(y)**2 - 1)*x**2) has no elementary integral with respect to x (when in fact it does, because it reduces to just 1). – asmeurer, Commented Jun 26, 2011 at 3:17

A key result in this area is a theorem by Liouville. You'll find easy expositions of this result in the references below:

  • Antoine Chambert-Loir, A Field Guide to Algebra, last chapter, Springer, 2005.
  • M. Rosenlicht, Integration in finite terms, Amer. Math. Monthly (1972), pp. 963–972.
  • Toni Kasper, Integration in finite terms: the Liouville theory, Math. Mag. 53(4) (1980), pp. 195–201.
  • D. H. Potts, Elementary integrals, Amer. Math. Monthly 63 (1956), pp. 545–554.
  • A. D. Fitt and G. T. Q. Hoare, The closed-form integration of arbitrary functions, The Mathematical Gazette 77, no. 479 (Jul. 1993), pp. 227–236.
  • Elena Anne Marchisotto and Gholam-Ali Zakeri, An invitation to integration in finite terms, The College Mathematics Journal 25, no. 4 (Sep. 1994), pp. 295–308.
  • D. G. Mead, Integration, Amer. Math. Monthly 68, no. 2 (Feb. 1961), pp. 152–156.

Brian Conrad explains this in the following:

Impossibility theorems on integration in elementary terms (archived PDF)

  • This sounds interesting but is currently 403 Forbidden. On Conrad's home page he notes: "These may be updated without warning. Links to files undergoing revision may be temporarily disabled." So hopefully it will return in new and improved form before too long. – walkytalky, Commented Aug 13, 2010 at 12:56
  • Works now, at least. – Commented May 31, 2017 at 17:17
  • Works forever now, because I archived the pdf link :) – HFBrowning, Commented Feb 14, 2019 at 23:30

The general pattern is always the same: (1) study properties of the derivatives of closed-form functions; (2) if your function $f(x)$ fails one of those properties, then it has no closed-form antiderivative.

Here is how to get started. First, consider derivatives of rational functions. Notice that their pole orders at finite points are never 1 (more generally: the residue at every finite pole is 0). If $f(x)$ fails this property, then you know that its antiderivative cannot be a rational function. This sounds simple, but the surprising part is that it generalizes quite easily to logarithmic extensions (handling exponential extensions in the Risch algorithm is more involved, and handling algebraic extensions is more involved still).
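This first check is easy to carry out in a CAS; the integrands below are my own examples:

```python
import sympy as sp

x = sp.symbols('x')

# The derivative of a rational function has residue 0 at every finite pole:
r = sp.diff(1/(x - 2)**3 + x/(x**2 + 1), x)
print(sp.residue(r, x, 2))  # 0

# 1/x has residue 1 at x = 0, so its antiderivative (log x) cannot be
# a rational function.
print(sp.residue(1/x, x, 0))  # 1
```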

Take for example a function $f(x) \in \mathbb{C}(x, \ln(x))$. For each irreducible factor $p \in \mathbb{C}(x)[\ln(x)]$ of the denominator of $f(x)$, you can define a corresponding residue (which will be algebraic over $\mathbb{C}(x)$). A necessary condition for $f(x)$ to have an elementary antiderivative is that these residues be constants (in the more basic case $f(x) \in \mathbb{C}(x)$, residues are always constants, and thus rational functions always have an elementary antiderivative). See "Liouville's theorem (differential algebra)" on Wikipedia for more.
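Concretely, here are two worked examples of mine, using the recipe that for an integrand $a/p$ with $p$ irreducible and simple, the relevant residue is $a/p'$ reduced mod $p$ (with $'$ denoting $d/dx$):

```python
import sympy as sp

x = sp.symbols('x')
t = sp.log(x)  # working in the logarithmic extension C(x, log(x))

# 1/log(x): here a = 1, p = t, so the residue is 1/(log x)' = x,
# which is NOT constant -- so 1/log(x) has no elementary antiderivative.
res_bad = sp.simplify(1 / sp.diff(t, x))
print(res_bad)  # x

# 1/(x*log(x)): here a = 1/x, p = t, so the residue is (1/x)/(1/x) = 1,
# which IS constant -- and indeed log(log(x)) is an elementary antiderivative.
res_good = sp.simplify((1/x) / sp.diff(t, x))
print(res_good)  # 1
assert sp.simplify(sp.diff(sp.log(sp.log(x)), x) - 1/(x*t)) == 0
```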

  • Are you aware of the computational algorithms used to determine whether there is an elementary function that can be differentiated to give the integrand? – jimjim, Commented Jun 27, 2020 at 11:14
