100
$\begingroup$

In the comments to David Speyer's answer here, he points out that "the distinction between 'if there is a formula, it is this one' and 'this formula works' is subtle."

Does anyone have any simple, natural examples where there is a "unique candidate for a thing satisfying properties", but that candidate does not actually work? This is really a particular kind of reductio ad absurdum, but somehow different from most. Most reductio ad absurdum arguments do not produce a "reasonable looking" candidate for the nonexistent object, if you know what I mean.

$\endgroup$
8
  • 2
    $\begingroup$ I'm making this a comment rather than an answer in hopes that someone else will fill in some specific examples, but differential equations (ODE or PDE) and Fourier analysis would be some nice sources for this kind of example. You have a lot of situations where there's a natural "candidate function" but it fails to "exist" or to be in the space your operators are acting on. $\endgroup$ Commented Sep 8, 2014 at 1:00
  • $\begingroup$ (Pedagogically) related Unique steps leading to a non-unique answer $\endgroup$ Commented Dec 24, 2019 at 21:02
  • 2
    $\begingroup$ This probably doesn't count, but if you have a system of four equations in three variables, where the first three give you a unique solution and the fourth disqualifies it, you have an example of a unique candidate that doesn't work $\endgroup$
    – David
    Commented Jul 13, 2020 at 11:51
  • 1
    $\begingroup$ Note that your question is a special case of the question “When is proof of existence more difficult than proof of uniqueness?”, namely, when the difficulty of the proof of existence is infinite. An example of the non-infinite case is the derivation of the integral representation of the logarithm. $\endgroup$
    – user10552
    Commented Jan 3, 2021 at 22:59
  • 1
    $\begingroup$ @Steven Gubkin: Sure. I’m referring to the way that Tom Apostol develops the logarithm in his Calculus textbook. Namely, he first defines the properties of a logarithm function, then shows that ‘if there is such a thing, it is this one’, and then, separately, afterwards, shows that ‘this thing works’. The first step involves only simple differentiation and integration, whereas the second step involves invoking the Substitution Theorem for Integrals, which makes it ‘harder’. $\endgroup$
    – user10552
    Commented Jan 4, 2021 at 15:06

20 Answers

89
$\begingroup$

For a positive real number $x$ consider $$x^{x^{x^{\dots}}}$$ or formally (the limit of) the sequence $a_n= x^{a_{n-1}}$ (and $a_0= 1$).

Determine the value $x$ (if it exists) such that $$x^{x^{x^{\dots}}}=4.$$ Uniqueness can be proved like this: since such an $x$ would have to satisfy $x^4 = 4$, we get that $x = 4^{1/4} = \sqrt{2}$ is the unique candidate for a solution.

But this value does not work. A way to appreciate this (though not strictly a proof of it) is to do the same for $2$ instead of $4$, getting also $x=\sqrt{2}$.

So $$\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\dots}}}$$ would have to be both $2$ and $4$, which is absurd, so (at least) for one of the two there is no solution. (It is indeed $4$ where there is no solution.)
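A quick numerical sketch of this (my own addition, in Python; not a proof): iterating $a_n = \sqrt{2}^{\,a_{n-1}}$ from $a_0 = 1$ settles on $2$, the smaller of the two fixed points of $t \mapsto \sqrt{2}^{\,t}$.

```python
# Iterate a_n = x ** a_{n-1} with x = sqrt(2), starting from a_0 = 1.
# Both t = 2 and t = 4 satisfy x ** t = t, but the iteration converges
# to the smaller fixed point: the tower equals 2, not 4.
x = 2 ** 0.5
a = 1.0
for _ in range(1000):
    a = x ** a
print(round(a, 10))  # 2.0
```

The fixed point $4$ is repelling: $\frac{d}{dt}\sqrt{2}^{\,t} = \sqrt{2}^{\,t}\ln\sqrt{2}$, which at $t=4$ is $4\ln\sqrt 2 \approx 1.386 > 1$, while at $t=2$ it is $2\ln\sqrt 2 \approx 0.693 < 1$.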

$\endgroup$
9
  • 29
    $\begingroup$ Funny that $\left(\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\dots}}}\right)^{\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\dots}}}}=4$ though. $\endgroup$ Commented Sep 22, 2014 at 4:55
  • 19
    $\begingroup$ @alex.jordan This observation leads, of course, to the distinction between the ordinal $\omega$ and the ordinal $\omega + \omega$. $\endgroup$ Commented Oct 2, 2014 at 20:58
  • 6
    $\begingroup$ @rpb The comment was (sort of) a joke. But if you read anything about ordinal numbers you will learn that if $\omega$ represents the ordinal of the natural numbers, then, for example, $1+\omega = \omega$, while $\omega+1>\omega$. The total order of $\omega+1$ looks like $0,1,2,3,...,0'$. The order type of $\omega+\omega$ looks like $0,1,2,...,0',1',2',...$, which is not the same as $\omega$ since $0'$ has no immediate predecessor. This is the same distinction occurring in the expression alex.jordan wrote above: the "order" of the exponents is of type $\omega+\omega$. $\endgroup$ Commented Oct 3, 2014 at 19:39
  • 2
    $\begingroup$ I have not seen a formal calculus handling such "ordinal valued sequences" before. Might be fun to work out, or at least see if anyone has done anything with it before. But I will not personally use much of my time to pursue this. $\endgroup$ Commented Oct 3, 2014 at 19:41
  • 1
    $\begingroup$ I found a blog post that explains why the correct value is 2 instead of 4, via a gentle introduction of the Lambert W function. I can imagine this exposition easily being adapted into a project for a calculus course: luckytoilet.wordpress.com/2010/03/13/… $\endgroup$ Commented Dec 31, 2017 at 16:29
115
$\begingroup$

The obvious example I immediately thought of is that, if the divergent geometric series $$1 + 2 + 4 + 8 + \dotsb = \sum_{k=0}^\infty 2^k$$ converged, it would converge to $-1$.

Proof: If the series converged to some number $x = 1 + 2 + 4 + 8 + \dotsb$, then clearly this number $x$ would have to satisfy $2x = 2 + 4 + 8 + \dotsb = x - 1$. Solving $2x = x-1$ for $x$ yields $x = -1$, Q.E.D.


Like many other such examples of "unique false solutions", this one also hints at deeper things. For example:

  • Whenever the geometric series $1 + x + x^2 + x^3 + \dotsc$ converges, its limit equals $\frac{1}{1-x}$. This expression is defined for all values $x \ne 1$, even those for which the original series does not converge; for $x = 2$, it equals $-1$. This is, essentially, an example of analytic continuation.

  • In the system of 2-adic numbers, which have a different notion of distance, and thus of convergence, than the ordinary real numbers, the series $1 + 2 + 4 + 8 + \dotsb$ does converge. The number it converges to is $-1$.

  • In fact, the proof above is sufficient to show that any stable and linear summation method that produces a finite sum for this series must yield the answer $-1$.

  • Another way of looking at this is in terms of iterated functions. Specifically, the partial sums $a_n = \sum_{k=0}^n 2^k$ satisfy the recurrence relation $a_{n+1} = f(a_n) = 2a_n + 1$. The unique fixed point of the map $f$ is $-1$, since $f(-1) = -1$.

  • Yet another, and possibly the most natural, way is to think of the series as converging to $-1$ when run backwards. Specifically, the sequence of partial sums $a_n = \sum_{k=0}^n 2^k = 2^n-1$ can be naturally extended to negative values of $n$. (Indeed, the same can be done for any geometric series.) And, clearly, $\displaystyle \lim_{n\to-\infty} a_n = -1$.

  • Finally, if you naïvely sum the powers of two on a computer using signed $n$-bit arithmetic, you will normally get the answer $-1$. For a demonstration, try the following Java code:

    int sum = 0, term = 1;       // these are 32-bit signed integers
    while (sum + term != sum) {
        sum = sum + term;
        term = term * 2;
    }
    System.out.println(sum);     // this prints "-1"
    

    The reason this happens is that most computers handle signed integers using the two's complement representation, in which a bitstring consisting of all 1 bits represents the number −1.

    Further, the reason this representation is so commonly used is that, in a sense, it is the natural one — in particular, it allows basic arithmetic operations like signed addition and multiplication to be implemented using the exact same code / circuitry as their unsigned counterparts.

    The underlying mathematical reason is that both the usual representation of unsigned $n$-bit integers and the two's complement representation of signed integers correspond to arithmetic modulo $2^n$, just with different representatives chosen for the equivalence classes. The bitstring that in signed arithmetic represents $-1$ corresponds to the unsigned number $2^n-1$ $=$ $1 + 2 + 4 + \dotsc + 2^{n-1}$, which is equivalent to $-1$ modulo $2^n$.

For more information and links, see the Wikipedia article on this series.

$\endgroup$
1
  • 5
    $\begingroup$ this is really a brilliant answer. thank you. $\endgroup$
    – rbp
    Commented Oct 3, 2014 at 18:39
63
$\begingroup$

Here is a more elementary example.

Sometimes, extraneous solutions in algebra are of the type you describe.

For instance, to solve the equation $$\sqrt {2x-1}=-x, \quad (x \in \mathbb R)$$ we might take the square on both sides and get the equation $$2x-1=x^2,$$ which has a unique solution $x=1$. Substituting back in the original equation we see that our unique candidate is actually not a valid solution.
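A one-line sanity check of the unique candidate (my addition, in Python):

```python
import math

x = 1.0                      # the unique candidate from squaring both sides
lhs = math.sqrt(2 * x - 1)   # sqrt(2*1 - 1) = 1.0
rhs = -x                     # -1.0
print(lhs == rhs)            # False: the candidate fails the original equation
```

Squaring is not reversible: it sends both $\sqrt{2x-1}=-x$ and $\sqrt{2x-1}=x$ to the same equation $2x-1=x^2$, and $x=1$ solves only the latter.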

$\endgroup$
3
  • 2
    $\begingroup$ This is a great example! It also shows that often these situations invite us to expand our notion of solution: in this case, the problem arises from considering the square root as a single valued, rather than a multivalued function. $\endgroup$ Commented Sep 21, 2014 at 18:52
  • 8
    $\begingroup$ A similar, more extreme yet less familiar example: $\sqrt{\frac{x}{x}}={-\sqrt{\frac{x}{x}}}$ has no solutions at all, but squaring both sides leads to all real numbers except $0$ being extraneous solutions. $\endgroup$ Commented Oct 2, 2014 at 22:03
  • 3
    $\begingroup$ @alex.jordan I think in both your example and mine it is a little too obvious that one side of the equation is positive while the other is negative. It would be nice to have an example where this is somehow hidden. $\endgroup$ Commented Oct 5, 2014 at 22:26
40
$\begingroup$

If there were a linear formula for $\int_0^1 f(x) dx$ in terms of $f(0)$ and $f(1)$, it would be $\int_0^1 f(x) dx = \frac{1}{2}(f(0) + f(1))$. Proof: Suppose $\int_0^1 f(x) dx = af(0)+bf(1)$. Taking $f(x) = 1$, we deduce $a+b=1$; taking $f(x) = x$ we deduce $b=1/2$.

Of course, there is no such formula, but this computation does hint at something useful: If all you know is $f(0)$ and $f(1)$, then $\frac{1}{2}(f(0) + f(1))$ is a reasonable guess. This leads to the trapezoidal rule.

If you were teaching Simpson's rule or Gaussian quadrature, you could similarly ask "Can we get a rule which works for a few more functions by incorporating $f(1/2)$?" or "Can we get a rule which works for even more functions by using $f(a)$ and $f(b)$ instead of $f(0)$ and $f(1)$, for some well chosen $a$ and $b$?"
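A small check in Python (my own sketch, not from the answer): the forced weights $\frac12,\frac12$ reproduce $\int_0^1 f$ exactly for $1$ and $x$ but already fail for $x^2$, while incorporating $f(1/2)$ with Simpson's weights $\frac16,\frac46,\frac16$ buys exactness through cubics.

```python
# The candidate rule forced by f = 1 and f = x over [0, 1]:
def trap(f):
    return (f(0) + f(1)) / 2

print(trap(lambda x: 1))      # 1.0  (exact: 1)
print(trap(lambda x: x))      # 0.5  (exact: 1/2)
print(trap(lambda x: x**2))   # 0.5  (exact: 1/3 -- the rule is not exact here)

# Adding the sample f(1/2) with weights 1/6, 4/6, 1/6 (Simpson's rule)
# extends exactness through degree-3 polynomials:
def simpson(f):
    return (f(0) + 4 * f(0.5) + f(1)) / 6

print(simpson(lambda x: x**2))  # 0.333...  (exact: 1/3)
print(simpson(lambda x: x**3))  # 0.25      (exact: 1/4)
```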

$\endgroup$
34
$\begingroup$

A very simple example (if you know calculus) is finding the local minimum of a downward-opening parabola, like $y = -x^2$. The unique candidate comes from setting the derivative equal to zero, but then you still have to check the second derivative to see whether it is, in fact, a minimum.

This is also a very physically relevant example, because it comes up pretty much any time you are trying to see whether a physical system has a stable equilibrium. Checking that the minimum is really a minimum makes the difference between the system settling into a stable configuration and blowing itself apart because it's unstable.

$\endgroup$
1
  • 8
    $\begingroup$ A strong candidate for my least-favourite calculus-problem wording: "Find the minimum of [ function with a unique critical point ]", because it trains students to stop once they've found the critical point. I always ask such a question for a function such as yours, just to try to drive home the point that you make! $\endgroup$
    – LSpice
    Commented Mar 31, 2015 at 15:46
30
$\begingroup$

The "Only Critical Point in Town" test

Suppose I have a nice function $f : \mathbb R^n \to \mathbb R$. Suppose it has only one critical point, and that is a local maximum. Then (of course) it is the global maximum of the function.
FALSE
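To make this concrete, here is a numerical check (my addition) of one standard counterexample, $f(x,y) = 3xe^y - x^3 - e^{3y}$: its only critical point, $(1,0)$, is a local maximum, yet $f$ is unbounded above.

```python
import math

# f(x, y) = 3x*e^y - x^3 - e^(3y).
# Gradient: (3e^y - 3x^2, 3x*e^y - 3e^(3y)), which vanishes only at (1, 0).
def f(x, y):
    return 3 * x * math.exp(y) - x**3 - math.exp(3 * y)

print(f(1, 0))  # 1.0 -- the value at the unique critical point
# Nearby points are all smaller (it is a local maximum) ...
print(all(f(1 + dx, dy) <= f(1, 0)
          for dx in (-0.1, 0, 0.1) for dy in (-0.1, 0, 0.1)))  # True
# ... but f is not globally bounded by f(1, 0):
print(f(-10, 0))  # 969.0 > f(1, 0)
```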

$\endgroup$
4
  • 10
    $\begingroup$ It's embarrassing how surprising I found this. Here's an article about a nice counterexample with some good pictures (I didn't really "get it" from the Mathworld graphs) and some intuition. Along any curve leading from the critical point to a point with a greater value, the curve must reach a local minimum; if you look at the set of all such minima in nice cases it will form another curve, and along this curve, the function is unlikely to be monotonic, right? Well, there's no reason it can't be. $\endgroup$ Commented Mar 31, 2015 at 23:11
  • 6
    $\begingroup$ Maybe add "where $n > 1$". $\endgroup$ Commented Oct 23, 2015 at 18:06
  • $\begingroup$ Some Calculus texts consider saddle points to be critical points, which would then make your statement true. Is that correct? $\endgroup$
    – Opal E
    Commented Oct 23, 2015 at 22:27
  • 7
    $\begingroup$ @OpalE ... no, some counterexamples for this have no saddle points. $\endgroup$ Commented Apr 10, 2016 at 18:08
21
$\begingroup$

Theorem: The largest positive integer is $1$.

Proof: If $n$ is a positive integer and $n \not= 1$ then $n^2 > n$, so there is an integer larger than $n$. Thus the largest integer has to be $1$. QED

EDIT: I learned from the answer by John B. here that this example goes back at least to Oskar Perron in 1913. See p. 532 of the reference in John's answer.

$\endgroup$
4
  • $\begingroup$ Should the proof start with "If $n$ is the largest positive integer ..." $\endgroup$
    – user182601
    Commented Jan 2 at 8:47
  • $\begingroup$ @user182601 No, since the initial sentence does not use that property. $\endgroup$
    – KCd
    Commented Jan 2 at 9:19
  • $\begingroup$ Then I don't understand: I thought the point is that "there is an integer larger than $n$" is a contradiction and so we can then conclude that "the largest integer has to be $1$"? If we don't begin by claiming that $n$ is the largest positive integer, how does "there is an integer larger than n" imply "the largest integer has to be 1"? $\endgroup$
    – user182601
    Commented Jan 2 at 10:18
  • $\begingroup$ @user182601 That's the point of the second sentence. I had shown no integer bigger than $1$ can be maximal, so the largest integer is $1$. $\endgroup$
    – KCd
    Commented Jan 2 at 14:50
18
$\begingroup$

Solve $$0x=1$$

where $x\in\mathbb{R}$.

If $x$ is a solution, then $$x = x \cdot 1 = x(0x) = 0.$$

I have used this example myself with great effect, since at the very first instance the students realise something absurd is about to happen.

$\endgroup$
1
10
$\begingroup$

Suppose we want a genuine tensor product $X\otimes Y$ of Hilbert spaces $X,Y$, meaning a Hilbert space (or, more generously, some locally convex topological vector space) $Z$ and continuous bilinear $b:X\times Y\to Z$ so that, for every continuous bilinear $B:X\times Y\to V$ (with $V$ a Hilbert space, most cautiously) there is a unique continuous linear $Z\to V$ making the obvious diagram commute.

The "obvious" candidate is to give the algebraic tensor product of $X,Y$ the inner product $$ \langle x\otimes y,\;x'\otimes y'\rangle\;=\; \langle x,x'\rangle_X\cdot \langle y,y'\rangle_Y $$ and complete to form a Hilbert space. The obvious map of $X\times Y$ to that Hilbert space is continuous. However, if both $X,Y$ are infinite-dimensional, then this construction fails to have the desired universal property. (Whence the terminology "projective" and "injective" tensor products, in the literature, meaning TVS's which have one half or the other, but not both, the properties of a genuine tensor product.)

$\endgroup$
3
  • 2
    $\begingroup$ Oop, I see now the "undergrad" tag, so maybe this example is inappropriate. Nevertheless, to my mind, it is one of the few scenarios wherein what seems natural from a universal-mapping viewpoint is impossible, as opposed to having an anti-climactic existence proof. $\endgroup$ Commented Sep 11, 2014 at 14:15
  • 1
    $\begingroup$ Do not worry about the tag: I am grateful to have seen this interesting example. I guess it fails since just asking a for a bilinear map is not good enough: we do not ask for our Hilbert space maps to be merely linear, but also bounded, so why should we not expect more out of our Hilbert space bilinear maps as well? $\endgroup$ Commented Sep 11, 2014 at 15:38
  • $\begingroup$ Shall we remove the undergrad tag? I see no need for it... $\endgroup$
    – Sue VanHattum
    Commented Feb 21, 2015 at 15:36
8
$\begingroup$

How about this: given three distinct points in the plane, there should exist a unique parabola that passes through all of them, but it turns out that the points are collinear, so the parabola does not exist after all.
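A small Python sketch of the failure (my addition), using the vertical-axis form $y = ax^2 + bx + c$ (as the comments note, the axis direction needs pinning down):

```python
# Fit y = a*x^2 + b*x + c through three points with distinct x-coordinates,
# using Newton's divided differences. For collinear points the unique
# solution has a = 0: the "parabola" degenerates to a straight line.
def fit_parabola(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Leading coefficient = second divided difference f[x1, x2, x3].
    a = ((y3 - y1) / (x3 - x1) - (y2 - y1) / (x2 - x1)) / (x3 - x2)
    b = (y2 - y1) / (x2 - x1) - a * (x1 + x2)
    c = y1 - a * x1**2 - b * x1
    return a, b, c

print(fit_parabola((0, 0), (1, 1), (2, 4)))  # (1.0, 0.0, 0.0): y = x^2
print(fit_parabola((0, 0), (1, 1), (2, 2)))  # a = 0.0: no genuine parabola
```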

$\endgroup$
3
  • $\begingroup$ You probably mean something like "having three distinct points on the xy-plane, there should exist a unique parabola with vertical axis that passes through all of them". The coordinate-free presentation allows parabolas in so many directions that three points do not determine the shape. $\endgroup$
    – Matt F.
    Commented Sep 6, 2014 at 11:36
  • $\begingroup$ Depending on your point of view, the parabola does exist, but it has infinite focal length. A straight line can be so many things, it is an arc of oh so many infinitely sized curves :-) $\endgroup$
    – steveverrill
    Commented Sep 6, 2014 at 13:41
  • $\begingroup$ @MattF. FWIW by Bezout's theorem, two parabolas intersect in 4 points, so 5 points determine a parabola in the plane (which are not necessarily vertically aligned). $\endgroup$
    – Steven Gubkin
    Commented Sep 6, 2014 at 14:48
8
$\begingroup$

I'm not sure if this fits, but it seems related to me:

You know the old circle cutting problem? What is the maximum number of pieces a disc can be cut into using n straight cuts?

Well, let us number these regions using a binary string according to whether the region is "above" or "below" each of the lines. (The lines can all be drawn with slopes of the same sign.) So "1101" corresponds to a region (in the n=4 case) that is above line a, above line b, below line c, and above line d.

Here is the n=3 case illustrated: 3 cuts of a disc

So where is the "010" region? If there is an 8th region, it must be below a, above b, below c, thus 010... but those areas do not intersect, so that 8th region does not exist.

Here is n=4, four cuts obtain 11 regions maximally: Four cuts, 11 regions

We "miss" five combinations of half-planes with four lines: those that correspond to 0010, 0100, 0101, 0110, 1010. (I also have translated the binary strings to base 10 numerals as a easy and interesting way to keep track of what gets left out)

$\endgroup$
8
$\begingroup$

Given the equation $$\lvert2x\rvert=x-1,$$ rewriting the left-hand side as $\pm2x$ results in the unique candidate solution set $\left\{-1,\frac13\right\}$ that nonetheless fails to satisfy the equation:

\begin{align}\forall x\in\mathbb R \Bigg[\quad\quad\quad&x\in\emptyset\\\iff{}&\bigg(x<0 \,\text{ and}\,-2x=x-1\bigg) \:\text{ or }\: \bigg(x\geq0 \,\text{ and }\, 2x=x-1\bigg)\\\iff{}&\lvert2x\rvert=x-1\\\implies{}&\pm2x=x-1\\\iff {}&x\in\left\{-1,\frac13\right\}\quad\Bigg].\end{align}

Due to deductive explosion, performing a valid operation on an inconsistent equation, which has no solution, has given rise to an extraneous solution set.
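The failure is quick to confirm numerically (my addition, in Python):

```python
# Neither candidate from the ±2x case split satisfies |2x| = x - 1;
# indeed no x can, since the left side is >= 0 while the right side
# is negative at both candidates.
for x in (-1, 1/3):
    print(x, abs(2 * x), x - 1)   # 2 vs -2, and 2/3 vs -2/3
print(any(abs(2 * x) == x - 1 for x in (-1, 1/3)))  # False
```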

$\endgroup$
1
  • $\begingroup$ Thanks! This is a nice one. $\endgroup$ Commented Mar 6, 2021 at 17:06
6
$\begingroup$

Russell's paradox seems to fit the bill: "Let $R$ be the set of all sets that are not members of themselves. If $R$ is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition." It is akin to the halting problem from computer science.

$\endgroup$
6
$\begingroup$

Here is another example, inspired by this video: Solving a crazy iterated floor equation, where assuming a solution exists leads to the wrong conclusion.

Solve the equation $$x \lfloor x \lfloor x \rfloor \rfloor=4,$$ where $\lfloor \; \rfloor$ denotes the floor function.

As an approximation $x \approx \sqrt[3] 4=1.587 \dots$.

Let $f(x)=x \lfloor x \lfloor x \rfloor \rfloor$. Since $f(1)=1$, $f(2)=8$, and $f$ is an increasing function, such an $x$ (if it exists!) must satisfy $1 < x <2$.

Let $m=\lfloor x \lfloor x \rfloor \rfloor$, an integer. Then $xm=4$, so $x = \frac 4 m$. This gives $1 < \frac 4 m < 2$, and therefore $$2 < m < 4.$$ Since $m$ is an integer we must have $m=3$, and hence our candidate solution is $$x = \frac 4 m = \frac 4 3.$$ But we can easily check $$ \frac 4 3 \lfloor \frac 4 3 \lfloor \frac 4 3 \rfloor \rfloor=\frac 4 3 \neq 4,$$ and the solution is therefore not valid.
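A brute-force confirmation in Python (my addition): the candidate fails, and a sweep over $(1,2)$ shows no solution exists at all. Indeed, on $(1,2)$ we have $\lfloor x\rfloor = 1$, hence $f(x) = x\lfloor x\rfloor = x < 2$, so $f$ never reaches $4$ there.

```python
import math

def f(x):
    return x * math.floor(x * math.floor(x))

print(f(4 / 3))  # 1.333..., not 4: the unique candidate fails
# Sweep (1, 2): for such x, floor(x) = 1, so f(x) = x * floor(x) = x < 2,
# and f(x) = 4 is never attained.
xs = [1 + k / 10000 for k in range(1, 10000)]
print(any(abs(f(x) - 4) < 1e-9 for x in xs))  # False
```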

$\endgroup$
4
$\begingroup$

The derivative of $x \mapsto |x|$ at zero; or, say, the mean (not in a principal-value sense) of a Cauchy-distributed variable; or the limit (in the finite real line) of $x_n = 2x_{n-1}$, $x_0 \neq 0$. Symmetry considerations show the result would have to be $0$ if it existed, but casually assuming existence/convergence gets one burnt in places such as these.

This can also be used to show that while $f''(x) = \lim_{h \to 0; h > 0} \frac{f(x-h) - 2f(x) + f(x+h)}{h^2}$ and similar formulae can be used to calculate higher-order derivatives, they don't make good definitions -- applied to, e.g., the function that is -1 for negative arguments, +1 for positive arguments, and 0 at zero, you find the pathologies cancel and reach an absurd conclusion. (Or more simply $f'(x) = \lim_{h \to 0} \frac{f(x + h/2) - f(x - h/2)}{h}$ on the absolute value, as OP notes below.)
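Two quick illustrations of that last point (my addition, in Python): the symmetric quotient reports a "derivative" of $0$ for $|x|$ at $0$, and the second-difference quotient reports a "second derivative" of $0$ for the sign function at $0$, in both cases because the one-sided pathologies cancel.

```python
# These quotients are valid ways to *compute* derivatives where the
# derivative exists, but not valid *definitions*: at a bad point the
# contributions from the two sides cancel exactly.
def sym_dq(f, x, h=1e-6):
    return (f(x + h / 2) - f(x - h / 2)) / h

def second_dq(f, x, h=1e-3):
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

sign = lambda t: (t > 0) - (t < 0)   # -1, 0, +1
print(sym_dq(abs, 0.0))      # 0.0, yet |x| is not differentiable at 0
print(second_dq(sign, 0.0))  # 0.0, yet sign is not even continuous at 0
```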

One can interpret the fundamental theorem of the calculus in various forms as another instance of this phenomenon. The statement typically requires that the integrand be integrable$^0$, so one has, for example,

Let $-\infty < a \leq b < +\infty$, and let $f,F : [a,b] \rightarrow \mathbb{R}$ be functions. If

(I) $F$ is continuous (everywhere), if
(II) $F'(x)$ exists and equals $f(x)$, except at a finite number (possibly none) of points $x \in [a,b]$, and if
(III) $f$ is Riemann integrable,

then $\int_a^b f(x) \, dx = F(b) - F(a)$.

Viewing this as $\big( \text{(I) and (II)} \big) \Rightarrow \big( \text{(III)} \Rightarrow \text{formula} \big)$, the theorem rules out all but one number as a possible value of the integral, but kids tend to neglect or forget the subtle difference and claim, e.g., that $f : [0, 1/\pi] \rightarrow \mathbb{R}$ given by $f(0) := 0$, $f(x) := \sin (1/x) - (1/x) \cos (1/x)$ has Riemann integral $\lim_{t \to 0; t > 0} \left( \left( x \sin (1/x) \right) \bigg|_t^{1/\pi} \right) = 0$. (No it doesn't! If the integral exists then it is zero as we have so lovingly described. Except it doesn't! (At least not properly, i.e., according to the definition they wrote down yesterday.) It's not even Lebesgue integrable, we just went over the $x \sin (1/x)$ function in an example on total variation. Aarg!) Similarly, if a real function has a (relative) interior extremum, then either it fails to be differentiable at that point, or the derivative there is zero. (I tend by nature to put failure and 'strings-attached' clauses and hypotheses first. It helps very little if any, but at least it seems not to impede. Oh well.)


$^0$ even assuming (global) differentiability of the antiderivative, unless your formulation uses the gauge integral, in which case integrability very nicely becomes a conclusion as opposed to an assumption, but this integral is unpopular or passé or something (granted, it doesn't generalise as well as Lebesgue's owing to the structure it requires on the domain space, but (a) so long as one already is interested only in sticking to the real line, why not? and (b) I love it!! Are those alone not reasons enough??).

$\endgroup$
3
  • $\begingroup$ Oh -- I hadn't noticed the first time that the diverging sequence example is more or less Ilmari Karonen's, but I will throw the rest out here. $\endgroup$ Commented Oct 24, 2015 at 7:11
  • 1
    $\begingroup$ Hmm. The symmetric difference quotient also fails to be a good definition for the first derivative. Are you sure that some other expression does not work as a "direct expression" for the second derivative? $\endgroup$ Commented Oct 25, 2015 at 1:44
  • $\begingroup$ @StevenGubkin: Ah, so it doesn't work for the first derivative either. I was thinking about the points straddling x with none of them actually being x and that something special to do with parity would happen. Not sure where that came from. Regarding trying to characterise double differentiability without going through the first derivative: that's what I want to know, too! $\endgroup$ Commented Oct 25, 2015 at 4:29
4
$\begingroup$

I stumbled on one of these today:

The results of partial fraction decomposition, when used incorrectly, provide a unique candidate that fails.

For example, if you try to decompose $\frac{1}{x^2+x^3}$ as

$\frac{1}{x^2+x^3} = \frac{A}{x^2}+\frac{B}{x+1}$,

you can clear fractions to get

$1 = A(x+1) + Bx^2$.

Of course there are no $A$ and $B$ that satisfy this, but if you don't notice this and instead try any two values of $x$:

$x=0 \implies 1 = A$

$x = -1 \implies 1 = B$

you get a unique candidate for $A$ and $B$, and it fails.
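Checking the candidate at a third point exposes it immediately (my addition, in Python):

```python
# The wrong ansatz clears to 1 = A*(x + 1) + B*x**2. The two chosen
# substitution points force A = B = 1, but a polynomial identity must
# hold for *all* x, and this one fails at any third point.
A, B = 1, 1
for x in (0, -1, 1, 2):
    print(x, A * (x + 1) + B * x**2)  # 1, 1, 3, 7 -- not identically 1
```

By contrast, the correct three-term ansatz $\frac Ax + \frac B{x^2} + \frac C{x+1}$ works, giving $A=-1$, $B=1$, $C=1$.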

$\endgroup$
2
  • 3
    $\begingroup$ I guess the most basic variant of this is "Find the value of $A$ which makes $1 = Ax$ true for all $x$. Solution: if such an $A$ existed, then setting $x = 1$ yields $A = 1$ ". $\endgroup$ Commented Jan 31 at 16:14
  • $\begingroup$ Yea... this is a fair point :D $\endgroup$ Commented Jan 31 at 23:17
2
$\begingroup$

The enigmatic "field with one element" comes to mind. This is a non-existent object that would behave like a finite field but have only one element. There is a rich history of deep mathematics that grapples with it.

$\endgroup$
4
  • 6
    $\begingroup$ I'm not sure that this answer is what the OP is looking for. Maybe you could provide more information about why you think it is a good candidate? $\endgroup$ Commented Sep 8, 2014 at 19:58
  • 1
    $\begingroup$ It is a good candidate for what the OP is looking for because there is a unique object that would satisfy the field axioms, namely {0}, but this object does not exist as a field. But this non-existent object in its own right is very interesting and has generated much research. $\endgroup$
    – user52817
    Commented Sep 8, 2014 at 21:19
  • 1
    $\begingroup$ @user52817: What research would that be? I have to admit I have no idea. $\endgroup$
    – Frunobulax
    Commented Sep 9, 2014 at 11:27
  • 3
    $\begingroup$ @Frunobulax--Start with the wikipedia page for "Field with one element." Another good source is the mathoverflow question "What is the field with one element?" mathoverflow.net/questions/2300/… $\endgroup$
    – user52817
    Commented Sep 9, 2014 at 12:16
1
$\begingroup$

Here's a great application of the little-mentioned Abel's theorem: if $\sum_{i=0}^\infty a_i$, $\sum_{i=0}^\infty b_i$, and their convolution product $\sum_{i=0}^\infty \left(\sum_{j=0}^i a_jb_{i-j}\right)$ all converge in $\mathbb{R}$ (absolutely or not!) to sums $A,B,C$, then $AB=C$. In other words, the product series can't converge to something other than my first guess.

It's obtained as a corollary by power-seriesifying (generating-functionifying?) the given series in $x$ and using absolute convergence on the interior, where $x \in (-1,+1)$.

$\endgroup$
0
$\begingroup$

Does anyone have any simple, natural examples where there is a "unique candidate for a thing satisfying properties", but that candidate does not actually work?

A bit of a loose(r) answer, but one could say that $\aleph_1$ was (is?) the 'unique natural candidate' for the property of "being the cardinality of $2^{\aleph_0}$", both from the very naïve standpoint of "the only operation increasing cardinalities is 'taking powers', so it must be that $\aleph_1 = |2^{\aleph_0}|$!" and from the more elaborate one of "uncountable projective subsets of the reals have cardinality continuum".

$\endgroup$
-1
$\begingroup$

Another example is string theory as a candidate for the theory for quantum gravity. After numerous consistency checks (the so-called "theoretical experiments") it still fails to be validated by the experimental data.

$\endgroup$
2
  • 4
    $\begingroup$ I doubt this is the kind of thing OP had in mind. Notice that all the other answers involve some mathematical object(s) where it is clear whether or not certain properties apply. $\endgroup$ Commented Sep 8, 2019 at 19:43
  • 5
    $\begingroup$ Moreover, string theory is not the "unique candidate" for a theory of quantum gravity. In particular, the actual universe does work, so there probably is a "unique candidate" for the actual laws of physics, but this "unique candidate" could not possibly "fail" (or if it does, there will be really interesting philosophical consequences). $\endgroup$ Commented Sep 8, 2019 at 20:32
