53
$\begingroup$

Using mathematical induction, I have proved that

$$1+ 5+ 9 + \cdots +(4n-3) = 2n^{2} - n$$

for every integer $n > 0$.

I would like to know if there is another way of proving this result without using PMI. Is there a geometric way to prove it? Also, are there problems where PMI is our only option?

Here is the way I have solved this using PMI.

Base Case: since $1 = 2 · 1^2 − 1$, the formula holds for $n = 1$.

Assuming that the formula holds for some integer $k ≥ 1$, that is,

$$1 + 5 + 9 + \dots + (4k − 3) = 2k^2 − k$$

I show that

$$1 + 5 + 9 + \dots + [4(k + 1) − 3] = 2(k + 1)^2 − (k + 1).$$

Now, using the induction hypothesis, I observe:

$$ \begin{align} 1 + 5 + 9 + \dots + [4(k + 1) − 3] & = [1 + 5 + 9 + \dots + (4k − 3)] + 4(k + 1) −3 \\ & = (2k^2 − k) + (4k + 1) \\ & = 2k^2 + 3k + 1 \\ & = 2(k + 1)^2 − (k + 1) \end{align} $$

$\diamond$
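The algebra in the inductive step can also be checked mechanically. Here is a minimal sketch using sympy (assuming it is installed); it only verifies the polynomial identity used above, not the induction itself:

```python
# Check the inductive-step identity (2k^2 - k) + (4k + 1) = 2(k+1)^2 - (k+1).
from sympy import symbols, simplify

k = symbols('k')
lhs = (2*k**2 - k) + (4*k + 1)
rhs = 2*(k + 1)**2 - (k + 1)
assert simplify(lhs - rhs) == 0
print("inductive-step identity holds")
```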

$\endgroup$
8
  • 27
    $\begingroup$ For everybody who wonders it like I did, Google told me PMI stands for "Principle of Mathematical Induction". $\endgroup$
    – JiK
    Commented May 28, 2016 at 17:36
  • 3
    $\begingroup$ Not really what the OP wants, probably, but it might be possible to determine whether this result requires induction by figuring out whether it can be proved in Robinson arithmetic. $\endgroup$ Commented May 28, 2016 at 17:43
  • 1
    $\begingroup$ @Nate: It surely doesn't; e.g. it should be straightforward to axiomatically define a summation operator in fairly weak contexts for which most of these answers are a simple manipulation of identities that prove the theorem for anything satisfying the definition. Induction, if it even comes into play, would be relegated to merely proving one particular interpretation is a model of the axioms. $\endgroup$
    – user14972
    Commented May 29, 2016 at 1:41
  • 2
    $\begingroup$ That seems delicate. You can develop, for $n$ up to any constant bound, a theory of finite sums of $x\choose n$, in a finitely axiomatized extension of Robinson arithmetic. Or an equivalent theory of solutions of finite difference equations $\Delta^k = 0$. But we are usually presented with the polynomials written in the basis of powers $x^n$ and the equivalence between that and the basis that works for finite-difference problems seems to use principles of uniqueness of summation that are a form of induction. @Hurkyl $\endgroup$
    – zyx
    Commented May 29, 2016 at 3:49
  • 2
    $\begingroup$ It seems to me that the LHS cannot be defined without assuming something equivalent to the well-ordering principle (that < is a well-order on N). $\endgroup$ Commented May 30, 2016 at 3:13

14 Answers

209
$\begingroup$

Follow the rainbow!

$$ \Large \color{PaleVioletRed}1 + \color{DarkViolet}5 + \color{DodgerBlue}9 + \dots + \color{LightCoral}{(4n-3)} = n(2n-1) = 2n^2-n$$

[figure: an $n\times(2n-1)$ rectangle of unit squares built from nested, rainbow-colored layers of sizes $1, 5, 9, \dots, 4n-3$]
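To make the picture precise (as discussed in the comments), one can give an explicit labelling of the cells of the $n\times(2n-1)$ rectangle and count each layer. Here is a small Python sketch of one possible such labelling (the particular indexing is an assumption about how the layers are nested, not part of the original picture):

```python
# Label each cell of an n x (2n-1) grid with the "arch" (layer) it belongs to,
# then verify that layer i contains exactly 4i - 3 cells.
def layer(r, c, n):
    # rows r = 1..n (top to bottom), columns c = 1..2n-1, center column is n
    return max(n - r + 1, abs(c - n) + 1)

def demo(n):
    counts = {i: 0 for i in range(1, n + 1)}
    for r in range(1, n + 1):
        row = []
        for c in range(1, 2 * n):
            i = layer(r, c, n)
            counts[i] += 1
            row.append(str(i))
        print(' '.join(row))
    assert all(counts[i] == 4 * i - 3 for i in counts)
    assert sum(counts.values()) == n * (2 * n - 1)   # area of the rectangle

demo(4)   # prints the 4 x 7 labelling; layer sizes are 1, 5, 9, 13
```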

$\endgroup$
11
  • 25
    $\begingroup$ Quite the most unexpectedly beautiful answer I’ve seen on any Stackexchange site, ever! Whimsical, pretty, and mathematically lucid, all at the same time. $\endgroup$ Commented May 28, 2016 at 22:30
  • 5
    $\begingroup$ A fantastic demonstration that geometric and pictorial methods are still a very concise and complete way of proving results. $\endgroup$
    – Nij
    Commented May 29, 2016 at 8:07
  • 14
    $\begingroup$ I agree that this is a neat, concise answer. However, for it to be an actual proof I'm sort of missing an argument as to how we know that the geometric interpretation is even correct (for arbitrary n). $\endgroup$
    – Ingo Bürk
    Commented May 29, 2016 at 14:56
  • 2
    $\begingroup$ @IngoBürk: I would read it as describing a partition of the set $[1…n] \times [1…2n-1]$, along with a numbering of the partition classes, such that the $i$th class contains $4i-3$ elements. (Here $[a…b]$ denotes the “integer interval” $\{ x \in \mathbb{Z}\ |\ a \leq x \leq b \}$.) Filling in the details of this sort of argument is an interesting and non-trivial exercise if you haven’t before; once you’ve done it a couple of times, it quickly becomes routine. $\endgroup$ Commented May 30, 2016 at 10:16
  • 4
    $\begingroup$ @ColinK: Google Spreadsheets! puu.sh/pagwd/0fd85b246c.png $\endgroup$
    – lynn
    Commented May 30, 2016 at 13:15
31
$\begingroup$

Below we show how to give a rigorous interpretation of some of the other "proof by picture" answers. Namely, we describe how a $2$-dimensional form of telescopy allows us to view a sequence of rectangles as being built-up layer-by-layer from successive differences of prior rectangles - as if built by a $2$-D printer. Further, applying a simple product rule for differences allows us to simplify these differences into a form more amenable to geometric visualization. This yields a simple and purely mechanical way to generate such pictorial proofs. Moreover, it provides a rigorous interpretation of such proofs using standard techniques for telescopic sums. Finally we mention an analogy with calculus: such telescopic summation is a discrete analog of the Fundamental Theorem of Calculus (but you don't need to know any calculus to understand the much simpler discrete analog that we describe below).

Let $\,f_n,\, g_n\,$ be sequences of naturals and $\bar R_n$ a sequence of $f_n\times g_n$ rectangles of area $ R_n = f_n g_n.$ Below we recall the product rule for the difference operator $\,\Delta h_n := h_{n+1} - h_n$ then we apply the rule to the special case $\,f_n = n,\ g_n = 2n\!-\!1\,$ in Lynn's picture below.

$$ \large \color{PaleVioletRed}1 + \color{DarkViolet}5 + \color{DodgerBlue}9 + \dots + \color{LightCoral}{(4n-3)}\, =\, n(2n-1) = 2n^2-n$$

[figure: Lynn's picture, reproduced: an $n\times(2n-1)$ rectangle built up from nested colored arches of sizes $1, 5, 9, \dots, 4n-3$]

Note: below we give related objects the same color (not related to the coloring above).

$ \begin{eqnarray}{\bf Product\ Rule}\qquad\ \: &&\Delta(f_n g_n ) &=&\ f_{n+1}\ \ \ \Delta g_n &+&\qquad\ \Delta f_n\cdot g_n\\[.3em] {\bf Proof}\quad\ \ \,f_{n+1}\ g_{n+1}\ \ &-&\ \ f_n\ g_n &=&\ f_{n+1}(g_{n+1}\!-g_n) &+&\, (f_{n+1}\!-f_n)\, g_n \\[.5em] {\rm e.g.}\quad\, \smash[b]{\underbrace{(n\!+\!1)(2n\!+\!1)}_{\large R_{\,\Large{n+1}}}}&-&\smash[b]{\underbrace{n(2n\!-\!1)}_{\large R_{\,\Large{n}}}} &=&\ \smash[b]{\underbrace{(\color{#c00}{n\!+\!1})\cdot 2}_{\large \rm arch\ \color{#c00}{sides}}} &+&\quad\ \smash[b]{\underbrace{1\,\cdot\, (\color{#0a0}{2n\!-\!1})}_{\large\rm arch\ \color{#0a0}{top}}}\ \ \ {\rm in\ picture}\\[.2em] \phantom{} \end{eqnarray}$

So to increment an $\,n\!\times\! (2n\!-\!1)$ rectangle $\bar R_n$ to its successor $\bar R_{n+1}$ of size $\,(n\!+\!1)\!\times\!(2n\!+\!1)$ we can add $\,\color{#c00}{n\!+\!1}$ squares on each $\rm\color{#c00}{side}$ of $\bar R_n$ and $\,\color{#0a0}{2n\!-\!1}\,$ on $\rm\color{#0a0}{top}$ of $\bar R_n.\,$ For example, let $\,n=3.\,$ In the picture, to increment the $3\times 5$ blue $\bar R_3$ to the $4\times7$ green $\bar R_4$ rectangle, the rule says we can append $\,\color{#c00}{n\!+\!1} = 4$ green squares to each side of blue $\bar R_3,\,$ plus $\color{#0a0}{2n\!-\!1} = 5$ green squares on top of blue $\bar R_3$ yielding the $\,7\times 4$ green arch - which constitutes the (area) difference between the green $4\times7\ (\bar R_4)$ and blue $3\times 5\ (\bar R_3)$ rectangles. Thus the $\rm\color{#c00}{sides}$ and $\rm\color{#0a0}{top}$ of the arch are precisely the summands in the product rule, so the rule shows how to construct successive differences of $2$-D rectangles $f_n\times g_n$ via differences $\,\Delta f_n,\, \Delta g_n$ of their $1$-D sides.

Thus we can view each rectangle as being built up layer by layer from the differences (= arches) of successive prior rectangles (the separately colored arches in the picture). Dynamically, we can think of a $2$-D printer building the rectangle arch by arch (similar to a $3$-D printer).

The sought equality follows by computing the $n(2n\!-\!1)$ area of rectangle $\bar R_n$ a second way, using said layer-by-layer inductive construction of $\bar R_n.$ By the rule, the difference arches have area $\,R_{k+1}\!-R_k = 2(\color{#c00}{k\!+\!1)} + \color{#0a0}{2k\!-\!1} = 4k\!+\!1.\,$ Adding these to initial area $= 1_{\phantom{\frac{i}i}}\!\!\!$ of $\,1\!\times\! 1$ $\,\bar R_1$

$\qquad\qquad\qquad\begin{eqnarray} {\underbrace{1}_{\color{#c0f}{\large R_{\Large1}}} +\!\!\! \underbrace{5}_{\large{\color{#48f}{R_{\Large 2}}-\color{#c0f}{R_{\Large 1}}}}\!\! +\! \underbrace{9}_{\large{R_{\Large 3}-\color{#48f}{R_{\Large 2}}}}\!\! +\, \cdots + \!\!\underbrace{4n\!-\!3}_{\large{R_{\Large n}-R_{\Large{n-1}}}} \!=\, \smash{\sum_{\large k\,=\,0}^{\large{{n-1}}}\,(4k\!+\!1)}}\\[-.2em] \end{eqnarray}$

is an equal value for the area of $\,n\!\times\!(2n\!-\!1)\,$ rectangle $\,\bar R_n$. This is the sought equality.

The above LHS sums to $ R_n$ since it is a telescoping sum so $\rm\color{#c0f}{successive}\ \color{#48f}{terms}$ cancel out, which becomes clearer if we reorder the sum by rewriting $\,R_{k+1}-R_k\,$ as $\ {-}R_k\! + R_{k+1}$ yielding

$\qquad \begin{eqnarray}{\underbrace{\color{#c0f}{R_1}\! + (\color{#c0f}{-R_1}}_{\large =\ 0}\! + \underbrace{\color{#48f}{R_2}) + (\color{#48f}{-R_2}}_{\large =\ 0}\! +\underbrace{ R_3) + (-R_3}_{\large =\ 0}\! +\cdots +\underbrace{R_{n-1}) +(-R_{n-1}}_{\large =\ 0}\! + R_n)\, =\, R_n}\\[-.1em] \end{eqnarray} $

Here lies the hidden use of induction. Though such telescopic cancellation may seem "obvious", it does require rigorous proof. But the inductive step is easy: adding $\, -R_n\! + R_{n+1}$ to both sides of the above statement $P(n)\,$ yields $P(n\!+\!1)\ $ so $\:P(n)\,\Rightarrow\,P(n\!+\!1)\,$ [to be completely rigorous we should also eliminate the ellipses, replacing them by a more precise summation operator].

Many inductive proofs have a very natural telescopic form, and expressing them in this format often leads to great simplification. You can find many examples of telescopy in my prior answers (both in additive and multiplicative form).

Finally, we briefly expand on the analogy with calculus mentioned above. The above remarks on telescopic sums show that $\,f(n)\,$ is the sum of its differences. This is the difference analog of the Fundamental Theorem of Differential Calculus, i.e. we have the analogous theorems

$$\begin{eqnarray} \sum_{k=0}^{n-1}\ \Delta_k f(k)\ \, &=&\ f(n)-f(0)\\[.2em] \int_0^x\! D_t f(t)\, dt &=&\ f(x) - f(0) \end{eqnarray}$$

This is but one small part of the calculus of finite differences, a discrete analog of differential calculus. Many things familiar from calculus have discrete difference analogs, and their proofs are often much simpler (such as the difference product rule above). For example, to verify that $F(n) = \sum_{k=0}^{n-1} f(k)$ it suffices to show that they have the same difference and same initial condition, i.e. $\,F(n\!+\!1)-F(n) = f(n)\,$ and $F(0) = 0$. When, as in the OP, both $F$ and $f$ are polynomials this reduces to simply checking the equality of two polynomials, which can be done purely mechanically (so no intuition or visual analogies are needed). E.g. for the OP we have $\,F(n) = n(2n\!-\!1)$ so the proof reduces to verifying $\,F(n\!+\!1)-F(n) = 4n\!+\!1,\,$ and $\,F(0)= 0,\,$ which is trivial polynomial arithmetic - so trivial we can program calculators to perform all such proofs. In fact there are general summation algorithms due to Karr, Gosper and others that are discrete analogs of the Risch-Bronstein integration algorithms implemented in computer algebra systems such as Macsyma, Maple and Mathematica.
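Since the check just described is mechanical, here is one possible rendering of it in sympy (a sketch carrying out the stated verification for $F(n)=n(2n-1)$ and $f(n)=4n+1$, assuming sympy is available):

```python
# "Same difference + same initial condition" check for
#   F(n) = n(2n-1)  and  f(n) = 4n+1, i.e. F(n) = sum_{k=0}^{n-1} f(k).
from sympy import symbols, expand

n = symbols('n')
F = n * (2*n - 1)
f = 4*n + 1

assert expand(F.subs(n, n + 1) - F) == expand(f)   # Delta F(n) = f(n)
assert F.subs(n, 0) == 0                           # initial condition F(0) = 0
print("difference-calculus check passed: F(n) = 2n^2 - n")
```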

Therefore such difference calculus proofs remain completely mechanical even for higher degree polynomials, but generalizing the geometrical picture-based proofs to higher dimensions will prove much more difficult because we typically lack (real-world) intuition for high dimensional spaces. So difference calculus serves to algebraicize these geometrical proofs in a "calculus" that is so mechanical that it can be performed by a computer - freeing our intuition to work on less trivial matters.

$\endgroup$
8
  • 2
    $\begingroup$ I didn't downvote, but I don't understand what's going on here. What are $f$ and $g$? What is $R$? Why is $n+1$ red? Are you proving the product rule, or something else? Those are some things that confused me, if you're interested. $\endgroup$
    – David
    Commented May 29, 2016 at 4:54
  • 4
    $\begingroup$ I didn't downvote either, but this post doesn't appear to provide an answer to the OP's question - OP asked "How can one prove these two functions are equal, besides induction", and the post appears to give one proof, which is explicitly inductive. $\endgroup$
    – psmears
    Commented May 29, 2016 at 8:12
  • 5
    $\begingroup$ @psmears Many students make the mistake of thinking that such "proofs by picture" are not inductive. The point of the answer is to (1) show how to formalize the proof, (2) show that the resulting formal proof is in fact inductive, (3) show how to mechanically derive such proofs using 2-D telescopy - by way of the product rule for differences. $\endgroup$ Commented May 29, 2016 at 16:01
  • 2
    $\begingroup$ @BillDubuque: I'm not disputing that there's a lot of useful information in the answer (and indeed, as I said, I didn't downvote!) - I'm just pointing out that other people may do so as there is no direct answer to the question asked. $\endgroup$
    – psmears
    Commented May 30, 2016 at 15:01
  • 1
    $\begingroup$ @BillDubuque I've found your answers about telescopy to be quite intriguing. I am wondering, however, if there is an easy, algorithmic way to compute the function whose forwards difference yields 4k+1 (here $2k^2-k$), the same way we do with integration. I notice it is very close to the "integral" from 0 to n with respect to k which yields $2n^2+n$. However, this doesn't work very well for more complicated formulas, although I noticed it is often semi-close to the real solution. In other words, can telescopy be used without knowing the closed form in the first place ? $\endgroup$
    – Evariste
    Commented Jun 1, 2016 at 21:58
30
$\begingroup$

$$\sum\limits_{k = 1}^n {\left( {4k - 3} \right)} = 4\sum\limits_{k = 1}^n k - 3 \cdot \sum\limits_{k = 1}^n 1 = 4\frac{{n\left( {n + 1} \right)}}{2} - 3 \cdot n = 2n^2 - n$$
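If one does not want to prove the standard sum formulas by hand, a computer algebra system gives the same closed form directly; a minimal sympy sketch (assuming sympy is available):

```python
from sympy import symbols, summation, simplify

n, k = symbols('n k', integer=True, positive=True)
s = summation(4*k - 3, (k, 1, n))      # symbolic sum of 4k - 3 for k = 1..n
assert simplify(s - (2*n**2 - n)) == 0
print(s)
```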

$\endgroup$
6
  • 16
    $\begingroup$ Which still uses induction to compute the sums. $\endgroup$ Commented May 28, 2016 at 2:11
  • 9
    $\begingroup$ @Bill: Looks to me like it's plugging in known formulas, not using induction to reprove the corresponding theorem. $\endgroup$
    – user14972
    Commented May 28, 2016 at 5:18
  • 6
    $\begingroup$ @BillDubuque there are other ways to prove the known sums. $\endgroup$
    – zz20s
    Commented May 28, 2016 at 12:13
  • 3
    $\begingroup$ @Hurkyl The point is that the proofs of those formulas typically employ induction. $\endgroup$ Commented May 28, 2016 at 12:36
  • 1
    $\begingroup$ Except there's also a very nice proof of that formula with pictures, just like the question asks: jeremykun.com/2011/10/02/n-choose-2 $\endgroup$
    – mdenton8
    Commented May 31, 2016 at 6:45
23
$\begingroup$

Hint:

Let $$\begin{alignat}{10}S_n&=&1+&&5+&&9+&\cdots+&(4n-3)\\&=&(4n-3)+&&(4n-7)+&&(4n-11)+&\cdots+&1.\end{alignat}$$ Thus $$2S_n=(4n-2)+(4n-2)+(4n-2)+\cdots+(4n-2).$$


Additional:

As pointed out in the comments, the proof above relies on mathematical induction. So I want to explain what role induction plays in the proof.

For a given number $n$, say $10000$, the proof is valid even though it involves an ellipsis, since one could in principle write down all of the omitted numbers.
Now we claim that the proposition is true for all natural numbers. This would be difficult to prove without using mathematical induction, although I have no idea whether it is possible at all. Intuitively we can see that the argument works for every natural number, but how do we prove that $$\mathbb{Z}_+=\{n\in\mathbb{N}:S_n=2n^2-n\}?$$

One can start from the Peano axioms or the axiom of infinity, or other equivalent assumptions, and easily prove that the set on the right-hand side is $\mathbb{Z}_+$.

I don't think there is any proposition that we can prove to hold for every natural number without mathematical induction, since whenever we talk about the natural numbers we are already using induction.

$\endgroup$
1
  • $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ Commented May 28, 2016 at 20:01
20
$\begingroup$

You can use the “trick” attributed to Gauss for finding the sum of the first $n$ integers. Call your sum $S_n$, and consider this.

$$\begin{align} &\quad S_n=1&+ 5 +& \quad\cdots&+(4n-7)&+(4n-3)\\ +&\quad S_n=(4n-3)&+(4n-7)+&\quad\cdots&+5&+1 \\ &\hline\\ &\quad 2S_n =\underbrace{(4n-2)}_{1^{\text{st}}} &+\underbrace{(4n-2)}_{2^{\text{nd}}}+&\quad\underbrace{\cdots}_{3^{\text{rd}}\text{ to }(n-2)^{\text{th}}}&+\underbrace{(4n-2)}_{(n-1)^{\text{th}}}&+\underbrace{(4n-2)}_{n^{\text{th}}}\\\end{align}$$ Thus $2S_n$ equals $n(4n-2)$, and $S_n=2n^2-n$.
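A quick numerical sanity check of the pairing for small $n$ (a Python sketch only; it does not replace the argument above):

```python
# Pair the sum with its reversal: every paired column sums to 4n - 2.
def check_pairing(n):
    terms = [4*k - 3 for k in range(1, n + 1)]
    pairs = [a + b for a, b in zip(terms, reversed(terms))]
    assert all(p == 4*n - 2 for p in pairs)              # n constant columns
    assert sum(terms) == n * (4*n - 2) // 2 == 2*n**2 - n

for n in range(1, 50):
    check_pairing(n)
print("pairing check passed for n = 1..49")
```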

$\endgroup$
10
  • 13
    $\begingroup$ The ellipses amount to using induction. $\endgroup$ Commented May 28, 2016 at 2:06
  • 3
    $\begingroup$ @Bill: No, they really don't. The places where you might be able to play that game are the relevant form of associativity and the conversion between repeated addition and multiplication. $\endgroup$
    – user14972
    Commented May 28, 2016 at 5:16
  • 5
    $\begingroup$ @ClementC. I don't know whether it is avoidable in some way, but the usual proof of commutativity of addition (for two summands) on $\mathbb{N}$ is by induction. So avoiding all vestiges of induction may be impossible, and if it is possible, it's very cumbersome. $\endgroup$ Commented May 28, 2016 at 9:28
  • 2
    $\begingroup$ After all, the construction of $\mathbb{N}$ is inductive! (at least the Peano one) So it's not surprising that we get to induction somewhere $\endgroup$
    – marmistrz
    Commented May 28, 2016 at 12:30
  • 2
    $\begingroup$ @Clement: The usual formulation of summation and proof of that property does depend on induction... but then it wasn't trying to avoid induction. This doesn't preclude other formulations from avoiding it; e.g. one definition of summation for cardinal numbers is just the (cardinal class of the) union of an (unordered) collection of disjoint sets; you get commutativity for free! $\endgroup$
    – user14972
    Commented May 28, 2016 at 23:57
20
$\begingroup$

A geometrical proof. A rectangle formed by a sum of gnomons.

$\endgroup$
5
  • 15
    $\begingroup$ ...which are not a very gnomonly used shape. $\endgroup$
    – Micah
    Commented May 28, 2016 at 17:48
  • $\begingroup$ @Micah I explain in my answer how this proof can be discovered very simply from the product rule for differences. $\endgroup$ Commented May 28, 2016 at 22:17
  • $\begingroup$ what software did you use to create the image? $\endgroup$
    – danimal
    Commented May 28, 2016 at 23:13
  • 11
    $\begingroup$ @BillDubuque: I'm not sure what you think that has to do with my bad pun, but okay... $\endgroup$
    – Micah
    Commented May 29, 2016 at 0:00
  • 1
    $\begingroup$ @danimal I used GeoGebra. $\endgroup$ Commented May 29, 2016 at 6:31
11
$\begingroup$

Here is a method based upon formal power series. We thereby encode sequences $\left(a_n\right)_{n\geq 0}$ as formal power series $\sum_{n=0}^\infty a_nx^n$ and show equality of sequences by showing equality of the corresponding power series.

Using the summation symbol $\sum$ we claim

The following is valid \begin{align*} \sum_{j=1}^{n}\left(4j-3\right)=2n^2-n\qquad\qquad n\geq 1 \end{align*}

We encode the sequence
\begin{align*} \left(\sum_{j=1}^{n}\left(4j-3\right)\right)_{n\geq 1}=\left(1,6,15,\cdots\right)\quad\rightarrow\quad \sum_{n=1}^\infty\left(\sum_{j=1}^n\left(4j-3\right)\right)x^n=\color{blue}{1}x+\color{blue}{6}x^2+\color{blue}{15}x^3+\cdots \end{align*} by the power series and in the same way we encode \begin{align*} \left(2n^2-n\right)_{n\geq 1}=\left(1,6,15,\cdots\right) \quad\rightarrow\quad\sum_{n=1}^{\infty}\left(2n^2-n\right)x^n=\color{blue}{1}x+\color{blue}{6}x^2+\color{blue}{15}x^3+\cdots \end{align*} and show the equality of the corresponding power series.

We obtain \begin{align*} \sum_{n=1}^\infty\left(\sum_{j=1}^n\left(4j-3\right)\right)x^n &=\sum_{n=1}^\infty\left(\sum_{j=0}^{n-1}\left(4j+1\right)\right)x^n\tag{1} \\ &=\sum_{n=0}^\infty\left(\sum_{j=0}^n\left(4j+1\right)\right)x^{n+1} \tag{2}\\ &=\frac{x}{1-x}\sum_{n=0}^\infty\left(4n+1\right)x^n\tag{3}\\ &=\frac{x}{1-x}\left(4\sum_{n=0}^\infty nx^n+\sum_{n=0}^\infty x^n\right)\tag{4}\\ &=\frac{x}{1-x}\left(4(xD_x)\sum_{n=0}^\infty x^n+\frac{1}{1-x}\right)\tag{5}\\ &=\frac{4x^2}{1-x}D_x\left(\frac{1}{1-x}\right)+\frac{x}{(1-x)^2}\tag{6}\\ &=\frac{x(3x+1)}{(1-x)^3}\tag{7}\\ \end{align*}

Comment:

  • In (1) we shift the index $j$ by one to start with $j=0$

  • In (2) we shift the index $n$ by one to start with $n=0$

  • In (3) we use the nice fact that summing up elements is encoded in formal power series by multiplication with $\frac{1}{1-x}$. This is due to the Cauchy product formula.

  • In (4) we do some rearrangement to be prepared for further steps

  • In (5) we use the formula for the geometric power series and also that differentiating a power series and multiplication with $x$ results in \begin{align*} xD_x \sum_{n=0}^\infty a_n x^n = \sum_{n=0}^\infty na_nx^n \end{align*} Here we denote with $D_x:=\frac{d}{dx}$ the differential operator.

  • In (6) we use the formula for the geometric power series again and multiply out

  • In (7) we perform the differentiation and simplify the expression

On the other hand, with similar reasoning we obtain \begin{align*} \sum_{n=1}^\infty\left(2n^2-n\right)x^n &=2\sum_{n=1}^\infty n^2x^n-\sum_{n=1}^\infty nx^n\\ &=2(xD_x)^2\sum_{n=1}^\infty x^n-(xD_x)\sum_{n=1}^\infty x^n\\ &=2(xD_x)^2\frac{x}{1-x}-(xD_x)\frac{x}{1-x}\\ &=\frac{2x(1+x)}{(1-x)^3}-\frac{x}{(1-x)^2}\\ &=\frac{x(3x+1)}{(1-x)^3}\\ \end{align*} and the claim follows.
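The equality of the two power series can also be double-checked term by term, to any finite order, with a computer algebra system; a small sympy sketch (assuming sympy is available):

```python
# Expand the closed form x(3x+1)/(1-x)^3 and compare coefficients with 2n^2 - n.
from sympy import symbols, series

x = symbols('x')
g = x*(3*x + 1)/(1 - x)**3
coeffs = series(g, x, 0, 10).removeO()
for n in range(1, 10):
    assert coeffs.coeff(x, n) == 2*n**2 - n
print("coefficients of x^n agree with 2n^2 - n for n = 1..9")
```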


Note: Of course applying this method here is to take a sledgehammer to crack a nut. But these techniques are also useful in more difficult contexts. See e.g. this answer.

$\endgroup$
2
  • $\begingroup$ Is this really easier than using induction? $\endgroup$ Commented May 28, 2016 at 17:57
  • 8
    $\begingroup$ @MarcvanLeeuwen: Of course not - see my note at the end of the answer! :-) But OP was asking for alternatives, not for easier variants. This technique opens up doors to attack more challenging problems which could be helpful for OP. Regards, $\endgroup$ Commented May 28, 2016 at 18:17
9
$\begingroup$

My favorite proof makes use of a telescoping sum. Let's recall how that works: given a sequence $a_k$ then: $$ \sum_{k=1}^n (a_{k+1}-a_k)=a_{n+1}-a_1, $$ because intermediate terms cancel out pairwise.

Just take now $a_k=k^2$ and you have: $$ \sum_{k=1}^n [(k+1)^2-k^2]=(n+1)^2-1=n^2+2n. $$ But, on the other hand: $$ \sum_{k=1}^n [(k+1)^2-k^2]=\sum_{k=1}^n (2k+1)= \sum_{k=1}^n 2k+\sum_{k=1}^n 1=\sum_{k=1}^n 2k+n. $$ By equating you get then $$ \sum_{k=1}^n 2k+n=n^2+2n, \quad\hbox{that is}\quad \sum_{k=1}^n 2k=n^2+n \quad\hbox{and}\quad \sum_{k=1}^n 4k=2n^2+2n, $$ so that: $$ \sum_{k=1}^n (4k-3)=2n^2+2n-3n=2n^2-n $$

$\endgroup$
2
  • $\begingroup$ This is a valid method to avoid induction, but at the end, it uses the same idea. $\endgroup$
    – Emre
    Commented May 29, 2016 at 22:04
  • $\begingroup$ @E.Girgin Yes, telescopy is a special (but ubiquitous) form of induction. Presumably this is the same method used in the pictorial proofs (as I explain at length in my answer), though we cannot be sure since "picture proofs" are not uniquely readable. $\endgroup$ Commented May 30, 2016 at 18:35
7
$\begingroup$

If you know the expression for the sum of an arithmetic sequence (or can prove it, it's not too hard, see below) then that's another way. For an arithmetic series with first term $a$ and common difference $d$, the sum to $n$ terms, $S_n$ is given by: $$S_n = \frac{n}{2}\left(2a+(n-1)d\right)$$ or alternatively, if the last term is $l$ then $$S_n = \frac{n(a+l)}{2}$$

For your question, we have $a=1$ and $d=4$ so $$S_n = \frac{n}{2}\left(2+4(n-1)\right) = n(2n-1) = 2n^2-n$$


To get the first expression for $S_n$, write the sum out twice, in opposite orders, and then take their sum:

$$S_n = a + (a+d) + (a+2d) + ... + [a+(n-2)d] + [a+(n-1)d]$$ $$S_n = [a+(n-1)d] + [a+(n-2)d] + ... + (a+2d) + (a+d) + a$$

$\implies$ $$2S_n = [2a+(n-1)d] + [2a+(n-1)d] + ...+[2a+(n-1)d] + [2a+(n-1)d]$$ where there are $n$ terms, so that

$$2S_n = n [2a+(n-1)d] $$

i.e.

$$S_n = \frac{n}{2}\left(2a+(n-1)d\right)$$
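For readers who like to test such formulas numerically, here is a small Python sketch comparing the closed form with a direct sum (the helper names are only illustrative):

```python
# Compare S_n = (n/2)(2a + (n-1)d) with the term-by-term sum, then the case a=1, d=4.
from fractions import Fraction

def S_formula(a, d, n):
    return Fraction(n, 2) * (2*a + (n - 1)*d)

def S_direct(a, d, n):
    return sum(a + k*d for k in range(n))

for a in range(-3, 4):
    for d in range(-3, 4):
        for n in range(1, 20):
            assert S_formula(a, d, n) == S_direct(a, d, n)

assert all(S_formula(1, 4, n) == 2*n**2 - n for n in range(1, 100))
print("arithmetic series formula checks out")
```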

$\endgroup$
1
  • $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ Commented May 28, 2016 at 20:03
5
$\begingroup$

Let

$$a_k := 4 k - 3 \qquad \qquad \qquad b_k := k - 1$$

Using summation by parts,

$$\begin{array}{rl} \displaystyle\sum_{k=1}^n a_k &= \displaystyle\sum_{k=1}^n a_k (b_{k+1} - b_k)\\ &= \left(a_k \, b_k \,\bigg|_1^{n+1}\right) - \displaystyle\sum_{k=1}^n (a_{k+1} - a_k) \, b_{k+1}\\ &= a_{n+1} \, b_{n+1} - a_1 \, b_1 - \displaystyle\sum_{k=1}^n 4 k\\ &= a_{n+1} \, b_{n+1} - \left(\displaystyle\sum_{k=1}^n (4 k - 3)\right) - 3n\\ &= (4 n + 1) n - 3 n - \displaystyle\sum_{k=1}^n a_k\\ &= 4 n^2 - 2n - \displaystyle\sum_{k=1}^n a_k\end{array}$$

Hence,

$$\displaystyle\sum_{k=1}^n a_k = 2 n^2 - n$$
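A small numerical check of the summation-by-parts identity used above, with $a_k = 4k-3$ and $b_k = k-1$ (a Python sketch, not part of the derivation):

```python
# sum_{k=1}^n a_k (b_{k+1} - b_k)
#   = a_{n+1} b_{n+1} - a_1 b_1 - sum_{k=1}^n (a_{k+1} - a_k) b_{k+1}
def a(k): return 4*k - 3
def b(k): return k - 1

for n in range(1, 200):
    lhs = sum(a(k) * (b(k + 1) - b(k)) for k in range(1, n + 1))
    rhs = (a(n + 1) * b(n + 1) - a(1) * b(1)
           - sum((a(k + 1) - a(k)) * b(k + 1) for k in range(1, n + 1)))
    assert lhs == rhs == 2*n**2 - n
print("summation by parts verified for n = 1..199")
```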

$\endgroup$
4
$\begingroup$

The most straightforward way is of course the arithmetic-series formula $\frac n2(a+\ell)$, with first term $a=1$ and last term $\ell=4n-3$:

$$\frac n2(a+\ell)=\frac n2 \big(1+(4n-3)\big)=\color{red}{2n^2-n}\qquad\blacksquare$$


Another method:

The well-known result of the sum of the first $n$ integers: $$1+2+3+4+\cdots+n=\frac{n(n+1)}2$$ Subtract $1$ from each term ($n$ in total): $$0+1+2+3+\cdots+(n-1)=\frac{n(n-1)}2$$ Multiply by $4$: $$0+4+8+12+\cdots+4(n-1)=2n(n-1)$$ Add $1$ to each term ($n$ in total): $$1+5+9+13+\cdots+(4n-3)=\color{red}{2n^2-n}\quad\blacksquare$$

$\endgroup$
1
$\begingroup$

By counting the dots of a hexagonal number in two ways.

[figure: dots of the $n$th hexagonal number, counted layer by layer on the left and row by row on the right]

On the left: each hexagon $j = 2, \dots, n$ contributes four "outer" sides of $j$ dots each, of which $3$ dots are shared between adjacent sides, and the trivial hexagon $j=1$ is a single dot; so hexagon $j$ adds $4j-3$ dots, giving $\sum_{j=1}^n(4j-3)$ dots in total.

On the right: there are $2n-1$ rows (since a hexagon has both top and bottom sides, except in the trivial case), and each row has $n$ dots. Hence there are $n(2n-1) = 2n^2-n$ dots.

*Even though the argument is geometric, induction may well be needed to justify the counting, as Bill Dubuque noted.

$\endgroup$
0
$\begingroup$

There are some very clever answers here, but what if you don't spot the clever trick? It would be nice to be able to mechanically grind out the answer, and in order to do that, you can use the discrete calculus.

So here the problem is to evaluate $ \sum_{1\le i\le n}(4i-3) $. First, expand all powers of the summation variable (here, $i$) into sums of falling powers (by inspection in simple cases, or using Stirling subset numbers, i.e. Stirling numbers of the second kind). This is trivial here, since $i = i^{\underline 1}$, so $$ \sum_{1\le i\le n}(4i-3) = \sum_{1\le i\le n}\left(4i^{\underline 1}-3\right). $$ Now, summation of falling powers is just like integration of ordinary powers, except for the upper evaluation limit: $$ \sum_{1\le i\le n}\left(4i^{\underline 1}-3\right) = \bigg({4\over2}i^{\underline 2}-3i^{\underline 1}\bigg)\bigg\rvert^{n+1}_1. $$ Expand the falling powers and apply the evaluation limits, getting $$ \bigg({4 \over 2}(n+1)(n) - 3(n+1)\bigg) - \bigg({4 \over 2}(1)(0) - 3(1)\bigg), $$ which simplifies to $ 2n^2 - n $.

No insight required, just turn the handle and compute!
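The falling-power computation is easy to mechanize; here is a short Python sketch (falling powers implemented directly) that checks the evaluation against the direct sum:

```python
# Falling powers: i^(m) = i(i-1)...(i-m+1).  The discrete "power rule" used above
# evaluates (4/2) i^(2) - 3 i^(1) between the limits i = 1 and i = n+1.
from math import prod

def falling(i, m):
    return prod(i - j for j in range(m))

def antidiff_eval(n):
    upper = 2 * falling(n + 1, 2) - 3 * falling(n + 1, 1)
    lower = 2 * falling(1, 2) - 3 * falling(1, 1)
    return upper - lower

for n in range(1, 200):
    assert sum(4*i - 3 for i in range(1, n + 1)) == antidiff_eval(n) == 2*n**2 - n
print("falling-power evaluation matches the direct sum")
```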

$\endgroup$
0
$\begingroup$

Induction is fundamental to the structure of mathematics. The $``\cdots"$ in the expression $1+ 5+ 9 + \cdots +(4n-3)$ cannot be explained rigorously without using some form of induction. All of the picture proofs here have an implied $``\cdots"$ in them. Similarly, $\text{$\Sigma$-notation}$ and telescoping sums cannot be defined without some form of induction.

Putting that aside, I have always been curious about the process of converting sums to integrals, so I offer the following flawed solution.

Note that $\int\limits_k^{k+1}(4x-5)\,dx = \left. (2x^2-5x) \right|_k^{k+1} = (2k^2+4k+2-5k-5)-(2k^2-5k) = 4k-3$.

So \begin{align} \sum\limits_{k = 1}^n (4k - 3) &= \sum\limits_{k = 1}^n \left( \int\limits_k^{k+1}(4x-5)\,dx \right) \\ &= \int\limits_1^{n+1}(4x-5)\,dx &\text{(This step requires induction.)} \\ &= \left. (2x^2-5x) \right|_1^{n+1} \\ &= (2n^2 + 4n+2 -5n - 5) - (2 - 5) \\ &= 2n^2-n \end{align}
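The two integral evaluations used above can be checked with sympy; a minimal sketch (assuming sympy is available):

```python
# integral_k^{k+1} (4x - 5) dx = 4k - 3, and integral_1^{n+1} (4x - 5) dx = 2n^2 - n
from sympy import symbols, integrate, expand

x, k, n = symbols('x k n')
assert expand(integrate(4*x - 5, (x, k, k + 1))) == 4*k - 3
assert expand(integrate(4*x - 5, (x, 1, n + 1))) == 2*n**2 - n
print("integral reformulation verified")
```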

Addendum

I got the integrand by solving $4k - 3 = \int\limits_k^{k+1}(2ax+b)\,dx = 2ak + (a+b)$

The implication is that

\begin{align} 1 &= u_2 - u_1 \\ 5 &= u_3 - u_2 \\ 9 &= u_4 - u_3 \\ &\vdots \\ 4k-3 &= u_{k+1} - u_k \\ &\vdots \end{align}

where $u_k = 2k^2 - 5k$.

This converts the sum into a collapsing (telescoping) series.

This matches Lynn's graphical proof above and, though it still requires induction, I think it shows an interesting way to compute sums.

$\endgroup$
10
  • $\begingroup$ If you've to ask whether a step requires induction, it means you haven't even proven it... Yes it does, and all the answers that rely on properties of summation also do. Worse still, your answer involves integration, and you won't be able to prove most properties of integrals without induction!! $\endgroup$
    – user21820
    Commented Jun 7, 2016 at 2:40
  • 1
    $\begingroup$ You can't. That's my point, namely that it is pointless to use something that is strictly harder to prove than the trivial proof by induction, unless that something also gives some deeper insight. Otherwise, hiding the induction somewhere not only obscures the fact that it is necessary but also misleads people into thinking they have avoided induction even though it's intrinsic. What I say can be made precise; compare first-order PA (with its induction schema) and the discrete ordered semiring, and note that integer polynomials with positive leading coefficient satisfy the latter. $\endgroup$
    – user21820
    Commented Jun 9, 2016 at 14:15
  • 1
    $\begingroup$ The alternative way I know of that gives an insight into an intrinsic structure of summations is via the forward difference $Δ$, which act $R$-linearly on sequences from an $R$-module (usually we use $R = \mathbb{R}$), and hence we can factor $R$-polynomials of $Δ$, which gives the general solution to the recurrence relation. One can use this to solve summations. We can also observe that $Δ$ is a left-shift on the sequence of binomial coefficients $k \mapsto ( n \mapsto C(n,k) )$, which gives Newton's formula and hence an $O(d^2)$ algorithm to compute the summation of degree-$d$ polynomials. $\endgroup$
    – user21820
    Commented Jun 9, 2016 at 14:39
  • 1
    $\begingroup$ In your case, your method gives no insight whatsoever because: (1) the very proof of the integral requires much more than induction. The fastest way I know that avoids Riemann sums is via anti-derivatives and yet it has to first prove the product rule. Then after that you use the property that the integral of a finite sum is the sum of the separate integrals, about which you asked whether it uses induction (indeed it does, and the essential case of two integrals is highly non-trivial if you use Riemann integrals!). [continued] $\endgroup$
    – user21820
    Commented Jun 9, 2016 at 14:45
  • 2
    $\begingroup$ [continued] So even if we ignore all the induction used in proving the properties of integrals, the final induction that your method hides inside this property of integrals is really no different from the induction used in the conventional proof of the original summation. The technique you used gives little insight too, being inferior to finding the anti-difference (which is the 'right' notion here rather than the anti-derivative). $\endgroup$
    – user21820
    Commented Jun 9, 2016 at 14:48
