88
$\begingroup$

Context: I'm a high school student who has only ever had an introductory treatment of combinatorics, if that. As such, the extent to which I have seen combinatorial applications is limited to situations such as "If you need a group of 2 men and 3 women and you have 8 men and 9 women, how many possible ways can you pick the group?" (They do get slightly more complicated, but are usually similar.)

Question: I apologise in advance for the naive question, but at an elementary level combinatorics (and the probability that makes use of it) does not seem overly rigorous. It doesn't seem as though you can "prove" that the number of arrangements you arrived at is the correct number. What if you forget a case?

I know you could argue that you've considered all cases by asking whether there is any case other than the ones you've considered. But that doesn't seem to be the way other areas of mathematics are done. If I wish to prove something, I can't just ask "can you find a situation where the statement is incorrect?", since we don't assume a statement is correct by default.

Is combinatorics rigorous?

Thanks

$\endgroup$
6
  • 12
    $\begingroup$ Really, the arguments you hear are just a more plainspoken form of bijective proof, which is perfectly rigorous. The applications may seem trivial, but the underlying logic and formalism are sound. $\endgroup$
    – Chris
    Commented Jun 14, 2017 at 1:32
  • 9
    $\begingroup$ You can write down a proof that you haven't forgotten any of the cases. This is rigorous to the extent that you are good at writing down rigorous proofs. $\endgroup$ Commented Jun 14, 2017 at 1:35
  • 4
    $\begingroup$ Short answer, yes. However we count on our ability to count in order to define a formal (rigorous) proof. In that sense confidence in the principle that counting things "works" is a foundation on which formal reasoning is built. $\endgroup$
    – hardmath
    Commented Jun 14, 2017 at 1:42
  • 2
    $\begingroup$ Combinatorics is great. The biggest problems in creating a good proof are making sure you count every case AND making sure you don't double count any case. Once you can verify both, you know you're right (given that you can count up to 8 men and 9 women) $\endgroup$
    – atreat
    Commented Jun 15, 2017 at 13:32
  • 9
    $\begingroup$ Formalism is not the same as rigor. $\endgroup$
    – anomaly
    Commented Jun 15, 2017 at 17:51

7 Answers

104
$\begingroup$

Combinatorics certainly can be rigorous, but it is not usually presented that way, because doing so is:

  • longer (obviously)
  • less clear because the rigour can obscure the key ideas
  • boring because once you know intuitively that something works you lose interest in a rigorous argument

For example, compare the following two proofs that the binomial coefficient equals $\frac{n!}{k!\,(n - k)!}$, where I will define the binomial coefficient ${n \choose k}$ as the number of $k$-element subsets of $\{1,\dots,n\}$.


Proof 1:

Take a permutation $a_1,\dots, a_n$ of $1,\dots,n$. Separate it into $a_1,\dots,a_k$ and $a_{k + 1}, \dots, a_n$. We can permute $1,\dots, n$ in $n!$ ways, and since we don't care about the order within $a_1,\dots,a_k$ or within $a_{k + 1},\dots,a_n$, we divide by $k!\,(n - k)!$ for a total of $n!/\bigl(k!\,(n - k)!\bigr)$.
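As a quick sanity check (a numerical spot check only, not part of the argument), the formula can be compared with a brute-force count of the subsets for small $n$; here is a minimal Python sketch:

```python
from itertools import combinations
from math import factorial

# Count the k-element subsets of {1, ..., n} directly and compare the
# count with n! / (k! (n - k)!) for all small n and k.
for n in range(8):
    for k in range(n + 1):
        direct = sum(1 for _ in combinations(range(1, n + 1), k))
        formula = factorial(n) // (factorial(k) * factorial(n - k))
        assert direct == formula, (n, k)
print("n!/(k!(n-k)!) matches the direct count for all n < 8")
```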


Proof 2:

Let $B(n, k)$ denote the set of $k$-element subsets of $\{1,\dots,n\}$. We will show that there is a bijection

$$ S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}. $$

The map $\to$ is defined as follows. Let $\pi \in S_n$. Let $A = \{\pi(1),\pi(2),\dots,\pi(k)\}$ and let $B = \{\pi(k + 1),\dots, \pi(n)\}$. For each subset $C$ of $\{1,\dots,n\}$ with $m$ elements, fix a bijection $g_C : C \longleftrightarrow \{1,\dots,m\}$ by writing the elements of $C$ in increasing order $c_1 < \dots < c_m$ and mapping $c_i \longleftrightarrow i$.

Define maps $\pi_A$ and $\pi_B$ on $\{1,\dots,k\}$ and $\{1,\dots,n-k\}$ respectively by $$ \pi_A(i) = g_A(\pi(i)) \text{ and } \pi_B(j) = g_B(\pi(k + j)). $$

We map the element $\pi \in S_n$ to the triple $(A, \pi_A, \pi_B) \in B(n, k) \times S_k \times S_{n - k}$.

Conversely, given a triple $(A, \sigma, \rho) \in B(n, k) \times S_k \times S_{n - k}$ we define $\pi \in S_n$ by $$ \pi(i) = \begin{cases} g_A^{-1}(\sigma(i)) & \text{if } i \in \{1,\dots,k\} \\ g_B^{-1}(\rho(i-k)) & \text{if } i \in \{k + 1,\dots,n \} \end{cases} $$ where $B = \{1,\dots,n\} \setminus A$.

This defines a bijection $S_n \longleftrightarrow B(n, k) \times S_k \times S_{n - k}$ and hence

$$ n! = {n \choose k} k!(n - k)! $$

as required.
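For small $n$ and $k$ the bijection can also be checked mechanically. The following Python sketch (a spot check, not part of the proof) implements the forward map $\pi \mapsto (A, \pi_A, \pi_B)$ exactly as above and verifies that the images are pairwise distinct and exactly fill the codomain:

```python
from itertools import permutations
from math import comb, factorial

def forward(pi, k):
    """Map a permutation pi of {1,...,n}, given as a tuple, to (A, pi_A, pi_B)."""
    n = len(pi)
    A = frozenset(pi[:k])
    B = frozenset(pi[k:])
    g_A = {c: i + 1 for i, c in enumerate(sorted(A))}  # g_A : A <-> {1,...,k}
    g_B = {c: i + 1 for i, c in enumerate(sorted(B))}  # g_B : B <-> {1,...,n-k}
    pi_A = tuple(g_A[pi[i]] for i in range(k))         # an element of S_k
    pi_B = tuple(g_B[pi[i]] for i in range(k, n))      # an element of S_{n-k}
    return (A, pi_A, pi_B)

n, k = 5, 2
images = {forward(pi, k) for pi in permutations(range(1, n + 1))}
# The image has n! elements, which equals C(n,k) * k! * (n-k)!, the size of
# the codomain, so the map is injective and surjective, i.e. a bijection.
assert len(images) == factorial(n) == comb(n, k) * factorial(k) * factorial(n - k)
```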


Analysis:

The first proof was a few sentences, whereas the second was a complicated mess. People with experience in combinatorics understand that the second argument is what is happening behind the scenes when they read the first; to them, the first argument has all the rigour necessary. For students, it is useful to teach the second method a few times to build a level of comfort with bijective proofs. But if we tried to do all of combinatorics the second way, it would take too long and there would be rioting.


Post Scriptum

I will say that a lot of combinatorics textbooks and papers do tend to be written more along the lines of the second argument (i.e. rigorously). Talks and lectures tend to be more in line with the first argument. However, higher-level books and papers only prove "higher-level results" in this way and will simply state results that are found in lower-level sources. They will also move a lot faster and not explain each step exactly.

For example, I didn't show that the map above is a bijection; I merely stated that it is. In a lower-level book there would be a proof that the two maps compose to the identity in both orders. In a higher-level book, you might just see an example of the bijection and a statement that there is a bijection in general, with the assumption that a reader who works through the example could construct a proof on their own.

$\endgroup$
9
  • 5
    $\begingroup$ I'd tend to prefer to phrase things slightly more like the second version by saying something like "each combination corresponds to $k!(n-k)!$ permutations, so we divide by $k!(n-k)!$ to count each combination just once". Simply "we don't care so we divide by this" is a bit unclear. Still +1. $\endgroup$
    – Ian
    Commented Jun 15, 2017 at 13:05
  • 1
    $\begingroup$ You have not defined what $S_n$ denotes. Tsk tsk, lack of rigor :-) $\endgroup$
    – einpoklum
    Commented Jun 15, 2017 at 20:39
  • 1
    $\begingroup$ @T.Gunn: That's right, and it's definitely intuitively clear that "You can pick out a least element, remove it, then sort the rest by induction." is correct. It is just not justified (in the chosen foundational system) by a proof, so to speak. In my opinion, students should be given an explicit foundational system to work in, and state clearly whatever facts that one does not prove explicitly, such as being able to sort any finite set of elements in a linear order. Then students can see clearly that everything can be made fully rigorous if desired, and such questions like this won't arise. $\endgroup$
    – user21820
    Commented Jun 17, 2017 at 3:40
  • 1
    $\begingroup$ Cool answer... Can you suggest some books to learn from that treat combinatorics in such a way? Thanks.. $\endgroup$
    – MathNerd
    Commented Sep 16, 2018 at 19:09
  • 1
    $\begingroup$ @MathNerd I doubt it. Most people avoid writing in this way because it obscures the main idea of the proof. $\endgroup$
    – Sera Gunn
    Commented Sep 16, 2018 at 22:19
19
$\begingroup$

Essentially, all (nearly all?) of combinatorics comes down to two things, the multiplication rule and the addition rule.

If $A,B$ are finite sets then $|A\times B|=|A|\,|B|$.

If $A,B$ are finite sets and $A\cap B=\emptyset$, then $|A\cup B|=|A|+|B|$.

These can be rigorously proved, and more sophisticated techniques can be rigorously derived from them, for example, the fact that the number of different $r$-element subsets of an $n$-element set is $C(n,r)$.

So far, combinatorics is perfectly rigorous. IMHO, the point at which it may become (or may appear to become) less rigorous is when it moves from pure to applied mathematics. So, with your specific example, you have to assume (or justify if you can) that counting the number of choices of $2$ men and $3$ women from $8$ men and $9$ women is the same as evaluating $$|\{A\subseteq M: |A|=2\}\times \{B\subseteq W: |B|=3\}|\ ,$$ where $|M|=8$ and $|W|=9$.
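To make that correspondence concrete (a minimal Python sketch, not essential to the argument), the men-and-women example amounts to building the product set explicitly and checking that its size is $\binom{8}{2}\binom{9}{3}$:

```python
from itertools import combinations, product
from math import comb

men = range(8)    # any 8 distinct labels will do
women = range(9)  # any 9 distinct labels will do

# {A ⊆ M : |A| = 2} × {B ⊆ W : |B| = 3}, built explicitly
groups = list(product(combinations(men, 2), combinations(women, 3)))

assert len(groups) == comb(8, 2) * comb(9, 3) == 2352
```

The modelling assumption is exactly the one described above: that "a possible group" corresponds to one element of this product set, with no further constraints.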

It should not be surprising that the applied aspect of the topic requires some assumptions that may not be mathematically rigorous. The same is true in many other cases: for example, modelling a physical system by means of differential equations. Solving the equations once you have them can be done (more or less) rigorously, but deriving the equations in the first place usually cannot.

Hope this helps!

$\endgroup$
4
  • 9
    $\begingroup$ There are plenty of topics that are usually considered part of combinatorics and have nothing to do with counting, so the multiplication and addition rule are irrelevant to them. $\endgroup$ Commented Jun 15, 2017 at 12:29
  • $\begingroup$ @SashoNikolov neat. what are some examples? $\endgroup$ Commented Oct 20, 2017 at 13:23
  • 2
    $\begingroup$ @LoveTooNap29 A lot of (structural) graph theory, for example the theory of graph minors which characterizes classes of graphs closed under minors. Much of matroid theory or design theory are not really about counting either, or at least not always. $\endgroup$ Commented Oct 21, 2017 at 5:00
  • 1
    $\begingroup$ In my opinion, that is a very narrow definition of what constitutes combinatorics, even introductory combinatorics. $\endgroup$ Commented Jul 16, 2020 at 18:35
15
$\begingroup$

Let's take an example of how bijective proof works rigorously. We'll prove that $${n \choose k} = {n-1 \choose k - 1} + {n-1 \choose k}, $$ i.e., Pascal's identity, for $1 \le k \le n$. We know that ${n \choose k}$ counts the number of $k$-sets we can make from a set of $n$ elements; let us select an arbitrary set of $n$ elements $A = \{a_1, \dots, a_n \}$, and let the collection of $k$-subsets of $A$ be called $K$. Now because $n \ge 1$ we can isolate an element $e$ of $A$, and, defining $E = \{s \in K: e \in s\}$, we can write $K = E \sqcup (K \cap E^c)$, so that $$|K| = |E| + |K \cap E^c|;$$ and we recall that by definition, $|K| = {n \choose k}$.

Now, take $A' = A - \{e\}$. The point here is that there is a bijection $\phi$ (an intuitively obvious one) between the $(k-1)$-subsets of $A'$ and the elements of $E$; namely, if $\alpha$ is a $(k-1)$-subset of $A'$, we take $$\phi: \quad \alpha \mapsto \{e\} \cup \alpha$$

The proof is straightforward: if $\alpha$ and $\alpha'$ are $(k-1)$-subsets of $A'$ with $\{e\} \cup \alpha = \{e\} \cup \alpha'$, then, since $e$ belongs to neither $\alpha$ nor $\alpha'$, comparing elements gives $\alpha \subseteq \alpha'$ and $\alpha' \subseteq \alpha$, so $\alpha = \alpha'$. In the other direction, if $\beta \in E$ then $\beta - \{e\}$ is a $(k - 1)$-subset of $A'$, and $\phi(\beta - \{e\}) = \beta$. So $\phi$ is injective and surjective, and hence bijective.

But we know by definition that the number of $(k-1)$ sets we can make from the $(n-1)$ elements of $A'$ is ${n - 1 \choose k - 1}$; and because bijections such as $\phi$ preserve the cardinality of finite sets, we conclude that

$$|E| = {n-1 \choose k-1}.$$

In an entirely similar way we can prove that $|K \cap E^c| = {n-1 \choose k}$. Thus we've proven Pascal's identity, very formally and carefully. But we've also elongated the proof tremendously, and obscured the lines of the argument somewhat with set-theoretic details.
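For what it's worth, the two counts and the identity can be confirmed by exhaustive enumeration for small parameters (a numerical spot check, not a substitute for the argument above):

```python
from itertools import combinations
from math import comb

n, k = 6, 3
A = range(1, n + 1)
e = n                                              # the distinguished element
K = [frozenset(s) for s in combinations(A, k)]     # all k-subsets of A

E     = [s for s in K if e in s]                   # k-subsets containing e
other = [s for s in K if e not in s]               # k-subsets avoiding e

assert len(E) == comb(n - 1, k - 1)                # via alpha -> {e} ∪ alpha
assert len(other) == comb(n - 1, k)
assert len(K) == comb(n - 1, k - 1) + comb(n - 1, k) == comb(n, k)
```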

$\endgroup$
1
  • 3
    $\begingroup$ In standard set theory, there is no such thing as $E^c$ (complement of $E$). What you want is "$K \smallsetminus E$" instead of the invalid "$K \cap E^c$". $\endgroup$
    – user21820
    Commented Jun 16, 2017 at 7:52
13
$\begingroup$

Combinatorics has been, for a very long time, a set of more or less evolved recipes, often mixed with discrete probability or number theory.

Binomial coefficients, and other families of numbers like Stirling numbers, Catalan numbers etc. belong to this category.

Some deeper "recipes" appeared in the XIXth century, for example

$$g(n)=\sum_{d|n} f(d) \ \ \ \iff \ \ \ f(n)=\sum_{d|n} \mu(d)g(\tfrac{n}{d})$$

(where "$d|n$" means "$d$" divides "$n$") is a pure jewel, in particular with its generalizations as can be seen in (https://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/BenderGoldman.pdf).

But these functions were connected with arithmetic; for a long time this prevented combinatorics from being considered a "stand-alone" subject.

In the first half of the twentieth century, new trends came into play:

  • the spread of the vocabulary and notation of (naive) set theory, providing simplified and accurate descriptions; an important example: the very definition of a graph (graph theory had remained in a kind of infancy until that time) benefited greatly from being described as a mathematical object "per se", namely a pair $(V,E)$ of vertices and edges.

  • the fantastic development of linear algebra.

"Graph theory" became, from the combinatorial point of view, one of the most important subject of study:

It is only in the second half of the twentieth century that combinatorics was recognized as fully autonomous:

  • with the major influence of Gian-Carlo Rota in the US, and also of many others such as Claude Berge in France. For G.-C. Rota, see (https://en.wikipedia.org/wiki/Gian-Carlo_Rota), where it is explicitly stated that "His series of ten papers on the "Foundations of Combinatorics" in the 1960s is credited with making it a respectable branch of modern mathematics."

  • with the large expansion of analytic combinatorics (see for example the masterpiece book with this title by Flajolet and Sedgewick, whose reference is given in the comment by Michael Lugo), initiated by the (much older) discovery that some important families of numbers, such as the Stirling numbers (https://en.wikipedia.org/wiki/Stirling_number), are coefficients of expansions of analytic functions whose study in turn yields useful properties of those coefficients; a small numerical illustration follows this list.

  • with other techniques that have been raised to the level of respectable mathematics, such as Young tableaux (https://en.wikipedia.org/wiki/Young_tableau) (https://www.ams.org/notices/200702/whatis-yong.pdf).
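Here is the small numerical illustration promised above (a sketch only): it uses the classical fact that, for fixed $k$, the Stirling numbers of the second kind satisfy $\sum_{n\ge k} S(n,k)\,x^n/n! = (e^x-1)^k/k!$, and checks the coefficients of that expansion against the usual recurrence.

```python
from sympy import exp, factorial, symbols

x = symbols('x')

def stirling2(n, k):
    """Stirling numbers of the second kind via the standard recurrence."""
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

N = 8
for k in range(N + 1):
    egf = (exp(x) - 1) ** k / factorial(k)        # EGF of S(n, k) for fixed k
    poly = egf.series(x, 0, N + 1).removeO()
    for n in range(N + 1):
        assert poly.coeff(x, n) * factorial(n) == stirling2(n, k), (n, k)
```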

Yet another typical example (a parallel to Proofs 1 and 2 in the excellent answer of Trevor Gunn): necklace coloring. This question can be tackled in an ad hoc manner, see for example the brilliant four-line solution (https://math.stackexchange.com/q/2587851), or with a much more theoretical approach using Burnside's lemma, i.e., group theory.
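For one common version of the necklace problem, counting the colorings of an $n$-bead circular necklace with $c$ colors up to rotation (which may or may not be exactly the linked question), the two approaches can be compared directly; a minimal Python sketch:

```python
from itertools import product
from math import gcd

def necklaces_burnside(n, c):
    """Colorings of an n-bead necklace with c colors, up to rotation (Burnside)."""
    return sum(c ** gcd(n, k) for k in range(n)) // n

def necklaces_bruteforce(n, c):
    """The same count, by explicitly grouping colorings into rotation orbits."""
    seen, orbits = set(), 0
    for coloring in product(range(c), repeat=n):
        if coloring in seen:
            continue
        orbits += 1
        for r in range(n):                    # mark the whole rotation orbit
            seen.add(coloring[r:] + coloring[:r])
    return orbits

assert all(necklaces_burnside(n, c) == necklaces_bruteforce(n, c)
           for n in range(1, 8) for c in range(1, 4))
```

The brute-force function plays the role of the ad hoc count, while the one-line Burnside formula is the group-theoretic shortcut; for small cases they agree.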

If you want to get an idea of the huge diversity of present-day combinatorics through "formula glasses", take a look at (https://mathoverflow.net/q/214927). For connections with algebra, see the impressive document by Darij Grinberg (http://www.cip.ifi.lmu.de/~grinberg/primes2015/sols.pdf)

There is a new trend, made possible by present-day computers and perfectly well illustrated by the OEIS site (https://oeis.org/): given the first few terms of a sequence, one can look up the most plausible matching sequence, together with a compendium of situations in which that sequence occurs. How should the use of such a site be characterized? Not a rigorous approach, at least in the discovery phase, but surely one that offers huge possibilities for gaining intuition and insight into equivalent situations that could otherwise have remained very obscure...

$\endgroup$
5
  • 1
    $\begingroup$ Note that the Flajolet and Sedgewick book is available online from Flajolet's web page: algo.inria.fr/flajolet/Publications/books.html (even though Flajolet himself is no longer with us). $\endgroup$ Commented Jun 15, 2017 at 18:06
  • 1
    $\begingroup$ @Michael Lugo Thanks for this interesting reference ! $\endgroup$
    – Jean Marie
    Commented Jun 15, 2017 at 18:13
  • 1
    $\begingroup$ Note on my document (thanks for the call-out): It's not that much about combinatorics. My original plan was to write up (in full detail) the combinatorial reasoning involved in standard determinant theory (really just the basics; it gets more complicated as one goes deeper into the woods), and then I kept adding little bits and pieces as I taught combinatorics; but whenever I could avoid being combinatorial, I did so (thus you'll see almost no bijective proofs, no applications of the cycle decomposition or various sorts of insertion, no multisets, etc.). $\endgroup$ Commented Aug 12, 2019 at 9:42
  • 1
    $\begingroup$ For a proper rigorous combinatorics text that actually does serious combinatorics, I'd recommend Nicholas Loehr, Bijective Combinatorics. $\endgroup$ Commented Aug 12, 2019 at 9:42
  • $\begingroup$ Have you seen the toast with a infinity speed? :-) $\endgroup$
    – Sebastiano
    Commented Aug 23, 2020 at 10:34
3
$\begingroup$

What you phrased as "can you find a situation where the statement is incorrect" is better known as proof by contradiction in the realm of propositional logic. Proof by contradiction is a fairly common approach to proving statements in combinatorics and other areas of discrete mathematics, typically after a few initial lemmas and/or theorems have been introduced. This is in contrast to a tautological proof. Solving discrete math problems certainly requires a paradigm shift in thinking compared to algebra, because ultimately not everything is always completely quantifiable.

$\endgroup$
1
  • $\begingroup$ You misread the question; it has nothing to do with proof by contradiction. It is asking why proofs in combinatorics seem hand-wavy to the point that the typical justification one gives for a counting argument that considers cases is: "There are no other cases. (Can you find any?)". $\endgroup$
    – user21820
    Commented Jun 16, 2017 at 7:56
3
$\begingroup$

Addressing your main point: yes, combinatorics is a proper discipline of mathematics, and it is done as rigorously as analysis, geometry, etc.

Ideas from combinatorics crop up a lot in mathematics and a whole lot in computer science, so it's evolved quite a bit from the basic stuff of permutations, combinations etc.

$\endgroup$
0
1
$\begingroup$

Other answers here address many facets of the question, but focus on more "classical" topics in combinatorics. Let me also point out that there is a vibrant and relatively new subfield of combinatorics known as algebraic combinatorics which is quite rigorous: it illustrates how combinatorial thinking can prove deep and new results in algebraic geometry. One recent and brilliant breakthrough in algebraic combinatorics was the proof by Haiman of the n! conjecture.

Changing gears, let me also mention Richard Stanley's wonderful Enumerative Combinatorics books, which I highly recommend for anyone interested in dipping their toes into the waters of "rigorous combinatorics".

$\endgroup$
1
  • $\begingroup$ Stanley's EC books are advanced enough that they mostly work on the post-rigorous level, only sketching the proofs. So I wouldn't recommend them to someone who wants to see how proofs are formalized in the first place. Also, the n! conjecture has been proved through algebraic geometry, so it's not a great example of formalizing combinatorial reasoning either. $\endgroup$ Commented Aug 12, 2019 at 11:30
