11
$\begingroup$

Other than trivial cases where computers checked all cases by brute force, have programs led us to find new proofs? Since programming has long been its own field, I suppose there must be some transferable tricks from it that could assist us in doing mathematics, especially with the development of proof assistants.

$\endgroup$
2
  • 1
    $\begingroup$ Comparing your phrase "trivial ones" with those quite deep computer aided proofs that I'm aware of which worked by case checking, I'm hesitant to suggest an answer. Would you be interested in a situation where the theory underlying a computer science topic has applications in mathematics? I'm thinking particularly of applications to group theory of regular languages and finite deterministic automata. $\endgroup$
    – Lee Mosher
    Commented Jul 6, 2023 at 16:07
  • $\begingroup$ Well yes! I am actually just beginning this topic, and I am trying to get as much inspiration as possible @LeeMosher $\endgroup$ Commented Jul 6, 2023 at 18:34

6 Answers

16
$\begingroup$

Programs are proofs and proofs are programs: this is the Curry–Howard correspondence (I am simplifying a bit). The correspondence is rich and open-ended, and remains a subject of active research.

An example where programming concepts directly contributed to new discoveries in mathematics is classical Krivine realizability, a novel method for building models of classical set theory that is different from forcing. It is based on the idea that programs may witness (realize) mathematical facts. The particular kinds of programs used in Krivine realizability manipulate stacks and continuations, which are not anything a "normal" mathematician would be familiar with, but are standard tools in compilers and programming languages.
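To give a flavor of what a continuation is (a hand-rolled sketch, not anything specific to Krivine's machinery): in continuation-passing style, every function receives an explicit argument `k` representing "the rest of the computation", and invoking a captured continuation out of turn gives the jump-like control effects that realize classical reasoning.

```python
# Continuation-passing style (CPS): each function takes an explicit
# continuation `k`, a function standing for the rest of the computation.

def fact_cps(n, k):
    """Factorial in CPS: instead of returning n!, pass it to k."""
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

def product_cps(xs, abort):
    """Product of a list. Calling `abort` discards the entire pending
    continuation, an escape in the style of call/cc."""
    def go(xs, k):
        if not xs:
            return k(1)
        if xs[0] == 0:
            return abort(0)  # jump out, skipping all pending multiplications
        return go(xs[1:], lambda r: k(xs[0] * r))
    return go(xs, lambda r: r)
```

For example, `fact_cps(5, lambda x: x)` evaluates to 120, and `product_cps([2, 0, 4], lambda v: v)` returns 0 immediately without performing any multiplication.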

$\endgroup$
9
$\begingroup$

In HoTT there are clearly new proofs (of known theorems) with PL ideas.

The encode-decode method (section 8.9) introduced by the HoTT book gives us a novel approach to characterizing the path space of a given type. It mainly involves defining the following functions, given a pointed type $(A, a_0)$ and a type family $\text{Code}: A → \mathcal U$:

c0 : Code(a0)
decode : ∀ x : A, Code(x) → || a0 = x ||
...

In traditional topology, we prove $π_1(S^1) ≃ \mathbb{Z}$ by using a universal covering, while in HoTT we use path induction (a.k.a. dependent pattern matching without K), introduced in Martin-Löf type theory, which is certainly a type-theoretic idea.
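For concreteness, the circle computation specializes roughly as follows (a sketch in the spirit of the HoTT book, with details such as the use of circle induction elided):

```latex
% Code family over the circle: integers over the basepoint,
% with loop acting by successor (via univalence).
\mathrm{Code} : S^1 \to \mathcal{U}, \qquad
\mathrm{Code}(\mathsf{base}) :\equiv \mathbb{Z}, \qquad
\mathrm{ap}_{\mathrm{Code}}(\mathsf{loop}) := \mathsf{ua}(\mathsf{succ})

% encode transports 0 along a path; decode sends n to loop^n.
\mathrm{encode}(x, p) :\equiv \mathsf{transport}^{\mathrm{Code}}(p, 0),
\qquad
\mathrm{decode}(\mathsf{base}, n) :\equiv \mathsf{loop}^{n}
```

Showing that encode and decode are mutually inverse then yields $π_1(S^1) ≃ \mathbb{Z}$.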

$\endgroup$
6
  • 2
    $\begingroup$ Could the downvotes please explain why they think this answer is not helpful? $\endgroup$ Commented Jul 5, 2023 at 15:59
  • 2
    $\begingroup$ Perhaps the answer could be made more complete by explaining the PL origins of the encode-decode method? $\endgroup$
    – Trebor
    Commented Jul 5, 2023 at 22:11
  • 2
    $\begingroup$ @Trebor: it's difficult to guess that's the grievance from just seeing a "-1". To answer your question, encode-decode is a special case of reification, which is a programming technique for turning abstract values into concrete ones. But it's a windy road, because the way encode-decode method is used in HoTT can be traced back to old ideas from homotopy theory (my internet sucks at the moment, I can't find the relevant references). $\endgroup$ Commented Jul 6, 2023 at 14:04
  • 1
    $\begingroup$ I’m not sure the HoTT encode-decode method is really an example here — as far as I remember from the IAS special year, it originally emerged as a way of handling universal covering spaces that was particularly well-suited to type-theoretic implementation; the connection with reification was observed fairly early, but wasn’t a big part of the original inspiration. $\endgroup$ Commented Jul 8, 2023 at 14:31
  • 2
    $\begingroup$ @PeterLeFanuLumsdaine: that's why I said it was a windy road. I am pretty sure, though, that the computer scientists recognized reification and used the idea to expand the encode-decode method. $\endgroup$ Commented Jul 10, 2023 at 19:59
7
$\begingroup$

Theorem: The set of adjacent transpositions $\{(i,\,i+1),\ 1\leq i<n\}$ generates the group of all permutations of $\{1, ..., n\}$.

Proof: The bubble sort algorithm always terminates.

Twelve years ago, when I was an undergraduate, proving this theorem was an exam question; my algebra teacher was not familiar with the bubble sort algorithm, and I got a zero for this question.

This is one example of many. There are lots of theorems that can be rephrased as "There exists some object...", and a constructive proof consists in writing an algorithm that builds the object, and proving that the algorithm is correct.
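The argument can be sketched in code (a toy illustration of the proof idea, not an exam answer): bubble sort records a sequence of adjacent transpositions that sorts a given permutation, and replaying those swaps in reverse order rebuilds the permutation from the identity, so adjacent transpositions generate the whole group.

```python
# Sorting a permutation by bubble sort uses only adjacent swaps (i, i+1).
# Since every permutation can be sorted this way, every permutation is a
# product of adjacent transpositions.

def bubble_sort_swaps(perm):
    """Sort `perm` with bubble sort, recording each adjacent swap."""
    a, swaps = list(perm), []
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swaps.append(i)          # the transposition (i, i+1)
    assert a == sorted(perm)             # bubble sort produced sorted output
    return swaps

def apply_swaps(swaps, n):
    """Apply the transpositions, in order, to the identity on n points."""
    a = list(range(n))
    for i in swaps:
        a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

Since each transposition is its own inverse, applying the recorded swaps in reverse order to the identity recovers the original permutation, which is exactly the generation statement.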

$\endgroup$
5
  • 5
    $\begingroup$ And your answer truly deserves a zero because you did not prove that bubble-sort always terminates, which is not only just as difficult as the original question, but also not enough at all because you also need to prove that bubble-sort produces the correct output. $\endgroup$
    – user21820
    Commented Jul 6, 2023 at 11:07
  • 1
    $\begingroup$ That bubble sort always terminates is obvious from the structure of the code when implemented as 2 nested fors. $\endgroup$
    – Pablo H
    Commented Jul 6, 2023 at 14:49
  • 4
    $\begingroup$ As an answer to that exam question I agree you shouldn't get full credit, but this is a good answer to this question. $\endgroup$
    – Trebor
    Commented Jul 6, 2023 at 14:58
  • 2
    $\begingroup$ This observation also happened to be my 'hey, ain't that neat'-style derivation of that fact concerning $S_n$ (I believe my very first formulation had been more along the lines of 'any finite deck of cards can be sorted, moving their vertical co-ordinates through each others', and without loss of generality assuming there never to be three or more at the same altitude')--kind of reminiscent of 'spaghetti sort' and 'proofs with physical flavor'. $\endgroup$ Commented Jul 7, 2023 at 18:04
  • 1
    $\begingroup$ I do agree that the correctness of bubble-sort is rather heavier and a 'sledgehammer' in comparison to the original fact, but if one happens already to know or believe in the former, then I like this method as a redundant or 'backdoor' proof for the latter. $\endgroup$ Commented Jul 7, 2023 at 18:04
3
$\begingroup$

The hierarchy of formal languages (also known as the Chomsky hierarchy) and their associated recognition automata has found applications in group theory.

In particular, the theory of automatic groups, which was introduced by William Thurston and developed in the book "Word Processing in Groups", is an application of the theory of deterministic finite automata and regular languages; this is the theory that underlies the lexical pre-processors of programming-language compilers.

The discovery of the theory of automatic groups was, moreover, directly aided by use of the UNIX utility grep, whose name comes from the ed editor command g/re/p ("globally search for a regular expression and print"), a tool built on the same regular-expression machinery. See this mathoverflow link for a story of that discovery.
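To make the connection concrete, here is a toy sketch (my own illustration, not taken from the automatic-groups literature): a deterministic finite automaton serving as a word acceptor for the group $\mathbb{Z}$ with generator $a$ and inverse $A$, accepting exactly the reduced words $a^n$ and $A^n$, one per group element.

```python
# A toy word acceptor for the group Z, generated by a and its inverse A.
# The accepted language -- the empty word, a^n, and A^n -- contains exactly
# one representative per group element, a defining ingredient of an
# automatic structure.

def accepts(word):
    state = "start"                      # no letters read yet
    for ch in word:
        if state == "start":
            state = {"a": "pos", "A": "neg"}.get(ch, "dead")
        elif state == "pos":
            state = "pos" if ch == "a" else "dead"
        elif state == "neg":
            state = "neg" if ch == "A" else "dead"
        else:                            # dead state: no way back
            return False
    return state in ("start", "pos", "neg")
```

For instance, `accepts("aaa")` and `accepts("AA")` hold, while the unreduced word `"aA"` is rejected.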

$\endgroup$
5
  • 1
    $\begingroup$ A quibble: this is a hierarchy of formal languages or formal grammars completely independent of whether they are programming languages. $\endgroup$
    – Max New
    Commented Jul 10, 2023 at 12:25
  • $\begingroup$ That's why I wrote my comment under the OP first. $\endgroup$
    – Lee Mosher
    Commented Jul 10, 2023 at 14:03
  • $\begingroup$ I added a bit to my post to emphasize a stronger connection to an actual computer program, if that helps. $\endgroup$
    – Lee Mosher
    Commented Jul 10, 2023 at 14:12
  • $\begingroup$ I think the answer is fine, because formal languages are a substratum of programming languages, but to call the Chomsky hierachy "a hiearachy of programming languages" is inaccurate. In what sense are, say, regular languages programming languages? I went ahead and edited your answer (which I believe is the norm here, answers are not owned by people). Please revert my edit if you disagree – I am not entering an edit war :-) $\endgroup$ Commented Jul 12, 2023 at 8:05
  • $\begingroup$ Formal languages is better but I added a phrase to emphasize their associated automata. $\endgroup$
    – Lee Mosher
    Commented Jul 12, 2023 at 13:39
2
$\begingroup$

Nothing deep, and nothing to do with proof assistants, but it does answer the question asked: in my paper "Gergonne's Card Trick, Positional Notation, and Radix Sort" I use radix sort to explain why Gergonne's classic three-pile card trick works.
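For the curious, here is a quick simulation of the three-pile trick (my own sketch, not code from the paper): each round deals the 27-card deck round-robin into three piles and restacks with the pile containing the chosen card in a requested position; after three rounds the card lands at the index whose base-3 digits are the three requested positions, which is exactly a radix sort on those digits.

```python
# Gergonne's three-pile trick as a base-3 radix sort (toy simulation).

def one_round(deck, card, slot):
    """Deal 27 cards round-robin into 3 piles, then restack with the
    pile containing `card` placed at position `slot` (0, 1, or 2)."""
    piles = [deck[i::3] for i in range(3)]
    k = next(i for i in range(3) if card in piles[i])
    others = [piles[i] for i in range(3) if i != k]
    ordered = others[:slot] + [piles[k]] + others[slot:]
    return [c for pile in ordered for c in pile]

def trick(card, slots):
    """Run three rounds; return the final index of `card` in the deck."""
    deck = list(range(27))
    for slot in slots:
        deck = one_round(deck, card, slot)
    return deck.index(card)
```

With requested positions $d_1, d_2, d_3$ the card ends at index $d_1 + 3d_2 + 9d_3$, regardless of where it started: each round fixes one more base-3 digit of its position, just as radix sort fixes one digit per pass.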

$\endgroup$
2
$\begingroup$

The programming idea of speculative execution, coupled with backtracking when needed, exhibited in the seminal "D5 paper", grew into a huge industry: the "dynamical methods in algebra". Many new proofs in constructive mathematics arose from this line of work. Some of these proofs are also interesting from the point of view of classical mathematics (for instance, I'd argue that this is the case for Yengui's proof of Suslin's Lemma).
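A caricature of the idea in code (my own toy example, far simpler than the actual D5 machinery): when a result depends on whether a parameter is zero and that zero-test cannot be decided, fork the computation and continue under both assumptions.

```python
# Toy "dynamic evaluation": compute the degree of a polynomial whose
# coefficients may be undecided parameters (represented as strings).
# Each undecided zero-test forks the computation into two branches.

def dynamic_degree(coeffs):
    """coeffs: highest degree first; numbers or parameter names (strings).
    Returns a list of (assumptions, degree) pairs, one per branch."""
    results = []

    def go(i, assumptions):
        if i == len(coeffs):
            results.append((assumptions, None))   # the zero polynomial
            return
        c = coeffs[i]
        if isinstance(c, str):                    # undecided: fork both ways
            go(i + 1, assumptions + [f"{c} = 0"])
            results.append((assumptions + [f"{c} != 0"], len(coeffs) - 1 - i))
        elif c == 0:
            go(i + 1, assumptions)
        else:
            results.append((assumptions, len(coeffs) - 1 - i))

    go(0, [])
    return results
```

For the polynomial $ax^2 + x + 2$ this yields degree 1 under the assumption $a = 0$ and degree 2 under $a \ne 0$, without ever deciding which case holds.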

$\endgroup$
