58

The purpose of this question is to collect examples from theoretical computer science where the systematic use of computers was helpful

  1. in building a conjecture that led to a theorem,
  2. in falsifying a conjecture or proof approach,
  3. in constructing/verifying (parts of) a proof.

If you have a specific example, please describe how it was done. Perhaps this will help others use computers more effectively in their daily research (which still seems to be a fairly uncommon practice in TCS as of today).

(Flagged as community wiki, since there is no single "correct" answer.)

  • I should say I'm particularly interested in instances of (1) and (2). That is, cases where computers helped shape human intuition in crucial ways.
    – Moritz
    Commented Aug 18, 2010 at 16:03
  • Some of the more recent answers to this question, at the end of the list, are excellent and worth reading. I suggest reading to the end!
    Commented Nov 8, 2010 at 15:14
  • FYI, a similar question on MathOverflow: "Interesting conjectures 'discovered' by computers and proved by humans?"
    – vzn
    Commented Sep 20, 2012 at 17:21

18 Answers

33

A very well-known example is the Four Color Theorem, originally proven by Appel and Haken using computer-assisted exhaustive case checking.

  • (Arguably) not theoretical computer science.
    – Jeffε
    Commented Aug 17, 2010 at 21:16
23

Doron Zeilberger has done substantial work on computer-generated proofs. Most notably, he has written a Maple program to prove hypergeometric identities, and another to prove a class of combinatorial identities. Some of the methods are described in the book A=B.
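
As a small taste of this machinery, SymPy ships an implementation of Gosper's algorithm, the indefinite-summation engine underlying the Wilf–Zeilberger method of A=B. A minimal sketch (assuming a standard SymPy installation; an illustration of the genre, not of Zeilberger's own Maple packages):

    # Gosper's algorithm finds closed forms of indefinite hypergeometric
    # sums fully automatically.
    from sympy import simplify
    from sympy.abc import k, n
    from sympy.concrete.gosper import gosper_sum

    s = gosper_sum(k * 2**k, (k, 0, n))   # sum_{k=0}^{n} k * 2^k
    print(simplify(s))                    # equivalent to 2**(n+1)*(n-1) + 2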

22

Computers have also been used to determine upper bounds on the running times of backtracking algorithms for NP-hard problems, and to construct gadgets for proving inapproximability results. This and other fun-filled topics await you in a short essay (warning: extreme self-promotion ahead) entitled "Applying Practice to Theory". See http://arxiv.org/abs/0811.1305

Given this nice list, it looks like I should update the paper!

  • Yeah, I like it too.
    Commented Aug 19, 2010 at 4:45
20

Fix $n$ points in the plane. Let $T$ be a triangulation (i.e., a planar straight-line graph with the points as vertices that is fully triangulated), and let the weight of the triangulation be the sum of its edge lengths.

Showing that the minimum weight triangulation (MWT) problem is NP-hard was a long-standing open problem, made difficult by the fact that edge lengths involve square roots, and the precision needed to compare sums of such lengths accurately is hard to bound.

Mulzer and Rote showed that MWT is NP-hard, and in the process used computer assistance to verify the correctness of their gadgets. As far as I know, there is no alternative proof.
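
The hardness is for general point sets; for points in convex position every triangulation is a polygon triangulation, and the classic $O(n^3)$ interval dynamic program applies. A minimal sketch of that easy case, to make the weight objective concrete:

    # Minimum weight triangulation for points in CONVEX position, listed in
    # cyclic order. (The Mulzer-Rote hardness concerns general point sets.)
    from math import dist

    def mwt_convex(pts):
        n = len(pts)
        d = lambda i, j: dist(pts[i], pts[j])
        # D[i][j]: least total length of diagonals triangulating the
        # sub-polygon pts[i..j]; the chord (i, j) itself is not counted.
        D = [[0.0] * n for _ in range(n)]
        for gap in range(2, n):
            for i in range(n - gap):
                j = i + gap
                D[i][j] = min(
                    D[i][k] + D[k][j]
                    + (d(i, k) if k - i > 1 else 0.0)  # (i, k) is a new diagonal
                    + (d(k, j) if j - k > 1 else 0.0)  # (k, j) is a new diagonal
                    for k in range(i + 1, j)
                )
        boundary = sum(d(i, (i + 1) % n) for i in range(n))
        return boundary + D[0][n - 1]      # weight = boundary + diagonals

    # Unit square: best triangulation adds one diagonal, weight 4 + sqrt(2).
    print(mwt_convex([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 5.414...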

20

Thomas Hales's proof (his site, MathSciNet) of the Kepler conjecture involved so much case analysis, with the cases in turn verified by computer, that he decided to attempt a formal proof of it. His project to do so is Flyspeck, which he estimates will take 20 years of work.

Researchers in programming languages regularly use computer-aided proofs in their work, though I do not know how essential this is to their research process (it certainly saves them from writing out tons of tedious manipulations, though).

19

A counterexample to the Hirsch conjecture, which is important in linear programming and polyhedral combinatorics, was announced by Francisco Santos very recently. Computer verification was used first to establish some of the properties required of the example, although arguments without the assistance of computational power were discovered afterwards; cf. Gil Kalai's blog post or the paper on the arXiv.

17

This hasn't been mentioned here yet: an automated theorem prover (William McCune's EQP) solved the long-standing open problem of whether every Robbins algebra is Boolean:

http://www.cs.unm.edu/~mccune/papers/robbins/

This is especially notable because the computer developed the entire proof and the problem had been open for several decades.

I'm not completely sure whether this qualifies as TCS, but it is arguably closely related.
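
As a toy illustration of the statement itself (McCune's EQP prover did something much harder, finding an equational proof valid for all Robbins algebras), one can exhaustively enumerate the tiny models; a minimal sketch:

    # Enumerate all commutative, associative + and unary n on {0, 1} that
    # satisfy the Robbins axiom  n(n(x + y) + n(x + n(y))) = x.
    from itertools import product

    def robbins_models(size):
        dom = range(size)
        models = []
        for table in product(dom, repeat=size * size):
            plus = lambda a, b: table[a * size + b]
            if any(plus(a, b) != plus(b, a) for a in dom for b in dom):
                continue  # not commutative
            if any(plus(plus(a, b), c) != plus(a, plus(b, c))
                   for a in dom for b in dom for c in dom):
                continue  # not associative
            for neg in product(dom, repeat=size):
                if all(neg[plus(neg[plus(a, b)], neg[plus(a, neg[b])])] == a
                       for a in dom for b in dom):
                    models.append((table, neg))
        return models

    # Both models found (OR with NOT, and AND with NOT) are the two-element
    # Boolean algebra, consistent with McCune's theorem.
    print(len(robbins_models(2)))  # -> 2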

  • An answer mentioning this was posted in mid-August, but the answer was deleted by the owner in late September. It is a nice example.
    Commented Nov 8, 2010 at 15:18
14

The Karloff–Zwick algorithm for MAX 3-SAT achieves an expected approximation ratio of 7/8. However, the original analysis relied on unproven spherical volume inequalities. These inequalities were later confirmed via computer-assisted proofs in a follow-up paper by Zwick.
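
For context, $7/8$ is exactly the expected fraction a uniformly random assignment satisfies when every clause has three distinct variables; the point of the Karloff–Zwick SDP rounding is to guarantee that ratio on arbitrary instances. A quick sanity-check enumeration (a sketch of the baseline, not of the algorithm):

    # A random assignment satisfies a clause on three distinct variables
    # with probability exactly 7/8, regardless of literal signs.
    from itertools import product

    def sat_prob(clause):
        vs = sorted({abs(l) for l in clause})   # variables appearing
        hits = sum(
            any(bits[vs.index(abs(l))] == (l > 0) for l in clause)
            for bits in product((False, True), repeat=len(vs))
        )
        return hits / 2 ** len(vs)

    print(sat_prob((1, 2, 3)), sat_prob((1, -2, -3)))  # 0.875 0.875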

Besides Hales's proof of the Kepler conjecture mentioned above, the proofs of the Honeycomb conjecture and the Dodecahedral conjecture are computer-aided as well.

13

You can check out Shalosh B. Ekhad's homepage. This computer has been publishing papers for a while (usually with coauthors).

12

Christian Urban used the Isabelle proof assistant to check that one of the main theorems in his PhD thesis was actually a theorem [1]. Using the assistant, a few changes had to be made, but the result essentially stood up.

Similarly, Urban and Narboux discovered errors in Crary's pen-and-paper completeness proof for equivalence checking [2].

Meikle and Fleuriot formalised Hilbert's Grundlagen in Isabelle and demonstrated that, contrary to Hilbert's claims, he was still relying on intuition when putting geometry on an axiomatic footing (IIRC there were holes in his proofs stemming from implicit assumptions about diagrams) [3].

[1]: Revisiting Cut-Elimination: One Difficult Proof is Really a Proof

[2]: Formalising in Nominal Isabelle Crary's Completeness Proof for Equivalence Checking

[3]: Formalising Hilbert's Grundlagen in Isabelle/Isar

10

The results in "The Geometry of Binary Search Trees" by Demaine, Harmon, Iacono, Kane, and Pătraşcu were developed with the help of software to test various charging schemes and to construct optimal asses for small access sequences. (And yes, "asses" is the correct term; see the sketch below.)
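
A hedged sketch of a checker for the underlying condition from the paper (any two points of the set that differ in both coordinates must span an axis-aligned rectangle containing a third point of the set, boundary included):

    from itertools import combinations

    def arborally_satisfied(points):
        pts = set(points)
        for p, q in combinations(pts, 2):
            if p[0] == q[0] or p[1] == q[1]:
                continue               # degenerate rectangle: nothing to check
            lox, hix = sorted((p[0], q[0]))
            loy, hiy = sorted((p[1], q[1]))
            if not any(lox <= x <= hix and loy <= y <= hiy
                       and (x, y) not in (p, q)
                       for (x, y) in pts):
                return False
        return True

    print(arborally_satisfied({(1, 1), (2, 2)}))          # False: empty rectangle
    print(arborally_satisfied({(1, 1), (2, 2), (1, 2)}))  # True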

  • By "asses" I assume you mean "Arborally Satisfied Sets"? Maybe I have given away the fun of the acronym. :)
    – Andrew W.
    Commented Aug 17, 2010 at 22:58
10

N. Shankar verified, fully and mechanically, Gödel's proof of the first incompleteness theorem and the Church–Rosser theorem using the Boyer–Moore theorem prover. There is a book describing how it was done.

8

The Bailey–Borwein–Plouffe (BBP) formula, which makes it possible to compute the $n$th hexadecimal digit of $\pi$ without computing any of the preceding digits, was discovered, according to Simon Plouffe, by computer search using integer relation methods.
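
The formula is $\pi = \sum_{k \ge 0} 16^{-k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)$; digit extraction multiplies through by $16^n$ and discards integer parts via modular exponentiation. A minimal floating-point sketch (adequate only for small $n$):

    # Digit extraction: the fractional part of 16^n * pi is assembled from
    # sums of 16^(n-k) / (8k + j), computed with modular exponentiation.
    def bbp_term(n, j):
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        for k in range(n + 1, n + 6):   # a few rapidly vanishing tail terms
            s += 16.0 ** (n - k) / (8 * k + j)
        return s % 1.0

    def pi_hex_digit(n):
        """Hex digit of pi at (0-indexed) fractional position n."""
        x = (4 * bbp_term(n, 1) - 2 * bbp_term(n, 4)
             - bbp_term(n, 5) - bbp_term(n, 6)) % 1.0
        return "0123456789abcdef"[int(16 * x)]

    print("".join(pi_hex_digit(i) for i in range(8)))  # pi = 3.243f6a88...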

7

There are numerous examples in the average-case analysis of algorithms. Perhaps some of the earliest are the computer experiments that led to the paper "Some unexpected expected behavior results for bin packing" by Bentley, Johnson, Leighton, McGeoch, and McGeoch in STOC 1984.
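
A toy rerun of that kind of experiment, assuming the First Fit heuristic and item sizes uniform on $(0, u]$ (hypothetical parameters; the paper's surprise was how the expected waste behaves as $u$ and $n$ vary):

    # First Fit on random items, measuring the wasted capacity
    # (bins opened minus total item size).
    import random

    def first_fit(items):
        bins = []                      # current load of each open bin
        for x in items:
            for i, load in enumerate(bins):
                if load + x <= 1.0:
                    bins[i] = load + x
                    break
            else:
                bins.append(x)         # nothing fits: open a new bin
        return bins

    def avg_waste(n, u, trials=10):
        waste = 0.0
        for _ in range(trials):
            bins = first_fit([random.uniform(0.0, u) for _ in range(n)])
            waste += len(bins) - sum(bins)
        return waste / trials

    for u in (1.0, 0.5):               # item sizes uniform on (0, u]
        print(u, [round(avg_waste(n, u), 1) for n in (100, 400, 1600)])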

2

A paper of mine with Gupta and Kumar, titled "On a bidirected relaxation for the MULTIWAY CUT problem", was also based on running experiments. In fact, we were trying to prove the converse of what we ended up proving. Vazirani, in the first edition of his book on approximation algorithms, suggested that the bidirected relaxation is at least as good as the relaxation used by Călinescu, Karloff, and Rabani (CKR). In running experiments we saw that the CKR relaxation was always at least as good as the bidirected relaxation, and we proved that this is indeed the case. Vazirani also stated that the bidirected relaxation is invariant under reordering of the terminals; our experiments falsified this as well, since different orders gave different values for the relaxation.

1

In 2018, Aubrey de Grey found a 1581-vertex unit-distance graph that is not 4-colourable. This gives a lower bound of five for the famous Hadwiger–Nelson problem. He used a computer to verify that the graph indeed has chromatic number at least five. Gil Kalai's blog post covers some facts and further developments.

An article in Quanta Magazine reports that he found the example graph "with minimal computer assistance". That is, he used the computer twice for his result: first, he used it (a bit) to find the example, and then he used it (more heavily) to verify that the example really does the job.
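
Verifying such a lower bound is conceptually simple, even though de Grey's 1581-vertex instance needed SAT solvers rather than naive search. A toy backtracking sketch on the Moser spindle, the classic 7-vertex unit-distance graph that already needs four colors:

    # Backtracking k-colorability test; the principle of the verification
    # is the same, only the scale differs.
    def colorable(adj, k):
        n = len(adj)
        color = [-1] * n
        def place(v):
            if v == n:
                return True
            used = {color[u] for u in adj[v] if color[u] != -1}
            for c in range(k):
                if c not in used:
                    color[v] = c
                    if place(v + 1):
                        return True
            color[v] = -1
            return False
        return place(0)

    # Moser spindle: 7 vertices, 11 unit edges, chromatic number 4.
    spindle = [[1, 2, 4, 5], [0, 2, 3], [0, 1, 3], [1, 2, 6],
               [0, 5, 6], [0, 4, 6], [3, 4, 5]]
    print(colorable(spindle, 3), colorable(spindle, 4))  # False True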

1

Some recent results in state complexity were found with the help of systematic brute-force search for worst-case examples. This is feasible because there are not too many deterministic finite automata with a small number of states, at least if we concentrate on binary or ternary alphabets. Moreover, in many cases there are families of worst-case examples for $1, 2, 3, 4, 5$ states and so on, not only for all numbers of states $n$ greater than some largish $n_0$.

I've found this explicitly described in the introduction of the following paper.

Guangwu Liu, Carlos Martin-Vide, Arto Salomaa, Sheng Yu: State complexity of basic language operations combined with reversal. Information and Computation 206 (2008) 1178–1186

The relevant quote is as follows:

All the main results presented in this paper, especially the lower bound results, were obtained with the help of a large number of experiments using computer software. For obtaining the tight lower bound for the reversal of union, for example, we first proposed possible examples from our experiences. Then we checked the examples using our computer software Grail+. There were many iterations in modifying the examples and checking by the software again. After satisfying examples were obtained with experiments, we tried to prove the result theoretically. When we had problems in obtaining a theoretical proof, we were back to experiments again. We consider that this is a relatively new approach in theoretical computer science research. We think that our results presented in the paper would be very difficult to obtain without this new approach.

For many results in state complexity, the upper bounds follow from standard constructions, and the difficult part is to find worst-case examples that prove a matching lower bound. Many of the previous papers on state complexity discussed the proofs but did not describe how the examples were found. I remember, though, that by 2008 the above method had already been around for at least a few years and was considered a trick of the trade.
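
A minimal sketch of this kind of search, for reversal over a binary alphabet (not the authors' Grail+ workflow): enumerate all $n$-state DFAs with all states reachable, determinize the reversed automaton, and record the largest result. By Brzozowski's observation, the reachable subset automaton of the reversal of such a DFA is already minimal, so its size is exactly the state complexity:

    # Brute-force the worst-case state complexity of reversal over all
    # n-state DFAs on a binary alphabet (states 0..n-1, start state 0).
    from itertools import product

    def reachable_all(n, delta):
        seen, stack = {0}, [0]
        while stack:
            q = stack.pop()
            for a in (0, 1):
                if delta[q][a] not in seen:
                    seen.add(delta[q][a])
                    stack.append(delta[q][a])
        return len(seen) == n

    def reversal_size(n, delta, finals):
        # Determinize the reversed automaton; the reachable subsets form
        # a minimal DFA when every original state is reachable (Brzozowski).
        start = frozenset(finals)
        seen, stack = {start}, [start]
        while stack:
            s = stack.pop()
            for a in (0, 1):
                t = frozenset(q for q in range(n) if delta[q][a] in s)
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return len(seen)

    def worst_reversal(n):
        best = 0
        for delta in product(product(range(n), repeat=2), repeat=n):
            if not reachable_all(n, delta):
                continue
            for fmask in range(1, 2 ** n):
                finals = {q for q in range(n) if fmask >> q & 1}
                best = max(best, reversal_size(n, delta, finals))
        return best

    print(worst_reversal(3))  # -> 8, matching the known tight bound 2^n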

By the way, I think that many results in state complexity are and were also obtained without computer assistance, in the traditional way of doing mathematics.

1

My recent paper with Karthekeyan Chandrasekaran, titled "Hypergraph $k$-cut in deterministic polynomial time", was based on extensive computational experiments. We explored different conjectures and submodular functions and found counterexamples to several different approaches. The truth of the main structural theorem was suggested by experiments, and we then proved it by hand. We acknowledge this in the paper.

