I was always under the impression that separate tactic languages were generally considered vital for writing long proofs. I see tactic languages as a kind of interpreted DSL for generating boilerplate code, with far fewer static guarantees and much harder interactive stepping. This looks to me like a clear code smell, but I always assumed it was a necessary evil.
I solved most of Software Foundations Volume 1 (and part of Volume 2), and while doing it I spent most of my time trying to understand how the tactic language works, not actually grasping PL theory or formal verification. After that, I found out that an Agda "variant" of SF exists (PLFA), and I have no such issues with it. Almost all proofs are clear to me at first sight, at least once you understand Agda syntax and the mathematical idea behind the proof. The PLFA experience report notes that it was not obvious in advance that a language without tactics would avoid making proofs much longer, but in practice it worked very well. While PLFA covers only part of SF, so far I see no reason why the same approach would not work for the rest of it.
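To illustrate the contrast I have in mind, here is a minimal sketch in Lean 4 (my own example, not taken from either book; the primed names avoid clashing with Lean's built-in `Nat.zero_add`). The same fact can be proved as an explicit recursive program, which I can read at first sight, or as a tactic script, which I can only follow by stepping through it:

```lean
-- Term style (Agda-like): the proof is an explicit recursive program
theorem zero_add' : (n : Nat) → 0 + n = n
  | 0     => rfl
  | n + 1 => congrArg Nat.succ (zero_add' n)

-- Tactic style: the proof is a script that elaborates to such a term
theorem zero_add'' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```

Both produce the same underlying proof term; the question is which surface form scales better.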
I have been looking and asking for examples of tactic usage in practice where working without tactics would be painful, but found nothing that convinced me. All the examples I could understand look to me like either a tactic language used to generate boilerplate (instead of a design that makes proper proof reuse easy) or plain calls to auto/ring/simp, which could work without a full tactic language. I only understand PL theory and basic abstract algebra/topology well, and I cannot follow complex tactic code, which may be why I have not found suitable examples.
I know that SSReflect was used to prove several very complex results, but as far as I can tell (without being able to follow those proofs in detail), the "reflect pattern" part of SSReflect seems much more important for such proofs than the tactic language itself.
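By the reflect pattern I mean linking a decidable boolean computation to a proposition, so that proofs can proceed by computation. A minimal sketch in Lean 4 (the names `isZero` and `isZero_iff` are my own, purely illustrative):

```lean
-- A boolean test, defined by computation
def isZero : Nat → Bool
  | 0 => true
  | _ => false

-- The reflection lemma: the boolean test is faithful to the proposition
theorem isZero_iff (n : Nat) : isZero n = true ↔ n = 0 := by
  cases n <;> simp [isZero]

-- Once reflected, concrete instances are settled by pure computation
example : isZero 0 = true := rfl
```

Nothing in this pattern itself seems to require a tactic language; it is a library design choice.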
So my current hypothesis is that any use of a tactic language can be refactored away using proof patterns/instrumentation that are possible (though not always actually implemented) in an Agda-style language. By that I mean:
- Boolean reflection proof pattern
- Libraries designed for proper code reuse by means of class types/parameterized modules/polymorphism
- Interactive programming with case splitting and auto tooling (which, AFAIU, is not state of the art in Agda currently)
- Large-scale decision procedures, like `ring`/`omega` tactics, and external ATP integrations
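On the last point, such decision procedures can be exposed as single closing steps without any user-level tactic programming. A sketch, assuming a recent Lean 4 where `omega` is built in (`ring` would additionally require Mathlib):

```lean
-- `omega` is a decision procedure for linear arithmetic over Nat/Int;
-- it closes the goal in one step, with no hand-written tactic script
example (a b : Nat) : a + b + a = 2 * a + b := by
  omega
```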
So to (dis)prove this hypothesis, I am asking for specific, clear, and (hopefully) short examples of theorems/theories/libraries whose proofs are simple/clear/nice with a tactic language but would not be without one (even using the tools listed above instead).
I would stress that while this question may have some flame potential, I think it is clearly stated, and the examples I am looking for could be genuinely useful for discussing proof techniques. So even if you disagree with me, please do not vote to close.
Partially related question:
UPD:
I would like to reiterate that I am asking for more specific examples not to confront anyone, and not necessarily because I disagree. I just think that while general-direction answers make sense too, in questions of this kind one must specify a concrete task, a concrete measure of "simple proof", and an overall goal; otherwise the discussion becomes too vague and prone to turning into a "dynamic typing is bad" kind of holy war. Depending on proof/coding style and on the task, people can evaluate the usability of proof/code construction very differently.