Yes and no. It depends on what you mean by "reason about the complexity of programs".
The yes side is easy. Complexity theory is just math, and all math can be formalized. Indeed, there has been work formalizing complexity theory in various proof assistants, including the Coq work that Trebor cited above, work in Isabelle (and I'm sure other theorem provers as well), and proposed work in Lean. The fact that comparison-based sorting requires Ω(n log n) comparisons in the worst case is certainly a formally provable mathematical fact.
The trickier bit, and I think the bit you are confused about, is how one verifies that an actual implementation of a sorting algorithm is O(n log n). This question arises for any property you want to verify of code: how do we verify that a C compiler is correct, or that Rust code is correct, or that Lean code is correct? There are lots of methods to do this. I'm not an expert on all the approaches, so I won't try to speculate on what they are; others can chime in. But the one approach I am familiar with (and the approach I think you have in mind) is how one typically proves a property of a Lean or Coq function (or code in any dependently typed language).
Consider this Lean 4 function:
def foo (n : Nat) : Nat := 0
In Lean, it is possible to prove all sorts of properties about it, like
theorem foo_is_constant (n m : Nat) : foo n = foo m := rfl
But now consider these two functions:
def bar (n : Nat) : Nat := (List.range n).mergeSort (· ≤ ·) |>.map (fun _ => 0) |>.sum
def baz (n : Nat) : Nat := (fun _ => 0) (bar n)
Assuming I got the syntax right (which I probably didn't), bar builds the list [0, ..., n-1], sorts it, replaces all the elements with 0, and adds them up. Let's assume the sort algorithm is O(n log n) even when the list is already sorted, and that there are no compiler optimizations. Then bar and baz are O(n log n), while foo is O(1)-ish (or maybe O(n) because of the natural-number input?). But because Lean satisfies function extensionality, all three are provably equal. And even without function extensionality (which is not provable in pure DTT unless you add extra axioms), by beta reduction we still have that foo = baz.
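Concretely (an untested sketch, assuming the definitions above compile), the first of these equalities holds by definitional unfolding alone:

```lean
-- foo and baz are definitionally equal: unfolding baz and beta-reducing
-- (fun _ => 0) (bar n) yields 0, which is exactly the body of foo.
example : foo = baz := rfl

-- Proving foo = bar takes real work (funext plus a lemma that a sum of
-- zeros is zero), but it is still provable, since the two functions
-- agree on every input.
```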
A fundamental property of equality is that equal things have the same properties, and this is provable in almost any DTT system. So if we could express and prove in Lean that foo is O(n), then we could do the same for baz. So while one could prove a mathematical fact about the most optimal implementation of a constant function foo, one can't prove anything special about a particular implementation of a constant function in this way.
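This "equal things have the same properties" principle is just substitution. As a sketch (assuming the earlier definitions, with P standing in for any hypothetical complexity predicate on functions):

```lean
-- Any property P of functions transfers across an equality proof via
-- substitution (▸), so a hypothetical "runs in linear time" predicate
-- proved of foo would immediately hold of baz as well.
example (P : (Nat → Nat) → Prop) (h : foo = baz) (hp : P foo) : P baz :=
  h ▸ hp
```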
So we can't prove runtime complexity results in the same way as other results in Lean, Coq, Agda, etc. We need another way. What is the best way? I don't know, and I don't think it has been worked out. One could certainly develop a model of Lean code and prove properties about that, just like one can develop a model of Rust or C code. One can also just work in computational frameworks like lambda calculus or Turing machines. There may also be fancier tricks which are more practical and useful for programmers writing Lean code, or maybe even new logics which will one day take into account runtime complexity. There may also be published research in this area already, but I'm unaware of the state of the art here.
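For a flavor of the "model" approach, one toy trick (my own illustrative sketch, not established practice) is to instrument functions so they return a step count alongside their result. Two extensionally equal programs can then have provably different costs, because the cost is part of the model's output rather than a property of the erased computation:

```lean
-- Toy cost model: each instrumented function returns (result, steps).
-- The step count is ordinary data, so it survives in the logic even
-- though the two underlying programs are extensionally equal.
def fooC (n : Nat) : Nat × Nat := (0, 1)

def sumZerosC : Nat → Nat × Nat
  | 0 => (0, 1)
  | n + 1 =>
    let (s, c) := sumZerosC n
    (0 + s, c + 1)

-- Both compute 0 as their first component, but their costs differ:
-- fooC is constant (1 step) while sumZerosC takes n + 1 steps.
example : (sumZerosC 5).2 = 6 := rfl
```

Of course, this only proves facts about the instrumented model, not about what the compiler actually does with the original code; relating the two is exactly the hard part.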