
I came across this line in some class notes I am reading: "In numerical linear algebra, we usually don't need to find the eigenvalues of a non-symmetric matrix."

Can someone explain why this is the case? I understand that for symmetric matrices there are many nice properties of the eigenvalues. For example, the eigenvalues of a real symmetric matrix are real, and the SVD comes from the eigenvalues of $A^TA$, which is symmetric, etc.

But why are we so confident that we usually don't need to find the eigenvalues of a non-symmetric matrix? Is it purely because the nice properties of symmetric matrices make us tend to formulate our problems that way?

  • Just speculating, but it may be that the SVD is more useful for non-symmetric matrices. Things like the condition number can be expressed in terms of the singular values, for example. – angryavian (Aug 5, 2021 at 23:50)
  • A lot of matrices that arise "naturally" are symmetric. When you do some more numerical linear algebra, you will find that many algorithms only require you to work with symmetric matrices, or that you can transform the problem into one that is solved by an algorithm that utilizes symmetric matrices. I don't remember any examples though, since it's been a while since I last did numerical linear algebra. – Guenterino (Aug 5, 2021 at 23:52)
  • I don't know what distinguishes "numerical-linear-algebra" from ordinary "linear-algebra", but in the latter setting I would say that the premise of this question is false. There are plenty of situations where we are interested in eigenvalues of non-symmetric matrices. – Lee Mosher (Aug 5, 2021 at 23:59)
  • I would say that finding eigenvalues of non-symmetric matrices is more difficult, so we try to formulate problems in a way that proceeds along the easier path (working with symmetric matrices). Golub and Van Loan's book "Matrix Computations" devotes separate chapters (7 and 8) to the unsymmetric and symmetric eigenvalue problems. – hardmath (Aug 6, 2021 at 0:02)
  • See also the answers in the cross-post: scicomp.stackexchange.com/questions/38873/… (Aug 10, 2021 at 15:42)

1 Answer


As Lee Mosher rightfully points out, in one sense the premise of this question is false: we certainly are interested in the eigenvalues of non-symmetric matrices. Any nice-enough dynamical system $\dot{x} = f(x)$ can be linearized around an equilibrium point, yielding a linear system which I will abuse notation and call $\dot{x} = Ax$. Sometimes we are lucky and the matrix $A$ is symmetric (or skew-symmetric, etc.), but often we are not. Many important questions concerning the local behavior of the dynamical system are governed by the eigenvalues of the nonsymmetric matrix $A$. In particular, the equilibrium is stable if all the eigenvalues of $A$ have negative real parts. This is a very important example and is relevant to many areas of science, engineering, and technology.
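
Here is a minimal sketch of that stability test in Python. The damped-pendulum Jacobian is a hypothetical example of my own choosing; the test itself (checking that every eigenvalue of the linearization has negative real part) is the standard one described above.

```python
import numpy as np

# Jacobian of the damped pendulum x' = (omega, -sin(theta) - c*omega)
# evaluated at the equilibrium (theta, omega) = (0, 0).
c = 0.5  # hypothetical damping coefficient
A = np.array([[ 0.0, 1.0],
              [-1.0,  -c]])

eigvals = np.linalg.eigvals(A)  # A is nonsymmetric, so these may be complex
print("eigenvalues:", eigvals)
print("locally stable:", bool(np.all(eigvals.real < 0)))  # True: both real parts are -0.25
```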

Despite the fact that we do care about the nonsymmetric eigenvalue problem, there is also something importantly true in the premise of your question. Let me rephrase your question a bit: can solving nonsymmetric eigenvalue problems give insight into scientific and engineering applications?

There's a famous example due to Trefethen (see this SIAM Review article, specifically Examples 8–10) in fluid mechanics. If one plots the eigenvalues of the linearized Navier–Stokes equations for flow in a 2D channel, one finds that the flow is stable below a Reynolds number of $\approx 5772$ and unstable above it. The problem? This has nothing to do with what is actually observed in the laboratory, where turbulence typically sets in around a Reynolds number of $\approx 1000$!

There are a couple things happening here:

  • The eigenvalues of a nonsymmetric matrix can be much more sensitive to perturbations than the eigenvalues of a symmetric matrix. Such sensitive eigenvalues can be perturbed by large amounts by errors in the matrix $A$, whether backward errors induced by the numerical method, measurement errors in the entries of $A$, or modeling errors from the failure of the physical model to exactly describe reality.
  • Even if we knew the exact model, had no measurement error, and solved everything to perfect accuracy with no numerical error, there would still be another problem: transient dynamics. The theory says that if all of the eigenvalues of $A$ have real part less than zero, then solutions of $\dot{x} = Ax$ eventually decay to zero. The key word is eventually. Even for supposedly stable systems, the norm of the matrix exponential $\|e^{tA}\|$ very often grows very large (see Fig. 8 in the linked Trefethen paper) before it decays to zero. If these so-called transient dynamics can magnify a small deviation from the equilibrium into a large one, then any nonlinearity in the system will start to kick in. (A small sketch of this transient growth follows this list.)
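
Here is a minimal sketch of transient growth, using a small hypothetical nonnormal matrix of my own choosing: both eigenvalues have negative real part, so the system is asymptotically stable, yet $\|e^{tA}\|$ grows by more than an order of magnitude (peaking around $25$ near $t \approx 0.7$) before decaying.

```python
import numpy as np
from scipy.linalg import expm

# Stable but highly nonnormal: the eigenvalues are -1 and -2,
# but the large off-diagonal entry produces transient growth.
A = np.array([[-1.0, 100.0],
              [ 0.0,  -2.0]])

print("eigenvalue real parts:", np.linalg.eigvals(A).real)  # all negative
for t in [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]:
    norm = np.linalg.norm(expm(t * A), 2)  # spectral norm of exp(tA)
    print(f"t = {t:4.1f}   ||exp(tA)|| = {norm:9.3g}")
```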

Both of these issues are purely matters of perturbation theory and matrix analysis; we haven't even gotten to computational issues. There are several, but let me mention two:

  • The eigenvectors of a nonsymmetric matrix can be highly ill-conditioned, which causes all sorts of numerical stability problems. One can stably compute an orthogonal triangularization (the Schur form) of a nonsymmetric matrix, but dealing with the full eigenvector matrix is a huge problem numerically. (A sketch contrasting the symmetric and nonsymmetric cases follows this list.)
  • It is very hard to reliably find the rightmost (largest real part) eigenvalue of a large sparse matrix. Krylov methods like to find the eigenvalues that are largest in magnitude. Often the rightmost eigenvalue is much smaller in magnitude than some other eigenvalues (but, frustratingly, is not the smallest in magnitude either!), making the stability of a dynamical system described by a large sparse matrix effectively uncomputable in many instances. (Exercise for the reader: try to break eigs in MATLAB for finding the eigenvalue with the largest real part; it may be easier than you expect.)
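
Here is a minimal sketch of the eigenvector-conditioning issue, using a hypothetical bidiagonal example. A symmetric matrix always has an orthonormal eigenvector basis (condition number 1), while even a mildly nonnormal matrix with distinct eigenvalues can have an eigenvector matrix whose condition number is astronomical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Symmetric case: eigh returns an orthonormal eigenvector basis.
S = rng.standard_normal((n, n))
S = S + S.T
_, V_sym = np.linalg.eigh(S)
print("cond(V), symmetric:", np.linalg.cond(V_sym))  # ~1

# Nonnormal case: bidiagonal with distinct eigenvalues 1, 1 + 1/49, ..., 2.
J = np.diag(np.linspace(1.0, 2.0, n)) + np.diag(np.ones(n - 1), k=1)
_, V_non = np.linalg.eig(J)
print("cond(V), nonnormal:", np.linalg.cond(V_non))  # enormous (~1e16 or worse)
```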

The raison d'être of eigenvalues is describing the dynamics of linear systems; yet, as the bullets above demonstrate, they often fail to be useful tools for doing this in practice. The last two bullets (the computational issues) could possibly be addressed by improvements in algorithms and extended-precision arithmetic, but the first two are more fundamental.

There is something of a solution to this problem: pseudospectra. The $\varepsilon$-pseudospectrum is a set in the complex plane which describes the sensitivity of the eigenvalues under perturbations of size $\varepsilon$. If the pseudospectra are closely contained around the eigenvalues, then the eigenvalues computed numerically are probably stable and physically meaningful; otherwise, the computed eigenvalues should be treated with suspicion. Trefethen argues we shouldn't just compute nonsymmetric eigenvalues, but plot them together with their pseudospectra. Pseudospectra are the subject of the review article linked above and are summarized in this one-page note. (A minimal sketch of a pseudospectrum computation is below.)
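
Here is a minimal sketch of a pseudospectrum computation on a grid, assuming the standard characterization: $z$ lies in the $\varepsilon$-pseudospectrum of $A$ if and only if $\sigma_{\min}(zI - A) \le \varepsilon$. The matrix is the hypothetical nonnormal example from the transient-growth sketch above.

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[-1.0, 100.0],
              [ 0.0,  -2.0]])
n = A.shape[0]

# Evaluate sigma_min(zI - A) on a grid in the complex plane.
x = np.linspace(-15.0, 15.0, 200)
y = np.linspace(-15.0, 15.0, 200)
sig_min = np.empty((len(y), len(x)))
for i, yi in enumerate(y):
    for j, xj in enumerate(x):
        z = xj + 1j * yi
        sig_min[i, j] = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]

# The boundary of the epsilon-pseudospectrum is the level set sigma_min = epsilon.
plt.contour(x, y, sig_min, levels=[1e-2, 1e-1, 1e0])
eigs = np.linalg.eigvals(A)
plt.plot(eigs.real, eigs.imag, "kx")  # the eigenvalues themselves
plt.xlabel("Re z"); plt.ylabel("Im z")
plt.title("epsilon-pseudospectra (contours of sigma_min(zI - A))")
plt.show()
```

For this matrix the $\varepsilon = 10^{-1}$ and $\varepsilon = 1$ contours cross into the right half-plane even though both eigenvalues have negative real part, which is exactly the nonnormality that drives the transient growth seen earlier.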

TL;DR: Non-symmetric eigenvalue problems do emerge frequently in applications. But general non-symmetric eigenvalue problems can be highly sensitive to perturbations and have eigenvalues which fail to capture physically relevant transient phenomena, and our algorithms for non-symmetric problems lack many desirable properties. This makes it unclear in many cases how meaningful the computed eigenvalues of a non-symmetric matrix are to engineering practice, though pseudospectra can be a helpful tool for diagnosing the meaningfulness, or lack thereof, of computed eigenvalues. By contrast, the symmetric eigenvalue problem has none of these difficulties, making it a practically useful computation in most cases when we would want to use it.

