4

Hi, I've been doing some research on matrix inversion (linear algebra), and I want to use C++ template programming for the algorithm. What I found out is that there are a number of methods, like Gauss-Jordan elimination or LU decomposition, and I found the function LU_factorize (in the C++ Boost library).

  1. I want to know if there are other methods, and which one is better (advantages/disadvantages), from the perspective of programmers or mathematicians?

  2. If there are no faster methods, is there already a (matrix) inversion function in the Boost library? I've searched a lot and didn't find any.

6
  • Have a look at what Wolfram has to say: mathworld.wolfram.com/MatrixInverse.html
    – Max
    Commented Jul 28, 2010 at 20:33
  • 1
    Is it a general matrix or does it have a special structure (e.g. triangular, orthonormal, sparse/dense)?
    – Staffan
    Commented Jul 28, 2010 at 20:35
  • Also, Boost.uBLAS is SLOW for anything but fairly small matrices. As Vitor Py says, check out Eigen if you want a fast C++-only linear algebra library.
    – Staffan
    Commented Jul 28, 2010 at 20:36
  • 1
    @Staffan: no, it's a general matrix. Why do you think uBLAS is slow? Commented Jul 28, 2010 at 22:21
  • What aspect should be templated? I have code templated on the number type here: github.com/victorliu/RNP/blob/master/inc/LinearSolve.h
    – Victor Liu
    Commented Jul 28, 2010 at 22:27

5 Answers

5

As you mention, the standard approach is to perform an LU factorization and then solve for the identity. This can be implemented using the LAPACK library, for example, with dgetrf (factor) and dgetri (compute the inverse). Most other linear algebra libraries have roughly equivalent functions.
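To make that concrete, here is a minimal self-contained sketch of the idea (my own illustration, not Boost's or LAPACK's actual code): factor the matrix in place with partial pivoting, then solve A x = e_i for each unit basis vector e_i to build the inverse column by column. In production you would call dgetrf/dgetri instead.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Invert an n x n row-major matrix by LU factorization with partial
// pivoting, then solving A x = e_i for each unit basis vector e_i.
// Illustrative only: no error reporting beyond an assert on singularity.
std::vector<double> invert(std::vector<double> a, std::size_t n) {
    std::vector<std::size_t> piv(n);
    // Factor: a is overwritten by L (unit lower) and U, packed in place.
    for (std::size_t k = 0; k < n; ++k) {
        std::size_t p = k;
        for (std::size_t i = k + 1; i < n; ++i)
            if (std::fabs(a[i * n + k]) > std::fabs(a[p * n + k])) p = i;
        piv[k] = p;
        for (std::size_t j = 0; j < n; ++j)
            std::swap(a[k * n + j], a[p * n + j]);
        assert(a[k * n + k] != 0.0 && "matrix is singular");
        for (std::size_t i = k + 1; i < n; ++i) {
            a[i * n + k] /= a[k * n + k];
            for (std::size_t j = k + 1; j < n; ++j)
                a[i * n + j] -= a[i * n + k] * a[k * n + j];
        }
    }
    // Solve A x = e_col for each column of the identity.
    std::vector<double> inv(n * n);
    for (std::size_t col = 0; col < n; ++col) {
        std::vector<double> x(n, 0.0);
        x[col] = 1.0;
        for (std::size_t k = 0; k < n; ++k)   // apply the row permutation
            std::swap(x[k], x[piv[k]]);
        for (std::size_t i = 1; i < n; ++i)   // forward solve: L y = P e
            for (std::size_t j = 0; j < i; ++j) x[i] -= a[i * n + j] * x[j];
        for (std::size_t i = n; i-- > 0;) {   // backward solve: U x = y
            for (std::size_t j = i + 1; j < n; ++j) x[i] -= a[i * n + j] * x[j];
            x[i] /= a[i * n + i];
        }
        for (std::size_t i = 0; i < n; ++i) inv[i * n + col] = x[i];
    }
    return inv;
}
```

Note that the O(n^3) factorization is done once; each of the n columns then costs only O(n^2) triangular solves, which is why "factor, then solve for the identity" is the standard route.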

There are some slower methods that degrade more gracefully when the matrix is singular or nearly singular, and are used for that reason. For example, the Moore-Penrose pseudoinverse is equal to the inverse if the matrix is invertible, and often useful even if the matrix is not invertible; it can be calculated using a Singular Value Decomposition.
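In symbols, given the singular value decomposition of A, the pseudoinverse is:

```latex
A = U \Sigma V^{T}
\qquad\Longrightarrow\qquad
A^{+} = V \Sigma^{+} U^{T}
```

where Σ⁺ replaces each nonzero singular value σ_i with 1/σ_i (and leaves the zeros alone). When A is square and invertible, Σ⁺ = Σ⁻¹ and A⁺ = A⁻¹; when it is singular, small or zero singular values can simply be dropped, which is what gives the graceful degradation.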

1
  • 2
    @ismail: sure, but that doesn't actually matter. It's a library; it doesn't matter whether it's actually implemented in C or Fortran or Prolog or Scheme, you can call it just fine from C or C++ code. Commented Jul 29, 2010 at 16:53
3

I'd suggest you take a look at the Eigen source code.

2

Please Google (or check Wikipedia) for the buzzwords below.

First, make sure you really want the inverse. Solving a system does not require inverting a matrix. Matrix inversion can be performed by solving n systems, with unit basis vectors as right-hand sides. So I'll focus on solving systems, because that is usually what you want.

It depends on what "large" means. Methods based on decomposition must generally store the entire matrix. Once you have decomposed the matrix, you can solve for multiple right-hand sides at once (and thus invert the matrix easily). I won't discuss factorization methods here, as you're likely to know them already.

Please note that when a matrix is large, its condition number is very likely to be huge, which means that the matrix is "numerically non-invertible". Remedy: preconditioning. Check Wikipedia for this; the article is well written.

If the matrix is large, you don't want to store it. If it has a lot of zeros, it is a sparse matrix. Either it has structure (e.g. band-diagonal, block matrix, ...), in which case you have specialized methods for solving systems involving such matrices, or it does not.

When you're faced with a sparse matrix with no obvious structure, or with a matrix you don't want to store, you must use iterative methods. They only involve matrix-vector multiplications, which don't require a particular form of storage: you can compute the coefficients when you need them, or store non-zero coefficients the way you want, etc.

The methods are:

  • For symmetric positive definite matrices: the conjugate gradient method. In short, solving Ax = b amounts to minimizing 1/2 x^T A x - x^T b.
  • The biconjugate gradient method for general matrices. It can be unstable, though.
  • Minimum residual methods, or better, GMRES. Please check the Wikipedia articles for details. You may want to experiment with the number of iterations before restarting the algorithm.
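The conjugate gradient method from the first bullet can be sketched in a few lines. The point made above is visible in the signature: the solver touches A only through matrix-vector products, so it works matrix-free. This is a bare illustration under that assumption (no preconditioning, plain dense vectors), not a production solver:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
using MatVec = std::function<Vec(const Vec&)>;  // computes A * v

// Conjugate gradient for a symmetric positive definite A, given only as a
// matrix-vector product. Returns an approximate solution of A x = b.
Vec conjugate_gradient(const MatVec& A, const Vec& b,
                       double tol = 1e-10, std::size_t max_iter = 1000) {
    const std::size_t n = b.size();
    Vec x(n, 0.0), r = b, p = b;  // with x0 = 0, the residual r0 = b
    double rr = 0.0;
    for (double v : r) rr += v * v;
    for (std::size_t it = 0; it < max_iter && std::sqrt(rr) > tol; ++it) {
        Vec Ap = A(p);
        double pAp = 0.0;
        for (std::size_t i = 0; i < n; ++i) pAp += p[i] * Ap[i];
        const double alpha = rr / pAp;    // step length along p
        for (std::size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rr_new = 0.0;
        for (double v : r) rr_new += v * v;
        const double beta = rr_new / rr;  // makes the new p A-conjugate
        rr = rr_new;
        for (std::size_t i = 0; i < n; ++i) p[i] = r[i] + beta * p[i];
    }
    return x;
}
```

Because A appears only as `A(p)`, you can pass a lambda that reads coefficients from disk, generates them on the fly, or multiplies by a sparse structure; no dense storage of A is ever needed.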

And finally, you can perform some sort of factorization with sparse matrices, with specially designed algorithms to minimize the number of non-zero elements to store.

2

Depending on how large the matrix actually is, you may need to keep only a small subset of the columns in memory at any given time. This might require overriding the low-level read and write operations on the matrix elements, which I'm not sure Eigen, an otherwise pretty decent library, will allow you to do.

For those very narrow cases where the matrix is really big, there is the STXXL library, designed for memory access to arrays that are mostly stored on disk.

EDIT: To be more precise, if you have a matrix that does not fit in the available RAM, the preferred approach is to do blockwise inversion. The matrix is split recursively until each block fits in RAM (this is, of course, a tuning parameter of the algorithm). The tricky part here is to avoid starving the CPU of matrices to invert while they are pulled in and out of disk. This might require investigating appropriate parallel filesystems, since even with STXXL, this is likely to be the main bottleneck. Although, let me repeat the mantra: premature optimization is the root of all programming evil. This evil can only be banished with the cleansing ritual of Code, Execute, Profile.
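For reference, the recursion rests on the standard blockwise inversion identity (valid when the block A and the Schur complement S = D - C A^{-1} B are themselves invertible):

```latex
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
=
\begin{pmatrix}
A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\
-S^{-1} C A^{-1} & S^{-1}
\end{pmatrix},
\qquad S = D - C A^{-1} B
```

so inverting the big matrix reduces to inverting the two smaller matrices A and S plus some multiplications, and the reduction can recurse until each block fits in RAM.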

0

You might want to use a C++ wrapper around LAPACK. LAPACK is very mature code: well tested, optimized, etc.

One such wrapper is the Intel Math Kernel Library.

5
  • IMKL is a LAPACK implementation (and more), not a wrapper.
    – Staffan
    Commented Jul 28, 2010 at 21:01
  • I didn't mean to belittle LAPACK. I meant that IMKL is a C++ interface to code that I presume is written in Fortran, unless Intel rewrote everything. Commented Jul 28, 2010 at 21:17
  • Yes, LAPACK is written in Fortran, but man, IMKL is $399!! :D Commented Jul 28, 2010 at 22:26
  • 1
    ATLAS is open source and pretty good. ACML is free as far as I can tell, but requires an additional redistribution agreement if you want to distribute it.
    – Staffan
    Commented Jul 28, 2010 at 22:43
  • Thanks Staffan for your help. I'm now looking at all of the libs and learning. Commented Jul 29, 2010 at 8:52
