Inverting Matrices

Inverting a matrix turns out to be quite useful, although not for the classic example of solving a set of simultaneous equations, for which other, better methods exist. In particular, with phased array antenna calculations, you need to invert admittance matrices (produced as output by a method-of-moments (MoM) code such as NEC) into mutual impedance matrices.

The theory of matrix inversion is better left to textbooks. Suffice it to say that the problem is one of finding a matrix [B] such that [A][B] = I, where I is the identity matrix. There are a variety of published programs for doing matrix inversion, typically by elimination, described in the next section; for example, MS Basic came with example code for it, and of course there are the routines in the Numerical Recipes books, although normally the matrix isn't composed of complex numbers. Links are at the bottom of this page. You can also compute the inverse explicitly by calculating the adjoint matrix from cofactors and scaling by the determinant, which is described in more detail below.

Inversion by elimination

For moderate to large matrices, the straightforward way is a form of Gaussian elimination: augment the matrix with the identity, reduce the original to the identity with row operations, and what started as the identity becomes the inverse.
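
As a rough illustration of what that looks like in code, here is a minimal C sketch of Gauss-Jordan inversion with partial pivoting. The function name, the row-major double storage, and the way a singular matrix is reported are my own choices for the example, not code from any particular library:

    #include <math.h>

    /* Invert an n x n matrix by Gauss-Jordan elimination with partial
       pivoting.  a[] (row-major) is reduced to the identity and destroyed;
       inv[] receives the inverse.  Returns 0 on success, -1 if a pivot
       is exactly zero (singular matrix). */
    int invert_gauss_jordan(double *a, double *inv, int n)
    {
        int i, j, r, col;

        /* Start inv off as the identity matrix. */
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                inv[i*n + j] = (i == j) ? 1.0 : 0.0;

        for (col = 0; col < n; col++) {
            /* Partial pivoting: pick the largest remaining entry in this column. */
            int p = col;
            for (r = col + 1; r < n; r++)
                if (fabs(a[r*n + col]) > fabs(a[p*n + col]))
                    p = r;
            if (a[p*n + col] == 0.0)
                return -1;                          /* singular */

            /* Swap rows p and col in both a and inv. */
            for (j = 0; j < n; j++) {
                double t;
                t = a[col*n + j];   a[col*n + j]   = a[p*n + j];   a[p*n + j]   = t;
                t = inv[col*n + j]; inv[col*n + j] = inv[p*n + j]; inv[p*n + j] = t;
            }

            /* Scale the pivot row so the pivot element becomes 1. */
            double piv = a[col*n + col];
            for (j = 0; j < n; j++) {
                a[col*n + j]   /= piv;
                inv[col*n + j] /= piv;
            }

            /* Eliminate this column from every other row. */
            for (r = 0; r < n; r++) {
                if (r == col) continue;
                double f = a[r*n + col];
                for (j = 0; j < n; j++) {
                    a[r*n + j]   -= f * a[col*n + j];
                    inv[r*n + j] -= f * inv[col*n + j];
                }
            }
        }
        return 0;
    }

For an NxN matrix this takes on the order of N^3 operations, which is why elimination beats the cofactor approach once the matrix is bigger than a handful of rows.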

Inversion by using the adjoint matrix

For small matrices (2x2, 3x3, 4x4), calculating the inverse by scaling the adjoint is easier. The adjoint matrix is the transpose of the matrix in which each element is the cofactor of the corresponding element in the source matrix. The cofactor of an element is the determinant of the matrix formed by removing that element's row and column from the original matrix. The signs of the cofactors alternate, just as when computing the determinant.

For example, if the original matrix [M] is

		a b c
		d e f
		g h i

the cofactor of the upper left element is

     |e f|
     |h i|

which is = (ei - hf)

the cofactor of the upper center element is

     - |d f|
       |g i|

= - (di - gf)

the cofactor of the upper right element is

     |d e|
     |g h|

= (dh - ge)

and the determinant is simply det(M) = a(ei - hf) - b(di - gf) + c(dh - ge). If we label the cofactors in the above array as A, B, C, etc., corresponding to the elements a, b, c, etc., the adjoint matrix would be:

	A D G
	B E H
	C F I

The inverse of the original matrix is the adjoint, scaled by 1/det(M).
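
To make the recipe concrete, here is a small C sketch of a 3x3 inverse computed exactly this way; the cofactor names A through I match the labels above, while the function name and the zero-determinant check are my own additions for the example:

    /* Invert a 3x3 matrix by the adjoint method: out = adj(m) / det(m).
       Returns 0 on success, -1 if the determinant is zero. */
    int invert3x3(const double m[3][3], double out[3][3])
    {
        /* Cofactors of the top row (also used for the determinant). */
        double A =  (m[1][1]*m[2][2] - m[2][1]*m[1][2]);
        double B = -(m[1][0]*m[2][2] - m[2][0]*m[1][2]);
        double C =  (m[1][0]*m[2][1] - m[2][0]*m[1][1]);

        double det = m[0][0]*A + m[0][1]*B + m[0][2]*C;
        if (det == 0.0)
            return -1;

        /* Cofactors of the remaining two rows. */
        double D = -(m[0][1]*m[2][2] - m[2][1]*m[0][2]);
        double E =  (m[0][0]*m[2][2] - m[2][0]*m[0][2]);
        double F = -(m[0][0]*m[2][1] - m[2][0]*m[0][1]);
        double G =  (m[0][1]*m[1][2] - m[1][1]*m[0][2]);
        double H = -(m[0][0]*m[1][2] - m[1][0]*m[0][2]);
        double I =  (m[0][0]*m[1][1] - m[1][0]*m[0][1]);

        /* Adjoint = transpose of the cofactor matrix, scaled by 1/det. */
        out[0][0] = A/det;  out[0][1] = D/det;  out[0][2] = G/det;
        out[1][0] = B/det;  out[1][1] = E/det;  out[1][2] = H/det;
        out[2][0] = C/det;  out[2][1] = F/det;  out[2][2] = I/det;
        return 0;
    }

A handy sanity check: for the identity matrix this returns the identity, and for a diagonal matrix it returns the reciprocals of the diagonal elements.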

I've built a few Excel spreadsheets to calculate the inverses of 2x2, 3x3, and 4x4 matrices, using the above method and using Excel's complex math functions. Download them here: (inv2x2.xls 15 kb) (inv3x3.xls 20 kb) (inv4x4.xls 29 kb). Note that this technique is computationally expensive because a lot of the multiplies get repeated (for instance, you calculate cofactors to get the determinant by cofactor expansion, then calculate them again for the adjoint), and it probably scales as O(n!).

Complex Matrix Inversion

It's a bit trickier to invert a matrix of complex numbers, mostly because all the handy routines are oriented towards real numbers. You can either rewrite the algorithm for complex numbers (as described above), or you can make use of some algebraic manipulation.

Consider an NxN complex array. Create a 2Nx2N array from it as follows:
[Real part, Imaginary Part; -Imaginary Part, Real Part]

Re(M) Im(M)
-Im(M) Re(M)


Invert that. The result has the same block structure, so the inverse of the original array can be read from the Nx2N top half of the result (real part on the left, imaginary part on the right):
[Real Part of inverse, Imaginary Part of inverse;-Imaginary part of inverse, Real Part of inverse]

Re(inv(M)) Im(inv(M))
-Im(inv(M)) Re(inv(M))

If your complex array is already stored as separate real and imaginary parts (as in many languages that don't support complex data directly), building the array to be inverted is very simple. Here's a C-style sketch:

   /* N: matrix dimension.  inputreal/inputimag hold the N x N real and
      imaginary parts of M; outputreal/outputimag receive those of inv(M). */
   float inputreal[N][N], inputimag[N][N];
   float outputreal[N][N], outputimag[N][N];
   float m[2*N][2*N];
   int i, j;

   /* Build the 2N x 2N block matrix [Re(M) Im(M); -Im(M) Re(M)]. */
   for (i = 0; i < N; i++)
       for (j = 0; j < N; j++) {
           m[i][j]   =  inputreal[i][j];    m[i][j+N]   = inputimag[i][j];
           m[i+N][j] = -inputimag[i][j];    m[i+N][j+N] = inputreal[i][j];
       }

   matrix_invert(m);    /* any real-valued matrix inversion routine */

   /* The top half of the result holds Re(inv(M)) and Im(inv(M)). */
   for (i = 0; i < N; i++)
       for (j = 0; j < N; j++) {
           outputreal[i][j] = m[i][j];
           outputimag[i][j] = m[i][j+N];
       }
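
The matrix_invert() call here stands in for whatever real-matrix inversion routine you have handy, for example a Gauss-Jordan routine like the sketch earlier on this page.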

Why it works...

Consider a single complex number a + bj that you want the inverse of. That is, you want c and d, such that (a+bj)(c+dj) = 1. Build the matrix:

M =
a b
-b a

and calculate the inverse of this array. That is, solve for: [M][X] = I, where I is the identity matrix. Using the adjoint method from above, the inverse of M is:

inv(M)=

a/det(M) -b/det(M)
b/det(M) a/det(M)

where det(M) is the determinant = (a*a) - (b)(-b) = a^2 + b^2.
Now, take the top row of the matrix as the real and imaginary parts:
(a/det(M)) - (b/det(M))j.
That is,
c = a / (a^2+b^2)
d = -b / (a^2+b^2)
which is, in fact, the inverse of (a+bj).
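
As a quick numerical check, take 1 + 2j: det = 1^2 + 2^2 = 5, so c = 1/5 = 0.2 and d = -2/5 = -0.4, and indeed (1 + 2j)(0.2 - 0.4j) = 0.2 - 0.4j + 0.4j - 0.8j^2 = 0.2 + 0.8 = 1.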

This clever scheme (which I really should have known about) is how the very useful matrix math package for Excel (below) does it. Note that it's not necessarily the most efficient means to do the inversion, since you wind up essentially calculating twice as many numbers as you need, and the technique doesn't take advantage of the special block structure of the (real) matrix being inverted. One would also want to invert the matrix with a more efficient scheme than the cofactor/adjoint approach.

Links and references

Here are some links I found (for which I make no claims of quality) just doing a search for "matrix inversion":

A textbook reference (from my undergrad matrix math course) which describes all of the above:

Reiner, I., Introduction to Matrix Theory and Linear Algebra, Holt, Rinehart and Winston, 1971.

Some newer links:

http://sbcx.mit.edu/prop_codes.html - C libraries

http://digilander.libero.it/foxes/SoftwareDownload.htm - for Excel! This is way cool. Blurb from site:

Since its birth, Excel has been a fantastic tool for studying and developing numerical calculation. Its worksheets, divided into rows and columns, turned out to be particularly well suited to matrix and linear algebra. Since then, lots and lots of new objects have been added to Excel (not all of them very useful, indeed!) but, surprisingly, the matrix functions are substantially the same as many years ago. This little smart add-in intends to fill this gap, improving the standard set of functions for matrix calculus.

This Excel 97/2000 addin contains more than 70 useful functions for Matrix and Linear Algebra: Euclidean Norm. Complex Matrix Inversion and Multiplication. Similarity transform. Determinant. Power. Trace. Scalar Product (inner). Vector Product. Eigenvalues and Eigenvectors of symmetric matrix with Jacobi algorithm and power method. Complex eigenvectors. Jacobi's rotation matrix. Eigenvalues with QR algorithm. Characteristic polynomial coefficients. Polynomial Rootfinder with Lin-bairstow method. Random matrix generator with given eigenvalues, rank or determinant. Several useful matrix added: Householder; Hilbert's; Tartaglia's; Vandermonde's. Gauss Jordan algorithm step by step. Linear System (real and complex). Linear System with iterative methods: Gauss-Seidel and Jacobi algorithms. Singular Linear System. Linear Transform. Gram-Schmidt's Orthogonalization. Cholesky decomposition. LU decomposition. QR decomposition. Extract sub-matrix. SVD decomposition. Rank. Floyd's shortest-path algorithm. Varimax Kaiser's method. Linear optimization with Simplex method. All functions work in standard 32 bit precision. For further details see MATRIX function list. Help on line. No installation. Freeware and open source. For math fans!


matinv.htm - 28 June 2000 - Jim Lux
revised 25 Jan 2001 (fixed missing - in middle cofactor of 3x3 determinant)
revised 13 Feb 2003 (fixed site relative links)
revised 15 Jan 2006 (fixed the error in sign, once again, in the 3x3 calculation)