Woodbury matrix identity

In mathematics (specifically linear algebra), the Woodbury matrix identity, named after Max A. Woodbury,[1][2] says that the inverse of a rank-k correction of some matrix can be computed by doing a rank-k correction to the inverse of the original matrix. Alternative names for this formula are the matrix inversion lemma, the Sherman–Morrison–Woodbury formula, or just the Woodbury formula. However, the identity appeared in several papers before the Woodbury report.[3][4]

The Woodbury matrix identity is[5]

$$\left(A + UCV\right)^{-1} = A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1},$$

where A, U, C and V are conformable matrices: A is n×n, C is k×k, U is n×k, and V is k×n. This can be derived using blockwise matrix inversion.
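
As a quick numerical sanity check (a minimal sketch of ours, not part of the original statement; it assumes NumPy and randomly generated, well-conditioned matrices):

    import numpy as np

    # Verify the Woodbury identity on random conformable matrices.
    rng = np.random.default_rng(0)
    n, k = 6, 2
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # n x n, shifted to be well conditioned
    C = rng.standard_normal((k, k)) + k * np.eye(k)   # k x k, invertible
    U = rng.standard_normal((n, k))                   # n x k
    V = rng.standard_normal((k, n))                   # k x n

    Ainv = np.linalg.inv(A)
    lhs = np.linalg.inv(A + U @ C @ V)
    rhs = Ainv - Ainv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U) @ V @ Ainv
    assert np.allclose(lhs, rhs)   # both sides agree to floating-point accuracy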

While the identity is primarily used on matrices, it holds in a general ring or in an Ab-category.

Discussion

To prove this result, we will start by proving a simpler one. Replacing A and C with the identity matrix I, we obtain another identity which is a bit simpler:

$$\left(I + UV\right)^{-1} = I - U\left(I + VU\right)^{-1}V.$$

To recover the original equation from this reduced identity, set $U = A^{-1}X$ and $V = CY$.
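
Spelling this out, with $U = A^{-1}X$ and $V = CY$ the reduced identity reads $\left(I + A^{-1}XCY\right)^{-1} = I - A^{-1}X\left(I + CYA^{-1}X\right)^{-1}CY$. Multiplying both sides on the right by $A^{-1}$ turns the left side into $\left(A + XCY\right)^{-1}$, and factoring $C$ out of the inner inverse via $\left(I + CYA^{-1}X\right)^{-1}C = \left(C^{-1} + YA^{-1}X\right)^{-1}$ yields the Woodbury identity with $X$ and $Y$ in place of $U$ and $V$.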

This identity itself can be viewed as the combination of two simpler identities. We obtain the first identity from

$$I = \left(I + P\right)^{-1}\left(I + P\right) = \left(I + P\right)^{-1} + \left(I + P\right)^{-1}P,$$

thus,

$$\left(I + P\right)^{-1} = I - \left(I + P\right)^{-1}P,$$

and similarly

$$\left(I + P\right)^{-1} = I - P\left(I + P\right)^{-1}.$$

The second identity is the so-called push-through identity[6]

$$\left(I + UV\right)^{-1}U = U\left(I + VU\right)^{-1}$$

that we obtain from

$$U\left(I + VU\right) = \left(I + UV\right)U$$

after multiplying by $\left(I + VU\right)^{-1}$ on the right and by $\left(I + UV\right)^{-1}$ on the left.
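
Combining the two, take $P = UV$ in the identity $\left(I + P\right)^{-1} = I - \left(I + P\right)^{-1}P$ and then apply the push-through identity:

$$\left(I + UV\right)^{-1} = I - \left(I + UV\right)^{-1}UV = I - U\left(I + VU\right)^{-1}V,$$

which is exactly the reduced identity above.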

Special cases

When U and V are vectors (so that C is a scalar), the identity reduces to the Sherman–Morrison formula.
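
Explicitly, taking $C = 1$, $U = u$ an $n \times 1$ column vector, and $V = v^{\mathsf T}$ a $1 \times n$ row vector, the identity becomes

$$\left(A + uv^{\mathsf T}\right)^{-1} = A^{-1} - \frac{A^{-1}uv^{\mathsf T}A^{-1}}{1 + v^{\mathsf T}A^{-1}u}.$$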

In the scalar case, the reduced version is simply

$$\frac{1}{1 + uv} = 1 - \frac{uv}{1 + uv}.$$

Inverse of a sum

If $n = k$ and $U = V = I_n$ is the identity matrix, then (writing $B$ in place of $C$)

$$\left(A + B\right)^{-1} = A^{-1} - A^{-1}\left(B^{-1} + A^{-1}\right)^{-1}A^{-1}.$$

Continuing with the merging of the terms of the far right-hand side of the above equation results in Hua's identity

$$\left(A + B\right)^{-1} = A^{-1} - \left(A + AB^{-1}A\right)^{-1}.$$
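
The merging step rests on the rule that a product of invertible matrices inverts factor by factor:

$$A^{-1}\left(B^{-1} + A^{-1}\right)^{-1}A^{-1} = \left[A\left(B^{-1} + A^{-1}\right)A\right]^{-1} = \left(AB^{-1}A + A\right)^{-1}.$$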

Another useful form of the same identity is

$$\left(A - B\right)^{-1} = A^{-1} + A^{-1}B\left(A - B\right)^{-1},$$

which has a recursive structure that yields

$$\left(A - B\right)^{-1} = \sum_{k=0}^{\infty}\left(A^{-1}B\right)^{k}A^{-1}.$$

This form can be used in perturbative expansions where B is a perturbation of A.
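
For instance, a truncated version of this series can be evaluated numerically (a minimal sketch of ours, not from the article; it assumes NumPy and that the spectral radius of $A^{-1}B$ is below 1, so that the series converges):

    import numpy as np

    # Approximate (A - B)^{-1} by the truncated series  sum_k (A^{-1} B)^k A^{-1}.
    rng = np.random.default_rng(1)
    n = 5
    A = 4.0 * np.eye(n)                     # dominant base matrix
    B = 0.5 * rng.standard_normal((n, n))   # small perturbation of A

    Ainv = np.linalg.inv(A)
    approx = np.zeros((n, n))
    term = Ainv.copy()
    for _ in range(30):                     # 30 terms of the series
        approx += term
        term = Ainv @ B @ term              # next term: (A^{-1} B)^{k+1} A^{-1}
    assert np.allclose(approx, np.linalg.inv(A - B))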

Variations

Binomial inverse theorem

If A, B, U, V are matrices of sizes n×n, k×k, n×k, k×n, respectively, then

$$\left(A + UBV\right)^{-1} = A^{-1} - A^{-1}UB\left(B + BVA^{-1}UB\right)^{-1}BVA^{-1}$$

provided A and $B + BVA^{-1}UB$ are nonsingular. Nonsingularity of the latter requires that $B^{-1}$ exist, since it equals $B\left(I + VA^{-1}UB\right)$ and the rank of the latter cannot exceed the rank of B.[6]

Since B is invertible, the two B terms flanking the inverted quantity in parentheses on the right-hand side can be replaced with $\left(B^{-1}\right)^{-1}$, which results in the original Woodbury identity.
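
Concretely, absorbing the flanking factors into the inverse gives

$$B\left(B + BVA^{-1}UB\right)^{-1}B = \left[B^{-1}\left(B + BVA^{-1}UB\right)B^{-1}\right]^{-1} = \left(B^{-1} + VA^{-1}U\right)^{-1},$$

so the right-hand side becomes $A^{-1} - A^{-1}U\left(B^{-1} + VA^{-1}U\right)^{-1}VA^{-1}$, which is the Woodbury identity with $C = B$.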

A variation for when B is singular and possibly even non-square:[6]

$$\left(A + UBV\right)^{-1} = A^{-1} - A^{-1}U\left(I + BVA^{-1}U\right)^{-1}BVA^{-1}.$$

Formulas also exist for certain cases in which A is singular.[7]

Pseudoinverse with positive semidefinite matrices

In general, Woodbury's identity is not valid if one or more inverses are replaced by (Moore–Penrose) pseudoinverses. However, if $A$ and $C$ are positive semidefinite and $V = U^{\mathrm H}$ (implying that $A + UCU^{\mathrm H}$ is itself positive semidefinite), then the following formula provides a generalization:[8][9]

 

where $A + UCU^{\mathrm H}$ can be written as $XX^{\mathrm H} + YY^{\mathrm H}$ because any positive semidefinite matrix is equal to $MM^{\mathrm H}$ for some $M$.

Derivations

Direct proof

The formula can be proven by checking that $A + UCV$ times its alleged inverse on the right side of the Woodbury identity gives the identity matrix:

$$\begin{aligned} &\left(A + UCV\right)\left[A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1}\right] \\ ={}& I + UCVA^{-1} - \left(U + UCVA^{-1}U\right)\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1} \\ ={}& I + UCVA^{-1} - UC\left(C^{-1} + VA^{-1}U\right)\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1} \\ ={}& I + UCVA^{-1} - UCVA^{-1} = I. \end{aligned}$$

Alternative proofs

Algebraic proof

First consider these useful identities,

$$\begin{aligned} U + UCVA^{-1}U &= UC\left(C^{-1} + VA^{-1}U\right) = \left(A + UCV\right)A^{-1}U \\ \left(A + UCV\right)^{-1}UC &= A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1} \end{aligned}$$

Now,

$$\begin{aligned} A^{-1} &= \left(A + UCV\right)^{-1}\left(A + UCV\right)A^{-1} \\ &= \left(A + UCV\right)^{-1}\left(I + UCVA^{-1}\right) \\ &= \left(A + UCV\right)^{-1} + \left(A + UCV\right)^{-1}UCVA^{-1} \\ &= \left(A + UCV\right)^{-1} + A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1}, \end{aligned}$$

which, on rearranging, is the Woodbury identity.

Derivation via blockwise elimination

Deriving the Woodbury matrix identity is easily done by solving the following block matrix inversion problem

$$\begin{bmatrix} A & U \\ V & -C^{-1} \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} b \\ 0 \end{bmatrix}.$$

Expanding, we can see that the above reduces to

$$\begin{cases} Ax + Uy = b \\ Vx - C^{-1}y = 0, \end{cases}$$

which is equivalent to $\left(A + UCV\right)x = b$, since the second equation gives $y = CVx$. Eliminating the first equation, we find that $x = A^{-1}\left(b - Uy\right)$, which can be substituted into the second to find $VA^{-1}\left(b - Uy\right) = C^{-1}y$. Expanding and rearranging, we have $VA^{-1}b = \left(C^{-1} + VA^{-1}U\right)y$, or $y = \left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1}b$. Finally, we substitute into our $Ax + Uy = b$, and we have $Ax + U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1}b = b$. Thus,

$$x = \left(A + UCV\right)^{-1}b = A^{-1}b - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1}b.$$

We have derived the Woodbury matrix identity.

Derivation from LDU decomposition

We start with the matrix

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix}.$$

By eliminating the entry under the A (given that A is invertible) we get

$$\begin{bmatrix} I & 0 \\ -VA^{-1} & I \end{bmatrix}\begin{bmatrix} A & U \\ V & C \end{bmatrix} = \begin{bmatrix} A & U \\ 0 & C - VA^{-1}U \end{bmatrix}$$

Likewise, eliminating the entry above C gives

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix}\begin{bmatrix} I & -A^{-1}U \\ 0 & I \end{bmatrix} = \begin{bmatrix} A & 0 \\ V & C - VA^{-1}U \end{bmatrix}$$

Now combining the above two, we get

$$\begin{bmatrix} I & 0 \\ -VA^{-1} & I \end{bmatrix}\begin{bmatrix} A & U \\ V & C \end{bmatrix}\begin{bmatrix} I & -A^{-1}U \\ 0 & I \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & C - VA^{-1}U \end{bmatrix}$$

Moving to the right side gives

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix} = \begin{bmatrix} I & 0 \\ VA^{-1} & I \end{bmatrix}\begin{bmatrix} A & 0 \\ 0 & C - VA^{-1}U \end{bmatrix}\begin{bmatrix} I & A^{-1}U \\ 0 & I \end{bmatrix},$$

which is the LDU decomposition of the block matrix into lower triangular, diagonal, and upper triangular factors.

Now inverting both sides gives

$$\begin{aligned} \begin{bmatrix} A & U \\ V & C \end{bmatrix}^{-1} &= \begin{bmatrix} I & A^{-1}U \\ 0 & I \end{bmatrix}^{-1}\begin{bmatrix} A & 0 \\ 0 & C - VA^{-1}U \end{bmatrix}^{-1}\begin{bmatrix} I & 0 \\ VA^{-1} & I \end{bmatrix}^{-1} \\ &= \begin{bmatrix} I & -A^{-1}U \\ 0 & I \end{bmatrix}\begin{bmatrix} A^{-1} & 0 \\ 0 & \left(C - VA^{-1}U\right)^{-1} \end{bmatrix}\begin{bmatrix} I & 0 \\ -VA^{-1} & I \end{bmatrix} \\ &= \begin{bmatrix} A^{-1} + A^{-1}U\left(C - VA^{-1}U\right)^{-1}VA^{-1} & -A^{-1}U\left(C - VA^{-1}U\right)^{-1} \\ -\left(C - VA^{-1}U\right)^{-1}VA^{-1} & \left(C - VA^{-1}U\right)^{-1} \end{bmatrix} \qquad (1) \end{aligned}$$

We could equally well have done it the other way (provided that C is invertible), i.e.

$$\begin{bmatrix} A & U \\ V & C \end{bmatrix} = \begin{bmatrix} I & UC^{-1} \\ 0 & I \end{bmatrix}\begin{bmatrix} A - UC^{-1}V & 0 \\ 0 & C \end{bmatrix}\begin{bmatrix} I & 0 \\ C^{-1}V & I \end{bmatrix}$$

Now again inverting both sides,

$$\begin{aligned} \begin{bmatrix} A & U \\ V & C \end{bmatrix}^{-1} &= \begin{bmatrix} I & 0 \\ -C^{-1}V & I \end{bmatrix}\begin{bmatrix} \left(A - UC^{-1}V\right)^{-1} & 0 \\ 0 & C^{-1} \end{bmatrix}\begin{bmatrix} I & -UC^{-1} \\ 0 & I \end{bmatrix} \\ &= \begin{bmatrix} \left(A - UC^{-1}V\right)^{-1} & -\left(A - UC^{-1}V\right)^{-1}UC^{-1} \\ -C^{-1}V\left(A - UC^{-1}V\right)^{-1} & C^{-1}V\left(A - UC^{-1}V\right)^{-1}UC^{-1} + C^{-1} \end{bmatrix} \qquad (2) \end{aligned}$$

Now comparing elements (1, 1) of the RHS of (1) and (2) above gives the Woodbury formula

$$\left(A - UC^{-1}V\right)^{-1} = A^{-1} + A^{-1}U\left(C - VA^{-1}U\right)^{-1}VA^{-1}.$$

Applications

This identity is useful in certain numerical computations where $A^{-1}$ has already been computed and it is desired to compute $\left(A + UCV\right)^{-1}$. With the inverse of $A$ available, it is only necessary to find the inverse of $C^{-1} + VA^{-1}U$ in order to obtain the result using the right-hand side of the identity. If $C$ has a much smaller dimension than $A$, this is more efficient than inverting $A + UCV$ directly. A common case is finding the inverse of a low-rank update $A + UCV$ of $A$ (where $U$ has only a few columns and $V$ only a few rows), or finding an approximation of the inverse of the matrix $A + B$ where the matrix $B$ can be approximated by a low-rank matrix $UCV$, for example using the singular value decomposition.
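
The following sketch (ours, assuming NumPy; the helper name woodbury_update_inv is illustrative, not a library routine) shows the pattern: only a k×k system is solved, so for k much smaller than n this is far cheaper than re-inverting the updated n×n matrix.

    import numpy as np

    def woodbury_update_inv(Ainv, U, C, V):
        """Return (A + U C V)^{-1} given a precomputed A^{-1}, via the Woodbury identity."""
        small = np.linalg.inv(C) + V @ Ainv @ U        # k x k matrix; cheap when k << n
        return Ainv - Ainv @ U @ np.linalg.solve(small, V @ Ainv)

    # Usage: rank-1 update of a 500 x 500 matrix.
    rng = np.random.default_rng(2)
    n = 500
    A = rng.standard_normal((n, n)) + n * np.eye(n)
    Ainv = np.linalg.inv(A)                            # assumed already computed
    U = rng.standard_normal((n, 1))
    C = np.eye(1)
    V = rng.standard_normal((1, n))
    assert np.allclose(woodbury_update_inv(Ainv, U, C, V),
                       np.linalg.inv(A + U @ C @ V))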

This is applied, e.g., in the Kalman filter and recursive least squares methods, to replace the parametric solution, which requires inversion of a matrix of the size of the state vector, with a solution based on condition equations. In the case of the Kalman filter this matrix has the dimensions of the vector of observations, i.e., as small as 1 if only one new observation is processed at a time. This significantly speeds up the often real-time calculations of the filter.

In the case when C is the identity matrix I, the matrix $I + VA^{-1}U$ is known in numerical linear algebra and numerical partial differential equations as the capacitance matrix.[4]

Notes

  1. ^ Woodbury, Max A. (1950). Inverting Modified Matrices. Memorandum Rept. 42, Statistical Research Group, Princeton University, Princeton, NJ. 4 pp. MR 38136.
  2. ^ Woodbury, Max A. (1949). The Stability of Out-Input Matrices. Chicago, IL. 5 pp. MR 32564.
  3. ^ Guttman, Louis (1946). "Enlargement methods for computing the inverse matrix". Ann. Math. Statist. 17 (3): 336–343. doi:10.1214/aoms/1177730946.
  4. ^ a b Hager, William W. (1989). "Updating the inverse of a matrix". SIAM Review. 31 (2): 221–239. doi:10.1137/1031049. JSTOR 2030425. MR 0997457.
  5. ^ Higham, Nicholas (2002). Accuracy and Stability of Numerical Algorithms (2nd ed.). SIAM. p. 258. ISBN 978-0-89871-521-7. MR 1927606.
  6. ^ a b c Henderson, H. V.; Searle, S. R. (1981). "On deriving the inverse of a sum of matrices" (PDF). SIAM Review. 23 (1): 53–60. doi:10.1137/1023004. hdl:1813/32749. JSTOR 2029838.
  7. ^ Riedel, Kurt S. (1992). "A Sherman–Morrison–Woodbury identity for rank augmenting matrices with application to centering". SIAM Journal on Matrix Analysis and Applications. 13 (1992): 659–662. doi:10.1137/0613040. MR 1152773.
  8. ^ Bernstein, Dennis S. (2018). Scalar, Vector, and Matrix Mathematics: Theory, Facts, and Formulas (Revised and expanded ed.). Princeton: Princeton University Press. p. 638. ISBN 9780691151205.
  9. ^ Schott, James R. (2017). Matrix Analysis for Statistics (3rd ed.). Hoboken, NJ: John Wiley & Sons. p. 219. ISBN 9781119092483.
