Laplace expansion


In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression for the determinant |B| of an n × n matrix B that is a weighted sum of the determinants of n sub-matrices (or minors) of B, each of size (n − 1) × (n − 1). The Laplace expansion is of didactic interest for its simplicity and as one of several ways to view and compute the determinant. For large matrices, it quickly becomes inefficient to compute when compared to methods using matrix decomposition.

The i, j cofactor of the matrix B is the scalar Cij defined by

$$C_{ij} = (-1)^{i+j} M_{ij},$$

where Mij is the i, j minor of B, that is, the determinant of the (n − 1) × (n − 1) matrix that results from deleting the i-th row and the j-th column of B.
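
For instance, the following minimal Python sketch (the helper name delete_row_and_column is ours, purely for illustration) forms the submatrix whose determinant is the i, j minor:

```python
def delete_row_and_column(B, i, j):
    """Submatrix of B with row i and column j removed (1-based indices).
    The (i, j) minor M_ij is the determinant of this submatrix."""
    return [row[:j - 1] + row[j:]
            for r, row in enumerate(B, start=1) if r != i]

# Example: deleting row 1 and column 2 of a 3 x 3 matrix leaves a 2 x 2 submatrix.
B = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(delete_row_and_column(B, 1, 2))   # [[4, 6], [7, 9]]
```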

Then the Laplace expansion is given by the following theorem.

Theorem. Suppose $B = [b_{ij}]$ is an n × n matrix and fix any $i, j \in \{1, 2, \dots, n\}$.

Then its determinant |B| is given by:

$$|B| = \sum_{j'=1}^{n} b_{ij'} C_{ij'} = \sum_{i'=1}^{n} b_{i'j} C_{i'j},$$

where $b_{ij'}$ and $b_{i'j}$ denote the entries of the $i$-th row and the $j$-th column of $B$, that is, the entries that are removed when forming the minor matrix for the corresponding cofactor (see the example below).
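
A minimal Python sketch of this expansion along the first row (reusing the delete_row_and_column helper from the sketch above; the running time grows factorially, see the section on computational expense below):

```python
def det(B):
    """Determinant of a square matrix B via Laplace (cofactor) expansion
    along the first row.  Practical only for small matrices."""
    n = len(B)
    if n == 1:
        return B[0][0]
    total = 0
    for j in range(1, n + 1):                    # columns 1..n
        M = delete_row_and_column(B, 1, j)       # submatrix for the minor M_1j
        C = (-1) ** (1 + j) * det(M)             # cofactor C_1j
        total += B[0][j - 1] * C                 # b_1j * C_1j
    return total
```

Expanding along a different row or column only changes which entries are paired with which cofactors; the recursion bottoms out at 1 × 1 matrices.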

Examples

Consider the matrix

$$B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.$$

The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields:

$$|B| = 1 \cdot \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} - 2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 3 \cdot \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 1 \cdot (-3) - 2 \cdot (-6) + 3 \cdot (-3) = 0.$$

Laplace expansion along the second column yields the same result:

$$|B| = -2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 5 \cdot \begin{vmatrix} 1 & 3 \\ 7 & 9 \end{vmatrix} - 8 \cdot \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix} = -2 \cdot (-6) + 5 \cdot (-12) - 8 \cdot (-6) = 0.$$

It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.
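
Assuming the sketches above are in scope, the same conclusion can be checked numerically:

```python
B = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

print(det(B))   # 0

# The singularity is visible directly: column 1 + column 3 == 2 * column 2.
for row in B:
    assert row[0] + row[2] == 2 * row[1]
```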

Proof

Suppose $B$ is an n × n matrix and $i, j \in \{1, 2, \dots, n\}$. For clarity we also label the entries of $B$ that compose its $i, j$ minor matrix $M_{ij}$ as

$(a_{st})$ for $1 \le s, t \le n-1$.

Consider the terms in the expansion of $|B|$ that have $b_{ij}$ as a factor. Each has the form

$$\operatorname{sgn}\tau \, b_{1,\tau(1)} \cdots b_{i,j} \cdots b_{n,\tau(n)} = \operatorname{sgn}\tau \, b_{ij} \, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)}$$

for some permutation $\tau \in S_n$ with $\tau(i) = j$, and a unique and evidently related permutation $\sigma \in S_{n-1}$ which selects the same minor entries as $\tau$. Similarly, each choice of $\sigma$ determines a corresponding $\tau$; i.e. the correspondence $\sigma \leftrightarrow \tau$ is a bijection between $S_{n-1}$ and $\{\tau \in S_n : \tau(i) = j\}$. The explicit relation between $\tau$ and $\sigma$ can be written as

$$\sigma = \left(\leftarrow_j \circ \, \tau \circ \rightarrow_i \right)\Big|_{\{1, 2, \dots, n-1\}},$$

where $\rightarrow_i$ is a temporary shorthand notation for the cycle $(i, i+1, \dots, n)$ and $\leftarrow_j$ for the cycle $(n, n-1, \dots, j+1, j)$. The operation $\leftarrow_j$ decrements all indices larger than $j$, so that every index fits in the set $\{1, 2, \dots, n-1\}$.

The permutation τ can be derived from σ as follows. Define $\sigma' \in S_n$ by $\sigma'(k) = \sigma(k)$ for $1 \le k \le n-1$ and $\sigma'(n) = n$. Then $\sigma'$ is expressed as

$$\sigma' = \leftarrow_j \circ \, \tau \circ \rightarrow_i.$$

Now, the operation which applies $(\rightarrow_i)^{-1}$ first and then applies $\sigma'$ is (notice that applying $A$ before $B$ is equivalent to applying the inverse of $A$ to the upper row of $B$ in Cauchy's two-line notation)

$$\sigma' \circ (\rightarrow_i)^{-1} =
\begin{pmatrix}
1 & \cdots & i-1 & i & i+1 & \cdots & n \\
\sigma(1) & \cdots & \sigma(i-1) & n & \sigma(i) & \cdots & \sigma(n-1)
\end{pmatrix},$$

where $(\rightarrow_i)^{-1}$ is the cycle $(n, n-1, \dots, i+1, i)$, which decrements all indices larger than $i$.

The operation which applies $\tau$ first and then applies $\leftarrow_j$ sends $i$ to $n$ and, for $k \neq i$, sends $k$ to the minor column index $\leftarrow_j(\tau(k))$ selected by $\sigma$; in two-line notation,

$$\leftarrow_j \circ \, \tau =
\begin{pmatrix}
1 & \cdots & i-1 & i & i+1 & \cdots & n \\
\sigma(1) & \cdots & \sigma(i-1) & n & \sigma(i) & \cdots & \sigma(n-1)
\end{pmatrix}.$$

The above two operations are equal; thus

$$\leftarrow_j \circ \, \tau = \sigma' \circ (\rightarrow_i)^{-1},$$

$$\tau = (\leftarrow_j)^{-1} \circ \, \sigma' \circ (\rightarrow_i)^{-1},$$

where $(\leftarrow_j)^{-1}$ is the inverse of $\leftarrow_j$, which is the cycle $(j, j+1, \dots, n)$.

Thus

$$\tau = (j, j+1, \dots, n) \circ \, \sigma' \circ (n, n-1, \dots, i+1, i).$$

Since the two cycles can be written respectively as $n - j$ and $n - i$ transpositions,

$$\operatorname{sgn}\tau = (-1)^{2n - (i+j)} \operatorname{sgn}\sigma' = (-1)^{i+j} \operatorname{sgn}\sigma.$$

And since the map $\sigma \leftrightarrow \tau$ is bijective,

$$\sum_{\substack{\tau \in S_n \\ \tau(i) = j}} \operatorname{sgn}\tau \, b_{1,\tau(1)} \cdots b_{n,\tau(n)}
= (-1)^{i+j} \, b_{ij} \sum_{\sigma \in S_{n-1}} \operatorname{sgn}\sigma \, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)}
= b_{ij} \, (-1)^{i+j} |M_{ij}| = b_{ij} C_{ij},$$

from which the result follows: every term of $|B|$ contains exactly one factor from the $i$-th row, so grouping the terms by the column index and summing gives $|B| = \sum_{j=1}^{n} b_{ij} C_{ij}$. Similarly, the result holds if the index of the outer summation is replaced with $i$, giving the expansion along the $j$-th column.
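
For instance (an illustrative check, not part of the original argument), take $n = 3$, $i = 1$, $j = 2$ and $\tau = (1\ 2) \in S_3$, so that $\tau(1) = 2$ and $\operatorname{sgn}\tau = -1$; the corresponding term of $|B|$ is $-b_{12} b_{21} b_{33}$. The minor $M_{12}$ lives on rows $\{2, 3\}$ and columns $\{1, 3\}$, relabelled as $a_{11} = b_{21}$, $a_{12} = b_{23}$, $a_{21} = b_{31}$, $a_{22} = b_{33}$, so $\tau$ selects $a_{11}$ and $a_{22}$, i.e. $\sigma = \mathrm{id} \in S_2$ with $\operatorname{sgn}\sigma = +1$, in agreement with $\operatorname{sgn}\tau = (-1)^{i+j} \operatorname{sgn}\sigma = (-1)^{3} \cdot (+1) = -1$.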

Laplace expansion of a determinant by complementary minors

Laplace's cofactor expansion can be generalised as follows.

Example

Consider the matrix

$$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{bmatrix}.$$

The determinant of this matrix can be computed by using Laplace's cofactor expansion along the first two rows as follows. Firstly note that there are 6 sets of two distinct numbers in {1, 2, 3, 4}, namely let

$$S = \left\{ \{1,2\}, \{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}, \{3,4\} \right\}$$

be the aforementioned set.

By defining the complementary cofactors to be

$$b_{\{j,k\}} = \det\begin{bmatrix} a_{1j} & a_{1k} \\ a_{2j} & a_{2k} \end{bmatrix}, \qquad c_{\{p,q\}} = \det\begin{bmatrix} a_{3p} & a_{3q} \\ a_{4p} & a_{4q} \end{bmatrix},$$

and the sign of their permutation to be

$$\varepsilon^{\{j,k\},\{p,q\}} = \operatorname{sgn}\begin{pmatrix} 1 & 2 & 3 & 4 \\ j & k & p & q \end{pmatrix}, \qquad \text{where } \{p,q\} = \{1,2,3,4\} \setminus \{j,k\},$$

the determinant of $A$ can be written out as

$$|A| = \sum_{H \in S} \varepsilon^{H, H'} b_{H} c_{H'},$$

where $H'$ is the complementary set to $H$.

In our explicit example this gives us

$$\begin{aligned}
|A| &= b_{\{1,2\}} c_{\{3,4\}} - b_{\{1,3\}} c_{\{2,4\}} + b_{\{1,4\}} c_{\{2,3\}} + b_{\{2,3\}} c_{\{1,4\}} - b_{\{2,4\}} c_{\{1,3\}} + b_{\{3,4\}} c_{\{1,2\}} \\
    &= (-4)(-4) - (-8)(-8) + (-12)(-4) + (-4)(-12) - (-8)(-8) + (-4)(-4) \\
    &= 16 - 64 + 48 + 48 - 64 + 16 = 0.
\end{aligned}$$

As above, it is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.
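
The six products above are easy to accumulate numerically; the following minimal Python sketch (our own illustration) does so for the matrix $A$:

```python
from itertools import combinations

A = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]

total = 0
for j, k in combinations(range(4), 2):            # 0-based column pairs for rows 1 and 2
    p, q = [c for c in range(4) if c not in (j, k)]
    b = A[0][j] * A[1][k] - A[0][k] * A[1][j]     # b_{j+1, k+1}
    c = A[2][p] * A[3][q] - A[2][q] * A[3][p]     # complementary c_{p+1, q+1}
    sign = (-1) ** ((1 + 2) + (j + 1) + (k + 1))  # sign for rows {1, 2}, columns {j+1, k+1}
    total += sign * b * c

print(total)   # 0
```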

General statement

Let $B = [b_{ij}]$ be an n × n matrix, $S$ the set of $k$-element subsets of $\{1, 2, \dots, n\}$, and $H \in S$ one such subset. Then the determinant of $B$ can be expanded along the $k$ rows identified by $H$ as follows:

$$|B| = \sum_{L \in S} \varepsilon^{H,L} b_{H,L} c_{H,L},$$

where $\varepsilon^{H,L}$ is the sign of the permutation determined by $H$ and $L$, equal to $(-1)^{\left(\sum_{h \in H} h\right) + \left(\sum_{l \in L} l\right)}$, $b_{H,L}$ is the square minor of $B$ obtained by deleting from $B$ the rows and columns with indices in $H$ and $L$ respectively, and $c_{H,L}$ (called the complement of $b_{H,L}$) is defined to be $b_{H',L'}$, where $H'$ and $L'$ denote the complements of $H$ and $L$ respectively.

This coincides with the theorem above when $k = 1$. The same thing holds for any fixed $k$ columns.
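
A minimal Python sketch of this general statement (the function name det_by_k_row_expansion is ours; it reuses det and delete_row_and_column from the earlier sketches), expanding along the first k rows:

```python
from itertools import combinations

def det_by_k_row_expansion(B, k):
    """Determinant of B via the generalized Laplace expansion along its
    first k rows, summing over k-element column sets."""
    n = len(B)
    H = list(range(k))                            # 0-based indices of the first k rows
    Hc = list(range(k, n))                        # the remaining rows
    total = 0
    for L in combinations(range(n), k):           # k-element column sets
        Lc = [c for c in range(n) if c not in L]
        minor = [[B[r][c] for c in L] for r in H]    # minor on rows H, columns L
        comp = [[B[r][c] for c in Lc] for r in Hc]   # complementary minor
        sign = (-1) ** (sum(r + 1 for r in H) + sum(c + 1 for c in L))
        total += sign * det(minor) * det(comp)
    return total

# Agrees with the ordinary cofactor expansion on an arbitrary 4 x 4 matrix.
M = [[2, 0, 1, 3],
     [1, 1, 0, 2],
     [0, 2, 1, 1],
     [1, 0, 2, 0]]
assert det_by_k_row_expansion(M, 2) == det(M)
```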

Computational expense

The Laplace expansion is computationally inefficient for high-dimension matrices, with a time complexity in big O notation of $O(n!)$. Alternatively, using a decomposition into triangular matrices as in the LU decomposition can yield determinants with a time complexity of $O(n^3)$.[1]
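
For contrast, a minimal sketch (our own illustration, not taken from the cited reference) of an $O(n^3)$ determinant computation by Gaussian elimination with partial pivoting, which is essentially an LU decomposition:

```python
def det_by_elimination(A):
    """Determinant in O(n^3) time via Gaussian elimination with partial
    pivoting, in contrast to the factorial cost of naive Laplace expansion."""
    n = len(A)
    M = [[float(x) for x in row] for row in A]   # work on a float copy
    d = 1.0
    for col in range(n):
        # choose the largest pivot in this column (partial pivoting)
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0.0:
            return 0.0                           # singular matrix
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            d = -d                               # a row swap flips the sign
        d *= M[col][col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    return d

print(det_by_elimination([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3.0 (up to round-off)
```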

References

  1. Stoer, Bulirsch: Introduction to Numerical Mathematics
  • David Poole: Linear Algebra. A Modern Introduction. Cengage Learning 2005, ISBN 0-534-99845-3, pp. 265–267 (restricted online copy, p. 265, at Google Books)
  • Harvey E. Rose: Linear Algebra. A Pure Mathematical Approach. Springer 2002, ISBN 3-7643-6905-1, pp. 57–60 (restricted online copy, p. 57, at Google Books)
