In mathematics, Dodgson condensation or the method of contractants is a method of computing the determinants of square matrices. It is named for its inventor, Charles Lutwidge Dodgson (better known by his pseudonym Lewis Carroll, the popular author). The method, in the case of an n × n matrix, is to construct an (n − 1) × (n − 1) matrix, an (n − 2) × (n − 2) matrix, and so on, finishing with a 1 × 1 matrix, whose single entry is the determinant of the original matrix.

General method

This algorithm can be described in the following four steps:

  1. Let A be the given n × n matrix. Arrange A so that no zeros occur in its interior, where the interior consists of all entries $a_{i,j}$ with $1 < i < n$ and $1 < j < n$. One can do this using any operation that does not change the value of the determinant, such as adding a multiple of one row to another.
  2. Create an (n − 1) × (n − 1) matrix B consisting of the determinants of every 2 × 2 submatrix of adjacent entries of A. Explicitly, we write $b_{i,j} = \begin{vmatrix} a_{i,j} & a_{i,j+1} \\ a_{i+1,j} & a_{i+1,j+1} \end{vmatrix} = a_{i,j}\,a_{i+1,j+1} - a_{i,j+1}\,a_{i+1,j}$.
  3. Using this (n − 1) × (n − 1) matrix, perform step 2 to obtain an (n − 2) × (n − 2) matrix C. Divide each term in C by the corresponding term in the interior of A, so that $c_{i,j}$ is replaced by $c_{i,j}/a_{i+1,j+1}$.
  4. Let A = B, and B = C. Repeat step 3 as necessary until the 1 × 1 matrix is found; its only entry is the determinant.
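The four steps above can be sketched in Python. This is a minimal sketch for integer matrices, assuming no zero is ever hit in an interior entry that must be divided by; the rearrangement of step 1 is left to the caller, and the function name `dodgson_det` is illustrative, not standard.

```python
def dodgson_det(matrix):
    """Determinant by Dodgson condensation, for integer matrices.

    Assumes no division by zero is ever encountered (step 1's
    rearrangement is not automated here). The divisions are exact
    because every stage consists of integer connected minors of the
    original matrix.
    """
    a = [row[:] for row in matrix]
    n = len(a)
    if n == 1:
        return a[0][0]
    # Step 2: matrix of all 2 x 2 minors of adjacent entries (no division).
    b = [[a[i][j] * a[i + 1][j + 1] - a[i][j + 1] * a[i + 1][j]
          for j in range(n - 1)] for i in range(n - 1)]
    # Steps 3-4: condense again, dividing by the interior of the
    # previous stage, until a 1 x 1 matrix remains.
    while len(b) > 1:
        m = len(b)
        c = [[(b[i][j] * b[i + 1][j + 1] - b[i][j + 1] * b[i + 1][j])
              // a[i + 1][j + 1]
              for j in range(m - 1)] for i in range(m - 1)]
        a, b = b, c
    return b[0][0]

print(dodgson_det([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 10]]))   # -3
```

Floor division (`//`) is safe here because the theory guarantees each quotient is an exact integer; for non-integer matrices one would use true division instead.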

Examples

Without zeros

One wishes to find

$$\begin{vmatrix} -2 & -1 & -1 & -4 \\ -1 & -2 & -1 & -6 \\ -1 & -1 & 2 & 4 \\ 2 & 1 & -3 & -8 \end{vmatrix}.$$
All of the interior elements are non-zero, so there is no need to re-arrange the matrix.

We make a matrix of its 2 × 2 submatrices.

$$\begin{pmatrix}
\begin{vmatrix} -2 & -1 \\ -1 & -2 \end{vmatrix} & \begin{vmatrix} -1 & -1 \\ -2 & -1 \end{vmatrix} & \begin{vmatrix} -1 & -4 \\ -1 & -6 \end{vmatrix} \\[6pt]
\begin{vmatrix} -1 & -2 \\ -1 & -1 \end{vmatrix} & \begin{vmatrix} -2 & -1 \\ -1 & 2 \end{vmatrix} & \begin{vmatrix} -1 & -6 \\ 2 & 4 \end{vmatrix} \\[6pt]
\begin{vmatrix} -1 & -1 \\ 2 & 1 \end{vmatrix} & \begin{vmatrix} -1 & 2 \\ 1 & -3 \end{vmatrix} & \begin{vmatrix} 2 & 4 \\ -3 & -8 \end{vmatrix}
\end{pmatrix} = \begin{pmatrix} 3 & -1 & 2 \\ -1 & -5 & 8 \\ 1 & 1 & -4 \end{pmatrix}$$
We then find another matrix of determinants:

$$\begin{pmatrix}
\begin{vmatrix} 3 & -1 \\ -1 & -5 \end{vmatrix} & \begin{vmatrix} -1 & 2 \\ -5 & 8 \end{vmatrix} \\[6pt]
\begin{vmatrix} -1 & -5 \\ 1 & 1 \end{vmatrix} & \begin{vmatrix} -5 & 8 \\ 1 & -4 \end{vmatrix}
\end{pmatrix} = \begin{pmatrix} -16 & 2 \\ 4 & 12 \end{pmatrix}$$
We must then divide each element by the corresponding element of our original matrix. The interior of the original matrix is $\begin{pmatrix} -2 & -1 \\ -1 & 2 \end{pmatrix}$, so after dividing we get $\begin{pmatrix} 8 & -2 \\ -4 & 6 \end{pmatrix}$. The process must be repeated to arrive at a 1 × 1 matrix. Condensing once more gives $8 \cdot 6 - (-2)(-4) = 40$; dividing by the interior of the 3 × 3 matrix, which is just −5, gives $40/(-5) = -8$, and −8 is indeed the determinant of the original matrix.
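The stages of this computation can be replicated in Python. A short sketch, using the 4 × 4 matrix that this example condenses to −8 (the helper name `condense` is ad hoc):

```python
def condense(m):
    """One condensation step: all 2 x 2 minors of adjacent entries."""
    k = len(m)
    return [[m[i][j] * m[i + 1][j + 1] - m[i][j + 1] * m[i + 1][j]
             for j in range(k - 1)] for i in range(k - 1)]

A = [[-2, -1, -1, -4],
     [-1, -2, -1, -6],
     [-1, -1,  2,  4],
     [ 2,  1, -3, -8]]

B = condense(A)                 # [[3, -1, 2], [-1, -5, 8], [1, 1, -4]]
C = condense(B)                 # [[-16, 2], [4, 12]]
# Divide elementwise by the interior of A (the divisions are exact).
D = [[C[i][j] // A[i + 1][j + 1] for j in range(2)] for i in range(2)]
E = condense(D)                 # [[40]]
det = E[0][0] // B[1][1]        # divide by the interior of B
print(det)                      # -8
```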

With zeros

Simply writing out the matrices:


Here we run into trouble: if we continue the process, we will eventually be dividing by 0. We can perform four row exchanges on the initial matrix (an even number of exchanges, so the determinant is preserved) and repeat the process, with most of the determinants precalculated:


Hence, we arrive at a determinant of 36.
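The difficulty and its row-exchange remedy can be illustrated on a small hypothetical matrix (not the 5 × 5 of this example). In this sketch a single exchange is used, which flips the sign of the determinant, so the result is negated; the article's four exchanges are an even number and need no sign correction:

```python
def condensation_det(a):
    """Dodgson condensation for a 3 x 3 matrix: one condensation step,
    then a division by the single interior entry a[1][1]."""
    b = [[a[i][j] * a[i + 1][j + 1] - a[i][j + 1] * a[i + 1][j]
          for j in range(2)] for i in range(2)]
    return (b[0][0] * b[1][1] - b[0][1] * b[1][0]) // a[1][1]

# A hypothetical matrix with a zero in its interior:
A = [[1, 2, 3],
     [4, 0, 4],
     [2, 1, 5]]
# condensation_det(A) would divide by A[1][1] == 0.

# Exchanging the first two rows removes the zero but negates the
# determinant, so we flip the sign of the result.
B = [A[1], A[0], A[2]]
print(-condensation_det(B))   # -16, the determinant of A
```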

Desnanot–Jacobi identity and proof of correctness of the condensation algorithm

The proof that the condensation method computes the determinant of the matrix if no divisions by zero are encountered is based on an identity known as the Desnanot–Jacobi identity (1841) or, more generally, the Sylvester determinant identity (1851).[1]

Let $M = (m_{i,j})$ be a $k \times k$ square matrix, and for each $1 \le i, j \le k$, denote by $M_i^j$ the matrix that results from $M$ by deleting the $i$-th row and the $j$-th column. Similarly, for $1 \le i, p \le k$ and $1 \le j, q \le k$, denote by $M_{i,p}^{j,q}$ the matrix that results from $M$ by deleting the $i$-th and $p$-th rows and the $j$-th and $q$-th columns.

Desnanot–Jacobi identity

$$\det(M) \det(M_{1,k}^{1,k}) = \det(M_1^1) \det(M_k^k) - \det(M_1^k) \det(M_k^1).$$
Proof of the correctness of Dodgson condensation

Rewrite the identity as

$$\det(M) = \frac{\det(M_1^1) \det(M_k^k) - \det(M_1^k) \det(M_k^1)}{\det(M_{1,k}^{1,k})}.$$
Now note that, by induction, when applying the Dodgson condensation procedure to a square matrix $A$ of order $n$, the matrix in the $k$-th stage of the computation (where the first stage $k = 1$ corresponds to the matrix $A$ itself) consists of all the connected minors of order $k$ of $A$, where a connected minor is the determinant of a connected $k \times k$ sub-block of adjacent entries of $A$. The induction step is exactly the rewritten identity, applied to each connected sub-block: the numerator condenses the connected minors of order $k$, and the denominator is the corresponding connected minor of order $k - 1$, which is the division prescribed in step 3 of the algorithm. In particular, in the last stage $k = n$, one gets a matrix containing a single element equal to the unique connected minor of order $n$, namely the determinant of $A$.
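This characterization can be checked directly by brute force. A Python sketch, reusing the 4 × 4 matrix of the first example (the helper names `minor_det` and `connected_minors` are illustrative):

```python
def minor_det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * minor_det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def connected_minors(a, k):
    """All determinants of k x k sub-blocks of adjacent entries of a."""
    n = len(a)
    return [[minor_det([row[j:j + k] for row in a[i:i + k]])
             for j in range(n - k + 1)] for i in range(n - k + 1)]

A = [[-2, -1, -1, -4],
     [-1, -2, -1, -6],
     [-1, -1,  2,  4],
     [ 2,  1, -3, -8]]

# Each stage of the condensation consists of the connected minors
# of the corresponding order:
print(connected_minors(A, 2))   # the 3 x 3 second-stage matrix
print(connected_minors(A, 3))   # [[8, -2], [-4, 6]]
print(connected_minors(A, 4))   # [[-8]]
```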

Proof of the Desnanot–Jacobi identity

We follow the treatment in Bressoud's book; for an alternative combinatorial proof, see the paper by Zeilberger. Denote by $a_{i,j} = (-1)^{i+j} \det(M_j^i)$ the $(i,j)$-th entry of the adjugate matrix of $M$ (up to sign, a minor of $M$), and define a $k \times k$ matrix $M'$ by

$$M' = \begin{pmatrix}
a_{1,1} & 0 & 0 & \cdots & 0 & a_{1,k} \\
a_{2,1} & 1 & 0 & \cdots & 0 & a_{2,k} \\
a_{3,1} & 0 & 1 & \cdots & 0 & a_{3,k} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
a_{k-1,1} & 0 & 0 & \cdots & 1 & a_{k-1,k} \\
a_{k,1} & 0 & 0 & \cdots & 0 & a_{k,k}
\end{pmatrix}$$
(Note that the first and last columns of $M'$ are equal to those of the adjugate matrix of $M$.) The identity is now obtained by computing $\det(M M')$ in two ways. First, we can directly compute the matrix product $M M'$ (using simple properties of the adjugate matrix, or alternatively using the formula for the expansion of a determinant in terms of a row or a column) to arrive at

$$M M' = \begin{pmatrix}
\det(M) & m_{1,2} & \cdots & m_{1,k-1} & 0 \\
0 & m_{2,2} & \cdots & m_{2,k-1} & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & m_{k-1,2} & \cdots & m_{k-1,k-1} & 0 \\
0 & m_{k,2} & \cdots & m_{k,k-1} & \det(M)
\end{pmatrix}$$
where $m_{i,j}$ denotes the $(i,j)$-th entry of $M$. Expanding along the first and last columns, the determinant of this matrix is $\det(M)^2 \det(M_{1,k}^{1,k})$.
Second, this is equal to the product of the determinants, $\det(M) \det(M')$. But clearly

$$\det(M') = a_{1,1} a_{k,k} - a_{1,k} a_{k,1} = \det(M_1^1) \det(M_k^k) - \det(M_1^k) \det(M_k^1),$$

so the identity follows from equating the two expressions obtained for $\det(M M')$ and dividing out by $\det(M)$ (this is allowed if one thinks of the identities as polynomial identities over the ring of polynomials in the $k^2$ indeterminate variables $m_{i,j}$).
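The identity can be sanity-checked numerically. A small Python sketch on an arbitrary 3 × 3 integer matrix (the helper names `det` and `delete` are ad hoc, and `k` is the 0-based index of the last row and column):

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def delete(m, rows, cols):
    """Copy of m with the given (0-based) rows and columns removed."""
    return [[x for j, x in enumerate(row) if j not in cols]
            for i, row in enumerate(m) if i not in rows]

M = [[2, 1, 3],
     [4, 5, 6],
     [7, 8, 10]]
k = len(M) - 1   # index of the last row/column

# det(M) det(M_{1,k}^{1,k}) = det(M_1^1) det(M_k^k) - det(M_1^k) det(M_k^1)
lhs = det(M) * det(delete(M, {0, k}, {0, k}))
rhs = (det(delete(M, {0}, {0})) * det(delete(M, {k}, {k}))
       - det(delete(M, {0}, {k})) * det(delete(M, {k}, {0})))
assert lhs == rhs   # both sides equal -15 for this matrix
```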

Notes

  1. ^ Sylvester, James Joseph (1851). "On the relation between the minor determinants of linearly equivalent quadratic functions". Philosophical Magazine. 1: 295–305.
    Cited in Akritas, A. G.; Akritas, E. K.; Malaschonok, G. I. (1996). "Various proofs of Sylvester's (determinant) identity". Mathematics and Computers in Simulation. 42 (4–6): 585. doi:10.1016/S0378-4754(96)00035-3.
