# Block matrix

In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices.[1][2]

Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices.[3][2] For example, the 3×4 matrix presented below is divided by horizontal and vertical lines into four blocks: the top-left 2×3 block, the top-right 2×1 block, the bottom-left 1×3 block, and the bottom-right 1×1 block.

${\displaystyle \left[{\begin{array}{ccc|c}a_{11}&a_{12}&a_{13}&b_{1}\\a_{21}&a_{22}&a_{23}&b_{2}\\\hline c_{1}&c_{2}&c_{3}&d\end{array}}\right]}$

Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.

This notion can be made more precise for an ${\displaystyle n}$ by ${\displaystyle m}$ matrix ${\displaystyle M}$ by partitioning ${\displaystyle n}$ into a collection ${\displaystyle {\text{rowgroups}}}$, and then partitioning ${\displaystyle m}$ into a collection ${\displaystyle {\text{colgroups}}}$. The original matrix is then considered as the "total" of these groups, in the sense that the ${\displaystyle (i,j)}$ entry of the original matrix corresponds in a one-to-one way with some ${\displaystyle (s,t)}$ offset entry of some block ${\displaystyle (x,y)}$, where ${\displaystyle x\in {\text{rowgroups}}}$ and ${\displaystyle y\in {\text{colgroups}}}$.[4]

Block matrix algebra arises in general from biproducts in categories of matrices.[5]

## Example

The matrix

${\displaystyle \mathbf {P} ={\begin{bmatrix}1&2&2&7\\1&5&6&2\\3&3&4&5\\3&3&6&7\end{bmatrix}}}$

can be visualized as divided into four blocks, as

${\displaystyle \mathbf {P} =\left[{\begin{array}{cc|cc}1&2&2&7\\1&5&6&2\\\hline 3&3&4&5\\3&3&6&7\end{array}}\right]}$ .

The horizontal and vertical lines have no special mathematical meaning,[6][7] but are a common way to visualize a partition.[6][7] Under this partition, ${\displaystyle P}$ is divided into four 2×2 blocks, as

${\displaystyle \mathbf {P} _{11}={\begin{bmatrix}1&2\\1&5\end{bmatrix}},\quad \mathbf {P} _{12}={\begin{bmatrix}2&7\\6&2\end{bmatrix}},\quad \mathbf {P} _{21}={\begin{bmatrix}3&3\\3&3\end{bmatrix}},\quad \mathbf {P} _{22}={\begin{bmatrix}4&5\\6&7\end{bmatrix}}.}$

The partitioned matrix can then be written as

${\displaystyle \mathbf {P} ={\begin{bmatrix}\mathbf {P} _{11}&\mathbf {P} _{12}\\\mathbf {P} _{21}&\mathbf {P} _{22}\end{bmatrix}}.}$ [8]
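This partition can be reproduced numerically; a minimal NumPy sketch (the slicing indices follow the 2×2 blocking above):

```python
import numpy as np

P = np.array([[1, 2, 2, 7],
              [1, 5, 6, 2],
              [3, 3, 4, 5],
              [3, 3, 6, 7]])

# Extract the four 2x2 blocks by slicing.
P11, P12 = P[:2, :2], P[:2, 2:]
P21, P22 = P[2:, :2], P[2:, 2:]

# np.block reassembles the partitioned matrix from its blocks.
reassembled = np.block([[P11, P12],
                        [P21, P22]])
assert (reassembled == P).all()
```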

## Formal definition

Let ${\displaystyle A\in \mathbb {C} ^{m\times n}}$ . A partitioning of ${\displaystyle A}$  is a representation of ${\displaystyle A}$  in the form

${\displaystyle A={\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1q}\\A_{21}&A_{22}&\cdots &A_{2q}\\\vdots &\vdots &\ddots &\vdots \\A_{p1}&A_{p2}&\cdots &A_{pq}\end{bmatrix}}}$ ,

where ${\displaystyle A_{ij}\in \mathbb {C} ^{m_{i}\times n_{j}}}$  are contiguous submatrices, ${\displaystyle \sum _{i=1}^{p}m_{i}=m}$ , and ${\displaystyle \sum _{j=1}^{q}n_{j}=n}$ .[9] The elements ${\displaystyle A_{ij}}$  of the partition are called blocks.[9]

By this definition, the blocks in any one column must all have the same number of columns.[9] Similarly, the blocks in any one row must have the same number of rows.[9]

### Partitioning methods

A matrix can be partitioned in many ways.[9] For example, a matrix ${\displaystyle A}$  is said to be partitioned by columns if it is written as

${\displaystyle A=(a_{1}\ a_{2}\ \cdots \ a_{n})}$ ,

where ${\displaystyle a_{j}}$  is the ${\displaystyle j}$ th column of ${\displaystyle A}$ .[9] A matrix can also be partitioned by rows:

${\displaystyle A={\begin{bmatrix}a_{1}^{T}\\a_{2}^{T}\\\vdots \\a_{m}^{T}\end{bmatrix}}}$ ,

where ${\displaystyle a_{i}^{T}}$  is the ${\displaystyle i}$ th row of ${\displaystyle A}$ .[9]
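Both partitions correspond to natural slicing operations; for instance, in NumPy (the matrix `A` here is an arbitrary example):

```python
import numpy as np

A = np.arange(12).reshape(3, 4)

# Partition by columns: a_j is the j-th column of A.
cols = [A[:, j] for j in range(A.shape[1])]

# Partition by rows: a_i^T is the i-th row of A.
rows = [A[i, :] for i in range(A.shape[0])]

# Stacking the pieces recovers A in both cases.
assert (np.column_stack(cols) == A).all()
assert (np.vstack(rows) == A).all()
```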

### Common partitions

We often encounter the 2×2 partition[9]

${\displaystyle A={\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}}}$ ,[9]

particularly in the form where ${\displaystyle A_{11}}$  is a scalar:

${\displaystyle A={\begin{bmatrix}a_{11}&a_{12}^{T}\\a_{21}&A_{22}\end{bmatrix}}}$ .[9]

## Block matrix operations

### Transpose

Let

${\displaystyle A={\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1q}\\A_{21}&A_{22}&\cdots &A_{2q}\\\vdots &\vdots &\ddots &\vdots \\A_{p1}&A_{p2}&\cdots &A_{pq}\end{bmatrix}}}$

where ${\displaystyle A_{ij}\in \mathbb {C} ^{k_{i}\times \ell _{j}}}$ . (This matrix ${\displaystyle A}$  will be reused in § Addition and § Multiplication.) Then its transpose is

${\displaystyle A^{T}={\begin{bmatrix}A_{11}^{T}&A_{21}^{T}&\cdots &A_{p1}^{T}\\A_{12}^{T}&A_{22}^{T}&\cdots &A_{p2}^{T}\\\vdots &\vdots &\ddots &\vdots \\A_{1q}^{T}&A_{2q}^{T}&\cdots &A_{pq}^{T}\end{bmatrix}}}$ ,[9][10]

and the same equation holds with the transpose replaced by the conjugate transpose.[9]
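The blockwise transpose formula is easy to verify numerically; a minimal NumPy sketch (the block shapes are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
# A 2x2 block partition with block-row heights (2, 3) and widths (2, 2).
A11, A12 = rng.random((2, 2)), rng.random((2, 2))
A21, A22 = rng.random((3, 2)), rng.random((3, 2))
A = np.block([[A11, A12],
              [A21, A22]])

# Blockwise transpose: swap the block positions AND transpose each block.
AT_blockwise = np.block([[A11.T, A21.T],
                         [A12.T, A22.T]])
assert np.allclose(AT_blockwise, A.T)
```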

#### Block transpose

A special form of matrix transpose can also be defined for block matrices, where the individual blocks are reordered but not transposed. Let ${\displaystyle A=(B_{ij})}$ be a ${\displaystyle k\times l}$ block matrix with ${\displaystyle m\times n}$ blocks ${\displaystyle B_{ij}}$; the block transpose of ${\displaystyle A}$ is the ${\displaystyle l\times k}$ block matrix ${\displaystyle A^{\mathcal {B}}}$ with ${\displaystyle m\times n}$ blocks ${\displaystyle \left(A^{\mathcal {B}}\right)_{ij}=B_{ji}}$.[11] As with the conventional transpose, the block transpose is a linear mapping, so that ${\displaystyle (A+C)^{\mathcal {B}}=A^{\mathcal {B}}+C^{\mathcal {B}}}$.[10] However, in general the property ${\displaystyle (AC)^{\mathcal {B}}=C^{\mathcal {B}}A^{\mathcal {B}}}$ does not hold unless the blocks of ${\displaystyle A}$ and ${\displaystyle C}$ commute.
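The distinction between the ordinary transpose and the block transpose can be illustrated in NumPy; this sketch assumes, as the definition requires, that all blocks share the same shape:

```python
import numpy as np

rng = np.random.default_rng(1)
# A 2x3 block matrix whose blocks all share the same shape (here 2x2),
# as the block transpose requires.
B = [[rng.random((2, 2)) for _ in range(3)] for _ in range(2)]
A = np.block(B)

# Block transpose: the block at (i, j) moves to (j, i) but is NOT transposed.
A_btrans = np.block([[B[i][j] for i in range(2)] for j in range(3)])

# It generally differs from the ordinary transpose, which also
# transposes each individual block.
assert A_btrans.shape == A.T.shape
assert not np.allclose(A_btrans, A.T)
```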

### Addition

Let

${\displaystyle B={\begin{bmatrix}B_{11}&B_{12}&\cdots &B_{1s}\\B_{21}&B_{22}&\cdots &B_{2s}\\\vdots &\vdots &\ddots &\vdots \\B_{r1}&B_{r2}&\cdots &B_{rs}\end{bmatrix}}}$ ,

where ${\displaystyle B_{ij}\in \mathbb {C} ^{m_{i}\times n_{j}}}$ , and let ${\displaystyle A}$  be the matrix defined in § Transpose. (This matrix ${\displaystyle B}$  will be reused in § Multiplication.) If ${\displaystyle p=r}$ , ${\displaystyle q=s}$ , ${\displaystyle k_{i}=m_{i}}$ , and ${\displaystyle \ell _{j}=n_{j}}$ , then

${\displaystyle A+B={\begin{bmatrix}A_{11}+B_{11}&A_{12}+B_{12}&\cdots &A_{1q}+B_{1q}\\A_{21}+B_{21}&A_{22}+B_{22}&\cdots &A_{2q}+B_{2q}\\\vdots &\vdots &\ddots &\vdots \\A_{p1}+B_{p1}&A_{p2}+B_{p2}&\cdots &A_{pq}+B_{pq}\end{bmatrix}}}$ .[9]

### Multiplication

It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however, and requires "conformable partitions"[12] between two matrices ${\displaystyle A}$  and ${\displaystyle B}$  such that all submatrix products that will be used are defined.[13]

Two matrices ${\displaystyle A}$  and ${\displaystyle B}$  are said to be partitioned conformally for the product ${\displaystyle AB}$ , when ${\displaystyle A}$  and ${\displaystyle B}$  are partitioned into submatrices and if the multiplication ${\displaystyle AB}$  is carried out treating the submatrices as if they are scalars, but keeping the order, and when all products and sums of submatrices involved are defined.

— Arak M. Mathai and Hans J. Haubold, Linear Algebra: A Course for Physicists and Engineers[14]

Let ${\displaystyle A}$  be the matrix defined in § Transpose, and let ${\displaystyle B}$  be the matrix defined in § Addition. Then the matrix product

${\displaystyle C=AB}$

can be performed blockwise, yielding ${\displaystyle C}$  as a ${\displaystyle p\times s}$  block matrix. The blocks of the resulting matrix ${\displaystyle C}$  are computed as

${\displaystyle C_{ij}=\sum _{k=1}^{q}A_{ik}B_{kj}.}$ [6]

Or, using the Einstein notation that implicitly sums over repeated indices:

${\displaystyle C_{ij}=A_{ik}B_{kj}.}$

Depicting ${\displaystyle C}$  as a matrix, we have

${\displaystyle C=AB={\begin{bmatrix}\sum _{k=1}^{q}A_{1k}B_{k1}&\sum _{k=1}^{q}A_{1k}B_{k2}&\cdots &\sum _{k=1}^{q}A_{1k}B_{ks}\\\sum _{k=1}^{q}A_{2k}B_{k1}&\sum _{k=1}^{q}A_{2k}B_{k2}&\cdots &\sum _{k=1}^{q}A_{2k}B_{ks}\\\vdots &\vdots &\ddots &\vdots \\\sum _{k=1}^{q}A_{pk}B_{k1}&\sum _{k=1}^{q}A_{pk}B_{k2}&\cdots &\sum _{k=1}^{q}A_{pk}B_{ks}\end{bmatrix}}}$ .[9]
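The blockwise product can be checked against the ordinary product; a NumPy sketch with arbitrarily chosen conformable block sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
row_sizes = [2, 3]        # heights k_i of A's block rows
inner_sizes = [2, 1, 3]   # the shared inner partition (l_t = m_t)
col_sizes = [2, 2]        # widths n_j of B's block columns
p, q, s = len(row_sizes), len(inner_sizes), len(col_sizes)

A = [[rng.random((row_sizes[i], inner_sizes[t])) for t in range(q)]
     for i in range(p)]
B = [[rng.random((inner_sizes[t], col_sizes[j])) for j in range(s)]
     for t in range(q)]

# C_ij = sum_t A_it B_tj, treating the blocks as if they were scalars.
C = [[sum(A[i][t] @ B[t][j] for t in range(q)) for j in range(s)]
     for i in range(p)]

assert np.allclose(np.block(C), np.block(A) @ np.block(B))
```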

### Inversion

If a matrix is partitioned into four blocks, it can be inverted blockwise as follows:

${\displaystyle {P}={\begin{bmatrix}{A}&{B}\\{C}&{D}\end{bmatrix}}^{-1}={\begin{bmatrix}{A}^{-1}+{A}^{-1}{B}\left({D}-{CA}^{-1}{B}\right)^{-1}{CA}^{-1}&-{A}^{-1}{B}\left({D}-{CA}^{-1}{B}\right)^{-1}\\-\left({D}-{CA}^{-1}{B}\right)^{-1}{CA}^{-1}&\left({D}-{CA}^{-1}{B}\right)^{-1}\end{bmatrix}},}$

where A and D are square blocks of arbitrary size, and B and C are conformable with them for partitioning. Furthermore, A and the Schur complement of A in P, ${\displaystyle P/A=D-CA^{-1}B}$ , must be invertible.[15]
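This blockwise inverse can be checked numerically; a small NumPy sketch (the identity shifts are just an illustrative way to keep A and the Schur complement safely invertible):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((3, 3)) + 3 * np.eye(3)
B = rng.random((3, 2))
C = rng.random((2, 3))
D = rng.random((2, 2)) + 3 * np.eye(2)

Ai = np.linalg.inv(A)
S = D - C @ Ai @ B            # Schur complement P/A
Si = np.linalg.inv(S)

# Blockwise inverse from the formula above.
P_inv = np.block([[Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si],
                  [-Si @ C @ Ai,               Si]])

P = np.block([[A, B], [C, D]])
assert np.allclose(P_inv, np.linalg.inv(P))
```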

Equivalently, by permuting the blocks:

${\displaystyle {P}={\begin{bmatrix}{A}&{B}\\{C}&{D}\end{bmatrix}}^{-1}={\begin{bmatrix}\left({A}-{BD}^{-1}{C}\right)^{-1}&-\left({A}-{BD}^{-1}{C}\right)^{-1}{BD}^{-1}\\-{D}^{-1}{C}\left({A}-{BD}^{-1}{C}\right)^{-1}&\quad {D}^{-1}+{D}^{-1}{C}\left({A}-{BD}^{-1}{C}\right)^{-1}{BD}^{-1}\end{bmatrix}}.}$ [16]

Here, D and the Schur complement of D in P, ${\displaystyle P/D=A-BD^{-1}C}$ , must be invertible.

If A and D are both invertible, then:

${\displaystyle {\begin{bmatrix}{A}&{B}\\{C}&{D}\end{bmatrix}}^{-1}={\begin{bmatrix}\left({A}-{B}{D}^{-1}{C}\right)^{-1}&{0}\\{0}&\left({D}-{C}{A}^{-1}{B}\right)^{-1}\end{bmatrix}}{\begin{bmatrix}{I}&-{B}{D}^{-1}\\-{C}{A}^{-1}&{I}\end{bmatrix}}.}$

By the Weinstein–Aronszajn identity, one of the two matrices in the block-diagonal matrix is invertible exactly when the other is.

### Determinant

The familiar formula for the determinant of a ${\displaystyle 2\times 2}$  matrix continues to hold, under appropriate further assumptions, for a matrix composed of four submatrices ${\displaystyle A,B,C,D}$ . The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is

${\displaystyle \det {\begin{pmatrix}A&0\\C&D\end{pmatrix}}=\det(A)\det(D)=\det {\begin{pmatrix}A&B\\0&D\end{pmatrix}}.}$ [16]

Using this formula, we can derive that the characteristic polynomials of ${\displaystyle {\begin{pmatrix}A&0\\C&D\end{pmatrix}}}$  and ${\displaystyle {\begin{pmatrix}A&B\\0&D\end{pmatrix}}}$  are the same and equal to the product of the characteristic polynomials of ${\displaystyle A}$  and ${\displaystyle D}$ .[citation needed] Furthermore, if ${\displaystyle {\begin{pmatrix}A&0\\C&D\end{pmatrix}}}$  or ${\displaystyle {\begin{pmatrix}A&B\\0&D\end{pmatrix}}}$  is diagonalizable, then ${\displaystyle A}$  and ${\displaystyle D}$  are diagonalizable too. The converse is false; simply check ${\displaystyle {\begin{pmatrix}1&1\\0&1\end{pmatrix}}}$ .[citation needed]

If ${\displaystyle A}$  is invertible, one has

${\displaystyle \det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}=\det(A)\det \left(D-CA^{-1}B\right).}$ [16]

Similarly, if ${\displaystyle D}$  is invertible, one has

${\displaystyle \det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}=\det(D)\det \left(A-BD^{-1}C\right).}$ [17][16]
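Both Schur-complement determinant identities can be checked numerically; a NumPy sketch (the identity shifts merely keep A and D invertible):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((3, 3)) + 3 * np.eye(3)
B = rng.random((3, 3))
C = rng.random((3, 3))
D = rng.random((3, 3)) + 3 * np.eye(3)
P = np.block([[A, B], [C, D]])

# det P = det(A) det(D - C A^{-1} B) = det(D) det(A - B D^{-1} C)
lhs = np.linalg.det(P)
assert np.isclose(lhs, np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B))
assert np.isclose(lhs, np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C))
```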

If the blocks are square matrices of the same size, further formulas hold. For example, if ${\displaystyle C}$  and ${\displaystyle D}$  commute (i.e., ${\displaystyle CD=DC}$ ), then

${\displaystyle \det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}=\det(AD-BC).}$ [18]
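A quick numerical check of this identity, choosing C as a polynomial in D so that the commutativity hypothesis holds:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.random((3, 3))
B = rng.random((3, 3))
D = rng.random((3, 3))
C = 2 * D + np.eye(3)          # a polynomial in D, so C and D commute
assert np.allclose(C @ D, D @ C)

P = np.block([[A, B], [C, D]])
# det [[A, B], [C, D]] = det(AD - BC) when CD = DC.
assert np.isclose(np.linalg.det(P), np.linalg.det(A @ D - B @ C))
```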

This formula has been generalized to matrices composed of more than ${\displaystyle 2\times 2}$  blocks, again under appropriate commutativity conditions among the individual blocks.[19]

For ${\displaystyle A=D}$  and ${\displaystyle B=C}$ , the following formula holds (even if ${\displaystyle A}$  and ${\displaystyle B}$  do not commute):

${\displaystyle \det {\begin{pmatrix}A&B\\B&A\end{pmatrix}}=\det(A-B)\det(A+B).}$ [16]
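This formula can likewise be verified numerically without any commutativity assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((3, 3))
B = rng.random((3, 3))
P = np.block([[A, B], [B, A]])

# det [[A, B], [B, A]] = det(A - B) det(A + B), even when A and B
# do not commute.
assert np.isclose(np.linalg.det(P),
                  np.linalg.det(A - B) * np.linalg.det(A + B))
```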

## Special types of block matrices

### Direct sums and block diagonal matrices

#### Direct sum

For arbitrary matrices A (of size m × n) and B (of size p × q), the direct sum of A and B, denoted by A ${\displaystyle \oplus }$  B, is defined as

${\displaystyle {A}\oplus {B}={\begin{bmatrix}a_{11}&\cdots &a_{1n}&0&\cdots &0\\\vdots &\ddots &\vdots &\vdots &\ddots &\vdots \\a_{m1}&\cdots &a_{mn}&0&\cdots &0\\0&\cdots &0&b_{11}&\cdots &b_{1q}\\\vdots &\ddots &\vdots &\vdots &\ddots &\vdots \\0&\cdots &0&b_{p1}&\cdots &b_{pq}\end{bmatrix}}.}$ [10]

For instance,

${\displaystyle {\begin{bmatrix}1&3&2\\2&3&1\end{bmatrix}}\oplus {\begin{bmatrix}1&6\\0&1\end{bmatrix}}={\begin{bmatrix}1&3&2&0&0\\2&3&1&0&0\\0&0&0&1&6\\0&0&0&0&1\end{bmatrix}}.}$

This operation generalizes naturally to arrays of arbitrary dimension (provided that A and B have the same number of dimensions).

Note that any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices.
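A direct sum can be assembled with `np.block` and zero blocks; a minimal sketch (the helper name `direct_sum` is mine) reproducing the example above:

```python
import numpy as np

def direct_sum(A, B):
    """Direct sum A ⊕ B: A and B on the diagonal, zero blocks elsewhere."""
    m, n = A.shape
    p, q = B.shape
    return np.block([[A, np.zeros((m, q))],
                     [np.zeros((p, n)), B]])

A = np.array([[1, 3, 2],
              [2, 3, 1]])
B = np.array([[1, 6],
              [0, 1]])
S = direct_sum(A, B)
```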

#### Block diagonal matrices

A block diagonal matrix is a block matrix that is a square matrix such that the main-diagonal blocks are square matrices and all off-diagonal blocks are zero matrices.[16] That is, a block diagonal matrix A has the form

${\displaystyle {A}={\begin{bmatrix}{A}_{1}&{0}&\cdots &{0}\\{0}&{A}_{2}&\cdots &{0}\\\vdots &\vdots &\ddots &\vdots \\{0}&{0}&\cdots &{A}_{n}\end{bmatrix}}}$

where Ak is a square matrix for all k = 1, ..., n. In other words, matrix A is the direct sum of A1, ..., An.[16] It can also be indicated as A1 ⊕ A2 ⊕ ... ⊕ An[10] or diag(A1, A2, ..., An)[10] (the latter being the same formalism used for a diagonal matrix). Any square matrix can trivially be considered a block diagonal matrix with only one block.

For the determinant and trace, the following properties hold:

${\displaystyle \det {A}=\det {A}_{1}\times \cdots \times \det {A}_{n},}$ [20][21] and
${\displaystyle \operatorname {tr} {A}=\operatorname {tr} {A}_{1}+\cdots +\operatorname {tr} {A}_{n}.}$ [16][21]

A block diagonal matrix is invertible if and only if each of its main-diagonal blocks is invertible, and in this case its inverse is another block diagonal matrix, given by

${\displaystyle {\begin{bmatrix}{A}_{1}&{0}&\cdots &{0}\\{0}&{A}_{2}&\cdots &{0}\\\vdots &\vdots &\ddots &\vdots \\{0}&{0}&\cdots &{A}_{n}\end{bmatrix}}^{-1}={\begin{bmatrix}{A}_{1}^{-1}&{0}&\cdots &{0}\\{0}&{A}_{2}^{-1}&\cdots &{0}\\\vdots &\vdots &\ddots &\vdots \\{0}&{0}&\cdots &{A}_{n}^{-1}\end{bmatrix}}.}$ [22]
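A numerical check of this blockwise inverse (the helper `block_diagonal` and the diagonal shift are illustrative choices that keep each block invertible):

```python
import numpy as np

def block_diagonal(mats):
    """Assemble a block diagonal matrix from square blocks."""
    n = sum(m.shape[0] for m in mats)
    out = np.zeros((n, n))
    i = 0
    for m in mats:
        k = m.shape[0]
        out[i:i + k, i:i + k] = m
        i += k
    return out

rng = np.random.default_rng(7)
blocks = [rng.random((k, k)) + k * np.eye(k) for k in (2, 3, 4)]

A = block_diagonal(blocks)
# Inverting blockwise agrees with inverting the whole matrix.
A_inv = block_diagonal([np.linalg.inv(m) for m in blocks])
assert np.allclose(A_inv, np.linalg.inv(A))
```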

The eigenvalues[23] and eigenvectors of ${\displaystyle {A}}$  are simply the combined eigenvalues and eigenvectors of the blocks ${\displaystyle {A}_{k}}$ .[21]

### Block tridiagonal matrices

A block tridiagonal matrix is another special block matrix. Like a block diagonal matrix, it is a square matrix, with square matrices (blocks) on the lower diagonal, main diagonal, and upper diagonal, and all other blocks being zero matrices. It is essentially a tridiagonal matrix, but with submatrices in place of scalars. A block tridiagonal matrix ${\displaystyle A}$  has the form

${\displaystyle {A}={\begin{bmatrix}{B}_{1}&{C}_{1}&&&\cdots &&{0}\\{A}_{2}&{B}_{2}&{C}_{2}&&&&\\&\ddots &\ddots &\ddots &&&\vdots \\&&{A}_{k}&{B}_{k}&{C}_{k}&&\\\vdots &&&\ddots &\ddots &\ddots &\\&&&&{A}_{n-1}&{B}_{n-1}&{C}_{n-1}\\{0}&&\cdots &&&{A}_{n}&{B}_{n}\end{bmatrix}}}$

where ${\displaystyle {A}_{k}}$ , ${\displaystyle {B}_{k}}$  and ${\displaystyle {C}_{k}}$  are square sub-matrices of the lower, main and upper diagonal respectively.[24][25]

Block tridiagonal matrices are often encountered in numerical solutions of engineering problems (e.g., computational fluid dynamics). Optimized numerical methods for LU factorization are available,[26] which yield efficient solution algorithms for equation systems whose coefficient matrix is block tridiagonal. The Thomas algorithm, used for the efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to block tridiagonal matrices (see also Block LU decomposition).
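The block generalization of the Thomas algorithm can be sketched in NumPy; this is a minimal illustration (function and variable names are my own), not an optimized solver, and it assumes the pivot blocks encountered during elimination remain invertible:

```python
import numpy as np

def block_thomas(A_sub, B_diag, C_sup, d):
    """Solve M x = d, where M is block tridiagonal with diagonal blocks
    B_1..B_n, sub-diagonal blocks A_2..A_n, and super-diagonal blocks
    C_1..C_{n-1}.  Assumes every pivot block stays invertible."""
    n = len(B_diag)
    Bp = [b.astype(float).copy() for b in B_diag]
    dp = [v.astype(float).copy() for v in d]
    # Forward elimination of the sub-diagonal blocks.
    for i in range(1, n):
        W = A_sub[i - 1] @ np.linalg.inv(Bp[i - 1])
        Bp[i] = Bp[i] - W @ C_sup[i - 1]
        dp[i] = dp[i] - W @ dp[i - 1]
    # Back substitution.
    x = [None] * n
    x[n - 1] = np.linalg.solve(Bp[n - 1], dp[n - 1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Bp[i], dp[i] - C_sup[i] @ x[i + 1])
    return np.concatenate(x)

# Build a small block tridiagonal system and check against a dense solve.
rng = np.random.default_rng(0)
k, n = 2, 4
B_diag = [rng.random((k, k)) + 4 * np.eye(k) for _ in range(n)]
A_sub = [rng.random((k, k)) for _ in range(n - 1)]
C_sup = [rng.random((k, k)) for _ in range(n - 1)]
d = [rng.random(k) for _ in range(n)]

M = np.zeros((n * k, n * k))
for i in range(n):
    M[i * k:(i + 1) * k, i * k:(i + 1) * k] = B_diag[i]
    if i < n - 1:
        M[(i + 1) * k:(i + 2) * k, i * k:(i + 1) * k] = A_sub[i]
        M[i * k:(i + 1) * k, (i + 1) * k:(i + 2) * k] = C_sup[i]

x = block_thomas(A_sub, B_diag, C_sup, d)
assert np.allclose(M @ x, np.concatenate(d))
```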

### Block triangular matrices

#### Upper block triangular

A matrix ${\displaystyle A}$  is upper block triangular (or block upper triangular[27]) if

${\displaystyle A={\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1k}\\0&A_{22}&\cdots &A_{2k}\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &A_{kk}\end{bmatrix}}}$ ,

where ${\displaystyle A_{ij}\in \mathbb {F} ^{n_{i}\times n_{j}}}$  for all ${\displaystyle i,j=1,\ldots ,k}$ .[23][27]

#### Lower block triangular

A matrix ${\displaystyle A}$  is lower block triangular if

${\displaystyle A={\begin{bmatrix}A_{11}&0&\cdots &0\\A_{21}&A_{22}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\A_{k1}&A_{k2}&\cdots &A_{kk}\end{bmatrix}}}$ ,

where ${\displaystyle A_{ij}\in \mathbb {F} ^{n_{i}\times n_{j}}}$  for all ${\displaystyle i,j=1,\ldots ,k}$ .[23]

### Block Toeplitz matrices

A block Toeplitz matrix is another special block matrix, containing blocks that are repeated down the diagonals of the matrix, just as a Toeplitz matrix has elements repeated down its diagonals.

A matrix ${\displaystyle A}$  is block Toeplitz if ${\displaystyle A_{(i,j)}=A_{(k,l)}}$  for all ${\displaystyle k-i=l-j}$ , that is,

${\displaystyle A={\begin{bmatrix}A_{1}&A_{2}&A_{3}&\cdots \\A_{4}&A_{1}&A_{2}&\cdots \\A_{5}&A_{4}&A_{1}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{bmatrix}}}$ ,

where ${\displaystyle A_{i}\in \mathbb {F} ^{n_{i}\times m_{i}}}$ .[23]

### Block Hankel matrices

A matrix ${\displaystyle A}$  is block Hankel if ${\displaystyle A_{(i,j)}=A_{(k,l)}}$  for all ${\displaystyle i+j=k+l}$ , that is,

${\displaystyle A={\begin{bmatrix}A_{1}&A_{2}&A_{3}&\cdots \\A_{2}&A_{3}&A_{4}&\cdots \\A_{3}&A_{4}&A_{5}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{bmatrix}}}$ ,

where ${\displaystyle A_{i}\in \mathbb {F} ^{n_{i}\times m_{i}}}$ .[23]

## See also

• Kronecker product (matrix direct product resulting in a block matrix)
• Jordan normal form (canonical form of a linear operator on a finite-dimensional complex vector space)
• Strassen algorithm (algorithm for matrix multiplication that is faster than the conventional matrix multiplication algorithm)

## Notes

1. Eves, Howard (1980). Elementary Matrix Theory (reprint ed.). New York: Dover. p. 37. ISBN 0-486-63946-0. Retrieved 24 April 2013. We shall find that it is sometimes convenient to subdivide a matrix into rectangular blocks of elements. This leads us to consider so-called partitioned, or block, matrices.
2. Dobrushkin, Vladimir. "Partition Matrices". Linear Algebra with Mathematica. Retrieved 2024-03-24.
3. Anton, Howard (1994). Elementary Linear Algebra (7th ed.). New York: John Wiley. p. 30. ISBN 0-471-58742-7. A matrix can be subdivided or partitioned into smaller matrices by inserting horizontal and vertical rules between selected rows and columns.
4. Indhumathi, D.; Sarala, S. (2014-05-16). "Fragment Analysis and Test Case Generation using F-Measure for Adaptive Random Testing and Partitioned Block based Adaptive Random Testing" (PDF). International Journal of Computer Applications. 93 (6): 13. doi:10.5120/16218-5662.
5. Macedo, H.D.; Oliveira, J.N. (2013). "Typing linear algebra: A biproduct-oriented approach". Science of Computer Programming. 78 (11): 2160–2191. arXiv:1312.4818. doi:10.1016/j.scico.2012.07.012.
6. Johnston, Nathaniel (2021). Introduction to linear and matrix algebra. Cham, Switzerland: Springer Nature. pp. 30, 425. ISBN 978-3-030-52811-9.
7. Johnston, Nathaniel (2021). Advanced linear and matrix algebra. Cham, Switzerland: Springer Nature. p. 298. ISBN 978-3-030-52814-0.
8. Jeffrey, Alan (2010). Matrix operations for engineers and scientists: an essential guide in linear algebra. Dordrecht; New York: Springer. p. 54. ISBN 978-90-481-9273-1. OCLC 639165077.
9. Stewart, Gilbert W. (1998). Matrix algorithms. 1: Basic decompositions. Philadelphia, PA: Soc. for Industrial and Applied Mathematics. pp. 18–20. ISBN 978-0-89871-414-2.
10. Gentle, James E. (2007). Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer Texts in Statistics. New York, NY: Springer. pp. 47, 487. ISBN 978-0-387-70873-7.
11. Mackey, D. Steven (2006). Structured linearizations for matrix polynomials (PDF) (Thesis). University of Manchester. ISSN 1749-9097. OCLC 930686781.
12. Eves, Howard (1980). Elementary Matrix Theory (reprint ed.). New York: Dover. p. 37. ISBN 0-486-63946-0. Retrieved 24 April 2013. A partitioning as in Theorem 1.9.4 is called a conformable partition of A and B.
13. Anton, Howard (1994). Elementary Linear Algebra (7th ed.). New York: John Wiley. p. 36. ISBN 0-471-58742-7. ...provided the sizes of the submatrices of A and B are such that the indicated operations can be performed.
14. Mathai, Arakaparampil M.; Haubold, Hans J. (2017). Linear Algebra: a course for physicists and engineers. De Gruyter textbook. Berlin, Boston: De Gruyter. p. 162. ISBN 978-3-11-056259-0.
15. Bernstein, Dennis (2005). Matrix Mathematics. Princeton University Press. p. 44. ISBN 0-691-11802-7.
16. Abadir, Karim M.; Magnus, Jan R. (2005). Matrix Algebra. Cambridge University Press. pp. 97, 100, 106, 111, 114, 118. ISBN 9781139443647.
17. Taboga, Marco (2021). "Determinant of a block matrix", Lectures on matrix algebra.
18. Silvester, J. R. (2000). "Determinants of Block Matrices" (PDF). Math. Gaz. 84 (501): 460–467. doi:10.2307/3620776. JSTOR 3620776. Archived from the original (PDF) on 2015-03-18. Retrieved 2021-06-25.
19. Sothanaphan, Nat (January 2017). "Determinants of block matrices with noncommuting blocks". Linear Algebra and Its Applications. 512: 202–218. arXiv:1805.06027. doi:10.1016/j.laa.2016.10.004. S2CID 119272194.
20. Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000). Numerical mathematics. Texts in applied mathematics. New York: Springer. pp. 10, 13. ISBN 978-0-387-98959-4.
21. George, Raju K.; Ajayakumar, Abhijith (2024). "A Course in Linear Algebra". University Texts in the Mathematical Sciences: 35, 407. doi:10.1007/978-981-99-8680-4. ISSN 2731-9318.
22. Prince, Simon J. D. (2012). Computer vision: models, learning, and inference. New York: Cambridge University Press. p. 531. ISBN 978-1-107-01179-3.
23. Bernstein, Dennis S. (2009). Matrix mathematics: theory, facts, and formulas (2 ed.). Princeton, NJ: Princeton University Press. pp. 168, 298. ISBN 978-0-691-14039-1.
24. Dietl, Guido K. E. (2007). Linear estimation and detection in Krylov subspaces. Foundations in signal processing, communications and networking. Berlin; New York: Springer. pp. 85, 87. ISBN 978-3-540-68478-7. OCLC 85898525.
25. Horn, Roger A.; Johnson, Charles R. (2017). Matrix analysis (Second edition, corrected reprint ed.). New York, NY: Cambridge University Press. p. 36. ISBN 978-0-521-83940-2.
26. Datta, Biswa Nath (2010). Numerical linear algebra and applications (2 ed.). Philadelphia, Pa: SIAM. p. 168. ISBN 978-0-89871-685-6.
27. Stewart, Gilbert W. (2001). Matrix algorithms. 2: Eigensystems. Philadelphia, Pa: Soc. for Industrial and Applied Mathematics. p. 5. ISBN 978-0-89871-503-3.