Frobenius matrix

A Frobenius matrix is a special kind of square matrix from numerical mathematics. A matrix is a Frobenius matrix if it has the following three properties:

• all entries on the main diagonal are ones
• the entries below the main diagonal of at most one column are arbitrary
• every other entry is zero

The following matrix is an example.

${\displaystyle A={\begin{pmatrix}1&0&0&\cdots &0\\0&1&0&\cdots &0\\0&a_{32}&1&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&a_{n2}&0&\cdots &1\end{pmatrix}}}$

Frobenius matrices are invertible. The inverse of a Frobenius matrix is again a Frobenius matrix: it equals the original matrix with the signs of all off-diagonal entries reversed. The inverse of the example above is therefore:

${\displaystyle A^{-1}={\begin{pmatrix}1&0&0&\cdots &0\\0&1&0&\cdots &0\\0&-a_{32}&1&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&-a_{n2}&0&\cdots &1\end{pmatrix}}}$
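This sign-flip rule can be checked numerically. The following sketch builds a 4-by-4 Frobenius matrix in pure Python (the entry values 2 and −3 below the diagonal of the second column are arbitrary illustrative choices) and multiplies it by the sign-flipped matrix to recover the identity:

```python
def frobenius(n, col, entries):
    """Return an n-by-n Frobenius matrix: the identity, plus the given
    arbitrary entries placed below the diagonal of one column (0-indexed)."""
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i, a in enumerate(entries):
        A[col + 1 + i][col] = a
    return A

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# 4x4 example: entries 2 and -3 below the diagonal of column 1 (0-indexed).
A     = frobenius(4, 1, [2.0, -3.0])
A_inv = frobenius(4, 1, [-2.0, 3.0])   # same matrix, off-diagonal signs flipped

I = matmul(A, A_inv)
# The product is the 4x4 identity matrix.
```

The cancellation is exact: each below-diagonal entry of the product is a·1 + 1·(−a) = 0, so no numerical tolerance is needed.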

Frobenius matrices are named after Ferdinand Georg Frobenius.

The term Frobenius matrix may also be used for an alternative matrix form that differs from an identity matrix only in the elements of a single row to the left of that row's diagonal entry (as opposed to the definition above, in which the matrix differs from the identity matrix in a single column below the diagonal). The following 4-by-4 matrix is an example of this alternative form; its third row differs from the identity matrix.

${\displaystyle A={\begin{pmatrix}1&0&0&0\\0&1&0&0\\a_{31}&a_{32}&1&0\\0&0&0&1\end{pmatrix}}}$

An alternative name for this latter form of Frobenius matrices is Gauss transformation matrix, after Carl Friedrich Gauss.[1] They are used in the process of Gaussian elimination to represent the Gaussian transformations.

If a matrix is left-multiplied by a Gauss transformation matrix, a linear combination of the preceding rows is added to the given row of the matrix (in the example shown above, a linear combination of rows 1 and 2 is added to row 3). Multiplication by the inverse matrix subtracts the corresponding linear combination from that row. This corresponds to one of the elementary row operations of Gaussian elimination (the others being swapping two rows and multiplying a row by a nonzero scalar).
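The row-combination effect can be illustrated with a small sketch in pure Python. The coefficient values 5 and −1 in row 3 of the Gauss transformation matrix, and the entries of the matrix B it acts on, are arbitrary illustrative choices:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# Gauss transformation matrix: identity except for row 3 (index 2),
# which holds the coefficients 5 and -1 to the left of its diagonal entry,
# matching the shape of the 4-by-4 example above.
M = [[1,  0, 0, 0],
     [0,  1, 0, 0],
     [5, -1, 1, 0],
     [0,  0, 0, 1]]

# An arbitrary matrix to act on.
B = [[1, 2, 0, 0],
     [3, 4, 0, 0],
     [7, 8, 0, 0],
     [0, 0, 0, 9]]

MB = matmul(M, B)
# Row 3 of M @ B equals 5*(row 1) - 1*(row 2) + (row 3) of B;
# all other rows of B are unchanged.
```

Here row 3 of the product is 5·[1, 2, 0, 0] − [3, 4, 0, 0] + [7, 8, 0, 0] = [9, 14, 0, 0], while rows 1, 2, and 4 pass through untouched, which is exactly the elimination step the text describes.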