Archive 1

Text that was removed

The following text may contain some useful additional information, but needs to be reworked. It might also be better placed somewhere else.

Decomposition theorems for general matrices

The decomposition theorem is a version of the spectral theorem in the particular case of matrices. This theorem is usually introduced in terms of coordinate transformations. If U is an invertible matrix, it can be seen as a transformation from one coordinate system to another, with the columns of U being the components of the new basis vectors within the old basis set. In this new system the coordinates of a vector v are labeled v′. The latter are obtained from the coordinates v in the original coordinate system by the relation v′ = U^(-1)v and, the other way around, we have v = Uv′. Applying successively v′ = U^(-1)v, w′ = U^(-1)w and v = Uv′ to the relation w = Av defining the matrix multiplication provides w′ = A′v′ with A′ = U^(-1)AU, the representation of A in the new basis. In this situation, the matrices A and A′ are said to be similar.
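A quick NumPy sketch of this change of basis (the matrices below are made up purely for illustration):

  import numpy as np

  A = np.array([[2.0, 1.0],
                [0.0, 3.0]])
  U = np.array([[1.0, 1.0],            # invertible: its columns are the new basis vectors
                [1.0, 2.0]])

  A_prime = np.linalg.inv(U) @ A @ U   # representation of A in the new basis

  # similar matrices have the same spectrum
  print(np.sort(np.linalg.eigvals(A)))         # roughly [2., 3.]
  print(np.sort(np.linalg.eigvals(A_prime)))   # roughly [2., 3.]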

The decomposition theorem states that, if one chooses as columns of U n linearly independent eigenvectors of A, the new matrix U^(-1)AU is diagonal and its diagonal elements are the eigenvalues of A. If this is possible, the matrix A is diagonalizable. An example of a non-diagonalizable matrix is given by the matrix A above. There are several generalizations of this decomposition which can cope with the non-diagonalizable case, suited for different purposes:

  • the Schur triangular form states that any square matrix is unitarily equivalent to an upper triangular one (a numerical sketch of this and of the singular value decomposition follows this list);
  • the singular value decomposition, A = UΣV*, where Σ is diagonal and U and V are unitary matrices. The diagonal entries of Σ are nonnegative; they are called the singular values of A. This can be done for non-square matrices as well;
  • the Jordan normal form, A = UJU^(-1), where J is not diagonal but block-diagonal. The number and the sizes of the Jordan blocks are dictated by the geometric and algebraic multiplicities of the eigenvalues. The Jordan decomposition is a fundamental result: a square matrix is described completely, up to similarity, by its eigenvalues together with the sizes of the corresponding Jordan blocks. This shows mathematically the important role played by eigenvalues in the study of matrices;
  • as an immediate consequence of the Jordan decomposition, any matrix A can be written uniquely as A = S + N where S is diagonalizable, N is nilpotent (i.e., such that N^q = 0 for some q), and S commutes with N (SN = NS). — Repliedthemockturtle 00:52, 7 October 2007 (UTC)
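A numerical sketch of the first two decompositions, using a simple non-diagonalizable matrix (chosen here only for illustration):

  import numpy as np
  from scipy.linalg import schur

  A = np.array([[0.0, 1.0],
                [0.0, 0.0]])        # non-diagonalizable: one eigenvalue, one eigenvector

  # Schur form: A = Z T Z* with T upper triangular and Z unitary
  T, Z = schur(A)
  print(np.allclose(Z @ T @ Z.conj().T, A))    # True

  # singular value decomposition: A = U diag(s) V* with nonnegative s
  U, s, Vh = np.linalg.svd(A)
  print(np.allclose(U @ np.diag(s) @ Vh, A))   # True
  print(s)                                     # singular values, here roughly [1., 0.]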

Some other properties of eigenvalues

The spectrum is invariant under similarity transformations: the matrices A and P^(-1)AP have the same eigenvalues for any matrix A and any invertible matrix P. The spectrum is also invariant under transposition: the matrices A and A^T have the same eigenvalues.

Since a linear transformation on a finite-dimensional space is bijective if and only if it is injective, a matrix is invertible if and only if zero is not one of its eigenvalues.
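Both facts are easy to check numerically; a small NumPy sketch with an arbitrary singular matrix:

  import numpy as np

  A = np.array([[1.0, 2.0],
                [2.0, 4.0]])                 # rank 1, hence singular

  print(np.sort(np.linalg.eigvals(A)))       # roughly [0., 5.]
  print(np.sort(np.linalg.eigvals(A.T)))     # same spectrum as A
  print(np.isclose(np.linalg.det(A), 0.0))   # True: zero is an eigenvalue, so A is not invertible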

Some more consequences of the Jordan decomposition are as follows:

  • a matrix is diagonalizable if and only if the algebraic and geometric multiplicities coincide for all its eigenvalues. In particular, an n×n matrix which has n different eigenvalues is always diagonalizable. By the same reasoning, if the n linearly independent eigenvectors of a diagonalizable matrix A are stored as the columns of a matrix P, then P^(-1)AP = Σ, where Σ is a diagonal matrix with the eigenvalues of A along the diagonal;
  • the vector space on which the matrix acts can be viewed as a direct sum of invariant subspaces spanned by its generalized eigenvectors. Each block on the diagonal corresponds to a subspace in the direct sum. When a block is diagonal, its invariant subspace is an eigenspace. Otherwise it is a generalized eigenspace, as defined above;
  • since the trace, i.e. the sum of the elements on the main diagonal of a matrix, is preserved by similarity transformations, the Jordan normal form shows that it is equal to the sum of the eigenvalues (counted according to algebraic multiplicity);
  • similarly, because the eigenvalues of a triangular matrix are the entries on the main diagonal, the determinant equals the product of the eigenvalues (counted according to algebraic multiplicity). A numerical check of these facts is sketched just after this list.
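A small NumPy check of the diagonalization, trace and determinant statements above (the matrix is made up and has three distinct eigenvalues):

  import numpy as np

  A = np.array([[4.0, 1.0, 0.0],
                [2.0, 3.0, 1.0],
                [0.0, 1.0, 2.0]])
  lam, P = np.linalg.eig(A)                  # eigenvalues and eigenvectors (columns of P)

  Sigma = np.linalg.inv(P) @ A @ P           # diagonal, since A has n independent eigenvectors
  print(np.allclose(Sigma, np.diag(lam)))    # True
  print(np.isclose(lam.sum(),  np.trace(A)))         # trace = sum of eigenvalues
  print(np.isclose(lam.prod(), np.linalg.det(A)))    # determinant = product of eigenvalues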

The location of the spectrum is known for a few subclasses of normal matrices:

  • all eigenvalues of a Hermitian matrix (A = A*) are real;
  • all eigenvalues of a skew-Hermitian matrix (A = −A*) are purely imaginary;
  • all eigenvalues of a unitary matrix (A^(-1) = A*) have absolute value 1, i.e. lie on the unit circle.

Suppose that A is an m×n matrix, with m ≤ n, and that B is an n×m matrix. Then BA has the same eigenvalues as AB, plus n − m eigenvalues equal to zero.
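A concrete check with m = 2 and n = 3 (the matrices are made up):

  import numpy as np

  A = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])      # 2 x 3
  B = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])           # 3 x 2

  print(np.sort(np.linalg.eigvals(A @ B)))   # roughly [1., 3.]
  print(np.sort(np.linalg.eigvals(B @ A)))   # roughly [0., 1., 3.], i.e. one extra zero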

Each matrix can be assigned an operator norm, which depends on the norm of its domain. The operator norm of a square matrix is an upper bound for the moduli of its eigenvalues, and thus also for its spectral radius. This norm is directly related to the power method for calculating the eigenvalue of largest modulus mentioned above. For normal matrices, the operator norm induced by the Euclidean norm is equal to the largest of the moduli of the eigenvalues, i.e. to the spectral radius.
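A minimal power-method sketch (assumptions: the dominant eigenvalue is unique in modulus and the starting vector has a component along its eigenvector; the example matrix is made up):

  import numpy as np

  def power_method(A, iterations=200):
      x = np.ones(A.shape[0])
      for _ in range(iterations):
          x = A @ x
          x /= np.linalg.norm(x)        # keep the iterate normalized
      return x @ A @ x                  # Rayleigh quotient, approximates the dominant eigenvalue

  A = np.array([[2.0, 1.0],
                [1.0, 3.0]])
  print(power_method(A))                        # close to (5 + sqrt(5))/2, about 3.618
  print(np.max(np.abs(np.linalg.eigvals(A))))   # spectral radius, same value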

Algebraic multiplicity

The algebraic multiplicity of an eigenvalue λ of A is the order of λ as a zero of the characteristic polynomial of A; in other words, if λ is a root of the polynomial, it is the number of factors (t − λ) in the characteristic polynomial after factorization. An n×n matrix has n eigenvalues, counted according to their algebraic multiplicity, because its characteristic polynomial has degree n.

An eigenvalue of algebraic multiplicity 1 is called a "simple eigenvalue".

In an article on matrix theory, a statement like the one below might be encountered:

"the eigenvalues of a matrix A are 4,4,3,3,3,2,2,1,"

meaning that the algebraic multiplicity of 4 is two, that of 3 is three, that of 2 is two, and that of 1 is one. This style is used because algebraic multiplicity is the key to many mathematical proofs in matrix theory.

Recall that above we defined the geometric multiplicity of an eigenvalue to be the dimension of the associated eigenspace, the nullspace of λI − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix (λI − A)^k for any sufficiently large k. This is the space of generalized eigenvectors (1st sense), where a generalized eigenvector is any vector which eventually becomes 0 if λI − A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity. The first sense should not be confused with the generalized eigenvalue problem stated below.

For example, consider the matrix

  A = [ 1  1
        0  1 ]

It has only one eigenvalue, namely λ = 1. The characteristic polynomial is (λ − 1)^2, so this eigenvalue has algebraic multiplicity 2. However, the associated eigenspace is the axis usually called the x axis, spanned by the unit vector (1, 0)^T, so the geometric multiplicity is only 1.
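The same computation in NumPy (using the matrix from the example above; the geometric multiplicity is read off as the nullity of A − I):

  import numpy as np

  A = np.array([[1.0, 1.0],
                [0.0, 1.0]])

  print(np.linalg.eigvals(A))        # [1., 1.]: the eigenvalue 1 has algebraic multiplicity 2
  geometric = A.shape[0] - np.linalg.matrix_rank(A - np.eye(2))
  print(geometric)                   # 1: the eigenspace is one-dimensional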

Generalized eigenvectors can be used to calculate the Jordan normal form of a matrix (see discussion below). The fact that Jordan blocks are in general not diagonal, but upper triangular with a nilpotent part above the diagonal, is directly related to the distinction between eigenvectors and generalized eigenvectors. — Preceding unsigned comment added by Repliedthemockturtle (talkcontribs) 00:54, 7 October 2007 (UTC)

Items needing attention

  • The section on decomposition of special matrices needs expansion.
  • The section on useful facts needs expansion.
  • The section on advanced topics is haphazard and needs cleanup.

Repliedthemockturtle 02:27, 7 October 2007 (UTC)

  • The section on "Fundamental theory of matrix eigenvectors and eigenvalues" under "summary" doesn't treat eigenvectors as solutions of the eigenproblem properly: the solution set is generally infinite, not finite; the dimensions of the eigenspaces are finite. Language on "eigenspaces" is needed to clean this up. 63.228.55.134 17:19, 10 October 2007 (UTC)

Recent edits

I've made a number of recent edits to try to change the tone of the article. It reads like a textbook, which wikipedia is not. I've removed a lot of material in the process, usually stuff that's spelled out in greater detail than is suitable, summaries, or stuff that's just not relevant. James pic 16:58, 12 October 2007 (UTC)

good show!!! 86.164.224.41 (talk) 19:29, 6 August 2008 (UTC)

Largest: algebraic vs. magnitude?

I'm reading documentation for ARPACK and I come across this:

c  WHICH   Character*2.  (INPUT)
c          Specify which of the Ritz values of OP to compute.
c
c          'LA' - compute the NEV largest (algebraic) eigenvalues.
c          'SA' - compute the NEV smallest (algebraic) eigenvalues.
c          'LM' - compute the NEV largest (in magnitude) eigenvalues.
c          'SM' - compute the NEV smallest (in magnitude) eigenvalues. 
c          'BE' - compute NEV eigenvalues, half from each end of the
c                 spectrum.  When NEV is odd, compute one more from the
c                 high end than from the low end.
c           (see remark 1 below)

It's clear what "largest (in magnitude)" and "smallest (in magnitude)" mean, but what are the "algebraic" ones? Is this algebraic multiplicity or something else? —Ben FrantzDale (talk) 03:51, 13 February 2009 (UTC)
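For illustration, SciPy's eigsh wrapper around ARPACK's symmetric driver exposes the same WHICH options: 'LA' refers to the signed (algebraic) ordering of the real Ritz values, 'LM' to the ordering by absolute value, and neither has anything to do with algebraic multiplicity. A minimal sketch (the matrix below is made up):

  import numpy as np
  from scipy.sparse.linalg import eigsh

  A = np.diag([-5.0, -1.0, 0.0, 1.0, 2.0, 3.0])   # symmetric, eigenvalues on the diagonal

  print(eigsh(A, k=1, which='LA', return_eigenvectors=False))  # roughly [ 3.]: largest algebraic
  print(eigsh(A, k=1, which='LM', return_eigenvectors=False))  # roughly [-5.]: largest in magnitude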

spectral

Why is this called "spectral" decomp./theorem? 150.203.48.23 (talk) 06:37, 18 March 2010 (UTC)

Because the set of eigenvalues is sometimes call'd the spectrum. Sometimes they correspond to energy levels. The absorption/emission spectrum of an atom is based on the differences between its energy levels, and a particular "series" is based on the energy differences between various states and a certain given state. That's the connexion between energy levels and a spectrum. Eric Kvaalen (talk) 06:38, 2 November 2010 (UTC)

Merge proposal

I think this article and Diagonalizable matrix should be merged. Eric Kvaalen (talk) 06:38, 2 November 2010 (UTC)

Symmetric Matrices

If you relax the constraint that the matrix be real, symmetric matrices can produce mutually orthogonal eigenvectors in the complex space, which is very nice in quantum mechanics since eigenvector1* times eigenvector2 should equal zero. --24.80.113.218 (talk) 05:48, 10 July 2011 (UTC)
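Assuming the intended complex case is a Hermitian matrix (the usual quantum-mechanical one), here is a quick NumPy check that eigenvectors belonging to different eigenvalues are orthogonal under the conjugate inner product; the matrix is made up:

  import numpy as np

  H = np.array([[2.0, 1.0 - 1.0j],
                [1.0 + 1.0j, 3.0]])       # Hermitian: equal to its conjugate transpose

  w, V = np.linalg.eigh(H)                # columns of V are orthonormal eigenvectors
  print(np.vdot(V[:, 0], V[:, 1]))        # roughly 0: eigenvector1* times eigenvector2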

Dimension (Generalized eigenvalue problem section)

Hello,

in the Generalized eigenvalue problem section, do the v_i vectors have to be of dimension n? Shouldn't we write

  

I see no reason why the number of generalised eigenvectors should be equal to their dimension, but as this is a bit beyond my math skills, I prefer to ask before modifying.

cdang|write me 12:49, 2 January 2013 (UTC)

Mmmm, I found a source telling me that A and B are n×n matrices. This is consistent with the beginning of the article, but as this section is about a "generalised" case, it is not obvious that this limitation also applies here. So, I reformulate: would it be a good idea to recall that we are still in the case of square matrices?
cdang|write me 13:15, 2 January 2013 (UTC)
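For what it is worth, the standard numerical solvers also take A and B to be square matrices of the same size n and return n generalized eigenvalues; a minimal SciPy sketch with made-up matrices:

  import numpy as np
  from scipy.linalg import eig

  A = np.array([[1.0, 2.0],
                [3.0, 4.0]])
  B = np.array([[2.0, 0.0],
                [0.0, 1.0]])        # both n x n, here n = 2

  w, vr = eig(A, B)                 # solves A v = lambda B v
  print(w.shape, vr.shape)          # (2,) (2, 2): n eigenvalues, eigenvectors of dimension n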