Talk:Symmetric matrix


Wouldn't it be better to create a distinct entry for 'skew-symmetric matrix'?

Inverse Matrix

Does the inverse of a square symmetric matrix have any special properties? Does being symmetric provide any shortcut to finding an inverse? 58.107.136.85 (talk) 03:56, 11 April 2008 (UTC)


If the inverse of a symmetric matrix is also a symmetric matrix, it should be stated under properties. —Preceding unsigned comment added by 77.13.24.86 (talk) 18:25, 25 January 2011 (UTC)

Yes! Of course the inverse of a symmetric matrix is symmetric; it's very easy to show, too.

Proof:

Suppose A = A^t and A is non-singular; then there exists A^-1 such that A·A^-1 = I. Applying the transpose to each side of the equation, we get

(A·A^-1)^t = I^t, i.e. (A^-1)^t·A^t = I. However, we have that A = A^t, so it follows that (A^-1)^t·A = I. But the inverse is unique, therefore (A^-1)^t = A^-1. This proves that the inverse is symmetric. QED — Preceding unsigned comment added by Brydustin (talk · contribs) 00:36, 1 January 2012 (UTC)
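A quick numerical sanity check of this proof (a NumPy sketch; the random matrix and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.standard_normal((4, 4))
A = M + M.T                          # symmetric; almost surely non-singular

A_inv = np.linalg.inv(A)
assert np.allclose(A_inv, A_inv.T)   # the inverse is symmetric too
```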

Basis, Eigenvectors

It's easy to identify a symmetric matrix when it's written in terms of an orthogonal basis, but what about when it's not? Is a real-valued matrix symmetric iff its eigenvectors are orthogonal? —Ben FrantzDale 00:31, 11 September 2006 (UTC)

Reading more carefully answers my question: "Every symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix." So apparently the answer is yes. —Ben FrantzDale 15:27, 11 September 2006 (UTC)

I believe you're confusing a couple of concepts here. A matrix is a rectangular array of numbers, and it's symmetric if it's, well, symmetric. Of course, a linear map can be represented as a matrix when a choice of basis has been fixed. On the other hand, the concept of symmetry for a linear operator is basis independent. Greg Woodhouse 01:34, 30 November 2006 (UTC)

Being symmetric with real entries implies unitarily diagonalizable; the converse need not be true. Anti-symmetric matrices with real entries are normal, therefore unitarily diagonalizable, but the eigenvalues are no longer real, so one must speak of unitary matrices rather than orthogonal ones. Mct mht 04:07, 12 September 2006 (UTC)

It's been a while since I followed up on this. I still feel like there is something missing in this article. Back in 2006, I was confused about the importance of symmetry of a matrix because matrices are "just" rectangular arrays of numbers. As such, symmetry seems like a superficial property that can be undone by simple things like swapping rows. Furthermore, we could have a matrix that is symmetric but meaninglessly so. For example, consider a data matrix of participants with age and weight as columns. If Alice is 80 and weighs 90 pounds and Bob is 90 and weighs 80 pounds, then you get a symmetric table, but that symmetry doesn't mean anything (for starters, the units don't match, but we could construct something for which they did). That left me wondering "when does symmetry mean something?" I now think I understand. Consider the moment matrix of a bunch of points in R^3. That is a symmetric 3×3 matrix. As I've come to understand things, that matrix is contravariant (in the tensor sense) in its rows and columns.

I think matched variance of rows and columns is a necessary (but not sufficient) condition for a matrix to be symmetric in any meaningful sense. That implies that a meaningfully symmetric matrix is, strictly speaking, the matrix representation of a tensor. Does that sound right? (I don't mean to say that [80 90; 90 80] isn't symmetric; I am just saying that for that symmetry to be anything other than coincidence, the matrix has to have matched variance in rows and columns.) —Ben FrantzDale (talk) 13:46, 14 December 2010 (UTC)

"More precisely, a matrix is symmetric if and only if it has an orthonormal basis of eigenvectors" This statement is just wrong. See 'Normal Matrix'. Normal matrices need not be symmetric (in fact they can be anti-symmetric), but does have an orthonormal basis of eigenvectors. However, it IS true that if a matrix is symmetric, then it has an orthonormal basis (in fact this is trivially true, since all 'symmetric matrices' are 'normal matrices', and normal matrices have an orthonormal basis of eigenvectors) Please correct. —Preceding unsigned comment added by 128.122.20.210 (talk) 03:17, 30 December 2010 (UTC)Reply

It is correct if we assume that the eigenvectors are real. Then A = O^T D O, and A^T = O^T D^T O = A. This is a bad username (talk) 22:16, 8 February 2016 (UTC)
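Both directions are easy to check numerically (a sketch; the random orthogonal O, diagonal D, and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# A = O^T D O with O orthogonal and D real diagonal is automatically symmetric:
O, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal O
D = np.diag(rng.standard_normal(4))
A = O.T @ D @ O
assert np.allclose(A, A.T)

# Conversely, eigh orthogonally diagonalizes a real symmetric matrix:
w, V = np.linalg.eigh(A)
assert np.allclose(V @ np.diag(w) @ V.T, A)
assert np.allclose(V.T @ V, np.eye(4))
```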

Symmetric matrices are usually considered to be real-valued

I've made several changes to indicate that symmetric matrices are generally assumed to be real-valued. With this, the real spectral theorem can be stated properly. VectorPosse 05:03, 12 September 2006 (UTC)

Would it be better to have a little more detailed discussion of Hermitian matrices? --TedPavlic 16:21, 19 February 2007 (UTC)

It may be worthwhile to add a section on complex symmetric matrices, or matrices that are (complex) symmetric with respect to an orthonormal basis. They are not as useful as self-adjoint operators, but the category includes Toeplitz matrices, Hankel matrices and any normal matrix. 140.247.23.104 04:43, 12 January 2007 (UTC)

I agree. We just need to make sure it's in a different section so that it doesn't get mixed up with the stuff about the spectral theorem. VectorPosse 19:28, 19 February 2007 (UTC)

Products of Symmetric Matrices: Eigenspaces Closed Under Transformation

As the article states, products of symmetric matrices are symmetric if and only if the matrices commute. However, it also says, "Two real symmetric matrices commute if and only if they have the same eigenspaces." This makes no sense. Consider an arbitrary symmetric matrix A and the identity matrix I. Certainly, AI = IA, so these matrices commute. However, in general A and I will not have the same eigenspaces! I think this statement was supposed to be, "Two real symmetric matrices commute if and only if they are simultaneously diagonalizable," or, "Two real symmetric matrices commute if and only if the eigenspace for one matrix is closed under the other matrix." Both of these statements sound complicated compared to the original statement. I'm not sure if it's worthwhile to even mention it. However, I'm going to make a change. I'm okay with someone removing the statement entirely. --TedPavlic 17:34, 19 February 2007 (UTC)

The previous version was correct: two real symmetric matrices commute iff they can be simultaneously diagonalized iff they have the same eigenspaces. Please undo your change. Mct mht 10:24, 21 February 2007 (UTC)
As far as I can see, Ted's counterexample (identity matrix and arbitrary symmetric matrix) shows that two symmetric matrices can commute without having the same eigenspaces. Please tell me where we go wrong. -- Jitse Niesen (talk) 11:25, 21 February 2007 (UTC)
Hm, that depends on what's meant by "having the same eigenspaces", no? If that means "the collections of eigenspaces coincide", then you would be right. (However, it seems to me the wording of the comment I removed, about the "closure" of eigenspaces, can be improved.) Perhaps it's more precise to say that two real symmetric matrices commute iff there exists a basis consisting of common eigenvectors. Mct mht 12:17, 21 February 2007 (UTC)
Also, the identity matrix is really a degenerate case, since it and its multiples are the only matrices that are diagonal irrespective of the basis chosen. Excluding such cases (if A restricted to a subspace V is a·I, remove V), it seems to me that the general claim is true: real symmetric matrices {A_i} commute pairwise iff the family of eigenspaces of A_i and the family of eigenspaces of A_j are the same for all i and j. Mct mht 15:42, 21 February 2007 (UTC)
I agree with "two real symmetric matrices commute iff there exists a basis consisting of common eigenvectors". I think the more common formulation is "two real symmetric matrices commute iff they are simultaneously diagonalizable", so I'd prefer that. I agree that the formulation "the eigenspace for one matrix is closed under the other matrix" is rather unfortunate, as I had to read that sentence a couple of times before I understood what was meant.
I don't understand what you mean with "if A restricted to a subspace V is a·I, remove V". Every matrix is a multiple of the identity when restricted to an eigenspace, and after removing the eigenspaces of a symmetric matrix there's nothing left. -- Jitse Niesen (talk) 04:04, 22 February 2007 (UTC)
Shoot, you're right. Well, remove V if the dimension of V is > 1. Is that better? Mct mht 04:10, 22 February 2007 (UTC)
Hm, forget it, Jitse, that did not make it better. You're right there. Mct mht 12:20, 22 February 2007 (UTC)
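For what it's worth, both sides of this thread are easy to illustrate numerically (a sketch; the matrices are arbitrary choices of ours): the identity commutes with any symmetric A without sharing its eigenspace structure, while a matrix that genuinely commutes with A, such as a polynomial in A, is diagonalized by the same orthonormal basis.

```python
import numpy as np

# The identity commutes with any symmetric matrix A, but they do not
# share eigenspace structure: every vector is an eigenvector of I.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
I = np.eye(2)
assert np.allclose(A @ I, I @ A)

# A pair that genuinely commutes: B = A^2 + A commutes with A, and the
# orthonormal eigenbasis of A simultaneously diagonalizes B.
B = A @ A + A
assert np.allclose(A @ B, B @ A)
_, V = np.linalg.eigh(A)
D = V.T @ B @ V
assert np.allclose(D, np.diag(np.diag(D)))   # diagonal in A's eigenbasis
```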

Hey, the definition of symmetrizable matrices is not complete. A symmetrizable matrix is a product of a symmetric matrix and a positive definite matrix. The positive definite matrix need not be an invertible diagonal matrix, as in the section. Please check. Naik.a.s —Preceding unsigned comment added by Naik.a.s (talk · contribs) 10:08, 27 July 2009 (UTC)

Eigenvalues

Are the eigenvalues of an n×n matrix A with A = A^T always {0, ..., 0, tr(A)}?
This holds for the matrix B^T B with B = [1, 2, 3, 4].
--Saippuakauppias 10:48, 31 December 2007 (UTC)

No. For instance, the identity matrix is symmetric, but has eigenvalues {1, 1, …, 1}. However, every matrix of the form A = B^T B, with B a row vector, does have {0, …, 0, tr(A)} as its eigenvalues. Such matrices are called rank-one matrices, because their rank is one. -- Jitse Niesen (talk) 15:24, 31 December 2007 (UTC)


In the article the statement "Two real symmetric matrices commute if and only if they have the same eigenspaces." is wrong. For a counterexample, consider the identity matrix and any diagonal matrix with more than one eigenvalue. The statement should read: "If two real symmetric matrices of dimension n commute, then a basis for R^n can be chosen so that every element of the basis is an eigenvector for both matrices."

Incidentally, the answer above assumes that B is itself a rank-one matrix (as in the example given, with B = [1,2,3,4]). It is not true for an arbitrary matrix B.

137.222.137.107 (talk) 15:16, 15 June 2012 (UTC) Nick Gill
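A quick check of both answers on the example from the question (a NumPy sketch):

```python
import numpy as np

B = np.array([[1.0, 2.0, 3.0, 4.0]])   # 1x4 row vector
A = B.T @ B                            # 4x4 symmetric, rank one

# Eigenvalues are {0, 0, 0, tr(A)} with tr(A) = 1 + 4 + 9 + 16 = 30.
w = np.linalg.eigvalsh(A)              # ascending order
assert np.allclose(w, [0.0, 0.0, 0.0, 30.0])

# The identity matrix shows the pattern is special to rank one:
assert np.allclose(np.linalg.eigvalsh(np.eye(4)), np.ones(4))
```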

The spectral theorem...

...is conspicuous by the absence of any mention of it in this article!

Maybe I'll be back. Michael Hardy (talk) 02:18, 10 August 2008 (UTC)

It's at the start of the "Properties" section. -- Jitse Niesen (talk) 10:57, 10 August 2008 (UTC)

Trace of the product of three matrices

Hi,

there's a mistake in the article. It's claimed that the trace of the product of three symmetric (or Hermitian) matrices is invariant under arbitrary permutations. To prove this, the article uses (CBA)^t = CBA, which is simply not true, because the product of symmetric (Hermitian) matrices is symmetric (Hermitian) if and only if they commute. —Preceding unsigned comment added by 192.33.103.47 (talk) 09:27, 28 June 2010 (UTC)
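For what it's worth, the invariance claim itself does hold even though that proof step is wrong: tr(M) = tr(M^T) gives tr(ABC) = tr((ABC)^T) = tr(C^T B^T A^T) = tr(CBA), and together with cyclicity this covers all six orderings. A quick numerical check (a sketch; the matrices and seed are arbitrary choices of ours):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

def random_symmetric(n=3):
    M = rng.standard_normal((n, n))
    return M + M.T

A, B, C = (random_symmetric() for _ in range(3))

# All six orderings give the same trace...
traces = [np.trace(X @ Y @ Z) for X, Y, Z in permutations((A, B, C))]
assert np.allclose(traces, traces[0])

# ...even though the product itself is not symmetric, i.e. (CBA)^T != CBA.
assert not np.allclose(C @ B @ A, (C @ B @ A).T)
```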

Complex symmetric matrix eigenvalues

Hello, the page currently states that for each complex symmetric matrix, there exists a unitary transformation such that the resulting diagonal matrix has real entries. The eigenvalues of complex symmetric matrices are generally themselves complex, not all real. Somebody tell me if I'm reading this wrong; otherwise I'm going to change the wording "is a real diagonal matrix" to "is a complex diagonal matrix". — Preceding unsigned comment added by Jeparie (talk · contribs) 18:36, 25 September 2015 (UTC)

The entries of the diagonal matrix are the singular values, not the eigenvalues, of the original matrix. Best wishes, --Quartl (talk) 19:47, 25 September 2015 (UTC)

The article is still (or again) wrong. A complex symmetric matrix doesn't necessarily have real eigenvalues, as the article currently states in the Decomposition section. Either we need to change "complex symmetric matrix" to "complex Hermitian matrix", or elaborate that the diagonal matrix doesn't contain eigenvalues. — Preceding unsigned comment added by 213.52.196.70 (talk) 18:07, 6 November 2017 (UTC)
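A quick numerical illustration of Quartl's point (a sketch; the matrix and seed are arbitrary choices of ours): the diagonal factor carries the singular values, while the eigenvalues of a complex symmetric matrix are generally complex.

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
S = M + M.T               # complex symmetric: S == S.T, but S != S.conj().T

eigvals = np.linalg.eigvals(S)                    # generally complex
singvals = np.linalg.svd(S, compute_uv=False)     # real and nonnegative

print(eigvals)    # complex numbers in general
print(singvals)   # these are the diagonal entries in the Takagi form
assert np.abs(eigvals.imag).max() > 1e-8          # eigenvalues genuinely complex
```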

Math notation

I changed all the math from inline math to TeX. This was reverted by 87.254.93.231 several times. The article as it is now contains an ugly mixture of TeX and inline math. I propose changing all of it again to TeX. Reasons for TeX:

  • You can display almost everything with TeX
  • TeX is easy to distinguish from surrounding non-math text, e.g. "for a matrix <math>A</math>" vs. "for a matrix A".

Reasons for inline math

  • Less work to write the article

What do you think? 11:36, 15 January 2019 (UTC)

Proposing a new proof of Takagi

Note: This proof yields a short algorithm for computing the Takagi decomposition using software packages like SciPy and MATLAB. The Schur decomposition routine can be used for this, since for a real symmetric matrix the Schur form is just the eigendecomposition.

Let $f$ denote the injective ring homomorphism $f : M_n(\mathbb{C}) \to M_{2n}(\mathbb{R})$, $f(A + Bi) = \begin{pmatrix} A & -B \\ B & A \end{pmatrix}$, where we use block matrix notation. Let $S$ be an arbitrary non-singular complex-symmetric matrix. (The non-singularity restriction will later be lifted.) Observe that while the matrix $f(S)$ is not $\mathbb{R}$-symmetric, we may define $T = E\,f(S)$, where $E = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}$ represents complex conjugation (so $f(\overline{M}) = E\,f(M)\,E$), and get that $T$ is $\mathbb{R}$-symmetric. By the spectral theorem it follows that $T$ has an orthonormal eigenbasis $v_1, \dots, v_{2n}$ with real eigenvalues $\lambda_1, \dots, \lambda_{2n}$. Let $J = f(iI) = \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}$ and observe that for every eigenvector $v$ of eigenvalue $\lambda$ in our eigenbasis, the vector $Jv$ is an eigenvector of eigenvalue $-\lambda$ (because $EJ = -JE$ implies $TJ = -JT$). We therefore improve our orthonormal eigenbasis of $T$ to the new orthonormal eigenbasis $(v_1, \dots, v_n, Jv_1, \dots, Jv_n)$ with $\lambda_1, \dots, \lambda_n > 0$, which we interpret as a block matrix $P = (V \mid JV)$. Verify that each $v_i$ is indeed orthogonal to each $Jv_j$ because they have distinct eigenvalues (which might fail if $S$ were singular). Given our definition of $T$, we have that $TP = P\,E f(\Lambda)$ (where $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, so $E f(\Lambda) = \operatorname{diag}(\Lambda, -\Lambda)$). Observe that $P$ is equal to $f(U)$ for some matrix $U$; this follows because if we write each $v_k$ as the block matrix $\begin{pmatrix} x_k \\ y_k \end{pmatrix}$, we have that $Jv_k = \begin{pmatrix} -y_k \\ x_k \end{pmatrix}$, so $P = \begin{pmatrix} X & -Y \\ Y & X \end{pmatrix} = f(U)$ with $U = X + Yi$. Observe also that $U$ is unitary; this follows because $f(U^* U) = f(U)^{\mathsf{T}} f(U) = P^{\mathsf{T}} P = I = f(I)$, and by cancelling $f$ due to injectivity, we get $U^* U = I$. We therefore have that

$$E\,f(S)\,f(U) \;=\; f(U)\,E\,f(\Lambda) \;=\; E\,f(\overline{U})\,f(\Lambda), \qquad\text{i.e.}\qquad E\,f(SU) \;=\; E\,f(\overline{U}\Lambda).$$

Cancelling $E$ (as it's invertible) and then cancelling $f$ (because it's injective) yields $SU = \overline{U}\Lambda$, i.e. $S = \overline{U}\,\Lambda\,\overline{U}^{\mathsf{T}}$: a Takagi decomposition of $S$.

The result can be extended to any singular matrix $S$ by approximating $S$ as a sequence of invertible matrices $S_k \to S$, and then forming the pair of sequences $(U_k, \Lambda_k)$ such that $S_k = \overline{U_k}\,\Lambda_k\,\overline{U_k}^{\mathsf{T}}$. Since the components of $U_k$ and $\Lambda_k$ (for each $k$) are bounded, we may appeal to the Bolzano–Weierstrass theorem to get a pair of subsequences $(U_{k_j})$ and $(\Lambda_{k_j})$ that both converge, say to $U$ and $\Lambda$ (with $U$ still unitary, since the unitary group is closed). We therefore get $S = \overline{U}\,\Lambda\,\overline{U}^{\mathsf{T}}$. We are done. --Svennik (talk) 14:54, 19 May 2022 (UTC)
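Following up on the note above: a minimal NumPy sketch of the algorithm this proof suggests. The helper name takagi is our own; it uses numpy.linalg.eigh on the real symmetric $T = E f(S)$ (which, for a symmetric matrix, coincides with the Schur decomposition the note mentions) and, like the proof, assumes $S$ is non-singular.

```python
import numpy as np

def takagi(S):
    """Takagi decomposition S = W @ diag(lam) @ W.T of a non-singular
    complex symmetric matrix S, following the proof above.  A sketch:
    the singular case would need the limiting argument (or Proof 3)."""
    S = np.asarray(S, dtype=complex)
    n = S.shape[0]
    A, B = S.real, S.imag
    # T = E f(S) with f(A+Bi) = [[A,-B],[B,A]] and E = diag(I,-I).
    T = np.block([[A, -B], [-B, -A]])          # real symmetric
    lam, V = np.linalg.eigh(T)                 # orthonormal eigenbasis
    idx = np.argsort(lam)[::-1][:n]            # the n positive eigenvalues
    lam, V = lam[idx], V[:, idx]
    X, Y = V[:n, :], V[n:, :]                  # each v_k = (x_k; y_k)
    U = X + 1j * Y                             # P = (V | JV) = f(U), U unitary
    return U.conj(), lam                       # SU = conj(U) Lam, so W = conj(U)

# check on a random non-singular complex symmetric matrix
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = M + M.T
W, lam = takagi(S)
assert np.allclose(W @ np.diag(lam) @ W.T, S)    # Takagi factorization
assert np.allclose(W.conj().T @ W, np.eye(4))    # W is unitary
```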

Proof 3 of Takagi

Note: This proof does not treat singular matrices as a special case.

Let $f$ denote the injective ring homomorphism $f : M_n(\mathbb{C}) \to M_{2n}(\mathbb{R})$, $f(A + Bi) = \begin{pmatrix} A & -B \\ B & A \end{pmatrix}$, where we use block matrix notation. Let $S$ be an arbitrary complex-symmetric matrix. Observe that while the matrix $f(S)$ is not $\mathbb{R}$-symmetric, we may define $T = E\,f(S)$, where $E = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}$, and get that $T$ is indeed $\mathbb{R}$-symmetric. Let $J = f(iI)$ and observe that for every eigenvector $v$ of eigenvalue $\lambda$ of $T$, the vector $Jv$ is an eigenvector of eigenvalue $-\lambda$. We therefore build an orthonormal eigenbasis of $T$ in the following way:

We start with the empty basis $B = ()$ and the linear map $T$.
While $\operatorname{span}(B)$ is not $2n$-dimensional, we let $v$ be a unit eigenvector of $T$ with eigenvalue $\lambda \geq 0$ (replacing $v$ by $Jv$ if necessary). Observe that as well as $v$ being orthogonal to $\operatorname{span}(B)$, so is $Jv$; moreover $v \perp Jv$ always, since $J$ is skew-symmetric, so no special case for $\lambda = 0$ is needed. We replace $B$ with $B \cup (v, Jv)$, and replace $T$ with its restriction to $\operatorname{span}(B)^{\perp}$.

We arrange our resulting eigenbasis into the block matrix $P = (v_1, \dots, v_n \mid Jv_1, \dots, Jv_n)$. Given our definition of $T$, we have that $TP = P\,E f(\Lambda)$ (where $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ consists of the eigenvalues corresponding to the eigenvectors $v_1$ to $v_n$). Observe that $P$ is equal to $f(U)$ for some matrix $U$; this follows because if we write each $v_k$ as the block matrix $\begin{pmatrix} x_k \\ y_k \end{pmatrix}$, we have that $Jv_k = \begin{pmatrix} -y_k \\ x_k \end{pmatrix}$, so that $P = \begin{pmatrix} X & -Y \\ Y & X \end{pmatrix} = f(X + Yi)$. Observe also that $U$ is unitary; this follows because $f(U^* U) = f(U)^{\mathsf{T}} f(U) = I = f(I)$, and by cancelling $f$ (due to injectivity) we get $U^* U = I$. We therefore have that

$$E\,f(S)\,f(U) \;=\; f(U)\,E\,f(\Lambda) \;=\; E\,f(\overline{U})\,f(\Lambda), \qquad\text{i.e.}\qquad E\,f(SU) \;=\; E\,f(\overline{U}\Lambda).$$

Cancelling $E$ (as it's invertible) and then cancelling $f$ (because it's injective) yields $SU = \overline{U}\Lambda$, i.e. $S = \overline{U}\,\Lambda\,\overline{U}^{\mathsf{T}}$. --Svennik (talk) 19:46, 19 May 2022 (UTC)
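A hedged sketch of how this proof's pairing idea plays out in code: rather than the literal restriction loop, it eigendecomposes $T$ once and then pairs up only the zero eigenspace by hand, using the observation above that $v \perp Jv$ always holds because $J$ is skew-symmetric. The helper name and tolerance are our own choices.

```python
import numpy as np

def takagi_singular_ok(S, tol=1e-10):
    """Takagi decomposition S = W @ diag(lam) @ W.T that also accepts
    singular complex symmetric S.  A sketch of Proof 3's idea."""
    S = np.asarray(S, dtype=complex)
    n = S.shape[0]
    A, B = S.real, S.imag
    T = np.block([[A, -B], [-B, -A]])              # T = E f(S), real symmetric
    J = np.block([[np.zeros((n, n)), -np.eye(n)],
                  [np.eye(n), np.zeros((n, n))]])  # J = f(iI)
    lam, V = np.linalg.eigh(T)
    pos = lam > tol
    vs, lams = list(V[:, pos].T), list(lam[pos])   # eigenvectors with lambda > 0
    # The zero eigenspace is J-invariant; pick v, pair it with Jv
    # (orthogonal to v since J is skew-symmetric), and peel the pair off.
    Z = V[:, np.abs(lam) <= tol]
    while Z.shape[1] > 0:
        v = Z[:, 0]
        vs.append(v)
        lams.append(0.0)
        P = np.column_stack([v, J @ v])
        Z = Z - P @ (P.T @ Z)                      # project out span{v, Jv}
        u, s, _ = np.linalg.svd(Z, full_matrices=False)
        Z = u[:, s > tol]                          # re-orthonormalize the rest
    Vmat = np.column_stack(vs)
    U = Vmat[:n, :] + 1j * Vmat[n:, :]             # (V | JV) = f(U)
    return U.conj(), np.array(lams)

# check on a singular complex symmetric matrix (rank 2 out of 4)
rng = np.random.default_rng(5)
M = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
S = M @ M.T                                        # symmetric, rank 2
W, lam = takagi_singular_ok(S)
assert np.allclose(W @ np.diag(lam) @ W.T, S)
assert np.allclose(W.conj().T @ W, np.eye(4))
```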