Numerical method

But the numerical method is only really possible for real symmetric matrices, no? Charles Matthews 22:00, 18 October 2005 (UTC)

I think it is okay as long as the matrix is diagonalizable (though I seem to remember that it is unstable). I also removed your statement on 2×2 matrices, which I didn't quite understand, and replaced it with the statement that every nonsingular matrix has a matrix logarithm. Hopefully, I'll manage to write a bit more later. -- Jitse Niesen (talk) 23:35, 18 October 2005 (UTC)
PS: I'm glad to see you throw your hat in the ring again. -- Jitse Niesen (talk) 23:40, 18 October 2005 (UTC)
It occurred to me that Charles might have been thinking about matrix logarithms that are real, while for me the logarithm can well be complex. -- Jitse Niesen (talk) 10:41, 19 October 2005 (UTC)
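A sketch of the eigendecomposition approach being discussed, assuming NumPy/SciPy (the example matrix is arbitrary; it is real with a conjugate pair of complex eigenvalues, so the eigenvalue logarithms are complex):

import numpy as np
from scipy.linalg import expm, logm

M = np.array([[0.0, -2.0], [2.0, 1.0]])        # real, with complex conjugate eigenvalues
w, V = np.linalg.eig(M)                        # M = V diag(w) V^{-1}
L = V @ np.diag(np.log(w)) @ np.linalg.inv(V)  # take logs of the eigenvalues
print(np.allclose(expm(L), M))                 # True: L is a logarithm of M
# L is one logarithm among many; for this M it agrees (up to rounding) with the
# principal logarithm that scipy.linalg.logm computes.
print(logm(M))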

So, for real symmetric positive-definite matrices there is no issue with taking logs of the eigenvalues. Anything else: well, it might give log λ with λ < 0, which is 'interesting'. Otherwise you can of course have complex eigenvalues, or not be able to diagonalize. Probably with a real matrix and a pair of conjugate complex eigenvalues something good happens.

The 2×2 case [[a, -b], [b, a]] ought to be the same question as the complex logarithm. Charles Matthews 10:46, 19 October 2005 (UTC)
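A worked identity behind that remark, using the usual 2×2 matrix model of a complex number x + iy:

exp([[x, -y], [y, x]]) = e^x [[cos y, -sin y], [sin y, cos y]],

which mirrors e^(x + iy) = e^x (cos y + i sin y). So logarithms of matrices of this form are exactly the matrix forms of complex logarithms, and the off-diagonal parameter y is determined only up to adding multiples of 2π.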

More explanation: I believe exp is surjective from n×n complex matrices to invertible matrices; but it is not surjective from n×n real matrices to invertible real matrices, which is not even connected. So in discussing what kind of inverse function there is, you do really need the complex entries. Also, it will only be a local inverse function. Charles Matthews 10:55, 19 October 2005 (UTC)
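One way to see the non-surjectivity over the reals: det(e^A) = e^(tr A) > 0 for every real matrix A, so an invertible real matrix with negative determinant, such as [[1, 0], [0, -1]], is not the exponential of any real matrix; exp cannot reach the connected component of negative determinant.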

Yes, that's correct. I'll think a bit about the 2x2 case. I remember from when I studied this stuff that some things do not quite work out as I'd expected. For example, the matrix
[[-1, 0], [0, -1]]
does have a real logarithm, namely
[[0, -π], [π, 0]].
Geometrically, this corresponds with rotation through 180 degrees.
By the way, what do you think about the statement that the matrix logarithm "is in some sense an inverse function of the matrix exponential"? I'm not very happy with it because it is imprecise (in what sense?), but I think some sentiment like this needs to be expressed in the lead section.
Finally, on rereading I realized that my PS above could be rather mysterious. It refers to your standing for the ArbCom election. -- Jitse Niesen (talk) 12:32, 19 October 2005 (UTC)
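A quick numerical check of that example (a sketch assuming SciPy):

import numpy as np
from scipy.linalg import expm, logm

L = np.array([[0.0, -np.pi], [np.pi, 0.0]])   # the candidate real logarithm
print(np.round(expm(L), 12))                  # [[-1, 0], [0, -1]]: rotation by 180 degrees
# logm applied to -I need not return L: since -I has eigenvalues on the negative
# real axis, logm generally produces a complex result (and may warn), which again
# illustrates that matrix logarithms are not unique.
print(logm(np.array([[-1.0, 0.0], [0.0, -1.0]])))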

The diagonalization

As noticed before me, this calculation of the matrix logarithm works only for diagonalizable matrices. That has to be mentioned in the article, no? Conceptually though, any invertible matrix should have a logarithm (that follows from functional calculus), but I guess one can't find the log so easily for nondiagonalizable matrices. Oleg Alexandrov (talk) 13:28, 19 October 2005 (UTC)

Yes, we can actually boost the article and make it more interesting by getting those extra points of view in. On nilpotent matrices, exp is a polynomial mapping to unipotent matrices, with polynomial inverse; in some sense, then, the off-diagonal part of the Jordan form is simpler, even numerically. If you already have the Jordan form of M, then you have it as a product diagonal×unipotent, with commuting factors, so log takes multiplication to addition. Of course a numerical analyst doesn't want to do that first; but it is clarifying, I think. Charles Matthews 15:07, 19 October 2005 (UTC)
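To make the Jordan-form remark concrete in the smallest nontrivial case: for a Jordan block J = λI + N with λ ≠ 0 and N nilpotent, one can take log J = (log λ)I + log(I + N/λ), and the series for the second term terminates because N is nilpotent. For the 2×2 block,

log [[λ, 1], [0, λ]] = [[log λ, 1/λ], [0, log λ]],

since N² = 0 kills every term of N/λ - N²/(2λ²) + … after the first; exponentiating the right-hand side recovers [[λ, 1], [0, λ]].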

Yes, indeed the diagonalization is a "cheap" algorithm, and better (more widely applicable; more stable) algorithms are available thanks to the work of Nick Higham. He has a Wikipedia page: http://en.wikipedia.org/Nicholas_Higham ; see also http://www.maths.manchester.ac.uk/~higham/ . One should really consult "the" book

Functions of Matrices: Theory and Computation
by Nicholas J. Higham, SIAM, 2008. xx+425 pages, hardcover, ISBN 978-0-898716-46-7.

see http://www.maths.manchester.ac.uk/~higham/fm/

Maechler (talk) 15:10, 28 February 2009 (UTC) Martin Maechler, Seminar für Statistik, ETH Zurich, Switzerland

Connection to Lie groups

The article emphasizes the algorithmic point of view. It would be nice to strengthen the connection to matrix Lie groups and Lie algebras. I started a section on this. Geometric examples help to understand why the logarithm is usually multi-valued. I included one such example.

--Benjamin.friedrich (talk) 22:06, 2 February 2008 (UTC)

Real-complex question

Currently (2009-02-28) the page *seems* to have a contradiction: (2) "Properties" claims that the matrix logarithm exists whenever the matrix is invertible, whereas (10) "Constraints in the n x n case" says that *additionally* all Jordan blocks corresponding to negative eigenvalues must occur an even number of times. If you look closely, the latter is about *real* matrices, but nowhere is it mentioned that the former is about general complex matrices. Is it? Maechler (talk) 15:15, 28 February 2009 (UTC) Martin Maechler, ETH Zurich
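A pair of examples shows the difference between the two statements. diag(1, -1) is invertible, so it has complex logarithms, e.g. diag(0, iπ), but it has no real logarithm: its single Jordan block for the eigenvalue -1 occurs an odd number of times (equivalently, det e^A = e^(tr A) > 0 for every real A, while det diag(1, -1) = -1). By contrast, diag(-1, -1) has that block twice and does have the real logarithm [[0, -π], [π, 0]]. So the existence claim in "Properties" concerns complex logarithms, and the n x n constraint is the extra condition for a real logarithm of a real matrix.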

Domain restriction

To improve the article's accuracy on the existence of the matrix logarithm, the section on the 2 x 2 real case has been augmented with examples and counter-examples. A comment has been included to show that a logarithm exists for a conjugate when the matrix itself has none. Rgdboer (talk) 02:46, 3 March 2009 (UTC)

Today the references used by editors referring to the Jordan normal form were added. Further, the lede now has the facts concerning Lie theory. The incorrect assertion that the domain is the invertible matrices has been removed. This article straddles the era of Gantmacher and modern Lie theory; it can be improved with more detail and well-known examples, perhaps the unitary matrix and skew-Hermitian matrix.

Rgdboer (talk) 23:41, 14 March 2009 (UTC)
I don't think there is anything wrong with domain = invertible matrices, if you are allowing the logarithm to be complex, as the text that you removed specified. So what is the problem?
I have a hard time with the section on the 2-by-2 case. I think I understand the connection between 2-by-2 matrices and z = x + y ε (a reference would be useful here), but I don't see what it has to do with the matrix logarithm. -- Jitse Niesen (talk) 01:01, 15 March 2009 (UTC)
The phrase "allowing the logaritm to be complex" entails a ring extension to the ring of real 2 x 2 matrices. Such extension recalls the extension of the real line to include roots to the quadratic equation xx + 1 = 0, and so gaining the complex plane. So with extension more things are possible. The comments I made about logarithm refer to the function taking values in the same ring as used for selecting the argument. To clarify this idea of logarithm, with restricted domain, there is now Real matrices (2 x 2)#Functions of 2 × 2 real matrices.
Rgdboer (talk) 22:44, 8 May 2009 (UTC)
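A quick numerical check of the "same ring" idea, using the 2 x 2 matrix model [[x, y], [0, x]] of the dual number x + y ε with x > 0 (a sketch assuming SciPy; logm returns a principal logarithm):

import numpy as np
from scipy.linalg import logm

x, y = 2.0, 3.0                     # the dual number x + y*eps, with x > 0
D = np.array([[x, y], [0.0, x]])    # its 2 x 2 matrix representation
print(logm(D))                      # expect [[log x, y/x], [0, log x]]
print(np.log(x), y / x)             # scalar check of the entries

In other words, log(x + y ε) = log x + (y/x) ε stays inside the same ring, which is the restricted-domain point being made.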

Real matrices

The edits by R.e.b. have been undone because they detract from the article. Discussion has been going on about the domain of the logarithm when applied to real matrices. See 2 × 2 real matrices#Functions of 2 × 2 matrices for a larger context. It is important to be correct. Further, it is helpful to the reader to indicate that the logarithm of matrices opens the door to Lie theory. Why R.e.b. called this an error is a mystery. Rgdboer (talk) 03:23, 4 November 2009 (UTC)

Derivative?

What's the derivative (Jacobian) of the matrix logarithm? That is, what is

∂ log(M) / ∂M ?

Intuitively it should somehow generalize d log(x)/dx = 1/x. That identity must hold for the eigenvalues (i.e., the derivatives of the eigenvalues of log(M) must be the reciprocals of the eigenvalues of M, and so must be the eigenvalues of the inverse of M). I guess it's just a matter of plugging that into eigenvalue perturbation... Is there a pat answer for this, though? —Ben FrantzDale (talk) 19:39, 18 November 2009 (UTC)

The starting point of all answers to questions of this ilk is naturally Jacobi's formula. In your case, depending on your objectives, a partial pat answer follows by considering the variation of the identity

det M = exp(tr log M)

so, then,

tr(δ log M) = tr(M^-1 δM)

Cuzkatzimhut (talk) 00:05, 2 October 2014 (UTC)
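For the full (entry-wise, not just trace) answer, one standard closed form for the derivative of the matrix logarithm, obtainable by differentiating the integral representation log M = ∫_0^1 (M - I) [t(M - I) + I]^-1 dt, is

D log_M (δM) = ∫_0^1 [t(M - I) + I]^-1 δM [t(M - I) + I]^-1 dt,

which reduces to M^-1 δM when M and δM commute, recovering d log(x)/dx = 1/x; see, e.g., the Higham book cited earlier on this page.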

Problem with article

Almost every time this article mentions a logarithm of a matrix M, it refers erroneously to the logarithm of M. This is just incorrect, since every matrix that has a complex logarithm also has more than one complex logarithm. This is misleading to the reader, who is led to believe incorrectly that matrices in general have a particular matrix that is their logarithm. This is not true in any natural sense, except for special categories of matrices, such as positive definite symmetric matrices M (whose "principal" logarithm log(M) can be obtained by diagonalizing M and taking the real logarithms of its eigenvalues).

This ought to be fixed in the article. 2600:1700:E1C0:F340:9C6D:A392:5AC0:7D43 (talk) 04:08, 28 June 2018 (UTC)
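A minimal example of the non-uniqueness in question: the 2 x 2 identity matrix I has infinitely many real logarithms, namely the zero matrix and 2πk [[0, -1], [1, 0]] for every nonzero integer k, since each of these exponentiates to the rotation through 2πk, i.e. to I. For a symmetric positive-definite M = Q diag(λ_1, …, λ_n) Q^T, by contrast, the principal logarithm Q diag(log λ_1, …, log λ_n) Q^T singles out a canonical choice.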

The first half of the article (up to and including the fourth section on Existence) diligently points out, again and again, that logarithms are not unique. But after that the article becomes less consistent in its assumptions (real or complex), notation, etc. So I agree that the later sections could use some work. Mgnbar (talk) 19:05, 28 June 2018 (UTC)

2018 November 19

IP editor 77.102.101.85 has twice now edited the start of the "Power series expression" section, in my opinion making the text worse, both in its prose and in its correctness. But, before this turns into a revert war, maybe other editors would like to chime in. Mgnbar (talk) 22:28, 19 November 2018 (UTC)

Issue with section on power series expression: matrix norm

It is not clear to me which matrix norm this article refers to when it states that the power series expression converges if ||B-I|| < 1. The source Hall (2015) uses ||B|| to denote the Hilbert-Schmidt norm trace(B*B)^(1/2), so perhaps that is the right one. But could one use another matrix norm instead? In any case, I would find it helpful if the article would specify the permissible matrix norms for the statement on the convergence of the power series. Aliceschwarze (talk) 01:18, 6 March 2019 (UTC)

You are right that the article should clarify the norm. I don't have the required reliable sources in front of me. Meanwhile, I do suspect that it's the norm you mentioned, which in the real case is called the Frobenius norm (or the Euclidean norm). Mgnbar (talk) 13:01, 7 March 2019 (UTC)
Any sub-multiplicative matrix norm (such as any matrix norm induced from a vector norm) will do. The proof is simple, since then ||(B-I)^k|| ≤ ||B-I||^k for any k. Equivalently, it is sufficient that the spectral radius of B-I is less than 1. I'll add this soon. McKay (talk) 23:27, 7 March 2019 (UTC)
Oh nice. I think this should be mentioned explicitly (perhaps in parentheses), since I had the same concern as Aliceschwarze upon reading. Ei283 (talk) 01:30, 10 January 2024 (UTC)
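A quick numerical illustration of the convergence criterion (a sketch using NumPy/SciPy; the matrix B is just an arbitrary example with ||B-I|| < 1):

import numpy as np
from scipy.linalg import logm

B = np.array([[1.1, 0.2], [0.0, 0.9]])   # B - I has small norm and spectral radius 0.1
X = B - np.eye(2)
S = np.zeros_like(B)
term = np.eye(2)
for k in range(1, 60):
    term = term @ X                      # X^k = (B - I)^k
    S += (-1) ** (k + 1) * term / k      # (-1)^(k+1) (B - I)^k / k
print(S)
print(logm(B))                           # the partial sums agree with logm to high accuracy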

What norm?

Specifically, if ||B-I|| < 1, then the preceding series converges and e^(log(B)) = B.

In what norm is the condition ||B-I|| < 1 meant? 132.64.39.238 (talk) 11:23, 8 March 2020 (UTC)

For the answer, see the text immediately above your question. (Though these details should be added to the article itself by someone with reliable sources at hand.) Mgnbar (talk) 17:54, 8 March 2020 (UTC)

Multiples of 2π

"This corresponds to the fact that the rotation angle is only determined up to multiples of 2π."

But ∞ is a multiple of 2π, so, this says it's determined from -∞ to +∞, so it says nothing of meaningful value.

I think what it intends to say is "the rotation angle repeats every 2π." — Preceding unsigned comment added by 192.31.106.42 (talk) 13:37, 27 December 2020 (UTC)

Infinity is not a real number and not an integer multiple of 2 π.
The phrase "determined up to multiples of 2 π" means that there is a unique rotation angle A, if you ignore the fact that A + k 2 π is also a valid value for the rotation angle, for any integer k. For example, if 1.7 is the rotation angle, then
1.7 + 2 π, 1.7 - 2 π, 1.7 + 4 π, 1.7 - 4 π, ...,
are also valid values of the rotation angle, but 1.7 + 3 π and 1.4 are not.
This is what you meant, I think, about the angle repeating every 2 π. However, the angle does not repeat every 2 π. Instead, the rotation repeats every 2 π.
Thanks for your input. We can continue to talk about other ways to make the text clearer to you. :) Mgnbar (talk) 22:36, 27 December 2020 (UTC)
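In matrix terms, the identity under discussion is, for every integer k,

exp((θ + 2πk) [[0, -1], [1, 0]]) = [[cos θ, -sin θ], [sin θ, cos θ]],

so each matrix (θ + 2πk) [[0, -1], [1, 0]] is a logarithm of the same rotation matrix, and only the value of θ modulo 2π can be recovered from the rotation.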

Integral formula error in the Properties section

The integral doesn't make sense. Perhaps it's meant for each "1" to be "I". I don't know what's meant. Somebody who does should fix it. Getthebasin (talk) 19:53, 28 January 2021 (UTC)

I agree. It's not clear why there are scalars in the numerators and matrices in the denominators. Perhaps "1 / BLAH" simply means BLAH^-1. In any event, it should be clearer. Mgnbar (talk) 22:03, 28 January 2021 (UTC)
In addition, I would love to see a citation for this integral, so I can learn more about where it comes from. Mrfout (talk) 00:38, 21 October 2021 (UTC)
https://www.ias.edu/sites/default/files/sns/files/1-matrixlog_tex(1).pdf Cuzkatzimhut (talk) 11:39, 21 October 2021 (UTC)
So the equation in Properties seems to be equation (6) in that PDF. Thanks. Still, it would be nice to have a reliable source. (This source does not appear to be peer-reviewed, explicitly states that it is not rigorous, and even cites Wikipedia.) Mgnbar (talk) 13:41, 21 October 2021 (UTC)
Whatever; these notes are more than reliable, if only you understood what a towering giant the IAS author is. The notes are more rigorous than the level of the article. Fussing about rigor while stumbling on the obvious is bad form. The parting "reference" on WP's Matrix exponential#The exponential map simply reminds you of the well-known formula. WP provides a ref to Willcox's article. This is routine stuff. Cuzkatzimhut (talk) 15:14, 21 October 2021 (UTC)
Appreciate the addition, thanks! Mrfout (talk) 12:45, 31 December 2021 (UTC)
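For readers of this thread: written with explicit matrix inverses rather than fractions, the integral presumably being discussed is

log A = ∫_0^1 (A - I) [t(A - I) + I]^-1 dt,

valid when A has no eigenvalues on the closed negative real axis; differentiating under the integral also gives the derivative formula quoted in the "Derivative?" thread above.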

first element of the sum

I'm not sure about this, but if the summation goes from k=1 to infinity, shouldn't the first element be (-1)^{2} \frac{(B-I)^2}{2}, which is the second one in the formula? The formula includes the term for k=0, so I think either the lower bound of k or the formula is wrong. DoerteMitHut (talk) 12:39, 16 June 2021 (UTC)

You're talking about the power series expansion? It looks right (or at least self-consistent) to me. Notice that, although the exponent on -1 is k + 1, the denominator and the exponent on (B - I) are simply k. Mgnbar (talk) 14:54, 16 June 2021 (UTC)
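Spelled out, the first few terms of the series are

log B = (B - I) - (B - I)^2/2 + (B - I)^3/3 - …,

so starting the sum at k = 1 gives (-1)^(1+1) (B - I)^1/1 = B - I as the first element, matching the formula.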