Baker–Campbell–Hausdorff formula

In mathematics, the Baker–Campbell–Hausdorff formula gives the value of $Z$ that solves the equation

$$e^X e^Y = e^Z$$

for possibly noncommutative $X$ and $Y$ in the Lie algebra of a Lie group. There are various ways of writing the formula, but all ultimately yield an expression for $Z$ in Lie algebraic terms, that is, as a formal series (not necessarily convergent) in $X$ and $Y$ and iterated commutators thereof. The first few terms of this series are:
$$Z = X + Y + \frac{1}{2}[X,Y] + \frac{1}{12}[X,[X,Y]] - \frac{1}{12}[Y,[X,Y]] + \cdots,$$
where "$\cdots$" indicates terms involving higher commutators of $X$ and $Y$. If $X$ and $Y$ are sufficiently small elements of the Lie algebra $\mathfrak{g}$ of a Lie group $G$, the series is convergent. Meanwhile, every element $g$ sufficiently close to the identity in $G$ can be expressed as $g = e^X$ for a small $X$ in $\mathfrak{g}$. Thus, we can say that near the identity the group multiplication in $G$, written as $e^X e^Y = e^Z$, can be expressed in purely Lie algebraic terms. The Baker–Campbell–Hausdorff formula can be used to give comparatively simple proofs of deep results in the Lie group–Lie algebra correspondence.

If $X$ and $Y$ are sufficiently small matrices, then $Z$ can be computed as the logarithm of $e^X e^Y$, where the exponentials and the logarithm can be computed as power series. The point of the Baker–Campbell–Hausdorff formula is then the highly nonobvious claim that $Z = \log\left(e^X e^Y\right)$ can be expressed as a series in repeated commutators of $X$ and $Y$.
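
For readers who want to see this concretely, the following minimal NumPy/SciPy sketch (the random test matrices, scale factor, and helper function are ad hoc choices made here for illustration) compares the matrix logarithm of $e^X e^Y$ with the truncation of the commutator series quoted above.

    import numpy as np
    from scipy.linalg import expm, logm

    def comm(A, B):
        return A @ B - B @ A          # the matrix commutator [A, B]

    rng = np.random.default_rng(0)
    X = 0.1 * rng.standard_normal((3, 3))   # "sufficiently small" matrices
    Y = 0.1 * rng.standard_normal((3, 3))

    Z = logm(expm(X) @ expm(Y))             # Z computed from the power-series side

    # truncation of the commutator series after the fourth-order term
    bch = (X + Y + comm(X, Y) / 2
           + comm(X, comm(X, Y)) / 12 - comm(Y, comm(X, Y)) / 12
           - comm(Y, comm(X, comm(X, Y))) / 24)

    print(np.linalg.norm(Z - bch))          # small; the discrepancy is of fifth order in X and Y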

Modern expositions of the formula can be found in, among other places, the books of Rossmann[1] and Hall.[2]

History

The formula is named after Henry Frederick Baker, John Edward Campbell, and Felix Hausdorff who stated its qualitative form, i.e. that only commutators and commutators of commutators, ad infinitum, are needed to express the solution. An earlier statement of the form was adumbrated by Friedrich Schur in 1890[3] where a convergent power series is given, with terms recursively defined.[4] This qualitative form is what is used in the most important applications, such as the relatively accessible proofs of the Lie correspondence and in quantum field theory. Following Schur, it was noted in print by Campbell[5] (1897); elaborated by Henri Poincaré[6] (1899) and Baker (1902);[7] and systematized geometrically, and linked to the Jacobi identity by Hausdorff (1906).[8] The first actual explicit formula, with all numerical coefficients, is due to Eugene Dynkin (1947).[9] The history of the formula is described in detail in the article of Achilles and Bonfiglioli[10] and in the book of Bonfiglioli and Fulci.[11]

Explicit forms

For many purposes, it is only necessary to know that an expansion for $Z$ in terms of iterated commutators of $X$ and $Y$ exists; the exact coefficients are often irrelevant. (See, for example, the discussion of the relationship between Lie group and Lie algebra homomorphisms in Section 5.2 of Hall's book,[2] where the precise coefficients play no role in the argument.) A remarkably direct existence proof was given by Martin Eichler,[12] see also the "Existence results" section below.

In other cases, one may need detailed information about $Z$ and it is therefore desirable to compute $Z$ as explicitly as possible. Numerous formulas exist; we will describe two of the main ones (Dynkin's formula and the integral formula of Poincaré) in this section.

Dynkin's formula

Let G be a Lie group with Lie algebra $\mathfrak{g}$. Let

$$\exp \colon \mathfrak{g} \to G$$
be the exponential map. The following general combinatorial formula was introduced by Eugene Dynkin (1947),[13][14]
$$Z(X,Y) = \log\left(e^X e^Y\right) = \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k} \sum_{\substack{r_1 + s_1 > 0 \\ \vdots \\ r_k + s_k > 0}} \frac{\left[X^{r_1} Y^{s_1} X^{r_2} Y^{s_2} \dotsm X^{r_k} Y^{s_k}\right]}{\left(\sum_{j=1}^{k} (r_j + s_j)\right) \prod_{i=1}^{k} r_i!\, s_i!},$$
where the sum is performed over all nonnegative values of $s_i$ and $r_i$, and the following notation has been used:
$$\left[X^{r_1} Y^{s_1} \dotsm X^{r_k} Y^{s_k}\right] = [\underbrace{X,[X,\dotsm[X}_{r_1},[\underbrace{Y,[Y,\dotsm[Y}_{s_1},\dotsm[\underbrace{X,[X,\dotsm[X}_{r_k},[\underbrace{Y,[Y,\dotsm Y}_{s_k}]]\dotsm]]$$
with the understanding that [X] := X.

The series is not convergent in general; it is convergent (and the stated formula is valid) for all sufficiently small $X$ and $Y$. Since [A, A] = 0, the term is zero if $s_k > 1$ or if $s_k = 0$ and $r_k > 1$.[15]

The first few terms are well-known, with all higher-order terms involving [X,Y] and commutator nestings thereof (thus in the Lie algebra):

$$\begin{aligned} Z(X,Y) = {}& X + Y + \frac{1}{2}[X,Y] + \frac{1}{12}\left([X,[X,Y]] + [Y,[Y,X]]\right) - \frac{1}{24}[Y,[X,[X,Y]]] \\ & - \frac{1}{720}\left([Y,[Y,[Y,[Y,X]]]] + [X,[X,[X,[X,Y]]]]\right) \\ & + \frac{1}{360}\left([X,[Y,[Y,[Y,X]]]] + [Y,[X,[X,[X,Y]]]]\right) \\ & + \frac{1}{120}\left([Y,[X,[Y,[X,Y]]]] + [X,[Y,[X,[Y,X]]]]\right) + \cdots \end{aligned}$$

The above lists all summands of order 5 or lower (i.e. those containing five or fewer X's and Y's); the ellipsis indicates terms of order 6 and higher. The XY (anti-)symmetry in alternating orders of the expansion follows from Z(Y, X) = −Z(−X, −Y). A complete elementary proof of this formula can be found in the article on the derivative of the exponential map.
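
A hedged numerical sketch of the truncation behaviour (the helper bch4 and the scaling test below are illustrative assumptions, not part of the formula's statement): truncating the series after the fourth-order term leaves an error of fifth order, so halving the arguments should shrink the error by roughly a factor of 32.

    import numpy as np
    from scipy.linalg import expm, logm

    def comm(A, B):
        return A @ B - B @ A

    def bch4(X, Y):
        # the series above, truncated after -1/24 [Y,[X,[X,Y]]]
        return (X + Y + comm(X, Y) / 2
                + (comm(X, comm(X, Y)) - comm(Y, comm(X, Y))) / 12
                - comm(Y, comm(X, comm(X, Y))) / 24)

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    for t in (0.2, 0.1, 0.05):
        err = np.linalg.norm(logm(expm(t * A) @ expm(t * B)) - bch4(t * A, t * B))
        print(t, err)     # the error drops by roughly 2**5 = 32 each time t is halved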

An integral formula

There are numerous other expressions for $Z$, many of which are used in the physics literature.[16][17] A popular integral formula is[18][19]

$$\log\left(e^X e^Y\right) = X + \left(\int_0^1 \psi\left(e^{\operatorname{ad}_X}\, e^{t\,\operatorname{ad}_Y}\right) dt\right) Y,$$
involving the generating function for the Bernoulli numbers,
$$\psi(x) = \frac{x \log x}{x - 1} = 1 - \sum_{n=1}^{\infty} \frac{(1-x)^{n}}{n(n+1)},$$
utilized by Poincaré and Hausdorff.[nb 1]
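
The following sketch checks the integral formula numerically for small matrices. The implementation choices are assumptions made here for illustration: $\operatorname{ad}_X$ is represented as a matrix acting on vectorized matrices, $\psi$ is evaluated through a truncation of the series given above, and the integral is done by Gauss–Legendre quadrature.

    import numpy as np
    from scipy.linalg import expm, logm

    n = 3
    rng = np.random.default_rng(2)
    X = 0.05 * rng.standard_normal((n, n))   # small enough that the series for psi converges
    Y = 0.05 * rng.standard_normal((n, n))

    def ad(A):
        # matrix of ad_A acting on column-major vectorized n x n matrices
        return np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))

    def psi(W, terms=80):
        # psi(x) = x log x / (x - 1) = 1 - sum_{k>=1} (1-x)^k / (k(k+1))
        I = np.eye(n * n)
        U = I - W
        out, P = I.copy(), I.copy()
        for k in range(1, terms + 1):
            P = P @ U
            out -= P / (k * (k + 1))
        return out

    nodes, weights = np.polynomial.legendre.leggauss(20)     # quadrature rule mapped to [0, 1]
    ts, ws = 0.5 * (nodes + 1.0), 0.5 * weights

    integral = sum(w * psi(expm(ad(X)) @ expm(t * ad(Y))) for t, w in zip(ts, ws))
    Z_int = X + (integral @ Y.flatten(order="F")).reshape((n, n), order="F")

    print(np.linalg.norm(Z_int - logm(expm(X) @ expm(Y))))   # small for these small X and Y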

Matrix Lie group illustration

For a matrix Lie group $G \subset \mathrm{GL}(n;\mathbb{C})$ the Lie algebra is the tangent space at the identity I, and the commutator is simply $[X, Y] = XY - YX$; the exponential map is the standard exponential map of matrices,

$$e^{X} = \sum_{n=0}^{\infty} \frac{X^{n}}{n!}.$$

When one solves for Z in

$$e^{Z} = e^{X} e^{Y}$$
using the series expansions for exp and log one obtains a simpler formula:
$$Z = \log\left(e^{X} e^{Y}\right) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \sum_{\substack{r_i + s_i > 0 \\ 1 \le i \le n}} \frac{X^{r_1} Y^{s_1} \dotsm X^{r_n} Y^{s_n}}{r_1!\, s_1! \dotsm r_n!\, s_n!}.$$
[nb 2] The first, second, third, and fourth order terms are:
  • $z_1 = X + Y$
  • $z_2 = \frac{1}{2}(XY - YX)$
  • $z_3 = \frac{1}{12}\left(X^2 Y + XY^2 - 2XYX + Y^2 X + YX^2 - 2YXY\right)$
  • $z_4 = \frac{1}{24}\left(X^2 Y^2 - Y^2 X^2\right) + \frac{1}{12}\left(YXYX - XYXY\right)$

The formula above giving the various $z_n$'s is not the Baker–Campbell–Hausdorff formula. Rather, the Baker–Campbell–Hausdorff formula is one of various expressions for the $z_n$'s in terms of repeated commutators of $X$ and $Y$. The point is that it is far from obvious that it is possible to express each $z_n$ in terms of commutators. (The reader is invited, for example, to verify by direct computation that $z_3$ is expressible as a linear combination of the two nontrivial third-order commutators of $X$ and $Y$, namely $[X,[X,Y]]$ and $[Y,[X,Y]]$.) The general result that each $z_n$ is expressible as a combination of commutators was shown in an elegant, recursive way by Eichler.[12]
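
As a sketch of the invitation above, the following SymPy computation (the grading parameter t and the helpers are ad hoc devices introduced here) extracts $z_2$ and $z_3$ from the series and confirms the commutator combinations $\frac{1}{2}[X,Y]$ and $\frac{1}{12}[X,[X,Y]] - \frac{1}{12}[Y,[X,Y]]$.

    import sympy as sp

    t = sp.symbols('t')
    X, Y = sp.symbols('X Y', commutative=False)
    N = 3   # total degree is tracked by powers of the commuting parameter t

    expX = sum(t**i * X**i / sp.factorial(i) for i in range(N + 1))
    expY = sum(t**j * Y**j / sp.factorial(j) for j in range(N + 1))
    W = sp.expand(expX * expY - 1)
    Z = sp.expand(sum((-1)**(k + 1) * W**k / k for k in range(1, N + 1)))   # log(1 + W), truncated

    def z(m):
        # homogeneous piece of degree m, i.e. the coefficient of t**m
        return sp.expand(sp.diff(Z, t, m).subs(t, 0) / sp.factorial(m))

    comm = lambda A, B: A * B - B * A
    print(sp.expand(z(2) - comm(X, Y) / 2))                                       # 0
    print(sp.expand(z(3) - (comm(X, comm(X, Y)) - comm(Y, comm(X, Y))) / 12))     # 0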

A consequence of the Baker–Campbell–Hausdorff formula is the following result about the trace:

$$\operatorname{tr} \log\left(e^{X} e^{Y}\right) = \operatorname{tr} X + \operatorname{tr} Y.$$
That is to say, since each $z_n$ with $n \geq 2$ is expressible as a linear combination of commutators, the trace of each such term is zero.
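
Numerically, the trace identity is easy to observe (the matrices below are arbitrary small test data); it is equivalent to the determinant identity $\det\left(e^X e^Y\right) = e^{\operatorname{tr} X + \operatorname{tr} Y}$.

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(3)
    X = 0.2 * rng.standard_normal((4, 4))
    Y = 0.2 * rng.standard_normal((4, 4))

    print(np.trace(logm(expm(X) @ expm(Y))))   # matches trace(X) + trace(Y) up to rounding
    print(np.trace(X) + np.trace(Y))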

Questions of convergence

Suppose $X$ and $Y$ are the following matrices in the Lie algebra $\mathfrak{sl}(2;\mathbb{C})$ (the space of $2 \times 2$ matrices with trace zero):

$$X = \begin{pmatrix} i\pi & 0 \\ 0 & -i\pi \end{pmatrix}, \qquad Y = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Then
$$e^{X} e^{Y} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & -1 \\ 0 & -1 \end{pmatrix}.$$
It is then not hard to show[20] that there does not exist a matrix $Z$ in $\mathfrak{sl}(2;\mathbb{C})$ with $e^{Z} = e^{X} e^{Y}$. (Similar examples may be found in the article of Wei.[21])
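
A short numerical sketch of this example (the concluding argument is only summarized in the comments; see Hall's book for the proof):

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[1j * np.pi, 0.0], [0.0, -1j * np.pi]])
    Y = np.array([[0.0, 1.0], [0.0, 0.0]])

    M = expm(X) @ expm(Y)
    print(np.round(M.real, 12))   # [[-1, -1], [0, -1]]: one Jordan block with eigenvalue -1
    # Any Z with e^Z = M would need both eigenvalues in {i*pi*(2k+1)}.  Distinct eigenvalues
    # would make Z, and hence e^Z, diagonalizable; equal eigenvalues force trace(Z) != 0.
    # Either way Z cannot lie in sl(2;C).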

This simple example illustrates that the various versions of the Baker–Campbell–Hausdorff formula, which give expressions for Z in terms of iterated Lie-brackets of X and Y, describe formal power series whose convergence is not guaranteed. Thus, if one wants Z to be an actual element of the Lie algebra containing X and Y (as opposed to a formal power series), one has to assume that X and Y are small. Consequently, the conclusion that the product operation on a Lie group is determined by the Lie algebra is only a local statement. Indeed, the result cannot be global, because globally one can have nonisomorphic Lie groups with isomorphic Lie algebras.

Concretely, if working with a matrix Lie algebra and $\|\cdot\|$ is a given submultiplicative matrix norm, convergence is guaranteed[14][22] if

$$\|X\| + \|Y\| < \frac{\ln 2}{2}.$$

Special cases

If $X$ and $Y$ commute, that is $[X, Y] = 0$, the Baker–Campbell–Hausdorff formula reduces to $e^{X} e^{Y} = e^{X + Y}$.

Another case assumes that $[X, Y]$ commutes with both $X$ and $Y$, as for the nilpotent Heisenberg group. Then the formula reduces to its first three terms.

Theorem:[23] If $X$ and $Y$ commute with their commutator, $[X, [X, Y]] = [Y, [X, Y]] = 0$, then $e^{X} e^{Y} = e^{X + Y + \frac{1}{2}[X, Y]}$.

This is the degenerate case used routinely in quantum mechanics, as illustrated below, and is sometimes known as the disentangling theorem.[24] In this case, there are no smallness restrictions on $X$ and $Y$. This result is behind the "exponentiated commutation relations" that enter into the Stone–von Neumann theorem. A simple proof of this identity is given below.
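
A minimal numerical illustration with the Heisenberg Lie algebra of strictly upper triangular 3×3 matrices (the particular numerical entries are arbitrary), for which [X, Y] is automatically central:

    import numpy as np
    from scipy.linalg import expm

    def heis(a, b, c):
        # a generic element of the Heisenberg Lie algebra (strictly upper triangular)
        return np.array([[0.0, a, c],
                         [0.0, 0.0, b],
                         [0.0, 0.0, 0.0]])

    X = heis(0.7, -0.3, 0.2)
    Y = heis(-0.4, 1.1, 0.5)
    C = X @ Y - Y @ X                  # only the corner entry is nonzero, so C is central

    lhs = expm(X) @ expm(Y)
    rhs = expm(X + Y + 0.5 * C)
    print(np.abs(lhs - rhs).max())     # zero up to rounding; no smallness assumption is needed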

Another useful form of the general formula emphasizes expansion in terms of Y and uses the adjoint mapping notation $\operatorname{ad}_X(Y) = [X, Y]$:

$$\log\left(e^{X} e^{Y}\right) = X + \frac{\operatorname{ad}_X}{1 - e^{-\operatorname{ad}_X}}\, Y + O\!\left(Y^{2}\right),$$
which is evident from the integral formula above. (The coefficients of the nested commutators with a single $Y$ are normalized Bernoulli numbers.)

Now assume that the commutator is a multiple of $Y$, so that $[X, Y] = sY$. Then all iterated commutators will be multiples of $Y$, and no quadratic or higher terms in $Y$ appear. Thus, the $O\!\left(Y^{2}\right)$ term above vanishes and we obtain:

Theorem:[25] If $[X, Y] = sY$, where $s$ is a complex number with $s \neq 2\pi i n$ for all integers $n$, then we have

$$e^{X} e^{Y} = e^{X + \frac{s}{1 - e^{-s}}\, Y}.$$

Again, in this case there are no smallness restrictions on $X$ and $Y$. The restriction on $s$ guarantees that the expression on the right side makes sense. (When $s = 0$ we may interpret $\frac{s}{1 - e^{-s}} = 1$.) We also obtain a simple "braiding identity":

$$e^{X} e^{Y} = e^{e^{s}\, Y}\, e^{X},$$
which may be written as an adjoint dilation:
$$e^{-X} e^{Y} e^{X} = e^{e^{-s}\, Y}.$$
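
Both statements are easy to check numerically with a 2×2 example in which $[X, Y] = sY$ holds exactly (the matrices and the value of s below are arbitrary choices made for illustration):

    import numpy as np
    from scipy.linalg import expm

    s = 0.7
    X = np.array([[s, 0.0], [0.0, 0.0]])
    Y = np.array([[0.0, 1.0], [0.0, 0.0]])
    assert np.allclose(X @ Y - Y @ X, s * Y)          # [X, Y] = s Y

    lhs = expm(X) @ expm(Y)
    rhs = expm(X + (s / (1 - np.exp(-s))) * Y)        # the theorem above
    print(np.abs(lhs - rhs).max())                    # zero up to rounding

    dilation = expm(-X) @ expm(Y) @ expm(X)
    print(np.abs(dilation - expm(np.exp(-s) * Y)).max())   # the adjoint dilation identity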

Existence results

If $X$ and $Y$ are matrices, one can compute $Z = \log\left(e^{X} e^{Y}\right)$ using the power series for the exponential and logarithm, with convergence of the series if $X$ and $Y$ are sufficiently small. It is natural to collect together all terms where the total degree in $X$ and $Y$ equals a fixed number $n$, giving an expression $z_n$. (See the section "Matrix Lie group illustration" above for formulas for the first several $z_n$'s.) A remarkably direct and concise, recursive proof that each $z_n$ is expressible in terms of repeated commutators of $X$ and $Y$ was given by Martin Eichler.[12]

Alternatively, we can give an existence argument as follows. The Baker–Campbell–Hausdorff formula implies that if X and Y are in some Lie algebra $\mathfrak{g}$ defined over any field of characteristic 0 like $\mathbb{R}$ or $\mathbb{C}$, then

$$Z = \log\left(e^{X} e^{Y}\right)$$
can formally be written as an infinite sum of elements of $\mathfrak{g}$. [This infinite series may or may not converge, so it need not define an actual element Z in $\mathfrak{g}$.] For many applications, the mere assurance of the existence of this formal expression is sufficient, and an explicit expression for this infinite sum is not needed. This is for instance the case in the Lorentzian[26] construction of a Lie group representation from a Lie algebra representation. Existence can be seen as follows.

We consider the ring $S$ of all non-commuting formal power series with real coefficients in the non-commuting variables X and Y. There is a ring homomorphism from S to the tensor product of S with S over $\mathbb{R}$,

$$\Delta \colon S \to S \otimes S,$$
called the coproduct, such that
$$\Delta(X) = X \otimes 1 + 1 \otimes X$$
and
$$\Delta(Y) = Y \otimes 1 + 1 \otimes Y.$$
(The definition of Δ is extended to the other elements of S by requiring R-linearity, multiplicativity and infinite additivity.)

One can then verify the following properties:

  • The map exp, defined by its standard Taylor series, is a bijection between the set of elements of S with constant term 0 and the set of elements of S with constant term 1; the inverse of exp is log.
  • $\exp(s)$ is grouplike (this means $\Delta(\exp s) = \exp s \otimes \exp s$) if and only if s is primitive (this means $\Delta(s) = s \otimes 1 + 1 \otimes s$).
  • The grouplike elements form a group under multiplication.
  • The primitive elements are exactly the formal infinite sums of elements of the Lie algebra generated by X and Y, where the Lie bracket is given by the commutator $[X, Y] = XY - YX$. (Friedrichs' theorem[16][13])

The existence of the Campbell–Baker–Hausdorff formula can now be seen as follows:[13] The elements X and Y are primitive, so $\exp(X)$ and $\exp(Y)$ are grouplike; so their product $\exp(X) \exp(Y)$ is also grouplike; so its logarithm $\log\left(\exp(X) \exp(Y)\right)$ is primitive; and hence can be written as an infinite sum of elements of the Lie algebra generated by X and Y.

The universal enveloping algebra of the free Lie algebra generated by X and Y is isomorphic to the algebra of all non-commuting polynomials in X and Y. In common with all universal enveloping algebras, it has a natural structure of a Hopf algebra, with a coproduct Δ. The ring S used above is just a completion of this Hopf algebra.

Zassenhaus formula

A related combinatoric expansion that is useful in dual[16] applications is

$$e^{t(X+Y)} = e^{tX}\, e^{tY}\, e^{-\frac{t^2}{2}[X, Y]}\, e^{\frac{t^3}{6}\left(2[Y,[X,Y]] + [X,[X,Y]]\right)} \cdots$$
where the exponents of higher order in t are likewise nested commutators, i.e., homogeneous Lie polynomials.[27] These exponents, $C_n$ in $e^{-tX}\, e^{t(X+Y)} = \prod_n e^{t^{n} C_n}$, follow recursively by application of the above BCH expansion.

As a corollary of this, the Suzuki–Trotter decomposition follows.
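
A sketch of the recursion's first two steps in code (the explicit C2 and C3 used below follow from matching the BCH expansion through third order; the scaling test is an illustrative assumption): keeping the factors up to $e^{t^3 C_3}$ leaves a defect of order $t^4$.

    import numpy as np
    from scipy.linalg import expm

    def comm(A, B):
        return A @ B - B @ A

    rng = np.random.default_rng(4)
    X = rng.standard_normal((3, 3))
    Y = rng.standard_normal((3, 3))

    C2 = -comm(X, Y) / 2
    C3 = (2 * comm(Y, comm(X, Y)) + comm(X, comm(X, Y))) / 6

    for t in (0.2, 0.1, 0.05):
        zass = expm(t * X) @ expm(t * Y) @ expm(t**2 * C2) @ expm(t**3 * C3)
        err = np.linalg.norm(expm(t * (X + Y)) - zass)
        print(t, err)      # the error shrinks by roughly 2**4 = 16 each time t is halved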

An important lemma and its application to a special case of the Baker–Campbell–Hausdorff formula

The identity (Campbell 1897)

Let G be a matrix Lie group and $\mathfrak{g}$ its corresponding Lie algebra. Let $\operatorname{ad}_X$ be the linear operator on $\mathfrak{g}$ defined by $\operatorname{ad}_X Y = [X, Y] = XY - YX$ for some fixed $X \in \mathfrak{g}$. (The adjoint endomorphism encountered above.) Denote with $\operatorname{Ad}_A$ for fixed $A \in G$ the linear transformation of $\mathfrak{g}$ given by $\operatorname{Ad}_A Y = AYA^{-1}$.

A standard combinatorial lemma which is utilized[18] in producing the above explicit expansions is given by[28]

$$\operatorname{Ad}_{e^{X}} = e^{\operatorname{ad}_X},$$
so, explicitly,
$$\operatorname{Ad}_{e^{X}} Y = e^{X} Y e^{-X} = Y + [X, Y] + \frac{1}{2!}[X, [X, Y]] + \frac{1}{3!}[X, [X, [X, Y]]] + \cdots = e^{\operatorname{ad}_X} Y.$$
This is a particularly useful formula which is commonly used to conduct unitary transforms in quantum mechanics. By defining the iterated commutator,
$$[(X)^{n}, Y] \equiv \underbrace{[X, [X, \dotsm [X}_{n \text{ times}}, Y] \dotsm ]], \qquad [(X)^{0}, Y] \equiv Y,$$
we can write this formula more compactly as,
$$e^{X} Y e^{-X} = \sum_{n=0}^{\infty} \frac{[(X)^{n}, Y]}{n!}.$$

This formula can be proved by evaluation of the derivative with respect to s of $f(s)Y \equiv e^{sX} Y e^{-sX}$, solution of the resulting differential equation and evaluation at $s = 1$,

$$\frac{d}{ds}\left(f(s)Y\right) = X e^{sX} Y e^{-sX} - e^{sX} Y e^{-sX} X = \operatorname{ad}_X\left(f(s)Y\right),$$
or[29]
$$f(s) = e^{s\, \operatorname{ad}_X}.$$
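
A quick numerical check of the lemma (the truncation length and the random test matrices are arbitrary assumptions): the conjugation $e^X Y e^{-X}$ agrees with the partial sums of the iterated-commutator series.

    import numpy as np
    from math import factorial
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    X = 0.5 * rng.standard_normal((3, 3))
    Y = rng.standard_normal((3, 3))

    lhs = expm(X) @ Y @ expm(-X)           # Ad_{e^X} Y

    rhs = np.zeros_like(Y)
    term = Y.copy()
    for n in range(30):                    # sum_n [(X)^n, Y] / n!
        rhs += term / factorial(n)
        term = X @ term - term @ X         # next iterated commutator
    print(np.linalg.norm(lhs - rhs))       # agrees to near machine precision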

An application of the identity

For [X,Y] central, i.e., commuting with both X and Y,

$$e^{sX} Y e^{-sX} = Y + s[X, Y].$$
Consequently, for $g(s) \equiv e^{sX} e^{sY}$, it follows that
$$\frac{dg}{ds} = \left(X + e^{sX} Y e^{-sX}\right) g(s) = \left(X + Y + s[X, Y]\right) g(s),$$
whose solution is
$$g(s) = e^{s(X+Y) + \frac{s^2}{2}[X, Y]}.$$
Taking $s = 1$ gives one of the special cases of the Baker–Campbell–Hausdorff formula described above:
$$e^{X} e^{Y} = e^{X + Y + \frac{1}{2}[X, Y]}.$$

More generally, for non-central [X,Y], the following braiding identity further follows readily,

$$e^{X} e^{Y} = e^{Y + [X, Y] + \frac{1}{2!}[X, [X, Y]] + \frac{1}{3!}[X, [X, [X, Y]]] + \cdots}\; e^{X}.$$
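
The braiding identity is exact (no smallness assumption), since the exponent on the right is just $e^X Y e^{-X}$; the following sketch with arbitrary test matrices makes that explicit.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(6)
    X = 0.5 * rng.standard_normal((3, 3))
    Y = 0.5 * rng.standard_normal((3, 3))

    lhs = expm(X) @ expm(Y)
    # Y + [X,Y] + [X,[X,Y]]/2! + ... = e^X Y e^{-X}, so exponentiate the conjugated matrix
    rhs = expm(expm(X) @ Y @ expm(-X)) @ expm(X)
    print(np.abs(lhs - rhs).max())         # zero up to rounding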

Infinitesimal case

A particularly useful variant of the above is the infinitesimal form. This is commonly written as

$$e^{-X}\, de^{X} = dX - \frac{1}{2!}\left[X, dX\right] + \frac{1}{3!}\left[X, \left[X, dX\right]\right] - \cdots$$
This variation is commonly used to write coordinates and vielbeins as pullbacks of the metric on a Lie group.
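
A finite-difference sketch of the infinitesimal formula along the straight-line path $X(t) = A + tB$ (the path, the step size, and the truncation length are assumptions made for this illustration):

    import numpy as np
    from math import factorial
    from scipy.linalg import expm

    rng = np.random.default_rng(7)
    A = 0.2 * rng.standard_normal((3, 3))   # X(0)
    B = 0.2 * rng.standard_normal((3, 3))   # dX/dt along the path X(t) = A + t B

    h = 1e-5
    lhs = expm(-A) @ (expm(A + h * B) - expm(A - h * B)) / (2 * h)   # e^{-X} d/dt e^{X} at t = 0

    rhs = np.zeros_like(A)
    term = B.copy()
    for n in range(20):                      # sum_n (-1)^n [A,[A,...,[A, B]...]] / (n+1)!
        rhs += (-1) ** n * term / factorial(n + 1)
        term = A @ term - term @ A
    print(np.linalg.norm(lhs - rhs))         # small; limited by the finite-difference step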

For example, writing $X = X^{i} e_i$ for some functions $X^{i}$ and a basis $e_i$ for the Lie algebra, one readily computes that

$$e^{-X}\, de^{X} = \left(dX^{i} - \frac{1}{2}\, {f_{jk}}^{i}\, X^{j}\, dX^{k} + \cdots \right) e_i,$$
for ${f_{jk}}^{i}$ the structure constants of the Lie algebra.

The series can be written more compactly (cf. main article) as

$$e^{-X}\, de^{X} = \left({W^{i}}_{j}\, dX^{j}\right) e_i,$$
with the infinite series
$$W = \sum_{n=0}^{\infty} \frac{(-M)^{n}}{(n+1)!} = \frac{1 - e^{-M}}{M}.$$
Here, $M$ is a matrix whose matrix elements are ${M^{i}}_{j} = X^{k}\, {f_{kj}}^{i}$.

The usefulness of this expression comes from the fact that the matrix $W$ is a vielbein. Thus, given some map from a manifold N to the manifold G, the metric tensor on the manifold N can be written as the pullback of the metric tensor $B_{mn}$ on the Lie group G,

$$g_{ij} = {W^{m}}_{i}\, {W^{n}}_{j}\, B_{mn}.$$
The metric tensor $B_{mn}$ on the Lie group is the Cartan metric, the Killing form. For N a (pseudo-)Riemannian manifold, the metric is a (pseudo-)Riemannian metric.

Application in quantum mechanics

A special case of the Baker–Campbell–Hausdorff formula is useful in quantum mechanics and especially quantum optics, where X and Y are Hilbert space operators, generating the Heisenberg Lie algebra. Specifically, the position and momentum operators in quantum mechanics, usually denoted $\hat{x}$ and $\hat{p}$, satisfy the canonical commutation relation:

$$[\hat{x}, \hat{p}] = i\hbar I,$$
where $I$ is the identity operator. It follows that $\hat{x}$ and $\hat{p}$ commute with their commutator. Thus, if we formally applied a special case of the Baker–Campbell–Hausdorff formula (even though $\hat{x}$ and $\hat{p}$ are unbounded operators and not matrices), we would conclude that
$$e^{ia\hat{x}} e^{ib\hat{p}} = e^{i\left(a\hat{x} + b\hat{p}\right)}\, e^{-\frac{iab\hbar}{2}}.$$
This "exponentiated commutation relation" does indeed hold, and forms the basis of the Stone–von Neumann theorem. Further,  


A related application is the annihilation and creation operators, $\hat{a}$ and $\hat{a}^{\dagger}$. Their commutator $[\hat{a}^{\dagger}, \hat{a}] = -I$ is central, that is, it commutes with both $\hat{a}$ and $\hat{a}^{\dagger}$. As indicated above, the expansion then collapses to the semi-trivial degenerate form:

$$e^{v\hat{a}^{\dagger} - v^{*}\hat{a}} = e^{v\hat{a}^{\dagger}}\, e^{-v^{*}\hat{a}}\, e^{-\frac{1}{2}|v|^{2}},$$
where v is just a complex number.

This example illustrates the resolution of the displacement operator, $\exp\!\left(v\hat{a}^{\dagger} - v^{*}\hat{a}\right)$, into exponentials of annihilation and creation operators and scalars.[30]

This degenerate Baker–Campbell–Hausdorff formula then displays the product of two displacement operators as another displacement operator (up to a phase factor), with the resultant displacement equal to the sum of the two displacements,

$$e^{v\hat{a}^{\dagger} - v^{*}\hat{a}}\; e^{w\hat{a}^{\dagger} - w^{*}\hat{a}} = e^{(v+w)\hat{a}^{\dagger} - (v+w)^{*}\hat{a}}\; e^{\frac{1}{2}\left(v w^{*} - v^{*} w\right)},$$
since the Heisenberg group, of which they provide a representation, is nilpotent. The degenerate Baker–Campbell–Hausdorff formula is frequently used in quantum field theory as well.[31]
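
The composition law can be observed numerically on a truncated Fock space (the cutoff, the displacement values, and the block used for comparison are arbitrary choices; the identity is exact only in the untruncated, infinite-dimensional case, so small cutoff-dependent errors remain near the edge of the truncation).

    import numpy as np
    from scipy.linalg import expm

    N = 60                                        # Fock-space cutoff
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator on the truncated space
    adag = a.conj().T

    def D(v):
        # displacement operator exp(v a^dagger - v* a)
        return expm(v * adag - np.conj(v) * a)

    v, w = 0.4 + 0.2j, -0.3 + 0.5j
    lhs = D(v) @ D(w)
    rhs = D(v + w) * np.exp((v * np.conj(w) - np.conj(v) * w) / 2)

    k = 15                                        # compare well below the cutoff
    print(np.abs(lhs[:k, :k] - rhs[:k, :k]).max())   # tiny; improves further as N grows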

See also

Notes

  1. ^ Recall
     $$\psi\left(e^{y}\right) = \frac{y}{1 - e^{-y}} = \sum_{n=0}^{\infty} \frac{B_{n}}{n!}\, y^{n},$$
     for the Bernoulli numbers, B0 = 1, B1 = 1/2, B2 = 1/6, B4 = −1/30, ...
  2. ^ Rossmann 2002 Equation (2) Section 1.3. For matrix Lie algebras over the fields R and C, the convergence criterion is that the log series converges for both sides of $e^Z = e^X e^Y$. This is guaranteed whenever $\|X\| + \|Y\| < \log 2$, $\|Z\| < \log 2$ in the Hilbert–Schmidt norm. Convergence may occur on a larger domain. See Rossmann 2002 p. 24.

References

  1. ^ Rossmann 2002
  2. ^ a b Hall 2015
  3. ^ F. Schur (1890), "Neue Begründung der Theorie der endlichen Transformationsgruppen," Mathematische Annalen, 35 (1890), 161–197. online copy
  4. ^ see, e.g., Shlomo Sternberg, Lie Algebras (2004) Harvard University. (cf p 10.)
  5. ^ John Edward Campbell, Proceedings of the London Mathematical Society 28 (1897) 381–390; (cf pp386-7 for the eponymous lemma); J. Campbell, Proceedings of the London Mathematical Society 29 (1898) 14–32.
  6. ^ Henri Poincaré, Comptes rendus de l'Académie des Sciences 128 (1899) 1065–1069; Transactions of the Cambridge Philosophical Society 18 (1899) 220–255. online
  7. ^ Henry Frederick Baker, Proceedings of the London Mathematical Society (1) 34 (1902) 347–360; H. Baker, Proceedings of the London Mathematical Society (1) 35 (1903) 333–374; H. Baker, Proceedings of the London Mathematical Society (Ser 2) 3 (1905) 24–47.
  8. ^ Felix Hausdorff, "Die symbolische Exponentialformel in der Gruppentheorie", Ber Verh Saechs Akad Wiss Leipzig 58 (1906) 19–48.
  9. ^ Rossmann 2002 p. 23
  10. ^ Achilles & Bonfiglioli 2012
  11. ^ Bonfiglioli & Fulci 2012
  12. ^ a b c Eichler, Martin (1968). "A new proof of the Baker-Campbell-Hausdorff formula". Journal of the Mathematical Society of Japan. 20 (1–2): 23–25. doi:10.2969/jmsj/02010023.
  13. ^ a b c Nathan Jacobson, Lie Algebras, John Wiley & Sons, 1966.
  14. ^ a b Dynkin, Eugene Borisovich (1947). "Вычисление коэффициентов в формуле Campbell–Hausdorff" [Calculation of the coefficients in the Campbell–Hausdorff formula]. Doklady Akademii Nauk SSSR (in Russian). 57: 323–326.
  15. ^ A.A. Sagle & R.E. Walde, "Introduction to Lie Groups and Lie Algebras", Academic Press, New York, 1973. ISBN 0-12-614550-4.
  16. ^ a b c Magnus, Wilhelm (1954). "On the exponential solution of differential equations for a linear operator". Communications on Pure and Applied Mathematics. 7 (4): 649–673. doi:10.1002/cpa.3160070404.
  17. ^ Suzuki, Masuo (1985). "Decomposition formulas of exponential operators and Lie exponentials with some applications to quantum mechanics and statistical physics". Journal of Mathematical Physics. 26 (4): 601–612. Bibcode:1985JMP....26..601S. doi:10.1063/1.526596.; Veltman, M, 't Hooft, G & de Wit, B (2007), Appendix D.
  18. ^ a b W. Miller, Symmetry Groups and their Applications, Academic Press, New York, 1972, pp 159–161. ISBN 0-12-497460-0
  19. ^ Hall 2015 Theorem 5.3
  20. ^ Hall 2015 Example 3.41
  21. ^ Wei, James (October 1963). "Note on the Global Validity of the Baker-Hausdorff and Magnus Theorems". Journal of Mathematical Physics. 4 (10): 1337–1341. Bibcode:1963JMP.....4.1337W. doi:10.1063/1.1703910.
  22. ^ Biagi, Stefano; Bonfiglioli, Andrea; Matone, Marco (2018). "On the Baker-Campbell-Hausdorff Theorem: non-convergence and prolongation issues". Linear and Multilinear Algebra. 68 (7): 1310–1328. arXiv:1805.10089. doi:10.1080/03081087.2018.1540534. ISSN 0308-1087. S2CID 53585331.
  23. ^ Hall 2015 Theorem 5.1
  24. ^ Gerry, Christopher; Knight, Peter (2005). Introductory Quantum Optics (1st ed.). Cambridge University Press. p. 49. ISBN 978-0-521-52735-4.
  25. ^ Hall 2015 Exercise 5.5
  26. ^ Hall 2015 Section 5.7
  27. ^ Casas, F.; Murua, A.; Nadinic, M. (2012). "Efficient computation of the Zassenhaus formula". Computer Physics Communications. 183 (11): 2386–2391. arXiv:1204.0389. Bibcode:2012CoPhC.183.2386C. doi:10.1016/j.cpc.2012.06.006. S2CID 2704520.
  28. ^ Hall 2015 Proposition 3.35
  29. ^ Rossmann 2002 p. 15
  30. ^ L. Mandel, E. Wolf Optical Coherence and Quantum Optics (Cambridge 1995).
  31. ^ Greiner & Reinhardt 1996 See pp 27-29 for a detailed proof of the above lemma.

Bibliography

External links