Wikipedia:Reference desk/Archives/Mathematics/2013 December 18

Mathematics desk


December 18

Irreducible representations of sl(2;C)

Regarding an old question: https://en.wikipedia.org/wiki/Wikipedia:Reference_desk/Archives/Mathematics/2013_February_16#Notation_for_representation_theory_of_the_Lorentz_group

Would it be correct to say that the (m, n) representations for m ≠ 0, n = 0 are complex linear, while those for m = 0, n ≠ 0 are conjugate linear, and that those with m ≠ n, m, n ≠ 0 are either "neither" or perhaps "sesquilinear", and finally those with m = n are real linear?

(Sesquilinear supposedly means "one-and-a-half linear".) YohanN7 (talk) 17:32, 18 December 2013 (UTC)[reply]

The reason I ask is that I'm editing Representation theory of the Lorentz group a bit on these matters. Any input is welcome so that it comes out right. YohanN7 (talk) 21:09, 18 December 2013 (UTC)[reply]

Proving That ln'(x) = 1/x Without Using e = lim (1 + 1/n)^n

Where e is the sum of the reciprocals of factorials, and ln = log_e. — 79.113.241.241 (talk) 21:03, 18 December 2013 (UTC)[reply]

x = exp(y)
dx/dy = x
ln(x) = y
(differentiate ln(x) = y wrt y, using the chain rule) d(ln(x))/dx · dx/dy = 1
d(ln(x))/dx = 1/(dx/dy) = 1/x — Preceding unsigned comment added by 86.160.218.11 (talk) 22:00, 18 December 2013 (UTC)[reply]
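The chain-rule derivation above is easy to sanity-check numerically; a minimal sketch (the sample points are arbitrary):

```python
import math

# Central-difference approximation of d(ln x)/dx at a few arbitrary points;
# it should match 1/x to within the O(h^2) truncation error.
h = 1e-6
for x in (0.5, 1.0, 2.0, 10.0):
    deriv = (math.log(x + h) - math.log(x - h)) / (2 * h)
    assert abs(deriv - 1 / x) < 1e-8
```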
And how exactly would we know that the derivative of ey is ey ? — 79.113.241.241 (talk) 22:08, 18 December 2013 (UTC)[reply]
You defined e using the power series. This can be differentiated, and the result that ey is its own derivative follows. 2.25.141.83 (talk) 23:34, 18 December 2013 (UTC)[reply]
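That claim can be checked with exact rational coefficients (a quick sketch, not part of the original discussion):

```python
import math
from fractions import Fraction

# Coefficients of the power series sum x^n / n!, kept exact as fractions.
N = 20
coeffs = [Fraction(1, math.factorial(n)) for n in range(N)]

# Term-by-term differentiation: d/dx x^n/n! = x^(n-1)/(n-1)!, so the new
# coefficient of x^m is (m + 1) * coeffs[m + 1].
deriv = [(m + 1) * coeffs[m + 1] for m in range(N - 1)]
assert deriv == coeffs[:N - 1]   # the series is its own derivative
```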
No. I defined it (i.e., the number e) as a simple sum. The more general Taylor series for the exponential function would be based on its derivatives: but it is precisely these derivatives whose expression we're seeking. — 79.113.241.241 (talk) 00:29, 19 December 2013 (UTC)[reply]
You are assuming that you already know what e^x means, for any real x, given that you know the value of e. Actually, this is not so obvious, so instead we can define e^x equal to the power series, and then show that the resulting definition satisfies the expected power laws (and subsequently use the meaning of exp() and ln() to define the meaning of a^b generally for irrational numbers). 86.160.218.11 (talk) 01:14, 19 December 2013 (UTC)[reply]
And how precisely would we prove that ? I'm not too good at multiplying two infinite series. — 79.113.241.241 (talk) 01:35, 19 December 2013 (UTC)[reply]
I think you can just set up the equation (1 + x + x^2/2! + x^3/3! + ...)(1 + y + y^2/2! + y^3/3! + ...) = 1 + (x + y) + (x + y)^2/2! + (x + y)^3/3! + ... and show that the coefficients of general x^i y^j are the same on both sides (using binomial theorem for the rhs). Of course, this probably omits some technical details of why it is valid to multiply infinite series term by term. Someone smarter than me will have to explain that part. 86.160.218.11 (talk) 02:11, 19 December 2013 (UTC)[reply]
I've tried with Mathematica for small values of n, and it does indeed work (first good news so far), BUT I cannot either "see" it or prove it, unfortunately... and I was kinda hoping someone might be able to guide me through it. — 79.113.241.241 (talk) 02:32, 19 December 2013 (UTC)[reply]
Sorry, I'm lost, what exactly are you trying to show? (Not what I outlined above for exp(x) exp(y) = exp(x + y), clearly, since there is no n in that.) 86.160.218.11 (talk) 02:38, 19 December 2013 (UTC)[reply]
Yes, I've limited the number of terms in each of the two series above [in x and in y] to just a few, then used Mathematica to expand the parentheses. And I did manage to get the first few terms from the third series [in x+y]. So I'm half-way there, but I still need to figure out the general mechanism or algorithm as to how this happens. — 79.115.133.61 (talk) 16:09, 19 December 2013 (UTC)[reply]
Oh, I see, well, as I say, just compare the coefficients of x^i y^j. On the lhs you get this term by multiplying x^i/i! by y^j/j!, so the coefficient is 1/(i!j!). On the rhs, x^i y^j must come from the term (x + y)^(i + j)/(i + j)!. Using the binomial theorem, the coefficient of x^i y^j in the expansion of (x + y)^(i + j) is (i + j)!/(i!j!), then dividing by the (i + j)! we end up with the same as the lhs for all i and j. 86.176.211.137 (talk) 18:08, 19 December 2013 (UTC)[reply]
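The coefficient comparison can be verified exactly for a range of i and j (a small sketch using exact rational arithmetic):

```python
from fractions import Fraction
from math import comb, factorial

# lhs: coefficient of x^i y^j in exp(x)*exp(y) is 1/(i! j!).
# rhs: it comes from (x + y)^(i+j)/(i+j)! with binomial coefficient C(i+j, i).
for i in range(12):
    for j in range(12):
        lhs = Fraction(1, factorial(i) * factorial(j))
        rhs = Fraction(comb(i + j, i), factorial(i + j))
        assert lhs == rhs
```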
Thanks ! :-) — 79.115.133.61 (talk) 18:16, 19 December 2013 (UTC)[reply]

It is first of all easy to establish that d(e^y)/dy = e^y using the power series definition of the exponential function (what other rigorous definition of it would you use?). We then use implicit differentiation of x = e^y with respect to y.--Jasper Deng (talk) 01:58, 19 December 2013 (UTC)[reply]

Then prove that e^x = 1 + x + x^2/2! + x^3/3! + ... — 79.113.241.241 (talk) 02:07, 19 December 2013 (UTC)[reply]
How do you know what e^x means? 86.160.218.11 (talk) 02:13, 19 December 2013 (UTC)[reply]
How do we know what exponentiation means ? Seriously ? — 79.113.241.241 (talk) 02:29, 19 December 2013 (UTC)[reply]
The superscript notation is only a useful shorthand for a function that we define.--Jasper Deng (talk) 02:49, 19 December 2013 (UTC)[reply]
(ec) For irrational exponents, yes, seriously. 86.160.218.11 (talk) 02:53, 19 December 2013 (UTC)[reply]
Not quite sure why... e^x = e^d_0 · (e^d_1)^(1/10) · (e^d_2)^(1/10^2) · ..., where x = d_0.d_1d_2... in decimal. — 79.115.133.61 (talk) 16:09, 19 December 2013 (UTC)[reply]
It's true you can do that, but it's horrible and arbitrary, and you then have the problem of showing that all the infinitely many ways you can do it converge to the same number. So much nicer to use exp()! 86.176.211.137 (talk) 18:11, 19 December 2013 (UTC)[reply]
Convergence is not a problem, since for each term the exponent is bounded (0...9), while the order of the radical (10^k) grows exponentially. Or by using the squeeze theorem for each term of the form (e^d_k)^(1/10^k). — 79.115.133.61 (talk) 18:30, 19 December 2013 (UTC)[reply]
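If I read the proposed definition correctly, it takes e^x, for x with decimal digits d_k, to be the product of terms (e^d_k)^(1/10^k); under that assumption, a quick numerical check with an arbitrary sample value of x looks like this:

```python
import math

# Hypothetical example: x = 0.7182818 has decimal digits 7, 1, 8, 2, 8, 1, 8.
x = 0.7182818
digits = [7, 1, 8, 2, 8, 1, 8]

# Each factor is the 10^k-th root of e raised to the k-th digit.
prod = 1.0
for k, d in enumerate(digits, start=1):
    prod *= (math.e ** d) ** (10.0 ** -k)

# The product reproduces e^x, since the exponents sum to x.
assert abs(prod - math.e ** x) < 1e-9
```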
Yeah, you can do it, but I still think as a definition it's ugly and arbitrary (arbitrary since you can write it in infinitely many ways, and there is no reason to choose one rather than the other). 18:43, 19 December 2013 (UTC)
(edit conflict) Why make life unnecessarily hard? It is rather coincidental really that the derivative of that particular power series is itself. The proof of that is simple, since e is equal to the power series on the right evaluated at x=1 (remember that ln(e) = 1 - the definition of the natural logarithm function means ln(a)=1 when a=e). Don't bother trying to expand the expression on the left, you don't even know how many terms you are adding up. (for this I'm assuming that the expression on the left means (1 + 1 + 1/2! + 1/3! + ...)^x).--Jasper Deng (talk) 02:19, 19 December 2013 (UTC)[reply]
Side note: a flower-like symbol of a star is not the best choice for multiplication. We have no better choice in ASCII, but in LaTeX there are also \cdot and \times, usually more readable than a star – compare a*b with a·b and a×b. --CiaPan (talk) 08:07, 19 December 2013 (UTC)[reply]
I think the problem with this question is that one needs to say what definition one is using, not what definition one isn't using. The article Exponential function gives three different equivalent ways one can define the function and there's more. One has to decide which one to start from. The first one, power series definition, is the obvious choice, the second starting point there is immediately equivalent to the conclusion and the third definition is based on what they don't want to assume. Dmcq (talk) 13:17, 19 December 2013 (UTC)[reply]
Thanks! I found this to be rather useful. :-) — 79.115.133.61 (talk) 20:37, 19 December 2013 (UTC)[reply]

Here's a naive approach. Let "log" denote the logarithm in any base. Then

(log x)' = lim_{h → 0} [log(x + h) − log(x)]/h = lim_{h → 0} (1/h) · log(1 + h/x)

Now make a change of variables k = h/x, so that this becomes

(log x)' = (1/x) · lim_{k → 0} log(1 + k)/k

which is 1/x times some constant (I won't prove that this limit exists, but it can be done using elementary estimates). The natural logarithm is then by definition the logarithm in the base such that this constant is equal to one. Sławomir Biały (talk) 14:22, 19 December 2013 (UTC)[reply]
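Numerically, that constant is 1/ln(b) for the logarithm in base b, so it equals one exactly for the natural logarithm; a quick sketch:

```python
import math

# c = lim_{k→0} log(1 + k)/k.  With the natural log the ratio tends to 1;
# math.log1p avoids cancellation for small k.
for k in (1e-3, 1e-6, 1e-9):
    assert abs(math.log1p(k) / k - 1) < k   # |log(1+k)/k - 1| ≈ k/2

# For the logarithm in base b, the same limit is 1/ln(b).
b = 10.0
k = 1e-9
assert abs(math.log1p(k) / (k * math.log(b)) - 1 / math.log(b)) < 1e-6
```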

Slawomir, you're brilliant even when you're "naive" ! — 79.115.133.61 (talk) 16:12, 19 December 2013 (UTC)[reply]

trig equation

(sqrt(2) - 1)(1 + sec α) = tan α

By "inspection" α = 45°, but how could I show this rigorously (and show whether it is the only solution)? 2.25.141.83 (talk) 21:58, 18 December 2013 (UTC)[reply]

There is probably a tricky solution using half-angle formulas, but a more general method is to convert to an algebraic equation using
cos α = (1 − t²)/(1 + t²), sin α = 2t/(1 + t²), where t = tan(α/2).
This reduces the equation, after a bit of cancellation, to t = √2 - 1. Plugging this back in gives cos α = sin α = 1/√2 or α = π/4 + 2kπ. --RDBury (talk) 00:37, 19 December 2013 (UTC)[reply]
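The claimed root is easy to verify numerically from the half-angle substitution (a quick sketch):

```python
import math

t = math.sqrt(2) - 1                 # the value found for t = tan(α/2)
cos_a = (1 - t * t) / (1 + t * t)    # cos α in the t-parameterization
sin_a = 2 * t / (1 + t * t)          # sin α in the t-parameterization

assert abs(cos_a - 1 / math.sqrt(2)) < 1e-12
assert abs(sin_a - 1 / math.sqrt(2)) < 1e-12
assert abs(2 * math.atan(t) - math.pi / 4) < 1e-12   # α = π/4
```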
Are you sure those are all the solutions? 86.160.218.11 (talk) 01:23, 19 December 2013 (UTC)[reply]
RDBury's method is a parameterization of the whole unit circle, with the exception of the point (cos α, sin α) = (−1, 0), where α = π, which is actually also a solution of the original problem. (This parameterization but with sine and cosine interchanged gives all solutions, since then the point at infinity is no longer in the domain of the original problem.) Sławomir Biały (talk) 14:06, 19 December 2013 (UTC)[reply]
What about α = π, for instance? That is also a solution to the original problem but is not of the form α = π/4 + 2kπ. 86.176.211.137 (talk) 14:46, 19 December 2013 (UTC)[reply]
You were inattentive. I do discuss this case. Sławomir Biały (talk) 15:17, 19 December 2013 (UTC)[reply]
Sorry, I didn't read it properly. 86.176.211.137 (talk) 18:13, 19 December 2013 (UTC)[reply]
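To sum up this sub-thread: multiplying through by cos α gives (√2 − 1)(cos α + 1) = sin α, whose solutions in [0, 2π) are α = π/4 and α = π, and both survive in the original equation (at α = π, tan π = 0 and sec π = −1 make both sides zero). A numerical root scan over one period, as a sketch:

```python
import math

s = math.sqrt(2) - 1

# Multiplied-through form of the equation: f(α) = 0.
def f(a):
    return s * (math.cos(a) + 1) - math.sin(a)

# Bracket sign changes on [0, 2π) and refine each by bisection.
roots = []
n = 10000
for i in range(n):
    a = 2 * math.pi * i / n
    b = 2 * math.pi * (i + 1) / n
    if f(a) == 0.0:
        roots.append(a)
    elif f(a) * f(b) < 0:
        for _ in range(60):
            m = (a + b) / 2
            if f(a) * f(m) <= 0:
                b = m
            else:
                a = m
        roots.append((a + b) / 2)

assert len(roots) == 2
assert abs(roots[0] - math.pi / 4) < 1e-9
assert abs(roots[1] - math.pi) < 1e-9
```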
sec is 1/cos and tan is sin/cos, so multiply through by cos α. You get something like
(√2 - 1) (cos α + 1) = sin α
which can be solved various ways. E.g. from the formula for sin (a + b) you can rewrite it as
sin (α + A) = B
for some A and B which is easily solved.--JohnBlackburnewordsdeeds 00:39, 19 December 2013 (UTC)[reply]
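Concretely, writing the left side as R·sin(α + A) gives A = −atan(√2 − 1) = −π/8 (since tan(π/8) = √2 − 1); a quick numerical sketch:

```python
import math

s = math.sqrt(2) - 1

# (√2 - 1)(cos α + 1) = sin α  ⇔  sin α - s·cos α = s.
# Write the left side as R·sin(α + A):
R = math.sqrt(1 + s * s)      # amplitude
A = -math.atan(s)             # phase; equals -π/8 since tan(π/8) = √2 - 1
B = s / R                     # so the equation is sin(α + A) = B

alpha1 = math.asin(B) - A             # principal solution family
alpha2 = math.pi - math.asin(B) - A   # the other family

assert abs(alpha1 - math.pi / 4) < 1e-12
# α = π also satisfies the original equation, since tan(π) = 0 and sec(π) = -1.
assert abs(alpha2 - math.pi) < 1e-12
```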
You should remember, however, as always when multiplying an equation, that the multiplier may be zero, which degenerates the equation to the form 0 = 0, thus introducing false solutions. Here cos(α) might be zero, so the final equation would be satisfied by any α = nπ + π/2 (for integer n), which is not necessarily true for the original equation. Luckily, in the problem discussed here, the original equation contains a term tan(α), which excludes the zeros of cosine from the domain. Anyway, this exclusion should be explicitly made in the final solution, just to be on the safe side... --CiaPan (talk) 08:00, 19 December 2013 (UTC)[reply]