Wikipedia:Reference desk/Archives/Mathematics/2010 October 2

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 2

more combinatorics

  Resolved

I am reading a book on combinatorics and am stuck on the following three problems:

  • Why is the identity $\binom{n-1}{k-1}\binom{n}{k+1}\binom{n+1}{k} = \binom{n-1}{k}\binom{n+1}{k+1}\binom{n}{k-1}$ called the hexagon identity?
  • Compute the following sum:  . I can reduce this to   and no further.
  • Compute the following sum:  . I want to use Vandermonde's convolution here, but does it imply that this sum is  ? What do I do next? I don't want any summation in the final result.

Can anyone help? Thanks. -Shahab (talk) 07:19, 2 October 2010 (UTC)[reply]

You might like the book A=B. I'm not sure but it could be helpful for the last two problems. 67.122.209.115 (talk) 09:01, 2 October 2010 (UTC)[reply]
The binomial coefficients in the hexagon identity are corners of a hexagon in Pascal's triangle. The infinite series are actually finite sums, as $\binom{n}{k} = 0$ for $k < 0$ and for $k > n \ge 0$. Bo Jacoby (talk) 10:19, 2 October 2010 (UTC).[reply]
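For a quick sanity check of the hexagon identity, here is a minimal Python sketch (the ranges are arbitrary):

```python
from math import comb

# The two products of alternating "corners" of the hexagon around C(n, k)
# in Pascal's triangle are equal.
for n in range(2, 20):
    for k in range(1, n):
        lhs = comb(n - 1, k - 1) * comb(n, k + 1) * comb(n + 1, k)
        rhs = comb(n - 1, k) * comb(n + 1, k + 1) * comb(n, k - 1)
        assert lhs == rhs
print("hexagon identity holds for all 2 <= n < 20")
```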


Another nice book for learning how to do these manipulations is Concrete Mathematics. In your case, using generating functions seems a reasonable way to treat the sums. In particular, if you can write your expression in the form $c_n = \sum_k a_k b_{n-k}$, you can see it as the coefficient of $x^n$ in the power series expansion of the product $\left(\sum_k a_k x^k\right)\left(\sum_k b_k x^k\right)$ (this is the Cauchy product of power series). Note that Vandermonde's identity is a special case of this. In your case the task is not difficult (but ask again here if you meet any difficulty). You can do it for the first sum either in the original form (writing  ; in this case you also need a closed expression for  , which is related to the binomial series with exponent  ) or in your reduction, which is simpler to treat (then you need the simpler  ). Another possibility for proceeding from your reduction is to make a substitution:   so  , and distribute: this leaves you with two sums,   and  , the latter being identity (6a) here. Your last sum is indeed close to Vandermonde's identity, but the result you wrote is not correct at all (something must have gone very wrong in the middle). You may write  ; put   so   to get the form of Vandermonde's identity --pma 15:53, 3 October 2010 (UTC)[reply]
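To illustrate the Cauchy-product remark, here is a small Python check of Vandermonde's identity read as the coefficient of $x^n$ in $(1+x)^r(1+x)^s = (1+x)^{r+s}$ (the values r = 7, s = 5 are arbitrary):

```python
from math import comb

# Coefficient of x^n in (1+x)^r * (1+x)^s, computed as a Cauchy product,
# must equal the coefficient of x^n in (1+x)^(r+s).
# math.comb(n, k) returns 0 for k > n, so the full range is safe.
r, s = 7, 5
for n in range(r + s + 1):
    cauchy = sum(comb(r, k) * comb(s, n - k) for k in range(n + 1))
    assert cauchy == comb(r + s, n)
print("Vandermonde's identity verified for r = 7, s = 5")
```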
Thanks, pma. I hope you're well :). I solved both problems. -Shahab (talk) 02:40, 6 October 2010 (UTC)[reply]

Second moment of the binomial distribution

The moment generating function of the binomial distribution is $M(t) = (1 - p + pe^t)^n$. When I take the second derivative I get $M''(t) = n(n-1)p^2 e^{2t}(1 - p + pe^t)^{n-2} + npe^t(1 - p + pe^t)^{n-1}$. Substituting 0 in for t gives me $n(n-1)p^2 + np$. Why is this not the same as the variance of the binomial distribution, $np(1-p)$?--220.253.253.56 (talk) 11:34, 2 October 2010 (UTC)[reply]
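For reference, a short Python sketch (n = 10 and p = 0.3 are arbitrary) comparing the raw second moment with $M''(0)$ and the variance with $np(1-p)$:

```python
from math import comb

# Binomial(n, p): compare E[X^2] with M''(0) = n(n-1)p^2 + np,
# and E[X^2] - E[X]^2 with np(1-p).
n, p = 10, 0.3
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
m1 = sum(k * pmf[k] for k in range(n + 1))      # E[X] = np = 3.0
m2 = sum(k * k * pmf[k] for k in range(n + 1))  # E[X^2]
print(m2, n * (n - 1) * p**2 + n * p)           # both approx. 11.1
print(m2 - m1**2, n * p * (1 - p))              # both approx. 2.1
```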

See cumulant. Bo Jacoby (talk) 11:51, 2 October 2010 (UTC).[reply]

The variance is not the same thing as the raw second moment. The variance is

$$\operatorname{var}(X) = \operatorname{E}\left((X - \mu)^2\right)$$

where μ is E(X). The second moment, on the other hand, is

$$\operatorname{E}\left(X^2\right).$$

Michael Hardy (talk) 22:30, 2 October 2010 (UTC)[reply]
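In symbols, the two are connected by $\operatorname{var}(X) = \operatorname{E}(X^2) - \mu^2$; for the binomial distribution this reconciles the figures in the question:

$$\bigl(n(n-1)p^2 + np\bigr) - (np)^2 = np - np^2 = np(1-p).$$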

Then why does the article moment (mathematics) say that the second moment is the variance?--220.253.253.56 (talk) 22:50, 2 October 2010 (UTC)[reply]
It doesn't. Algebraist 22:53, 2 October 2010 (UTC)[reply]
Moment_(mathematics)#Variance--220.253.253.56 (talk) 23:00, 2 October 2010 (UTC)[reply]
There are eighteen words in that section. You seem to have neglected to read the third. Algebraist 23:03, 2 October 2010 (UTC)[reply]
So what is a central moment, and how is it calculated (can you use the moment generating function)?--220.253.253.56 (talk) 23:05, 2 October 2010 (UTC)[reply]
Central moment might help. 129.234.53.175 (talk) 15:52, 3 October 2010 (UTC)[reply]

In Moment_(mathematics)#Variance, I've now added a link to central moment. Michael Hardy (talk) 02:48, 4 October 2010 (UTC)[reply]

There is already a link to Central moment in Moment (mathematics). I think the guideline is not to link to the same article twice. -- Meni Rosenfeld (talk) 08:28, 4 October 2010 (UTC)[reply]
Where is that guideline? To me that seems unwise in long articles. Michael Hardy (talk) 19:44, 4 October 2010 (UTC)[reply]
Wikipedia:Manual of Style (linking)#Repeated links. It does mention as an exception the case where the distance is large, but here the instances are quite close in my opinion. -- Meni Rosenfeld (talk) 20:35, 4 October 2010 (UTC)[reply]

More Limits

Hello. How can I prove $\lim_{x \to \infty} x^a e^{-x} = 0$? I tried l'Hôpital's rule but get $\frac{\infty}{\infty}$. Thanks very much in advance. --Mayfare (talk) 15:19, 2 October 2010 (UTC)[reply]

Forget l'Hôpital's rule and try to visualise what is happening. If a < 0 then both $x^a$ and $e^{-x}$ tend to 0 as x grows, so the result is obvious. The case a = 0 is also easily dealt with. If a > 0 then as x gets larger, $x^a$ grows but $e^x$ grows even more quickly. In fact, if x > 0, then

$$e^x > \frac{x^m}{m!}$$

where m is the next integer greater than a. So

$$0 < x^a e^{-x} < \frac{m! \, x^a}{x^m} = \frac{m!}{x^{m-a}}.$$

I'll let you take it from there. Gandalf61 (talk) 16:06, 2 October 2010 (UTC)[reply]
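A numerical look at this bound, as a minimal Python sketch (a = 2.5, hence m = 3, chosen arbitrarily):

```python
from math import exp, factorial

# 0 < x^a * e^(-x) < m!/x^(m-a) for x > 0, where m is the next integer
# above a; the upper bound visibly tends to 0 as x grows.
a, m = 2.5, 3
for x in [10.0, 100.0, 1000.0]:
    print(x, x**a * exp(-x), factorial(m) / x**(m - a))
```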
It might not be obvious to the questioner that exponentials grow faster than polynomials (or even what a statement like that means). Mayfare, if you want to use l'Hôpital's rule for this, imagine using it over and over until you don't get ∞/∞ any more. What is going to happen? The exponent in the numerator is going to decrease by 1 each time you use the rule, while the denominator stays the same. So what can you conclude? —Bkell (talk) 17:52, 2 October 2010 (UTC)[reply]
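Here is what the iteration looks like symbolically, as a small SymPy sketch (the exponent 5/2 is arbitrary):

```python
import sympy as sp

# Each l'Hôpital pass differentiates numerator and denominator separately:
# the exponent on x drops by 1, while the denominator stays e^x.
x = sp.symbols('x', positive=True)
a = sp.Rational(5, 2)
num, den = x**a, sp.exp(x)
for _ in range(4):
    print(num / den)
    num, den = sp.diff(num, x), sp.diff(den, x)
# After three passes the numerator is (15/8) * x**(-1/2), which tends to 0,
# so the quotient is no longer of the indeterminate form oo/oo.
```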
If the questioner does not understand that an exponential function grows faster than any polynomial, or why this is implied by

$$e^x > \frac{x^m}{m!} \quad \text{for } x > 0,$$

then they cannot understand why $\lim_{x \to \infty} x^a e^{-x} = 0$. At best they are reproducing a method (l'Hôpital's rule) learnt by rote, without understanding. Once they do understand the behaviour of exponential functions, the result is intuitively obvious and a formal proof is easily found. Gandalf61 (talk) 09:25, 3 October 2010 (UTC)[reply]


While you can take Bkell's suggestion and work that into a proof, I would suggest using the definition of a limit directly. That is, $\lim_{x \to \infty} x^a e^{-x} = 0$ means that for every ε > 0 there exists a δ such that if x > δ then $|x^a e^{-x}| < \varepsilon$. Note that $x^a$ and $e^{-x}$ are both eventually monotone functions. Can you solve $x^a e^{-x} = \varepsilon$? Taemyr (talk) 18:24, 2 October 2010 (UTC)[reply]
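A minimal Python sketch of this approach (the helper delta_for and the choice a = 2.5 are illustrative, not part of the thread): past the maximum at x = a, the function $x^a e^{-x}$ decreases monotonically to 0, so a suitable δ can be found by bisection.

```python
from math import exp

def f(x, a=2.5):
    return x**a * exp(-x)

def delta_for(eps, a=2.5):
    """Find delta > a such that f(x) < eps for all x > delta."""
    lo, hi = a, 2 * a + 1
    while f(hi, a) > eps:      # grow hi until f(hi) <= eps
        hi *= 2
    for _ in range(60):        # bisect on the decreasing branch
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid, a) > eps else (lo, mid)
    return hi

print(delta_for(1e-6))  # beyond this x, x**2.5 * exp(-x) stays below 1e-6
```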

L'Hôpital's rule will do it if you iterate it: a becomes a − 1, then after another step it's a − 2, and so on. After it gets down to 0 or less, the rest is trivial. However, there's another way to view it: every time x is incremented by 1, $e^x$ gets multiplied by more than 2, whereas the numerator $x^a$ is multiplied by less than 2 if x is big enough. Therefore the ratio has to approach zero. Michael Hardy (talk) 22:27, 2 October 2010 (UTC)[reply]
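The comparison is easy to see numerically (minimal Python sketch, a = 2.5 arbitrary):

```python
from math import e

# Stepping x -> x + 1 multiplies e^x by e > 2, but multiplies x^a by
# ((x + 1)/x)^a, which tends to 1 as x grows.
a = 2.5
for x in [5.0, 50.0, 500.0]:
    print(x, ((x + 1) / x)**a, "vs", e)
```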

Noticing that $x^a = \exp(a \ln x)$ is useful. Then

$$\lim_{x \to \infty} \frac{a \ln x}{x} = 0.$$

Since exp(x) is an increasing function, the original product must also go to 0. —Anonymous DissidentTalk 01:53, 3 October 2010 (UTC)[reply]

This seems highly questionable to me. Showing that the ratio of the exponents goes to zero does not prove that the ratio of the original functions goes to zero. For example, consider the constant functions f(x) = 1 and g(x) = e. Then

$$\lim_{x \to \infty} \frac{\ln f(x)}{\ln g(x)} = \frac{0}{1} = 0,$$

but clearly f(x)/g(x) does not go to zero. What you actually need is for the difference of the exponents to go to negative infinity, but that's not any easier to prove than the original problem. Rckrone (talk) 02:17, 3 October 2010 (UTC)[reply]
I think it does if both functions are increasing. Your counter-examples seem a little trivial, since they are constant functions (which do not change, let alone strictly increase). Perhaps you are correct in general, but in this case the result seems quite clear. —Anonymous DissidentTalk 02:24, 3 October 2010 (UTC)[reply]
I picked a trivial counterexample because it's easy to consider. Here is a case with strictly increasing functions: f(x) = (x − 1)/x, g(x) = $e^{f(x)}$. Anyway, I was wrong before about taking logs not helping. If you consider $\ln(x^a e^{-x})$, which is $a \ln x - x$, you can show it goes to negative infinity by arguing that the derivative $a/x - 1$ goes to −1. I guess that's not too bad. Rckrone (talk) 02:32, 3 October 2010 (UTC)[reply]
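A numerical look at this counterexample, reading g as $e^{f(x)}$ (minimal Python sketch):

```python
from math import log, exp, e

# f(x) = (x-1)/x and g(x) = e**f(x) are strictly increasing;
# ln f / ln g -> 0 while f/g -> 1/e, not 0.
for x in [10.0, 1e3, 1e6]:
    fx = (x - 1) / x
    gx = exp(fx)
    print(x, log(fx) / log(gx), fx / gx)
print("1/e =", 1 / e)
```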
Okay, point well taken. —Anonymous DissidentTalk 03:02, 3 October 2010 (UTC)[reply]

If the OP was thrown off by a not being an integer but has already proven that $\lim_{x \to \infty} x^n e^{-x} = 0$ for every non-negative integer n, then consider applying the squeeze theorem with $g(x) = x^n e^{-x}$ and $h(x) = x^m e^{-x}$ where n ≤ a ≤ m. See also floor and ceiling functions. -- 124.157.254.146 (talk) 02:52, 3 October 2010 (UTC)[reply]
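A minimal Python sketch of the squeeze (a = 2.5 arbitrary):

```python
from math import exp, floor, ceil

# For x >= 1: x^floor(a) * e^(-x) <= x^a * e^(-x) <= x^ceil(a) * e^(-x),
# and both bounds tend to 0.
a = 2.5
n, m = floor(a), ceil(a)
for x in [2.0, 10.0, 50.0]:
    print(x, x**n * exp(-x), x**a * exp(-x), x**m * exp(-x))
```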

And in case the OP didn't catch the remark by Rckrone above, $x^a = e^{a \ln x}$ so $x^a e^{-x} = e^{a \ln x - x}$, and it can be shown that $a \ln x - x \to -\infty$ as $x \to +\infty$. -- 124.157.254.146 (talk) 05:50, 3 October 2010 (UTC)[reply]