Wikipedia:Reference desk/Archives/Mathematics/2009 April 25

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 25

higher derivatives

Hi, I read the following surprising comment in a maths textbook: "Let $f$ be a function such that $f^{(n)}(c)$ exists. Then $f^{(k)}$ exists in an interval around $c$ for $k<n$." Surely this is false. Take the function $f(x)=0$ for rational $x$, and $f(x)=x^n$ (for some integer $n>2$) for all irrational $x$. At $x=0$, all derivatives below the $n$th will exist, but the function isn't even continuous around 0. Have I got this right? It's been emotional (talk) 00:21, 25 April 2009 (UTC)[reply]

It looks like the first derivative exists, but do the higher ones? The first derivative is only defined at 0; don't you need your function to be defined on an interval to differentiate it? --Tango (talk) 00:34, 25 April 2009 (UTC)[reply]

<sheepish grin> oops - I was only thinking with reference to the squeeze theorem, in which case the idea makes sense, but I think you are quite right, and there is no such thing as the higher derivatives here. However, if instead of using $f'$ in the definition of $f''$, you use a definition of $f''$ directly in terms of $f$, it might be a different story. Then I suspect the squeeze theorem would apply, and the result would follow. Am I right now? It's been emotional (talk) 04:24, 25 April 2009 (UTC)[reply]

The comment in your book is correct, and it's not a surprising result but rather a remark on the definitions. The $n$-th order derivative of $f$ at $c$ is the derivative at $c$ of the function $f^{(n-1)}$, therefore, as Tango says, you need the latter to exist in a neighborhood of $c$. The same holds for all $f^{(k)}$ with $1<k\le n$. Maybe you have in mind a weaker definition of the second derivative, like $f''(c)=\lim_{h\to 0}\frac{f(c+h)-2f(c)+f(c-h)}{h^2}$? --pma (talk) 07:16, 25 April 2009 (UTC)[reply]

Exactly what I was thinking. $\lim_{h\to 0}\frac{f(c+2h)-2f(c+h)+f(c)}{h^2}$ was in fact the form I came up with, and for the function I gave you get an easy limit, since $f(0)=0$ and $2h$ is rational exactly when $h$ is, so the values are taken from the same subcomponent of $f(x)$ for any $h$. Is there anything wrong with using this as the definition of the second derivative? If not, it would seem advantageous, since differentiability is usually considered a "good" property for a function to have. Thanks for the help; I have always got good marks in maths, but never really had my rigour checked, so I make careless errors like this. It's good getting things "peer reviewed", so to speak. Much appreciated, It's been emotional (talk) 00:15, 26 April 2009 (UTC)[reply]
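
For reference, here is a worked check of that limit under the quotient as reconstructed above, taking $c=0$ and the function from the top of the thread ($f(x)=0$ for rational $x$, $f(x)=x^n$ for irrational $x$, $n>2$). For rational $h$ the points $0$, $h$, $2h$ are all rational, so the quotient is $0$; for irrational $h$, $2h$ is also irrational, so
\[
\frac{f(2h)-2f(h)+f(0)}{h^{2}}=\frac{(2h)^{n}-2h^{n}}{h^{2}}=(2^{n}-2)\,h^{n-2}\;\longrightarrow\;0\qquad(h\to 0).
\]
So the weak "second derivative" at 0 exists and equals 0, even though $f$ is discontinuous at every point other than 0.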

I think the primary difficulty is the loss of the property mentioned. It would be a bit of a problem if taking the derivative of something twice did not produce the second derivative, which would be the case if the second derivative was defined in that way. Black Carrot (talk) 02:47, 26 April 2009 (UTC)[reply]
Yes, it seems a notion too weak to be useful, as your example shows (nice!). Also, in the symmetric form I wrote, any even function would have " ". A somewhat richer property, closer to what you have in mind, is having an $n$-th order polynomial expansion at $c$: $f(c+h)=a_0+a_1h+\dots+a_nh^n+o(h^n)$ as $h\to 0$. So $n=0$ is continuity at $c$, and $n=1$ is differentiability at $c$; however, for any $n$ it doesn't even imply continuity at points other than $c$ (therefore, in particular, it doesn't imply the existence of $f^{(n)}(c)$ for $n\ge 2$). An interesting result here is that Taylor's theorem has a converse: $f$ is of class $C^n$ if and only if it has $n$-th order expansions at all points $x$, $f(x+h)=a_0(x)+a_1(x)h+\dots+a_n(x)h^n+R_n(x,h)$, with continuous coefficients $a_k$ and remainder $R_n(x,h)=o(h^n)$ as $h\to 0$, locally uniformly in $x$. Then the coefficients are of course the derivatives (up to the factorials, $a_k=f^{(k)}/k!$). The analogous characterization of $C^n$ maps holds true in the case of Banach spaces. So "having an $n$-th order polynomial expansion at a point" is a reasonable alternative notion to "having the $n$-th order derivative at a point"; they are not equivalent, but having them everywhere, continuously, is indeed the same. --pma (talk) 10:56, 26 April 2009 (UTC)[reply]
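
A concrete illustration of the last distinction, in the spirit of the example at the top of the thread (the exponent $n+1$ is chosen only to make the remainder estimate immediate): let
\[
f(x)=\begin{cases}x^{n+1}, & x\ \text{irrational},\\ 0, & x\ \text{rational}.\end{cases}
\]
Then $|f(h)|\le |h|^{n+1}=o(h^n)$, so $f$ has an $n$-th order polynomial expansion at $0$ with all coefficients zero, yet it is discontinuous at every point other than $0$, and the classical $f''(0)$ does not exist.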

dice

If I roll 64 standard non-weighted 6 sided dice, what are my odds of rolling <= 1, 6?

Could you explain this further, please? Are you referring to your score on each die individually, or your total score, or what? It's been emotional (talk) 04:25, 25 April 2009 (UTC)[reply]

I mean what are the odds that 63 of the 64 dice will not have six pips facing up. —Preceding unsigned comment added by 173.25.242.33 (talk) 04:39, 25 April 2009 (UTC)[reply]

If you mean 0 or 1 of them showing a 6, the probability is $\left(\frac{1}{6}\right)^{64}+64\cdot\frac{5}{6}\cdot\left(\frac{1}{6}\right)^{63}$, which is about 5.068×10^-48, or 0.0000...005068, with 47 zeros after the decimal point. If you mean exactly 1 six, then that's just $64\cdot\frac{5}{6}\cdot\left(\frac{1}{6}\right)^{63}$, or 5.02×10^-48. It's been emotional (talk) 05:44, 25 April 2009 (UTC)[reply]

You surely mean $64\cdot\frac{1}{6}\cdot\left(\frac{5}{6}\right)^{63}$. Bikasuishin (talk) 10:05, 25 April 2009 (UTC)[reply]
And about 1.2×10^-4 for the other possibility. Algebraist 14:33, 25 April 2009 (UTC)[reply]
Thanks, that was my thesis doing things to my brain ;) It's been emotional (talk) 23:56, 25 April 2009 (UTC)[reply]
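
For reference, a quick numerical check of the corrected figures above, as a minimal Python sketch (exact fractions; the two events are "exactly one six" and "at most one six" among 64 fair dice):

    from fractions import Fraction

    n = 64
    p_six = Fraction(1, 6)        # chance a single fair die shows a six
    p_not = 1 - p_six             # chance it shows something else

    p_zero_sixes = p_not ** n                     # no die shows a six
    p_one_six = n * p_six * p_not ** (n - 1)      # exactly one die shows a six
    p_at_most_one = p_zero_sixes + p_one_six      # zero or one six

    print(float(p_one_six))       # about 1.10e-4
    print(float(p_at_most_one))   # about 1.18e-4, i.e. roughly 1.2*10^-4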

Poker theory and cardless poker

Inspired by this question, I was wondering if any math(s) types could give me an idea of how this game would have to be played. The idea is you pick a poker hand at the start, write it down and then play as if you had been dealt that hand. Clearly, to prevent everyone from choosing royal flushes or allowing one hand to become the optimum pick, there need to be rules limiting payment to high hands etc. What rules would be a good start? Any help is hugely appreciated 86.8.176.85 (talk) 05:07, 25 April 2009 (UTC)[reply]

The simplest variant would be to give everyone a pack, and they choose whatever cards they like, but once you've chosen a hand, those cards get discarded. If you need to avoid having a pack, just get everyone to write the cards down and strike out your cards as they are used; then you can't use them again for that game. With 52 cards each, of course, that's 10 hands per player, with two cards left over that don't get used, and then you start over. That limits a proper game to multiples of 10 rounds, or you just accept that you finish when everyone gets sick of it, and if Fred's been saving his royal flush up, and Sally has already played hers, too bad for him. It's been emotional (talk) 05:50, 25 April 2009 (UTC)[reply]
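
If it helps, here is a minimal sketch of that bookkeeping in Python (the helper names are purely illustrative): each player keeps their own written-down 52-card pool, plays 5-card hands from it, and struck-out cards cannot be reused until the pool is reset.

    from itertools import product

    RANKS = "23456789TJQKA"
    SUITS = "cdhs"

    def fresh_pool():
        """One full 52-card 'written-down' deck for a single player."""
        return {rank + suit for rank, suit in product(RANKS, SUITS)}

    def play_hand(pool, hand):
        """Validate a chosen 5-card hand against the player's remaining pool
        and strike the cards out so they can't be reused this game."""
        hand = set(hand)
        if len(hand) != 5:
            raise ValueError("a poker hand is exactly 5 cards")
        if not hand <= pool:
            raise ValueError("hand reuses cards already struck out")
        pool -= hand
        return pool

    # Example: a player declares a royal flush in spades on the first round.
    pool = fresh_pool()
    play_hand(pool, ["As", "Ks", "Qs", "Js", "Ts"])
    print(len(pool))   # 47 cards left; after 10 hands only 2 remain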

A large slice of pie

From the 50,366,472nd digit onwards, the digits of pi read 31415926 (it's true, I'm not making it up). I was wondering what the expectation value is of the position in pi after which all the digits up to that point are repeated. I think the probability of this happening must be

\[
\sum_{n=1}^{\infty}\left(\frac{1}{10}\right)^{n},
\]

which limits to $\frac{1}{9}$,

which is less than 0.5. Does this mean that the expectation value is meaningless here? SpinningSpark 13:06, 25 April 2009 (UTC)[reply]

Your first assumption would have to be that pi is a normal number, which is not actually known. —JAOTC 13:48, 25 April 2009 (UTC)[reply]
Yes, I was making that assumption (and indeed I knew that it was an assumption - forgive my sloppiness, I am an engineer, not a mathematician). I would still like to know how this can have a finite probability but not have an expectation value. SpinningSpark 14:19, 25 April 2009 (UTC)[reply]
Expectations are only meaningful for random events, and the digits of pi are not random, so you're never going to get a meaningful expectation here. Furthermore, even if pi is normal, it's not obvious (at least to me) that it must contain such a repetition. If instead of considering pi, we consider a random number (between 0 and 1, with independent uniformly-distributed digits, say), then the probability that such a repetition occurs is not 1 (it's not 1/9, either; your calculation is an overestimate due to some double counting). Thus to get a meaningful expectation value for the point at which the first repetition occurs, you need to decide what value this variable should take if no such repetition occurs. The obvious value to choose is infinity, in which case the expectation is also infinite. Algebraist 14:31, 25 April 2009 (UTC)[reply]
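
To illustrate the point about double counting, here is a small Monte Carlo sketch in Python for the idealised model of independent, uniformly distributed digits (truncated at a finite length, which is harmless because the neglected terms are astronomically small). The estimate should come out close to 0.11, a little below the union-bound value of 1/9 ≈ 0.111:

    import random

    def has_repetition(digits):
        """True if for some n the block digits[n:2n] equals digits[:n]."""
        return any(digits[n:2 * n] == digits[:n]
                   for n in range(1, len(digits) // 2 + 1))

    def estimate(trials=300_000, length=40, seed=0):
        rng = random.Random(seed)
        hits = sum(has_repetition(rng.choices(range(10), k=length))
                   for _ in range(trials))
        return hits / trials

    print(estimate())   # prints an estimate close to 0.11
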
The digits of pi certainly are random in the Bayesian paradigm. Robinh (talk) 20:55, 25 April 2009 (UTC)[reply]

One could say the sequence of digits itself is random (regardless of whether π is normal or not) in the sense that it defines a probability measure, thus: the probability assigned to any sequence abc...x of digits is the limit of its relative frequency of occurrence as consecutive digits in the whole decimal expansion. Then one could ask about the expected value. However, if it is not known that π is a normal number, then finding such expected values could be a very hard problem, whose answer one would publish in a journal rather than post here.

As for being "random in the Bayesian paradigm", from one point of view the probability that they are what they are is exactly 1. Bayesianism usually takes provable mathematical propositions to have probability 1 even though in reality there may be reasonable uncertainty about conjectures. The Bayesian approach to uncertainty is quite mathematical in that respect. Michael Hardy (talk) 03:12, 26 April 2009 (UTC)[reply]

(@MH) As to the frequencies, note that at the moment it is not even known if they have a limit.--pma (talk) 08:27, 26 April 2009 (UTC)[reply]
The digits of pi are random in the Bayesian paradigm because it identifies uncertainty with randomness. In my line of research, we treat deterministic computer programs as having 'random' output (the application is climate models that take maybe six months to run). Sure, the output is knowable in principle but the fact is that if one is staring at a computer monitor waiting for a run to finish, one does not know what the answer will be. One can imagine people taking bets on the outcome. This qualifies the output to be a random variable from a Bayesian perspective. I see no difference between a pi-digits-program and a climate-in-2100 program (some people would take bets on the googol-th digit of pi, presumably). I write papers using this perspective, and it is a useful and theoretically rigorous approach. Best, Robinh (talk) 19:59, 26 April 2009 (UTC)[reply]
I wouldn't advise such a bet, due to the existence of digit-extraction algorithms for pi. I'm not sure what the constants are for the running time, so it is difficult to know if calculating the googolth digit is practical, but I suspect it is. --Tango (talk) 21:54, 26 April 2009 (UTC)[reply]
In 1999 they computed the 4×10^13-th binary digit of pi (and some subsequent ones) this way. It took more than one year of computing. I don't know of any further results. It's a 0, btw. --pma (talk) 23:13, 26 April 2009 (UTC)[reply]
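
The algorithms alluded to above are presumably digit-extraction formulas of the Bailey-Borwein-Plouffe type, which produce a single hexadecimal (hence binary) digit of pi at a chosen position without computing the earlier ones. Here is a minimal Python sketch of the standard BBP extraction; it is illustrative only, and floating-point precision limits it to modest positions (the record computations mentioned used related but much more heavily engineered methods):

    def pi_hex_digit(d):
        """Return the hexadecimal digit of pi at position d+1 after the point,
        using the Bailey-Borwein-Plouffe formula."""
        def partial_sum(j):
            # sum over k <= d of 16^(d-k)/(8k+j) mod 1, via modular
            # exponentiation, plus a short rapidly converging tail for k > d
            s = 0.0
            for k in range(d + 1):
                s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
            k = d + 1
            while True:
                term = 16.0 ** (d - k) / (8 * k + j)
                if term < 1e-17:
                    break
                s = (s + term) % 1.0
                k += 1
            return s

        x = (4 * partial_sum(1) - 2 * partial_sum(4)
             - partial_sum(5) - partial_sum(6)) % 1.0
        return "0123456789ABCDEF"[int(x * 16)]

    print("".join(pi_hex_digit(d) for d in range(8)))   # 243F6A88 (pi = 3.243F6A88... in hex)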

Upper bound on the number of topologies of a finite set

Hi there - I'm looking to prove that the number of topologies on a finite set ({1,2,...,n} for example) doesn't have an upper bound of the form k^n (assuming this is true!), probably by contradiction, having proved 2^n is a lower bound (n>1) - but I'm not sure how to get started - could anyone give me a hand please?

Thanks very much, Otherlobby17 (talk) 20:48, 25 April 2009 (UTC)[reply]

2^n is an upper bound, isn't it? A topology has to be a subset of the power set, which has cardinality 2^n. --20:58, 25 April 2009 (UTC)
Which, of course, means an upper bound of 2^2^n. I apologise for my idiocy. --Tango (talk) 21:09, 25 April 2009 (UTC)[reply]
A topology on a finite set is the same thing as a preorder (OEIS:A000798). But even the number of total orders is already more than k^n for any k. —David Eppstein (talk) 21:03, 25 April 2009 (UTC)[reply]

D. J. Kleitman and B. L. Rothschild, The number of finite topologies, Proc. Amer. Math. Soc., 25 (1970), 276-282, showed that the logarithm (base 2) of the number of topologies on an n-set is asymptotic to n^2/4. So it is smaller than any expression 2^(k^n) for any k>1, which is probably the question you meant to ask. McKay (talk) 01:31, 26 April 2009 (UTC)[reply]
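
Spelled out, the asymptotics just quoted also settle the original question as literally asked:
\[
k^{n}=2^{\,n\log_{2}k},\qquad\text{while}\qquad \#\{\text{topologies on an }n\text{-set}\}=2^{\,\frac{n^{2}}{4}(1+o(1))},
\]
and since $n^{2}/4$ eventually exceeds $n\log_{2}k$ for every fixed $k$, no upper bound of the form $k^{n}$ can hold.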

Very helpful, thank you - but I did mean to ask about k^n rather than 2^(k^n); having already known that 2^(2^n) is a (crude) upper bound, I was wondering if it could be improved to the extent of being of the form k^n for some k - since I generally see it quoted as 2^(2^n), I assumed there was no such form, hence my question. Thanks very much for the information! How do we know it's a preorder? Is the number of total orders smaller than the number of topologies then? Thanks again, Otherlobby17 (talk) 01:58, 26 April 2009 (UTC)[reply]

How do we know: simply define a preorder by $x\le y$ iff $x\in\overline{\{y\}}$, that is, iff every open set containing $x$ also contains $y$; see the quoted link. --pma (talk) 08:09, 26 April 2009 (UTC)[reply]
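
As a concrete check of the small values behind the correspondence discussed above, here is a brute-force count in Python (the results 1, 1, 4, 29, 355 for n = 0..4 match OEIS A000798; anything beyond n = 4 is out of reach for this naive approach):

    from itertools import combinations

    def count_topologies(n):
        """Count the topologies on {0, ..., n-1} by brute force.

        Subsets are encoded as bitmasks; a family of subsets is a topology if
        it contains the empty set and the whole set and is closed under union
        and intersection (on a finite set, closure under the binary operations
        is enough)."""
        full = (1 << n) - 1
        others = [s for s in range(1 << n) if s not in (0, full)]
        count = 0
        for r in range(len(others) + 1):
            for extra in combinations(others, r):
                family = set(extra) | {0, full}
                if all(a | b in family and a & b in family
                       for a in family for b in family):
                    count += 1
        return count

    for n in range(5):
        print(n, count_topologies(n))   # 1, 1, 4, 29, 355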

Homework problem

As the title suggests, this is a homework problem, but I only need to be told what a question means. Given the equation of an ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$, I am told that "The point N is the foot of the perpendicular from the origin, O, to the tangent to the ellipse at P." I'm confused by the use of the word "foot" because, if it's being used in the way I've seen it used before, then in this case it would mean the origin, but that can't be right. What is it meant to mean? Thanks 92.3.150.200 (talk) 21:33, 25 April 2009 (UTC)[reply]

I would guess it simply means the point where the two lines (L1 = the tangent and L2 = the line through the origin and perpendicular to the tangent) intersect. This MathWorld article seems to think the same. —JAOTC 21:45, 25 April 2009 (UTC)[reply]
Yes, that's what the foot of a perpendicular usually means. --Tango (talk) 21:50, 25 April 2009 (UTC)[reply]
So if this makes it clear, you would also find that if the tangent was a vertical or horizontal line, then N and P would be the same point, on the ellipse. Otherwise, N would be outside the ellipse. It's been emotional (talk) 00:02, 26 April 2009 (UTC)[reply]
I think it's not whether it's vertical or horizontal but whether dy/dx at P corresponds to the slope of a circle going through P. The slope of a circle at (x,y) is -x/y. The slope of an ellipse at (x,y) is -x/ay. So when their slopes times a particular constant factor are equal, the OP's N and P are the same point. I guess an ellipse's tangent is equal to a circle's tangent (at the same point) either 4 times (there's your "vertical or horizontal") or at every point (if the ellipse is a circle). .froth. (talk) 03:46, 26 April 2009 (UTC)[reply]
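
For what it's worth, here is a quick numerical check in Python of the claim that N = P exactly when the tangent is horizontal or vertical. It assumes the standard ellipse x^2/a^2 + y^2/b^2 = 1, the tangent at P = (x0, y0) written as (x0/a^2)x + (y0/b^2)y = 1, and the fact that the foot of the perpendicular from the origin to the line ux + vy = 1 is (u, v)/(u^2 + v^2):

    import math

    def foot_of_perpendicular(a, b, x0, y0):
        """Foot N of the perpendicular from the origin to the tangent of
        x^2/a^2 + y^2/b^2 = 1 at the point P = (x0, y0) on the ellipse."""
        u, v = x0 / a**2, y0 / b**2      # tangent line: u*x + v*y = 1
        d = u * u + v * v
        return (u / d, v / d)

    a, b = 5.0, 3.0
    for t in (0.0, math.pi / 6, math.pi / 2):
        P = (a * math.cos(t), b * math.sin(t))
        N = foot_of_perpendicular(a, b, *P)
        print(P, N)
    # At t = 0 and t = pi/2 (vertical and horizontal tangents) N coincides
    # with P up to rounding; at t = pi/6 it does not, and N lies outside the
    # ellipse.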