Wikipedia:Reference desk/Archives/Mathematics/2009 February 4

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 4


Faber and Faber offered a one million dollar prize for proving the Goldbach Conjecture within 2 years (from 2000 until 2002). They actually insured themselves against someone winning the prize. Does anyone know how much they paid the insurance company, and how this premium (or the probability of someone coming up with a proof) was calculated? Icek (talk) 04:25, 4 February 2009 (UTC)

I googled and found this, which says that the premium was "in the five figures". I guess that it must be reasonably common for people to take insurance out against things which are so rare, there is no way of getting a good estimate of the probability, so presumably insurers have some kind of standard policy on these things? 81.98.38.48 (talk) 17:08, 5 February 2009 (UTC)

∫f(x) Integration without dx


Can there be an ∫f(x) (I took away the dx on purpose)? I tried to define it as   because   so  . The Successor of Physics 06:21, 4 February 2009 (UTC)

Good question. The answer is no, however. It is a notation. You can't have a "(" without a ")", for example. It is the same thing here. The ∫ symbol by itself is meaningless without considering the limit that it represents. When you remove the dx, it doesn't make any sense because the reader now has no idea what limit it represents, and therefore that particular collection of symbols becomes meaningless. In fact dy/dx is not a fraction. You can't just multiply by dx. It is simply a notation that represents the rate of change of the variable y over the variable x, and you can't separate that out without losing that meaning. In some classes the teacher may "abuse the notation" and pretend that you can in order to suggest the correct intuition, when in reality they are not using the notation correctly. Please see Abuse of notation for further details. Anythingapplied (talk) 07:00, 4 February 2009 (UTC)
A better example might be the "+" symbol. Your question is the equivalent of asking what just "1+" means, without having a second number. The normal definition of that symbol doesn't apply. So unless you've given "+" another definition in this special case, it is meaningless to say "1+". You will find that some people do use "1+", for example computer programmers, but they have an established definition. Likewise, you will find in many classes a teacher may write ∫f(x) on the board. The meaning can change depending on the context or what math class you are in. All these symbols depend on having a shared understanding of their definition. Anythingapplied (talk) 07:05, 4 February 2009 (UTC)
Thanks! The Successor of Physics 07:55, 4 February 2009 (UTC)
Note also that it is a good rule to write the dx symbol on the right of the integrand, because it clearly tells you at once what the integration variable is and what the integrand is (it is the expression between the ∫ and the dx). Still, you can find an abbreviated notation without dx whenever there is no ambiguity (I suggest, however, that you always write it when you do computations). Anyway, you have this list of successive simplified forms:
∫_a^b f(x) dx,  ∫ f(x) dx,  ∫ f dx,  ∫ f.
You can find them all in books or at the blackboard. Each one carries less information than the preceding ones. The important thing is to state clearly what the adopted notation is, and not to change it in the middle of a text. Here Anythingapplied's preceding sentence applies too. --pma (talk) 09:36, 4 February 2009 (UTC)
Sometimes it is more convenient to write dx on the left of the integrand, especially with nested integrals: ∫ dx ∫ dy f(x, y) vs. ∫ (∫ f(x, y) dy) dx. — Emil J. 11:39, 4 February 2009 (UTC)
That's true, but then I'd also like
 ... pma (talk) 13:47, 4 February 2009 (UTC)
Sure, as long as the limits give the variables explicitly. But   would be very ambiguous. —JAOTC 14:19, 4 February 2009 (UTC)
Well if that is acceptable, wouldn't the clearest presentational form be to have the integrand on the left?
  ~Kaimbridge~ (talk) 14:09, 4 February 2009 (UTC)
No. Formally (in the real nonnegative case), the integral is a sum of formal rectangle areas. It doesn't matter if you compute them by multiplying the height and width and then add them (∫ f(x) dx) or compute them by multiplying the width and height and then add them (∫ dx f(x)), but you can't add them together and then compute them. —JAOTC 14:19, 4 February 2009 (UTC)
Jao, there is no ambiguity in
∫ dx ∫ dy ∫ dz f(x, y, z)
as the order of the integrations is clearly indicated: first z, then y, then x (if one agrees with the relative "parenthesis-like" convention for the integral signs, of course). IMO, allowing other permutations of integral signs/integrand/differential symbols certainly does not make it any clearer; it possibly adds a chance of ambiguity. pma (talk) 14:48, 4 February 2009 (UTC)

dx shouldn't be thought of as only notation. It not only identifies which variable is being integrated with respect to, but also gets the dimensions (or "units", if you like) right. If ƒ(x) is in meters per second and dx is in seconds, then ƒ(x) dx is in meters. And so on. Think of dx as an infinitely small increment of x. That is not logically rigorous, but logical rigor isn't everything, and sometimes logical rigor is out of place. Michael Hardy (talk) 00:45, 5 February 2009 (UTC)

Thanks Guys!!!!! The Successor of Physics 03:51, 8 February 2009 (UTC)
Integration is just adding up an infinite number of values to get the area under a curve. Since dx is an infinitesimal quantity, f(x)dx is also infinitesimal. So you would be summing an infinite number of infinitesimals, giving you a finite number. When you have ∫f(x), however, you are summing an infinite number of finite numbers, giving you ∞ (or -∞). So I would say that the integral ∫f(x) is possible, but meaningless. The dx is also useful for identifying the variable. In integration by substitution and integration by parts, it is crucial to be able to identify the variable of integration. --Yanwen (talk) 01:00, 10 February 2009 (UTC)
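Yanwen's picture of the integral as a sum of f(x)·dx pieces can be checked numerically. A small illustrative sketch (mine, not from the thread), contrasting a Riemann sum with what happens when the dx factor is dropped:

```python
# A small illustrative sketch (not from the thread): approximating the
# integral of f(x) = x**2 on [0, 1] by Riemann sums, to show why the
# dx factor matters.
def riemann_sum(f, a, b, n):
    """Left Riemann sum: add up f(x) * dx over n subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

def sum_without_dx(f, a, b, n):
    """Add up the f(x) values alone, with no dx factor."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n))

f = lambda x: x ** 2
print(riemann_sum(f, 0, 1, 1000))     # close to 1/3 (about 0.3328)
print(sum_without_dx(f, 0, 1, 1000))  # about 332.8, and it grows with n
```

With dx the sums settle toward 1/3; without it the "sum of finite numbers" Yanwen describes just grows without bound as the partition is refined.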

Confidence interval


Please help me understand this. Let's say I suffer a loss of $100 with probability 10% and I suffer a loss of $0 with probability 90%. My college days have long passed but I seem to recall this being a Bernoulli trial with variance 10% * 90% * $100 and mean 10% * $100. What kind of assumptions would I need to calculate a 99.5% percentile for my potential losses given only this information and how would I do it? Is the following correct: I assume my potential losses are normal with the mean and variance above and since the 99.5th percentile for N(10,9) is 33, the percentile for my losses is 10 + 33*9 = X? --Rekees Eht (talk) 15:40, 4 February 2009 (UTC)

The variance is 900, actually. More importantly, if you're only undergoing this trial once (which seems to be the case), the assumption of normality is totally unjustifiable. Also, this isn't what confidence intervals are about (the term is usually used to refer to estimating population parameters from sample data). In answer to the actual question, you can say that there's a more than 99.5% chance that your losses are in [0,100] (indeed, there's a 100% chance of that), but you can't say the same of any smaller interval. Algebraist 15:49, 4 February 2009 (UTC)

Thanks. I see I messed up the variance calc - it is 900. And sorry for using the wrong word. Let me ask the question like this: What is the level of potential losses, say L*, for which the probability that the potential losses are less than L* is 99.5%? What assumptions do I need to make to calculate this? --Rekees Eht (talk) 16:45, 4 February 2009 (UTC)

You've already stated all the assumptions you need: you suffer a loss of $100 with probability 10% and $0 with probability 90%. Given this, the probability that your loss is less than $x is 90% for 0<x<100 and 100% for x>100. There is no x such that the probability is exactly 99.5%. Statistical ideas, gaussian/normal approximations, and so on will only crop up if you repeat your Bernoulli trial lots of times. Algebraist 17:53, 4 February 2009 (UTC)
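To make Algebraist's point concrete, here is a tiny sketch (my illustration) of the cumulative distribution of the one-shot loss; the CDF jumps from 90% straight to 100% at $100, so no level is hit with probability exactly 99.5%:

```python
# A tiny sketch (my illustration) of the one-shot loss distribution:
# lose $100 with probability 0.10, lose $0 with probability 0.90.
def prob_loss_at_most(x):
    """P(loss <= x) for the single trial."""
    if x < 0:
        return 0.0
    if x < 100:
        return 0.9   # only the $0 outcome qualifies
    return 1.0       # both outcomes qualify

# The CDF jumps from 0.9 straight to 1.0 at $100: there is no loss
# level that is exceeded with probability exactly 0.5%.
print(prob_loss_at_most(50))   # → 0.9
print(prob_loss_at_most(100))  # → 1.0
```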

Ah ok I understand that now - thanks. Let's say it was theoretically possible to repeat this trial 100 independent times. How would I calculate L* if L* is now the level of loss for which the average loss over the 100 trials is lower than L* with probability 99.5%? --Rekees Eht (talk) 07:12, 5 February 2009 (UTC)

Consider the random loss L, its mean value μ, and its standard deviation σ. If you repeat your experiment many times, the average loss L is approximately normally distributed, and the percentiles can be looked up in a table. The inequality μ−2.8σ < L < μ+2.8σ is satisfied 99.5% of the time. Bo Jacoby (talk) 09:27, 5 February 2009 (UTC).
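Bo Jacoby's normal approximation can be compared with the exact binomial calculation. A hedged sketch (my illustration; it assumes the 100 trials are independent, as the question states):

```python
from math import comb

# Sketch (my illustration) of the follow-up question: over 100 independent
# trials, the average loss equals K dollars, where K is the number of
# $100 losses and K ~ Binomial(100, 0.1).
n, p = 100, 0.1

def binom_cdf(k):
    """P(K <= k) for K ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Smallest level L* such that the average loss stays below L* with
# probability at least 99.5%.
L_star = next(k for k in range(n + 1) if binom_cdf(k) >= 0.995)
print(L_star)
```

This exact calculation lands close to the one-sided normal approximation built from Bo Jacoby's figures: mean 10, standard deviation 3, giving roughly 10 + 2.576 × 3 ≈ 17.7.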

Units going bad


I have 15 units in the field. One has gone bad in three years - How many will go bad in eight years? Thank you - Bruce Prather [e-mail removed] —Preceding unsigned comment added by Bruceprather (talkcontribs) 16:31, 4 February 2009 (UTC)

Who knows? You haven't given nearly enough information to give a sensible answer. What, for example, is a 'unit'? Algebraist 16:33, 4 February 2009 (UTC)
Since there are 15 of them, Bruce's field is GF(2⁴), of course. Over time, some elements lose their inverses. It happens to us all sooner or later. (Sorry, bored.) —JAOTC 16:51, 4 February 2009 (UTC)
Ha. Instinct says 8/3, or around 3 units, but you can't be at all confident of that (perhaps someone even more bored could do some kind of confidence interval). Maybe the failed unit was a badly-built dud and the rest will last 100 years, or maybe they're all going to fail around the 3 year mark. The probability of failure in mechanical systems varies over time in a complex way (see Bathtub curve) so as Algebraist says you'd need a lot more data (plotting a large number of failures against time would be a good start). And if the units are connected to each other, there's a whole other set of problems. --Maltelauridsbrigge (talk) 16:58, 4 February 2009 (UTC)
My instinct says 15(1 − (14/15)^(8/3)) = 2.52… as in exponential decay. That still rounds up to 3, though. — Emil J. 17:25, 4 February 2009 (UTC)
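Emil J.'s figure can be reproduced directly; a short sketch of the exponential-decay estimate (assuming, as Emil J. does, a constant per-period survival probability of 14/15):

```python
# Sketch of Emil J.'s exponential-decay estimate: each unit survives a
# 3-year period with probability 14/15 (the observed rate), and 8 years
# is 8/3 such periods.
units = 15
survive_3yr = 14 / 15
periods = 8 / 3
expected_failures = units * (1 - survive_3yr ** periods)
print(round(expected_failures, 2))  # → 2.52
```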

If the probability that a given unit goes bad in a given year is x, then the probability that the unit survives the year is 1 − x, the probability that it survives three years is (1 − x)^3, and the probability that it has gone bad within three years is 1 − (1 − x)^3. The probability that k out of 15 units go bad in three years is then C(15, k) (1 − (1 − x)^3)^k ((1 − x)^3)^(15−k), so the probability that k = 1 is 15 (1 − (1 − x)^3) (1 − x)^42, and the likelihood of x given the observation is C (1 − (1 − x)^3) (1 − x)^42, where C is a constant. The probability that j out of the 15 units go bad in 8 years is obtained by weighting the binomial probability C(15, j) (1 − (1 − x)^8)^j ((1 − x)^8)^(15−j) by this likelihood. These expressions can be simplified. The answer to your question is that the true value of j is approximately equal to its mean value under this distribution, but the uncertainty is of the order of magnitude of the standard deviation. Bo Jacoby (talk) 19:59, 4 February 2009 (UTC).

Why are you assuming independence? Algebraist 20:18, 4 February 2009 (UTC)

I have got no information on dependence. That's why. The result depends on the information given. More information usually leads to a smaller standard deviation, meaning a better result. Bo Jacoby (talk) 22:19, 4 February 2009 (UTC).
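The calculation Bo Jacoby outlines can be carried out numerically. A rough sketch under assumptions of my own that the thread does not state (a uniform prior on the per-year failure probability x, and independent units):

```python
# Rough numerical sketch of the Bayesian calculation Bo Jacoby outlines.
# Assumptions of mine, not stated in the thread: a uniform prior on the
# per-year failure probability x, and units that fail independently.
N = 50_000                      # midpoint-rule grid size
dx = 1.0 / N

def likelihood(x):
    """P(exactly 1 of 15 units fails within 3 years | x), up to a constant."""
    q3 = (1 - x) ** 3           # chance one unit survives 3 years
    return (1 - q3) * q3 ** 14  # one failure, fourteen survivors

norm = weighted_j = 0.0
for i in range(N):
    x = (i + 0.5) * dx
    w = likelihood(x) * dx                     # posterior weight
    norm += w
    weighted_j += w * 15 * (1 - (1 - x) ** 8)  # expected failures in 8 years

print(weighted_j / norm)  # posterior-mean number of failures in 8 years
```

Under these assumptions the posterior mean comes out noticeably above Emil J.'s point estimate of 2.52, because averaging over the uncertainty in x gives weight to larger failure rates; the spread around it is of the kind Bo Jacoby describes.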

Constructing a series


Suppose that ∑a_n diverges and a_n > 0. How does one show there exist b_n with b_n/a_n → 0 and ∑b_n divergent? I did this question ages ago but really can't recall!

Thanks,

131.111.8.104 (talk) 17:03, 4 February 2009 (UTC) BTS

Dunno what the official proof is but if you let
b_n = a_n/(a_1 + a_2 + ⋯ + a_n)
I think this should do it, because for groups of a_n summing up to 1 the corresponding b_n sum up to something like an entry in the harmonic series, 1/n. Well, that's the idea; I'm sure one would have to be a bit more careful to do the job properly. Dmcq (talk) 17:29, 4 February 2009 (UTC)


Exactly, Dmcq's one is perfect. Actually if you define
b_n = a_{n+1}/s_n, where s_n = a_1 + ⋯ + a_n,
the proof of the divergence is immediate: you can see the nth partial sum ∑_{k≤n} b_k as an upper Riemann sum for ∫ dx/x relative to the subdivision whose points are exactly the first partial sums of the a_n, so it diverges. (In fact, the partial sums of the b_n are asymptotically the log of the partial sums of the a_n.) --pma (talk) 18:09, 4 February 2009 (UTC)

Is it true to say  ? —Preceding unsigned comment added by 131.111.8.102 (talk) 19:00, 4 February 2009 (UTC)

Not in general, no. Algebraist 19:01, 4 February 2009 (UTC)

Then how would we know b_n/a_n → 0? I'm probably being stupid here sorry =P —Preceding unsigned comment added by 131.111.8.102 (talk) 19:17, 4 February 2009 (UTC)

We don't. Better hastily redefine b_n to be a_n/s_n. Algebraist 19:22, 4 February 2009 (UTC)

Oh dear, I was being stupid! Thanks :D 131.111.8.102 (talk) 19:26, 4 February 2009 (UTC) BTS

My fault. You can also go back to Dmcq's form; then you obtain lower Riemann sums, but still you can do an estimate from below, whereas now the integral gives a log estimate from above. Note that the asymptotics with log that I mentioned needs some mild assumption on the a_n (I forgot to say). Note also that if the series of the a_n converges, so does the series of the b_n, and you have some nice bounds both from below and from above on its sum. --pma (talk) 20:53, 4 February 2009 (UTC)
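Since the formulas in this thread were lost in rendering, here is a numerical check of the construction as I read it, with b_n = a_n/s_n and s_n = a_1 + ⋯ + a_n (the form Algebraist's redefinition and Dmcq's suggestion appear to settle on), tried on the divergent series a_n = 1/√n:

```python
from math import sqrt, log

# Numerical check (my reading of the construction, since the thread's
# formulas were lost): b_n = a_n / s_n with s_n = a_1 + ... + a_n,
# applied to the divergent series a_n = 1/sqrt(n).
s = 0.0   # running partial sum of a_n
B = 0.0   # running partial sum of b_n
for n in range(1, 100_001):
    a = 1 / sqrt(n)
    s += a
    B += a / s

# b_n / a_n = 1 / s_n -> 0, yet the partial sums of b_n keep growing,
# tracking log(s_n) as the Riemann-sum picture above suggests.
print(B, log(s))
```

The two printed numbers stay within a bounded distance of each other as the cutoff grows, which is the "partial sums of the b_n are asymptotically the log of the partial sums of the a_n" behaviour pma describes.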