Wikipedia:Reference desk/Archives/Mathematics/2009 February 1

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 1


Power Series Expansion


I'm trying to find an approximation for the following formula:  

and I want to "obtain by expanding kT as a power series in  ", the approximation  

I've expanded the exponent, which I presume is the correct approach, but there seem to be so many ways I could go from there - dividing by kT, subtracting kT from each side, and so on. I've got as close as   but no better - can anyone see what I should be doing? On an unrelated formatting note, how come my first LaTeX formula is smaller than the other two?

Thanks,

Spamalert101 (talk) 00:02, 1 February 2009 (UTC)Spamalert[reply]

The first formula is smaller because it only contains simple symbols, so it can be displayed in HTML. The others contain more complicated symbols (the fractions, probably), so they have to be done as an image, which for some reason is always bigger. I'm a little confused by your main question, though - have you copied out the first formula incorrectly? That formula expanded as a power series in epsilon is simply  , no approximation. I can't see any way you can get any higher powers from that formula - it's just a polynomial in epsilon (with complicated, but constant, coefficients). --Tango (talk) 00:40, 1 February 2009 (UTC)[reply]
Tango, I think you got confused. You need to solve for kT. That means you shouldn't have kT anywhere on the right side; it should appear only on the left side. You can't do that in closed form, but you can give as many terms of the power series as you want. See below.... Michael Hardy (talk) 02:41, 1 February 2009 (UTC)[reply]

OK, start with

 

Separate the two variables:

 

Expand both sides as power series:

 

Differentiate with respect to ε:

 

Since u = 0 when ε = 0, setting ε to 0 gives us

 

Differentiating again (applying the product rule to the right side), we get

 

When ε = 0 then u = 0 and du/dε = 2, so we have

 

Therefore

 

The power series we seek is

 

where a, b, c, d, e, ... are the values of the 0th, 1st, 2nd, 3rd, 4th, ... derivatives of u with respect to ε at ε = 0. So a = 0, b = 2, c = −4/3, and that gives us

 

Michael Hardy (talk) 01:51, 1 February 2009 (UTC)[reply]

The general theory behind Michael's approach can be found at Lagrange inversion theorem. Another approach is to find a contraction mapping. Start from Michael's step

 

Rearrange it like this:

 

Call the right side F(u). Now define a sequence

 

You will find that the sequence converges, gaining at least one correct term of the series per iteration. McKay (talk) 09:32, 1 February 2009 (UTC)[reply]
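The formulas in this thread were images and did not survive in this archive, so the sketch below only illustrates McKay's fixed-point idea on a made-up stand-in relation u = 2ε − u²/6 (chosen so that u = 0 when ε = 0); it is a sympy sketch of the iteration, not the thread's actual F(u).

```python
# Fixed-point iteration u_{k+1} = F(u_k), truncated as power series in epsilon.
# The relation u = 2*eps - u**2/6 is a made-up stand-in, since the thread's
# actual equation was not preserved in this archive.
import sympy as sp

eps = sp.symbols('epsilon')
ORDER = 6  # truncate every series at O(epsilon**6)

def F(u):
    return 2*eps - u**2/6  # hypothetical right-hand side of u = F(u)

u = sp.Integer(0)  # start from u_0 = 0, consistent with u(0) = 0
for k in range(ORDER):
    u = sp.expand(sp.series(F(u), eps, 0, ORDER).removeO())
    print(f"u_{k+1} =", u)
```

Each iterate agrees with the previous one to one more power of ε, which is the term-per-iteration convergence described above.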

On the LaTeX issue: try \textstyle and \scriptstyle. --pma (talk) 15:01, 2 February 2009 (UTC)[reply]

Number sequence help


What is the next number in this sequence? (Thankfully, this isn't homework.)

1 11 21 1211 111221 312211 ?

thanks —Preceding unsigned comment added by 70.171.234.117 (talk) 07:59, 1 February 2009 (UTC)[reply]

Haha. This is an old riddle. Each term describes the previous term. How would you say 111221? It has three ones, two twos, and one one, which gives you 312211. If it is any consolation, I had to have this one explained to me too when I first saw it. Anythingapplied (talk) 08:38, 1 February 2009 (UTC)[reply]
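For anyone who wants to experiment with it, here is a small Python sketch of the rule just described (the helper name look_and_say is only illustrative):

```python
# Generate look-and-say terms: each term reads off the digit runs of the last.
from itertools import groupby

def look_and_say(term: str) -> str:
    # e.g. "111221" -> three 1s, two 2s, one 1 -> "312211"
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

term = "1"
for _ in range(7):
    print(term)
    term = look_and_say(term)
# prints 1, 11, 21, 1211, 111221, 312211, 13112221
```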

wow my friend really did me over then... this isn't even a mathematical sequence at all -__-

Not sure why it's disqualified from being mathematical, but the sequence does appear in the Encyclopedia of Integer Sequences. --Ben Kovitz (talk) 22:18, 1 February 2009 (UTC)[reply]

just for fun, what would be a quadratic function that would actually produce:

f(1) = 1, f(2) = 11, f(3) = 21, f(4) = 1211, f(5) = 111221, f(6) = 312211

? —Preceding unsigned comment added by 70.171.234.117 (talk) 08:52, 1 February 2009 (UTC)[reply]

I very much doubt there is anything as simple as a "quadratic function" for this sequence. For more information see look-and-say sequence. Gandalf61 (talk) 09:22, 1 February 2009 (UTC)[reply]
If you want a polynomial that will go through 6 given points then, in general, you need at least a quintic (degree 5). --Tango (talk) 13:39, 1 February 2009 (UTC)[reply]
Polynomial interpolation would give you a polynomial of degree at most 5.--Shahab (talk) 13:48, 1 February 2009 (UTC)[reply]
According to [1], the unique polynomial of least degree that fits those six points is -(11597/6)x^5+(100285/3)x^4-(416905/2)x^3+(1766885/3)x^2-(2247664/3)x+337211. I haven't checked whether that's right. Black Carrot (talk) 16:57, 1 February 2009 (UTC)[reply]
Indeed, when I say that in general you need at least degree 5, I mean that there is a way of getting a degree 5 or lower solution for all such problems. You can, however, do it with higher degree if you like (although the solution ceases to be unique); that's why it's "at least", not "precisely". --Tango (talk) 20:52, 1 February 2009 (UTC)[reply]
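If anyone wants to check the coefficients quoted above, exact rational interpolation is enough; this is a sympy sketch (sympy.interpolate returns the unique polynomial of degree at most 5 through the six points):

```python
# Fit the unique polynomial of degree <= 5 through the six points using exact
# rational arithmetic, so the coefficients can be compared directly.
import sympy as sp

x = sp.symbols('x')
points = [(1, 1), (2, 11), (3, 21), (4, 1211), (5, 111221), (6, 312211)]

p = sp.expand(sp.interpolate(points, x))
print(p)                                      # exact fractional coefficients
print([p.subs(x, k) for k in range(1, 7)])    # should give 1, 11, 21, 1211, ...
```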

LOL sorry I meant polynomial function not "quadratic". Thanks anyways —Preceding unsigned comment added by 70.171.234.117 (talk) 18:17, 1 February 2009 (UTC)[reply]

Probability function


Hi there - I was hoping to get a hand with this question; I'm useless with probability and it's really doing my head in!

N doctors go into a meeting, leaving their labcoats at the door (they all have coats). On leaving the meeting they each choose a coat at random - what is the probability that k doctors leave with the correct coat?

Would I be right in thinking that you have $\binom{n}{k}$ selections of the k doctors with the correct coats, multiplied by the number of arrangements of the remaining doctors to wrong coats, all over n!? If so, how do you find the latter?

Thanks a lot, 131.111.8.98 (talk) 09:59, 1 February 2009 (UTC)Mathmos6[reply]

My knowledge about probability is limited but I'd say you have a binomial distribution here. So the answer should be  --Shahab (talk) 11:01, 1 February 2009 (UTC)[reply]

Oh, I think it's not the binomial distribution after all: the doctors pick up the coats one by one, so the probability of success changes from one pick to the next.--Shahab (talk) 11:19, 1 February 2009 (UTC)[reply]
It's more a problem of enumeration: what you want are the rencontres numbers (for the probability, of course, divide by n!). You also have to decide whether you mean "exactly k" or "at least k", the two answers being immediately related to each other. --pma (talk) 12:58, 1 February 2009 (UTC)[reply]
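A small Python sketch of the count just described, assuming the standard rencontres-number formula: choose which k doctors keep their own coat, derange the other n − k, and divide by n!.

```python
# P(exactly k of n doctors get their own coat) = C(n, k) * D(n - k) / n!,
# where D(m) is the number of derangements of m objects.
from math import comb, factorial

def derangements(m: int) -> int:
    # D(0) = 1, D(1) = 0, D(m) = (m - 1) * (D(m - 1) + D(m - 2))
    d = [1, 0]
    for i in range(2, m + 1):
        d.append((i - 1) * (d[i - 1] + d[i - 2]))
    return d[m]

def prob_exactly_k(n: int, k: int) -> float:
    return comb(n, k) * derangements(n - k) / factorial(n)

n = 5
print([round(prob_exactly_k(n, k), 4) for k in range(n + 1)])
# the values sum to 1, and exactly n - 1 matches is impossible (probability 0)
```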

We have a Wikipedia article about this problem: rencontres numbers. Michael Hardy (talk) 17:42, 1 February 2009 (UTC)[reply]

There is also an article on rencontres numbers, which may be interesting too.--pma (talk) 10:57, 2 February 2009 (UTC)[reply]

Zero divisors in polynomial rings

Resolved – Shahab (talk) 11:31, 1 February 2009 (UTC)[reply]

Hello all. I am trying to prove the following theorem: Let f(x) be a polynomial in R[x] where R is a commutative ring with identity and suppose f(x) is a zero divisor. Show that there is a nonzero element a in R such that af(x)=0.

Now I start by letting f = (a_0, a_1, ..., a_n) and g = (b_0, b_1, ..., b_m) where g is of least positive degree such that fg = 0. I can see that a_nb_m = 0 from here, and so I can conclude that a_ng must be zero (for else a_ng would contradict g's minimality). The hint in the book I am reading asks to show that a_{n−r}g = 0 where 0 ≤ r ≤ n. Equating the next coefficient in fg = 0 gives me a_nb_{m−1} + a_{n−1}b_m = 0 but I can't figure out what to do next.

Can anyone help please?--Shahab (talk) 10:12, 1 February 2009 (UTC)[reply]

If a_ng is the zero polynomial, what does this tell you about each a_nb_k (0 ≤ k ≤ m)? So what does this tell you about a_{n−1}b_m? And what can you conclude about a_{n−1}g? And if you repeat this argument, what can you conclude eventually about a_{n−r}g? Gandalf61 (talk) 11:19, 1 February 2009 (UTC)[reply]
Thanks. I proved it.--Shahab (talk) 11:31, 1 February 2009 (UTC)[reply]
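For later readers, here is a sketch of the induction the hint above points to; it is the standard argument for McCoy's theorem, written in the thread's notation.

```latex
% Sketch of the induction (standard McCoy argument), in the thread's notation.
Claim: $a_{n-r}\,g = 0$ for all $0 \le r \le n$, by induction on $r$; the case $r = 0$ is already done.
Assume $a_{n-s}\,g = 0$ for every $s < r$. The coefficient of $x^{n+m-r}$ in $fg = 0$ gives
\[
  a_{n-r}b_m + a_{n-r+1}b_{m-1} + \cdots = 0.
\]
Every term after the first has the form $a_{n-s}b_j$ with $s < r$, and $a_{n-s}g = 0$ forces
$a_{n-s}b_j = 0$; hence $a_{n-r}b_m = 0$. Then $a_{n-r}g$ has degree less than $\deg g$ and
satisfies $f\cdot(a_{n-r}g) = a_{n-r}(fg) = 0$, so minimality of $\deg g$ gives $a_{n-r}g = 0$
(if $a_{n-r}g$ were a nonzero constant $c$, then $cf = 0$ and we would be done immediately).
With all $a_i g = 0$ we get $a_i b_m = 0$ for every $i$, i.e. $b_m f = 0$, so $a = b_m \neq 0$ works.
```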

what is the most important mathematical operation


if you could only have 1 mathematical operation, which one would you have? —Preceding unsigned comment added by 82.120.227.157 (talk) 14:36, 1 February 2009 (UTC)[reply]

I'd go with addition. As long as you restrict yourself to the integers, multiplication is just repeated addition and exponentiation is just repeated multiplication, so if you have addition you can do all three; it just takes longer. Even if you work in larger number systems, or even things without numbers, most of the standard operations are ultimately based on addition (e.g. in the rational numbers, multiplication of fractions is defined in terms of multiplication of integers, which is defined in terms of addition). --Tango (talk) 14:42, 1 February 2009 (UTC)[reply]
I would have to agree that addition is most important. However, it should be stated that only integer exponentiation reduces to repeated addition; non-integer exponentiation gets rather more complicated. -mattbuck (Talk) 14:59, 1 February 2009 (UTC)[reply]
I did state that... --Tango (talk) 20:50, 1 February 2009 (UTC)[reply]
You can only get multiplication, exponentiation, etc. from addition if you allow yourself recursion, which for this reason I would say is a more important operation. Algebraist 17:15, 1 February 2009 (UTC)[reply]
If you can do something once, you can do it lots of times - I think you get recursion for free. (As with any question like this, there are slightly different interpretations which get different answers.) --Tango (talk) 20:50, 1 February 2009 (UTC)[reply]
In that case, why not go the whole way and start with the successor function? Algebraist 21:20, 1 February 2009 (UTC)[reply]
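A throwaway Python sketch of the point being traded here: given addition (ultimately just the successor function) and the ability to repeat, multiplication and natural-number exponentiation come for free; the function names are only illustrative.

```python
# Multiplication as repeated addition, exponentiation as repeated multiplication,
# with recursion doing the "repeat" part.
def mul(a: int, b: int) -> int:
    return 0 if b == 0 else a + mul(a, b - 1)        # repeated addition

def power(a: int, b: int) -> int:
    return 1 if b == 0 else mul(a, power(a, b - 1))  # repeated multiplication

print(mul(6, 7), power(2, 10))  # 42 1024
```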
A few thoughts:
1. The question is not entirely well-formed, because an operation requires some objects to apply it to. But that's easy to fix: we just include your choice of objects as part of your choice of operation.
2. If all you could do was recursion, then you couldn't do anything, because recursion is just the ability to do some other thing any number of times.
3. Opposing #2, take a look at the lambda calculus. The lambda calculus contains only one kind of object: functions that take a single argument. There is only one operation: function-application. All you can give to a function is a unary function, and all a function can return is a unary function. It is easy in the lambda calculus to define integers, addition, multiplication, recursion, Boolean operations, etc. Once you have the integers, you can define real numbers, irrational exponents, and anything else you like. So, I guess I'll take "function application" as my sole operation, with "unary functions" as my objects. That buys me everything.
Are there other known ways in math to get everything with just one operation?
--Ben Kovitz (talk) 21:06, 1 February 2009 (UTC)[reply]
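To make the lambda-calculus point above concrete, here is a tiny sketch with Python lambdas standing in for unary functions; inside the encodings the only operation ever used is applying a one-argument function to a one-argument function (Church numerals).

```python
# Church numerals: numbers, successor, addition and multiplication built from
# nothing but application of one-argument functions.
zero  = lambda f: lambda x: x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))
plus  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
times = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    # only used to inspect results; not part of the encoding itself
    return n(lambda k: k + 1)(0)

two, three = succ(succ(zero)), succ(succ(succ(zero)))
print(to_int(plus(two)(three)), to_int(times(two)(three)))  # 5 6
```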
I suppose the project to embed all of mathematics in set theory can be seen as reducing everything to the single operation that takes a property φ and outputs the class of all objects satisfying φ. (This idea works better in some set theories than others.) Algebraist 21:18, 1 February 2009 (UTC)[reply]
But you still need a way of combining properties (AND, OR, etc [or unions and intersections, depending on point of view]). Is there a version of set theory in which those aren't taken as undefined? --Tango (talk) 21:23, 1 February 2009 (UTC)[reply]
If you can have ω-recursion for free, I can have first-order formulae for free. Algebraist 22:02, 1 February 2009 (UTC)[reply]
It was my understanding that much of modern set theory intended to build up the rest of mathematics from this sort of "single operation" approach. The section Set_theory#Axiomatic_set_theory explains some of the efforts, which seem to have been largely successful. "Nearly all mathematical concepts are now defined formally in terms of sets and set theoretic concepts. For example, mathematical structures as diverse as graphs, manifolds, rings, and vector spaces are all defined as sets having various (axiomatic) properties. Equivalence and order relations are ubiquitous in mathematics, and the theory of relations is entirely grounded in set theory." Nimur (talk) 21:24, 1 February 2009 (UTC)[reply]
That's also my understanding of the main goal of set theory. What would you say is the single operation of set theory? Function-application actually depends on sets for its definition, since a function is a kind of set (of ordered pairs, themselves sets). "One operation" and "defined in terms of" are different, at least as I understand them. --Ben Kovitz (talk) 21:54, 1 February 2009 (UTC)[reply]
As I said above, I take the basic operation of set theory to be 'take a property, form the set/class of all sets/objects with that property'. Of course, some work needs to be done to avoid paradox. Algebraist 22:02, 1 February 2009 (UTC)[reply]
Thanks for repeating your point, Algebraist. I hadn't given it proper consideration the first time, because I was thinking that "property" is too vague for a mathematical operation. For example, the property of "good government" or "wise choice". What do they call that operation (mapping a property to the set/class of all the things that have it)? (Most folks I've talked with about this say that a property is that set/class, so "having a property" simply means "being a member of that set/class", but those folks were philosophers, and I think that's a dumb theory, anyway.) I had been thinking of operation as meaning a function mapping a tuple of elements from a set A to the set A, or something close to that; so, for example, integer addition is an operation that maps two integers to an integer, etc. Am I being too narrow? --Ben Kovitz (talk) 23:02, 1 February 2009 (UTC)[reply]
Are there other known ways in math to get everything with just one operation?: for instance (talking a little bit more about constructions rather than operations), in category theory every universal construction turns out to be a particular case of an initial object, the simplest concept in the theory. The trick, of course, is that the category changes - and possibly becomes more complicated. --pma (talk) 00:38, 2 February 2009 (UTC)[reply]
Well, there's the fact that all Boolean functions can be formed from iterating the Sheffer stroke. Michael Hardy (talk) 15:49, 3 February 2009 (UTC)[reply]
Sole sufficient operator is a Wikipedia article devoted to answering the question above. Somewhat stubby for now. Michael Hardy (talk) 15:51, 3 February 2009 (UTC)[reply]
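A small sketch of the Sheffer-stroke remark above: NOT, AND and OR built from NAND alone, with a brute-force truth-table check.

```python
# Every Boolean connective can be built from the Sheffer stroke (NAND) alone.
def nand(p: bool, q: bool) -> bool:
    return not (p and q)  # the single primitive

def NOT(p):    return nand(p, p)
def AND(p, q): return nand(nand(p, q), nand(p, q))
def OR(p, q):  return nand(nand(p, p), nand(q, q))

for p in (False, True):
    for q in (False, True):
        assert NOT(p) == (not p) and AND(p, q) == (p and q) and OR(p, q) == (p or q)
print("NOT, AND and OR all recovered from NAND")
```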

Ratio of binomial coefficients


Hiya,

Having shown that   for all  , and supposing  , how would one show that the limit of the ratio of the two sides of the above inequality as   equals 1?

Many thanks for the help!

Spamalert101 (talk) 16:02, 1 February 2009 (UTC)BS[reply]

I'm not sure, but here's the first thought that comes to mind. It might or might not be fruitful. "Ratio = 1" means "they're equal", so just prove that the difference between them gets smaller than any epsilon. You might be able to do that with an inductive proof. --Ben Kovitz (talk) 22:03, 1 February 2009 (UTC)[reply]
Hiyatoo! Look at the LHS: it is, starting the sum from the last term (and the largest):
 .
Notice that it is a finite sum, although with an increasing number of terms. The kth term in the sum converges to   as  . In general, this would not be enough to conclude that the inner sum converges to
 ,
as you want, BUT it's also true that each term is less than the corresponding term  . Then you conclude by applying the dominated convergence theorem for series (a toy version of the usual dominated convergence theorem for integrals; it's a particular case, as   is a particular case of  ). Is that OK? Ask for further details if needed. Note that in the same way you can prove (try it) an analogous asymptotic result for your sum in the more general case of an integer multiple of m, that is  , instead of  . Once you have the geometric series and dominated convergence argument, you can immediately write down the limit of the ratio in terms of p. pma (talk) 23:58, 1 February 2009 (UTC)[reply]

Question about sequences and series


This is a question about sequences and series; please help me: 1 + 2 + 4 + 8 + 16 + ... Find the nth term of the series and the sum of the first n terms. —Preceding unsigned comment added by 117.196.34.27 (talk) 16:11, 1 February 2009 (UTC)[reply]

The nth term is 2^(n−1). The sum of the first n terms is 1 + 2 + 4 + 8 + ... + 2^(n−1) = 2^n − 1. Bo Jacoby (talk) 17:10, 1 February 2009 (UTC).[reply]
Hi. Have you actually tried doing this yourself? -mattbuck (Talk) 18:03, 1 February 2009 (UTC)[reply]
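A quick numeric sanity check of the closed forms above:

```python
# Check that the nth term is 2**(n-1) and that 1 + 2 + ... + 2**(n-1) = 2**n - 1.
for n in range(1, 11):
    assert sum(2**k for k in range(n)) == 2**n - 1
print("sum of the first n terms equals 2**n - 1 for n = 1..10")
```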

Seemingly straightforward problem


Hi, I'm trying to solve the following, seemingly simple, problem but I'm stumped by something. I want to find the maxima and minima of the following function:

 

I find the following derivative:

 

Setting this equal to zero, I rewrite it as the following polynomial:

 

Using the quadratic formula gives me the solutions x = 0.738 and x = 0.6020. Plotting these functions shows that the first is indeed the maximum of the function, but the second makes no sense at all. I've gone over it a million times, and I can't find any errors. I was thinking that there might be some complex business that I'm not aware of (like when I assume that  ). Can anybody elucidate? risk (talk) 20:19, 1 February 2009 (UTC)[reply]

The second term in your derivative is (fairly obviously) wrong: I haven't worked it through, but fixing that should help. AndrewWTaylor (talk) 20:30, 1 February 2009 (UTC)[reply]
Sorry, I copied it out wrong. I've fixed it in my original post (to avoid confusion). risk (talk) 20:33, 1 February 2009 (UTC)[reply]
Where did you get that polynomial from? I get  , which has only one solution. Algebraist 20:45, 1 February 2009 (UTC)[reply]
I used the following steps
 
In the first step I multiply by  . In the fourth, I square both sides. Any illegal moves? risk (talk) 20:55, 1 February 2009 (UTC)[reply]
You will see the error if you try to plug the solution   into the equation  ; the left side becomes −0.776, and the right side becomes +0.776.
You didn't make any algebraic errors, but rather a subtle logic error. What your algebraic manipulations show is the following: if x is a solution to  , then x is a solution to  . However, because you squared both sides in one step (and squaring is not an injective function), your proof does not go in reverse; it is not necessarily true that all solutions to the latter equation must also be solutions to the former equation. In fact, you have even constructed an example of a solution to the latter equation which is not a solution to the former equation. Eric. 131.215.158.184 (talk) 21:26, 1 February 2009 (UTC)[reply]
See extraneous solution. --Tango (talk) 21:27, 1 February 2009 (UTC)[reply]

Of course. Thank you both. I should take some time to read the Extraneous Solution article. Could you tell me how you would solve it from  , or how you could tell that it had only one solution? risk (talk) 21:34, 1 February 2009 (UTC)[reply]

It's just a quadratic in  . Use your favourite way of solving quadratics, and remember that   is by definition non-negative. Algebraist 21:39, 1 February 2009 (UTC)[reply]
I solved it by mapping   to a dummy variable s and solving the cubic equation in s:  ; taking the derivative,  , and solving for s with the quadratic formula. Then note that one of the zeros is negative, so it cannot be the value of a square root; that is the extraneous root. Nimur (talk) 21:37, 1 February 2009 (UTC)[reply]
That approach relies implicitly on the chain rule and the fact that sqrt(x) has no stationary points. Algebraist 21:47, 1 February 2009 (UTC)[reply]
Although your approach can produce extraneous solutions, it definitely won't omit any correct solutions. So you can just take both solutions that you found and plug them into the original equation to verify their correctness, and throw out any extraneous ones. But Algebraist's approach is better. Eric. 131.215.158.184 (talk) 22:33, 1 February 2009 (UTC)[reply]

To summarize, it seems worth repeating Nimur's remark: since you are looking for maxima and minima, it is convenient to make the substitution   from the beginning and look for the max and min of   over all  . --pma (talk) 00:24, 2 February 2009 (UTC)[reply]
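The function discussed in this thread was not preserved in the archive, so here is a generic sympy sketch of the pitfall described above, using the made-up equation √x = 2 − x: squaring both sides produces an extraneous candidate, and substituting the candidates back into the unsquared equation filters it out.

```python
# Squaring both sides can create extraneous solutions; checking candidates in
# the original (unsquared) equation removes them.  The equation sqrt(x) = 2 - x
# is a made-up example, not the one from this thread.
import sympy as sp

x = sp.symbols('x', positive=True)
original = sp.Eq(sp.sqrt(x), 2 - x)
squared  = sp.Eq(x, (2 - x)**2)            # after squaring both sides

candidates = sp.solve(squared, x)          # e.g. [1, 4]
genuine = [c for c in candidates if original.subs(x, c)]
print(candidates, "->", genuine)           # x = 4 is extraneous, only x = 1 survives
```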