Wikipedia:Reference desk/Archives/Mathematics/2010 March 14

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 14

Approximate solutions to equation

I'm trying to solve this for x:

 

where n is some given positive number.

I can find numerical solutions to any desired accuracy using standard methods, but this doesn't tell me what I actually want to know, which is how the solution for x mathematically grows with n as n becomes very large. What I'm hoping for is a closed-form formula like

x ~= some expression involving n

which is asymptotically correct as n tends to infinity (and hopefully also gives good numerical approximations for reasonable-sized values of n). I feel this ought to be possible, but I don't think my math skills are good enough to figure it out. Any ideas? 86.136.194.148 (talk) 01:13, 14 March 2010 (UTC)
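(For later readers: the "standard methods" mentioned above can be as simple as a bracketing root-finder. A minimal sketch, assuming SciPy is available; the equation F below is only a hypothetical stand-in, since the point is the pattern of solving for x at each n and watching how the root grows.)

import math
from scipy.optimize import brentq

def F(x, n):
    # Hypothetical stand-in equation, written as "left side minus right side".
    # Swap in the actual equation here.
    return x * math.log(x) - n

for n in (10, 100, 1000, 10000):
    # brentq needs a bracket [a, b] on which F changes sign.
    root = brentq(F, 1.0 + 1e-9, 10.0 * n, args=(n,))
    print(n, root)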

Is this homework? Try raising both sides to the (n+1)th power and seeing if you can solve it from there. 66.127.52.47 (talk) 02:48, 14 March 2010 (UTC)
No, this is not homework, so there's no need to be coy about the answer. At the moment I don't see how your proposal works. Can you be more explicit? 86.136.194.148 (talk) 03:04, 14 March 2010 (UTC).

My guess is   where c is such that   (Igny (talk) 06:00, 14 March 2010 (UTC))

Which is  . Also,  . -- Meni Rosenfeld (talk) 12:05, 14 March 2010 (UTC)
Thank you both. Those answers look very plausible, insofar as they match the numerical results that I can calculate. If you have the time, and it's not going to take pages and pages to explain, I'd like to know how you arrived at this answer. 86.165.23.35 (talk) 20:04, 14 March 2010 (UTC).
What I did (which only resulted in the approximate solution I gave) is let  . Then the equation is  . Assuming the higher-order derivatives are small, this means   (note - the derivative is wrt n). The solution then follows without much effort. -- Meni Rosenfeld (talk) 20:16, 14 March 2010 (UTC)
I had a similar approach. I considered the solution to your equation,  , and looked at equations for   or  . Eventually that gave me insight to consider  , and it is very easy to check that   satisfies  , and you can see that asymptotically   introduced earlier. (Igny (talk) 21:16, 14 March 2010 (UTC))

Geometry problem

Let ABCD be a quadrilateral with AD = BC. Extend AD and BC to intersect at X. Let M be the midpoint of AB and N the midpoint of CD. Prove that MN is parallel to the bisector of angle X.--RDBury (talk) 05:14, 14 March 2010 (UTC)

This sounds like a homework question, so you will need to tell us what you have tried so far. --Tango (talk) 05:25, 14 March 2010 (UTC)
Use law of sines lots of times. Rckrone (talk) 07:24, 14 March 2010 (UTC)

I'd first think about sliding BC along the line in which it lies, to the place where it is just as far from the intersection point as AD is. Then ask whether the line through the two midpoints has the same slope as it did before that move. If so, then you've done most of the work. Michael Hardy (talk) 11:45, 14 March 2010 (UTC)

It's not a homework question. I was hoping it was a special case of a theorem I could reference, or maybe someone could see a relatively simple proof. If not, I'm not going to worry about it.--RDBury (talk) 13:17, 14 March 2010 (UTC)
Here's what I meant before with the law of sines. Suppose WLOG that AB < CD, and say AD intersects MN at E and BC intersects MN at F. Call angle AEM θ and angle BFM φ. Use the law of sines on triangles AEM and BFM to show that AE/BF = sinφ/sinθ. Use it again on triangles DEN and CFN to show that DE/CF = sinφ/sinθ. Combining these equal ratios with AD = BC gives AE = BF, and so sinθ = sinφ. Either θ = φ or they are supplementary, but they can't be supplementary since AD and BC are not parallel. Rckrone (talk) 18:39, 14 March 2010 (UTC)
That makes no sense. AE and BF are not in general equal. And θ and φ are not angles in a common triangle, so applying the law of sines won't work directly. Michael Hardy (talk) 19:11, 14 March 2010 (UTC)
AE and BF are in fact necessarily equal (note: not AX and BX). Obviously I omitted steps, but as I said you use the law of sines on pairs of triangles to show the relationships between θ and φ. AM/sinθ = AE/sin(angle AME) and BM/sinφ = BF/sin(angle BMF). sin(angle BMF) = sin(angle AME) and AM = BM, so then AE/BF = sinφ/sinθ, etc. Does that make it more clear? Rckrone (talk) 20:13, 14 March 2010 (UTC)

Simple solution Let u and v be unit vectors at the intersection point X, pointing along the lines AD and BC respectively. Take X as the origin and write a = AD = BC, so that

A = xu
D = (x + a)u
B = yv
C = (y + a)v

Then

M = (xu + yv)/2
N = (x + a)u/2 + (y + a)v/2

The vector from M to N is therefore

(au + av)/2

and that does not depend on x or y! Therefore if it works for some x and some y, it works for all. Now just think about the case where x = y and use symmetry. Michael Hardy (talk) 22:27, 14 March 2010 (UTC)
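One way to finish explicitly from here: because u and v are unit vectors, u + v is the diagonal of the rhombus they span, so it points along the internal bisector of the angle at X; and the vector from M to N is a(u + v)/2, a positive multiple of u + v. Hence MN is parallel to the bisector for every x and y.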

This is another simple solution: build parallelograms AA'ND and B'BCN. Note that AA' and B'B are equal and parallel (as they are equal and parallel to DN and NC, respectively, which in turn are collinear and equal), and AM and MB are equal and parallel, too. The parallelism implies the equality of the corresponding angles ∠A'AM = ∠MBB'. With two pairs of respectively equal sides and equal included angles, the triangles ∆AA'M and ∆BB'M are congruent by the SAS rule. Since they have opposite orientations, A', M and B' are collinear and M is the midpoint of the segment A'B', i.e. the midpoint of the base of the isosceles triangle ∆A'B'N (isosceles because NA' = AD = BC = NB'). Now we see NM is the bisector of ∠A'NB', whose sides NA' and NB' are parallel to AD and BC respectively, and this directly implies what you need. --CiaPan (talk) 08:38, 18 March 2010 (UTC)

Terminology: equivalence of ordering or optima

I'm looking for terminology related to the following idea. I'm sure this must have been studied and discussed by someone, but I'm having trouble finding the relevant phrases, so I'm getting nowhere googling it. Let's say you have two functions f and g, both ℝ → ℝ (or something similar). I'm looking for a kind of property (equivalence?) that states that f(x) ≤ f(y) ⟺ g(x) ≤ g(y), and similar for other inequalities. It seems to me that this kind of thing would be very important in fields like optimization, so it's likely to have a name. My guess is that a far more general statement of this does have a name, but I don't recognize it as such. Any insights? risk (talk) 13:34, 14 March 2010 (UTC)

Functions that preserve (or reverse) order are called monotonic functions. Gandalf61 (talk) 14:51, 14 March 2010 (UTC)
I did get that far, but unfortunately that doesn't quite capture what I wrote above. I'm looking for the phenomenon where two functions induce the same ordering on a given set (independent of any ordering that set may have had to begin with). Monotonicity may be relevant to this, but I haven't quite worked out how. risk (talk) 15:06, 14 March 2010 (UTC)

What is needed is not that they are monotonic functions of their argument. If they are monotonic functions of each other, that is enough. But even if they are not functions of each other, I think in some cases the proposed inequality may hold. Michael Hardy (talk) 19:03, 14 March 2010 (UTC)

Sorry to keep shooting everybody down, but I think you misunderstand what I'm after. I don't want to prove this for specific functions, rather I'm after terminology. If this property holds for two functions, what would you call that? Would that be some kind of equivalence? Isomorphism? Duality?
As an example of why this is relevant: consider an optimization problem where f(x) is the objective function we're trying to maximize. Any kind of hill-climbing or gradient ascent algorithm will then find maxima of f. If the above property holds for f and another function g, then we can replace f with g and the algorithm will still behave basically the same (in some sense). This can be helpful, for instance, if g is easier to compute. risk (talk) 19:55, 14 March 2010 (UTC)
I think Michael's answer above is what you are looking for. Your property certainly holds when f and g are monotonic functions of each other. I believe this is also the only case. No need for new terminology then. Maximising g = ln(f) instead of f (possible since ln is monotone) is a standard trick. 213.160.108.26 (talk) 22:52, 14 March 2010 (UTC)
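Sketch of why that trick is safe: if h is strictly increasing and g = h∘f, then g(x) ≤ g(y) ⟺ f(x) ≤ f(y) for all x and y, so f and g have exactly the same maximizers and induce the same ordering on candidate solutions, even though the maximum values themselves differ.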
Oh crap, I just noticed I made a mistake (which explains a lot). The function definition above should actually read f, g : ℝⁿ → ℝ, so the functions are actually scalar fields. In any case, I can assume that there is no really obvious answer which I should've known about, which is good enough for now. risk (talk) 23:14, 14 March 2010 (UTC)
But my remarks above still hold. (I think the person who added "only if" may be right. More later.....) Michael Hardy (talk) 04:03, 15 March 2010 (UTC)
Really? Then what exactly do you mean by "f is a function of g"? For instance, for simple functions, the relation holds for f(x) = 2log(x) and g(x) = 2log(x). These can be rewritten as functions of one another (I think), is that what you mean? If so, does it have to be f(x) = h(g(x)) or can it be f(x) = h(g(x), x) as well? And what kind of pair of functions would be an obvious example of a pair that does not satisfy this requirement? (btw, this is starting to look a lot like Topological conjugacy) risk (talk) 10:35, 15 March 2010 (UTC)
Let h(t) = g(f⁻¹(t)), defined on the image of f. Then g(f⁻¹(t)) is always a singleton (otherwise, there are x, y such that f(x) = f(y) but g(x) ≠ g(y), which is a contradiction). So we'll treat h as a real function. Then g = h∘f, and clearly h is increasing (strictly, if the strict-inequality version of the property also holds). -- Meni Rosenfeld (talk) 11:37, 15 March 2010 (UTC)
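A concrete instance of the construction (not from the thread): take f(x) = x³ and g(x) = x on ℝ. They order every pair of points identically, and h(t) = t^(1/3), defined on the image of f, satisfies g = h∘f with h strictly increasing.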

Sparse Graph Colorings

I'm trying to find examples of sparse graphs with relatively high chromatic number. More specifically, graphs with chromatic number 6, 7, or 8, which have the property that their total number of edges is less than four times their number of vertices (E<4V). The obvious examples are K6 and K7, but I'm having trouble finding others. Could you point me towards any resources? Black Carrot (talk) 16:37, 14 March 2010 (UTC)

The obvious thing to do is to take a graph of suitably high chromatic number and add loads of isolated vertices (or a long path, if you want connectivity) to bring down E/V. Algebraist 17:43, 14 March 2010 (UTC)

I was thinking more along the lines of minimal graphs, where removing an edge reduces the chromatic number. (And yes, connected.) Black Carrot (talk) 18:46, 14 March 2010 (UTC)
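If it is of any use, both of those properties (needing χ colors but dropping below χ after deleting any single edge, together with E < 4V) can be checked by brute force for very small candidates. A minimal sketch in plain Python; it is exponential in the number of vertices, so only practical for graphs of roughly a dozen vertices:

from itertools import product

def is_colorable(n_vertices, edges, k):
    # Brute force: does some assignment of k colors make every edge bichromatic?
    for coloring in product(range(k), repeat=n_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

def is_edge_critical(n_vertices, edges, chi):
    # True if the graph needs chi colors, yet deleting any one edge
    # makes it (chi - 1)-colorable.  This also forces the chromatic
    # number to be exactly chi.
    if is_colorable(n_vertices, edges, chi - 1):
        return False
    return all(
        is_colorable(n_vertices, [f for f in edges if f != e], chi - 1)
        for e in edges
    )

# Example: K7 has chromatic number 7, with 7 vertices and 21 edges, so E < 4V.
k7 = [(i, j) for i in range(7) for j in range(i + 1, 7)]
print(is_edge_critical(7, k7, 7))   # True
print(len(k7) < 4 * 7)              # True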

Integration

How would I integrate ∫ from a to b of √(1 − x²) dx (where a = −1 and b = 1)? I know it forms a half-circle and I could solve it really easily with just πr²/2, but I'm curious about a way that works for more than just circles. Thanks 68.76.147.34 (talk) 17:13, 14 March 2010 (UTC)

This kind of integral is usually done using substitution. x = sin θ works in this case. -- Meni Rosenfeld (talk) 17:24, 14 March 2010 (UTC)

Look at trigonometric substitution. Michael Hardy (talk) 18:56, 14 March 2010 (UTC)

OK, I followed it and it really helped to simplify things. But now I'm at another impasse. Using the example from the top, and the advice Meni Rosenfeld gave me:

∫ from −1 to 1 of √(1 − x²) dx
Substitute x = sin θ, dx = cos θ dθ, with θ running from −π/2 to π/2:
= ∫ √(1 − sin²θ) cos θ dθ
= ∫ cos θ · cos θ dθ = ∫ cos²θ dθ
And stuck again. How do you integrate cos²θ? 68.76.147.34 (talk) 23:58, 14 March 2010 (UTC)

Use cos²θ = (1 + cos 2θ)/2.--RDBury (talk) 00:20, 15 March 2010 (UTC)
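For completeness, carrying that identity through: ∫ cos²θ dθ from −π/2 to π/2 = ∫ (1 + cos 2θ)/2 dθ = [θ/2 + (sin 2θ)/4] evaluated from −π/2 to π/2 = π/2, which matches the area of the unit half-disc, πr²/2 with r = 1.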

L'Hopital vs. Landau

In a previous question by someone else (entitled L'Hopital's rule) there was mention that Landau notation (Big O notation) was superior to L'Hopital's rule in finding limits of indeterminate forms. I find this hard to comprehend, given that Big O notation throws away constants. E.g. for f(x) = 3e^x − 3 = O(e^x) and g(x) = 2e^x − 2 = O(e^x), both lim f(x)/g(x) = 3/2 as x → ∞ and lim f(x)/g(x) = 3/2 as x → 0 (both by L'Hopital's rule and by inspection), and both values depend critically on the very multiplicative constants Big O notation explicitly discards. Is it true that Big O notation is more applicable in finding limits of indeterminate forms than L'Hopital's rule, and if so, how does one accomplish it? Yes, I'm aware that the ratio f(x)/g(x) can be trivially simplified. I apologize for not having time to find a more complicated example. My key point is that Big O notation discards multiplicative constants, which may play a role in the final limit. I.e. given that it fails in this trivial example, how can you be confident that it would work in a more complicated one? -- 174.21.235.250 (talk) 19:49, 14 March 2010 (UTC)

You have a choice of what to use the Big/little O notation for. Taking for example f(x) = 3e^x − 3, it's O(e^x) when x → ∞. And it's also 3e^x + O(1). And 3e^x(1 + o(1)). In your example what you want is f(x) = 3e^x + O(1) and g(x) = 2e^x + O(1) when x → ∞. Then f(x)/g(x) = (3e^x + O(1))/(2e^x + O(1)) = 3/2 + O(e^−x) → 3/2. If you mess something up (absorb too many terms in the Landau notation) you'll get an ambiguous result - e.g., for f(x) = O(e^x) you get f(x)/g(x) = O(1). -- Meni Rosenfeld (talk) 20:03, 14 March 2010 (UTC)
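A quick machine check of that bookkeeping (assuming SymPy is available; not part of the original exchange):

import sympy as sp

x = sp.symbols('x')
f = 3*sp.exp(x) - 3
g = 2*sp.exp(x) - 2

print(sp.limit(f / sp.exp(x), x, sp.oo))   # 3, the leading coefficient of f
print(sp.simplify(f - 3*sp.exp(x)))        # -3, so the remainder really is O(1)
print(sp.limit(f / g, x, sp.oo))           # 3/2
print(sp.limit(f / g, x, 0))               # 3/2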