Wikipedia:Reference desk/Archives/Mathematics/2010 October 3

Mathematics desk


October 3

k = x + 1/x

Why is this equation so ridiculously difficult to solve using standard high school algebra methods? I can only seem to solve it by running it through Mathematica. John Riemann Soong (talk) 00:58, 3 October 2010 (UTC)

You're solving for x, right? If you multiply through by x you get x^2 - kx + 1 = 0 and then you can hit it with the quadratic formula. Rckrone (talk) 01:27, 3 October 2010 (UTC)
I just realised this. I actually started from the lens equation (k = S/f - 2, where S and f are known constants) and did a lot of rearranging to get k = d_i/d_o + d_o/d_i. Must have been too fatigued to realise that having a non-constant term on the other side would be a good thing. John Riemann Soong (talk) 01:55, 3 October 2010 (UTC)
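The rearrangement above is easy to check numerically; here is a minimal Python sketch of it (the value k = 2.5 below is just an illustrative choice, not from the thread):

    import math

    def solve_for_x(k):
        """Solve k = x + 1/x: multiplying through by x gives
        x^2 - k*x + 1 = 0, then apply the quadratic formula."""
        disc = k * k - 4            # discriminant; negative when |k| < 2
        if disc < 0:
            return []               # no real solutions in that case
        root = math.sqrt(disc)
        return [(k + root) / 2, (k - root) / 2]

    # k = 2.5 is an arbitrary example; note the two roots are reciprocals.
    for x in solve_for_x(2.5):
        print(x, x + 1 / x)         # prints 2.0 and 0.5, each giving 2.5 back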

Writing a puzzle

Good evening. I'm trying to write a puzzle but I don't know how to state it right. It's going to start like "Find the greatest value of <some f(x)>", and the solution should require changing f(x) somehow into a polynomial function g(x) (I think g(x) might be x^<some large constant>, but I'm not sure on that) that can be evaluated by taking the nth derivative of g(x), such that that derivative is a constant (which is also the maximum of f(x)) and the (n+1)th derivative (and every subsequent one) is zero. I saw this problem way back when I was studying calculus in high school and I want to give it to my students. The problem is I can't quite think of how it goes, namely what f(x) to use and how it is turned into g(x). I do remember that there was some way to discourage people from using the derivative test for extrema, hence my guess that g(x) = x^<some large constant>. Can anyone think of a way to present this? Many thanks. PS: It's late and I'm not exactly at my best right now, so this won't be the most coherent (or concise). If there are questions, just leave them here and I'll get back to you in the morning. Thanks again. --Smith —Preceding unsigned comment added by 24.92.78.167 (talk) 03:29, 3 October 2010 (UTC)

I like "find the minimum of x + 1/x for x > 0". The problem above reminded me of that. 67.122.209.115 (talk) 04:02, 3 October 2010 (UTC)[reply]
(edit conflict) This doesn't exactly sound like what you're looking for, but the easiest way to find the maximum value of, say, f(x) = (2x^2 - x^3)/x is to factor the numerator and cancel the common factor of x so you get the polynomial function g(x) = 2x - x^2, which is easy to maximize. [Of course, in this example f(x) = g(x) except at x = 0, where f is undefined.] If you try to use the derivative test on f(x) directly, you have to use the quotient rule and you'll get an ugly mess. Is this example anywhere close to being on the right track? —Bkell (talk) 04:03, 3 October 2010 (UTC)
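A short sympy sketch of that contrast, using the concrete example above (cancelling first versus differentiating the quotient directly):

    import sympy as sp

    x = sp.symbols('x')
    f = (2 * x**2 - x**3) / x      # the rational function from the example above
    g = sp.cancel(f)               # cancel the common factor of x: -x**2 + 2*x
    print(g)

    # The polynomial is easy to maximize: g'(x) = 2 - 2x = 0 at x = 1, value 1.
    crit = sp.solve(sp.diff(g, x), x)
    print(crit, g.subs(x, crit[0]))

    # Differentiating f directly yields the un-simplified quotient-rule mess:
    print(sp.diff(f, x))           # (4*x - 3*x**2)/x - (2*x**2 - x**3)/x**2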
You could always compose your polynomial g(x) with a horrific but monotonically increasing function h(x) to get f(x) = h(g(x)). Say, h(x) = e^x log(x) for x > 0 and g(x) = 2x - x^2, giving the deceptively awful f(x) = e^(2x - x^2) log(2x - x^2). The largest value of the inner function 2x - x^2 is 1 [its derivative 2 - 2x vanishes at x = 1, giving 2(1) - 1^2 = 1], so the answer is e^1 log(1) = 0. Season the polynomial g(x) and the level of horror of f(x) to taste. 67.158.43.41 (talk) 16:34, 6 October 2010 (UTC)
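A quick numerical check of that construction, sampling the interval (0, 2) where the inner function 2x - x^2 is positive:

    import math

    # f(x) = h(g(x)) with h(t) = e^t * log(t) and g(x) = 2*x - x**2;
    # h is increasing for t > 0 since h'(t) = e^t * (log(t) + 1/t) > 0.
    def f(x):
        t = 2 * x - x * x               # positive only for 0 < x < 2
        return math.exp(t) * math.log(t)

    xs = [i / 1000 for i in range(1, 2000)]   # samples of (0, 2)
    best = max(xs, key=f)
    print(best, f(best))                      # 1.0 0.0: max is e^1 * log(1) = 0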

Continued Fraction Factorisation Method - What is the point?

I've been learning ways of factorising numbers through the congruence of squares method, and at the moment I'm looking at the continued fraction method, whereby you use the convergents of a continued fraction expansion of sqrt(N) in order to find a congruence of squares modulo N. I think http://www.math.dartmouth.edu/~carlp/PDF/implementation.pdf (section 2) has one of the few good descriptions of the process I can find online.

However, what I'm struggling to see is the actual advantage of using these convergents: what do we gain from them? Does it somehow make it easier to find the congruence of squares? Does it typically make it easier to find squares congruent to B-smooth numbers for some small factor base of primes B, for example? I can see the method is closely related to Dixon's method, but as I said I fail to see why the CFRAC method is actually advantageous. Does it perhaps typically find that the A_n^2 (using the terminology of the linked CFRAC explanation) are congruent to very small numbers modulo N, in which case they are more likely to factorise over a set of small primes? Or is it less likely to lead to a trivial factorisation, maybe?

I'd really appreciate any enlightenment - I can see why Dixon's method works, but not how this development helps. Ta very much in advance! Mathmos6 (talk) 17:14, 3 October 2010 (UTC)

Congruence of squares basically requires that you find a bunch of x's so that x^2 mod N is small relative to N. The different variations of the method amount to different ways of generating the x's. There is generally a trade-off between the simplicity of the method and how quickly it works; the continued fraction method falls somewhere in the middle. --RDBury (talk) 08:52, 5 October 2010 (UTC)
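To make that concrete, here is a rough Python sketch using the standard recurrence for the continued fraction of sqrt(N) (the modulus N = 1037 = 17 * 61 is just a toy composite): the squared convergent numerators, reduced to signed residues mod N, all stay below about 2*sqrt(N), so they are far more likely to be smooth over a small factor base than random residues near N would be.

    from math import isqrt

    def cfrac_square_residues(N, count):
        """Convergent numerators A_n of the continued fraction of sqrt(N),
        together with A_n^2 mod N. The classical identity
        A_{n-1}^2 - N*B_{n-1}^2 = (-1)^n * Q_n with 0 < Q_n < 2*sqrt(N)
        guarantees the signed residue is small."""
        a0 = isqrt(N)
        m, d, a = 0, 1, a0
        A_prev, A = 1, a0 % N                  # numerators kept mod N
        out = [(A, A * A % N)]
        for _ in range(count - 1):
            m = d * a - m
            d = (N - m * m) // d
            a = (a0 + m) // d
            A_prev, A = A, (a * A + A_prev) % N
            out.append((A, A * A % N))
        return out

    N = 1037                                   # 17 * 61, a toy composite
    for A, r in cfrac_square_residues(N, 8):
        signed = r if r <= N - r else r - N    # small signed representative
        print(A, signed)                       # all |signed| < 2*sqrt(1037) ~ 64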