Wikipedia:Reference desk/Archives/Mathematics/2010 September 26

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 26

Limit of sum

Hi, is it right that there is no number x such that the limit of the sum of x + x + x + ... as the number of terms goes to infinity is a finite number? If there isn't, is there some topic in maths that defines a new entity x such that this is true and then examines its properties, or does the whole thing just not make any sense? —Preceding unsigned comment added by 86.184.25.102 (talk) 13:17, 26 September 2010 (UTC)[reply]

The sum has a finite limit for x = 0. --Wrongfilter (talk) 13:22, 26 September 2010 (UTC)[reply]
Sorry, in case it was not clear, I meant finite limit but not zero. Does that alter your answer? 86.184.25.102 (talk) 13:53, 26 September 2010 (UTC).[reply]
I think the hyperreal number $1/\omega$, where $\omega$ is an infinite hyperinteger, satisfies this. Alternatively, with Ramanujan summation and other nonstandard summations, some divergent sums have a finite value, for example 1 + 2 + 3 + 4 + … = -1/12. -- Meni Rosenfeld (talk) 14:11, 26 September 2010 (UTC)[reply]
The sum can make sense in non-standard analysis, so long as you are careful about what you mean by "...". For example, in NSA the hyperreal numbers contain "illimited" natural numbers, larger than every "standard" natural number. If n is illimited and you add 1/n to itself n times, you get 1. Tinfoilcat (talk) 18:30, 26 September 2010 (UTC)[reply]
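To make that example concrete (a sketch in LaTeX; $N$ denotes an illimited hypernatural number):

$$\underbrace{\frac{1}{N} + \frac{1}{N} + \cdots + \frac{1}{N}}_{N\ \text{terms}} = N \cdot \frac{1}{N} = 1,$$

so taking $x = 1/N$ makes the sum of $N$ copies of $x$ equal to the finite, nonzero value 1.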
Look at Infinite series. Some infinite series are divergent. Your question seems to relate to convergent series. Dolphin (t) 06:06, 27 September 2010 (UTC)[reply]
There are some interesting articles related to summation of Grandi's series 1 - 1 + 1 - 1 + ... 67.122.209.115 (talk) 18:27, 27 September 2010 (UTC)[reply]
I can't believe I missed this earlier: 1 + 1 + 1 + 1 + …. -- Meni Rosenfeld (talk) 19:34, 27 September 2010 (UTC)[reply]
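For reference, the value discussed in that linked article comes from zeta function regularization (a standard fact about that series, not stated elsewhere in this thread):

$$1 + 1 + 1 + \cdots = \zeta(0) = -\tfrac{1}{2},$$

so even the case $x = 1$ can be assigned a finite value by a suitable nonstandard summation method.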

Function

What would be an example of a function that has a horizontal asymptote at $y = 2$, vertical asymptotes at $x = 3$ and $x = -3$, and a removable discontinuity at $x = 5$? So far, I have figured out the asymptotes part, but not the removable discontinuity part. (So far, I'm at $\frac{2x^2 + x + 1}{x^2 - 9}$) 72.73.193.227 (talk) 19:19, 26 September 2010 (UTC)[reply]

The function $f(x) = \frac{x - 5}{x - 5}$ has a removable discontinuity at $x = 5$ (under one definition of "removable discontinuity," at least). Can you see how to adapt this example? —Bkell (talk) 20:07, 26 September 2010 (UTC)[reply]
For the vertical asymptotes you need ƒ(x) → ∞ as x → ±3. Such a function would be:
$$f(x) = \frac{1}{(x - 3)(x + 3)} = \frac{1}{x^2 - 9}.$$
Next, you want a horizontal asymptote of y = 2, i.e. ƒ(x) → 2 as x → ∞. Since ƒ(x) → 0 as x → ∞, let's just add on 2:
$$f(x) = 2 + \frac{1}{x^2 - 9} = \frac{2x^2 - 17}{x^2 - 9}.$$
As for a removable singularity, well, multiply the numerator and denominator by x − 5. This gives:
$$f(x) = \frac{(2x^2 - 17)(x - 5)}{(x^2 - 9)(x - 5)}.$$
There is a more elegant solution. Notice that
$$\lim_{x \to \pm\infty} \frac{2x^2 + x + 1}{x^2 - 9} = 2,$$ that is, your original function already has the right asymptotes,
and so another example of an answer would be
$$f(x) = \frac{(2x^2 + x + 1)(x - 5)}{(x^2 - 9)(x - 5)}.$$
Hope that helps. Fly by Night (talk) 13:28, 27 September 2010 (UTC)[reply]
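A quick numerical sanity check in Python (a sketch; f below is the first constructed example, with the factor x − 5 deliberately left uncancelled):

import numpy as np

def f(x):
    # (2x^2 - 17)(x - 5) / ((x^2 - 9)(x - 5)), left unsimplified so that
    # x = 5 really is a 0/0 point rather than being cancelled away.
    return (2*x**2 - 17) * (x - 5) / ((x**2 - 9) * (x - 5))

print(f(1e8))                      # ~2.0: horizontal asymptote y = 2
print(f(3 + 1e-9), f(-3 - 1e-9))   # very large: vertical asymptotes at x = 3 and x = -3
print(f(4.999999), f(5.000001))    # both ~33/16 = 2.0625: the two-sided limit exists at x = 5
print(f(np.float64(5.0)))          # nan (0/0): the removable discontinuity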

Numerical precision limitations on matrix inverses

I am performing an analysis that requires me to calculate the inverse of many large covariance matrices (e.g. > 10000x10000, see kriging). I know by theoretical considerations on their construction that the matrices should be full rank, but with so many rows, numerical precision starts to be a problem. Sometimes the inverses blow up. I can predict when this will happen by looking at the magnitude of the condition number estimator, but I don't have a good solution for what to do when it occurs. Are there recommended approaches for dealing with matrix inverses when near the limits of numerical precision, so as to avoid runaway amplification of round-off errors? I've already tried a variety of matrix decompositions without much success. Any suggestions? Dragons flight (talk) 19:29, 26 September 2010 (UTC)[reply]

Have you tried to use the Newton-Raphson algorithm for matrix inversion:
$$B_{n+1} = B_n (2I - A B_n)$$
A is the matrix you want to invert and the B_n are the successive approximants for the inverse. I've read that this is less prone to numerical errors (but I don't have experience with this).
Some time ago I had problems with excessive loss of significant digits when using Mathematica to solve large numbers of equations, but that was due to Mathematica using too conservative an estimate for the accuracy. I could fix that artificial problem by giving a false initial accuracy for the input. Count Iblis (talk) 20:07, 26 September 2010 (UTC)[reply]
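A minimal NumPy sketch of that iteration (the starting guess B_0 = A^T/(||A||_1 ||A||_inf) is a standard choice known to guarantee convergence for nonsingular A; it is an assumption, not specified in the comment above):

import numpy as np

def newton_inverse(A, tol=1e-10, max_iter=100):
    # Newton-Raphson (Newton-Schulz) iteration for the inverse:
    #   B_{n+1} = B_n (2I - A B_n),
    # which converges quadratically once ||I - A B_n|| < 1.
    n = A.shape[0]
    I = np.eye(n)
    # Standard safe starting guess: makes the spectral radius of I - A B_0 less than 1.
    B = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ B
        if np.linalg.norm(R, 'fro') < tol:
            break
        B = B @ (I + R)   # algebraically equal to B (2I - A B)
    return B

Note that each step costs two n-by-n matrix multiplications, so for a 10000x10000 matrix this is considerably more expensive per iteration than a single LU factorization.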
Thanks for the idea. Unfortunately, this doesn't seem any more stable than the approach I had been trying. Right now I've got an approach that sort of works, involving estimating the information I need by taking a large number of inverses on partial submatrices. However, this hack is computationally intense and I would still be interested if there are other suggestions. Dragons flight (talk) 00:33, 27 September 2010 (UTC)[reply]

Could this help?

McCarn, Dan W.; Carr, James R. (1992), "Influence of numerical precision and equation solution algorithm on computation of kriging weights", Computers & Geosciences, 18 (9): 1127–1167, doi:10.1016/0098-3004(92)90037-R, ISSN 0098-3004

It suggests using LU decomposition instead of Gaussian elimination. It also mentions that Gaussian elimination can be improved through iteration; I'm not sure how (I only looked at the abstract). You are already using double precision, I presume. If not, switching to it is probably the simplest fix, unless the resource usage is completely out of reach for your problem. 10k×10k isn't that large on today's computers, though. 67.119.2.101 (talk) 06:41, 27 September 2010 (UTC)[reply]
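In SciPy terms, the LU route looks like this (a sketch; for kriging one typically needs A⁻¹b for various right-hand sides b, so factoring once and solving repeatedly avoids forming the explicit inverse at all):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 1000
A = np.random.rand(n, n) + n * np.eye(n)   # stand-in for a covariance matrix
b = np.random.rand(n)

lu, piv = lu_factor(A)       # LU factorization with partial pivoting (LAPACK dgetrf)
x = lu_solve((lu, piv), b)   # triangular solves (LAPACK dgetrs); reusable for many b
print(np.allclose(A @ x, b))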

There is really no substitute for using sufficient arithmetic precision. Bo Jacoby (talk) 07:21, 27 September 2010 (UTC).[reply]
Here is something about iterative refinement. It looks interesting. I also notice the quadruple precision article mentions libc6 has a software implementation of 128 bit FP arithmetic. 67.119.2.101 (talk) 08:36, 27 September 2010 (UTC)[reply]
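The basic refinement loop can be sketched as follows (assuming SciPy's LU as above; computing the residual in np.longdouble is one cheap way to get extra precision, and on most x86 builds np.longdouble is the 80-bit extended type):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refined_solve(A, b, iters=3):
    # Iterative refinement: factor and solve in double precision, then
    # correct x using a residual computed in extended precision.
    lu, piv = lu_factor(A)
    x = lu_solve((lu, piv), b)
    A_ext = A.astype(np.longdouble)
    b_ext = b.astype(np.longdouble)
    for _ in range(iters):
        r = b_ext - A_ext @ x                         # residual in extended precision
        x = x + lu_solve((lu, piv), r.astype(np.float64))
    return x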
As covariance matrices are symmetric, surely you should be able to use the Cholesky decomposition rather than LU decomposition? That's what LAPACK does — see [1], e.g. routine DPOTRI. If that's not good enough I'd start to wonder if the issue is in your specification of the problem — that's often the case in statistics when matrices are numerically close to being rank deficient. --Qwfp (talk) 13:18, 27 September 2010 (UTC)[reply]
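In SciPy the Cholesky route is (a sketch; cho_factor/cho_solve wrap the LAPACK routines dpotrf/dpotrs, and again solve systems rather than form the inverse):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

n = 500
X = np.random.rand(n, n)
A = X @ X.T + n * np.eye(n)   # stand-in symmetric positive definite matrix
b = np.random.rand(n)

c, low = cho_factor(A)        # LAPACK dpotrf; raises LinAlgError if A is not
                              # positive definite to working precision
x = cho_solve((c, low), b)    # LAPACK dpotrs
print(np.allclose(A @ x, b))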
The Cholesky decomposition routine starts throwing error messages (essentially saying the matrix is not positive definite within double precision) at essentially the same scale at which the other routines start blowing up (despite working fine on smaller subsets). At this point I probably need to find a way to either simulate higher-precision math or redesign the problem to avoid such ill-conditioned matrices. However, I will note that I found this which looks potentially interesting. Dragons flight (talk) 16:18, 27 September 2010 (UTC)[reply]
If you're using 64-bit double precision on an x86, note that x86s support 80-bit extended precision with much less slowdown than you'd get from a software emulation. There are some IBM workstations (Power architecture) that support 128-bit extended precision, but those aren't so widespread. Software emulation may be OK with the method in the Demmel et al. article that I linked to, because the corrective step (which needs the extended precision) is O(n^2) in the number of rows, while the inversion step (using ordinary doubles) is O(n^3). So for very large matrices the inversion will use the bulk of the computing time, even if the correction step's arithmetic is simulated. 67.122.209.115 (talk) 17:36, 27 September 2010 (UTC)[reply]


Perhaps integer arithmetic is also worth trying. You then multiply the matrix by some large number and round the matrix elements to integers. Then you compute the inverse modulo some not-too-small integer p (take it too small and the inverse won't exist). Then you do successive Hensel lifting steps to obtain the inverse modulo p^r; this does not require computing any further inverses. If you keep on using the original inverse modulo p, the exponent r will increase linearly; by using the newly obtained inverse from the previous Hensel lifting step, the exponent r will double in each step. Now, this process will only yield integers for the matrix elements, while the real inverse you seek will have rational entries.

To get the desired rational numbers from integers modulo some large n = p^r, you need to do a so-called "rational reconstruction", which involves running an extended Euclidean algorithm on each integer matrix element and n. A number m mod n can be (uniquely, when such a representation exists) written as p q^(-1) mod n with p and q smaller than sqrt(n), and the Euclidean algorithm will yield the p and q. This also tells you when to stop the Hensel lifting steps: you need to continue until p^r is larger than the expected square of the numerators or denominators in the inverse. Count Iblis (talk) 21:09, 27 September 2010 (UTC)[reply]
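A sketch of both steps in Python (SymPy's Matrix.inv_mod supplies the initial inverse mod p; the lifting update B <- B(2I - AB) is the same Newton step as above, taken modulo p^(2^k); the small worked example at the end is illustrative, not from the thread):

from math import isqrt
from sympy import Matrix

def lift_inverse(A, p, r):
    # Hensel lifting: if B = A^(-1) (mod m), then B(2I - AB) = A^(-1) (mod m^2),
    # so the modulus squares (the exponent doubles) at each step.
    B = A.inv_mod(p)                 # inverse mod p; p must not divide det(A)
    I = Matrix.eye(A.rows)
    m = p
    while m < p**r:
        m = m * m
        B = (B * (2*I - A*B)).applyfunc(lambda e: e % m)
    return B.applyfunc(lambda e: e % p**r), p**r

def rational_reconstruct(a, n):
    # Extended Euclid on (n, a), stopped at the first remainder below sqrt(n):
    # returns (num, den) with a = num * den^(-1) (mod n) when both are small.
    r0, r1 = n, a % n
    t0, t1 = 0, 1
    while r1 > isqrt(n):
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return r1, t1   # numerator, denominator (sign may need normalizing)

A = Matrix([[4, 1], [2, 3]])   # det = 10, so any p coprime to 10 works
B, n = lift_inverse(A, 7, 8)   # inverse of A modulo 7^8
print(rational_reconstruct(B[0, 0], n))   # (3, 10): entry (0,0) of A^(-1) is 3/10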