Archive 1

Link bugs

Why are some of the free links without apostrophes also closed to editing?

Apparently because there are newlines in them in the source --drj

So now Catastrophic loss of significance is fixed. rchase — Preceding unsigned comment added by Rchase (talkcontribs) 22:02, 15 August 2001 (UTC)

Reversion of recent anonymous contributions

The recently added text has quite a few flaws:

  1. The first sentence is almost a tautology, saying "numerical analysis is about numerical solutions". At that point the wording "numerical solution" has not been defined; it is only dealt with later.
  2. The wording "it deals with" shows up all the time.
  3. The introductory paragraph went into way too much detail; it is meant to be a short overview.
  4. The part in the ==General introduction== goes off to talk about optimization, and again goes into unnecessary detail for this stage.

As such, I found the contributions to be of overall poor style, without adding much of value, so I reverted the thing. Oleg Alexandrov 19:26, 31 July 2005 (UTC)

Moving the software out of the page

I moved the list of software out of the page, to a separate article List of numerical analysis software. This is for several reasons.

First, the list was outgrowing the page. I felt that the article was losing its main point as a science topic, and becoming an application topic. Second, it hindered the freedom to add rarely used software, which would be non-notable in the main article. Third, this way, we can add much information to the list of software, maybe maintain it as a table (showing which platform they run on, licence, whether they are free/open source, how large amounts of data they can handle, etc.). I hope no one would oppose this move. Greenleaf 07:58, 9 September 2005 (UTC)

I think that shortening the software section in this article and moving most of it to a new article was a good idea. Thanks. Oleg Alexandrov 15:18, 9 September 2005 (UTC)

approximate in def vs. exact in direct methods

The definition and the article seem to be inconsistent.

The definition reads:

"Numerical analysis is the study of approximate methods for the problems of continuous mathematics ..."

The article reads:

"Some problems can be solved exactly by an algorithm. These algorithms are called direct methods."

If the methods are approximate, how can there be an exact solution?

--Jtir 18:18, 29 September 2006 (UTC)

Maybe it has to do with whether real numbers or floating point numbers are being used. --Jtir 18:25, 29 September 2006 (UTC)

Should examples replace informative exposition?

(copied from User talk:Loisel) --Jtir 18:34, 30 September 2006 (UTC)

Hi, adding examples is welcome, but could you do it without zapping some excellent exposition? Perhaps you could put your examples in subsections after the introductory exposition, naming the subsections Example. --Jtir 20:30, 29 September 2006 (UTC)

I am being bold WP:BB, which is policy. My examples are meant to replace the vague exposition that precedes them. The style I intend is to start with an example, and end with some sort of general statement that resembles the previous text. I also intend to do that to the rest of the article. Loisel 17:39, 30 September 2006 (UTC)
Actually, WP:BB is a guideline that reads in part "avoid deleting information wherever possible." I found the information in the text you are replacing very useful. I suggest you merge, not replace.
And the talk page is the first place to ask for opinions of editors when you "intend to do that to the rest of the article".
--Jtir 18:34, 30 September 2006 (UTC)

While the current changes added good material, I am not happy with the removal of much of the previous text. That text was more accessible, I think, and did not bog the reader down in details as much. Also, the first part of the article is now written in a totally different format than the latter part (which is old text, and which looks nicer). I would vote to roll back most of the current changes, or integrate them into the old text (without too much detail, which could belong in separate articles and be linked to from relevant sections). Oleg Alexandrov (talk) 19:28, 30 September 2006 (UTC)

Feel free to edit, merge or delete yourselves. This is the Wikipedia. I added lots of examples because I frankly think that this article was useless to novices and experts alike. It was useless to novices because none of the concepts were explained in any way, and it was useless to experts because all the stuff was trivial. Loisel 20:39, 30 September 2006 (UTC)

Maybe an alternative would be to put the examples in sidebars, like this.

Solving 3x^3 + 4 = 28 directly:

  3x^3 = 24   (subtracted 4 from each side)
  x^3 = 8     (divided by 3 on each side)
  x = 2       (cube root on each side)

Loisel 21:36, 30 September 2006 (UTC)

Old history removed for now

I removed this for now, it could use some work.

History

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.

Loisel 22:19, 2 October 2006 (UTC)

def for the lead sentence

"The overall goal is the design and analysis of techniques to give approximate solutions to hard problems."
This would make a good def for the lead sentence. I like the phrase "approximate solutions to hard problems". I'm not sure what could be done to avoid a double occurrence of analysis. --Jtir 19:35, 3 October 2006 (UTC)

definition of numerical analysis discussion copied from Talk:Mathematical analysis

(This was copied after my ill-fated attempt to define numerical analysis at Mathematical analysis.) --Jtir 19:32, 29 September 2006 (UTC)

The definition read:

"Numerical analysis, the study of algorithms for approximating the problems of continuous mathematics."

It now reads:

"Numerical analysis solves problems of continuous mathematics by iteratively computing approximations until they converge to a numerical solution."

My comments:

  • It is now a declarative sentence.
  • I didn't understand the phrase "approximating the problems", so I replaced it with the "solving problems" formulation.
  • It now identifies both the computational and numerical aspects of the subject.
  • The word "algorithm" got dropped. Maybe it could be worked in.
  • As I read somewhere on WP, some numerical analysts do not actually write or run computational programs themselves. However, running programs is the ultimate objective of the subject. My definition doesn't quite capture these nuances. Maybe saying "analyzes and solves problems" would be broad enough.
  • The definition is longer.

--Jtir 17:20, 29 September 2006 (UTC)

The new definition is wrong. Numerical analysis is not merely about iterative methods. Fredrik Johansson 17:39, 29 September 2006 (UTC)
Thanks for pointing this out. The definition does not account for direct methods. I reverted. --Jtir 19:01, 29 September 2006 (UTC)
Please stick to the formulation at Numerical analysis: Numerical analysis [is] the study of approximate methods for the problems of continuous mathematics. Concerning the use of the word "algorithm": Take for example the Runge–Kutta methods. It is strange and in any case unconventional to call these methods "algorithms". (And, while R-K methods are iterative, it is downright wrong to state that they iteratively compute approximations until they converge.) Likewise, interpolation and extrapolation are methods, not by themselves algorithms.
—The preceding unsigned comment was added by Lambiam (talkcontribs) 18:57, 29 September 2006 (UTC)
There's no hope of getting a precise definition of the field. (This isn't unique to numerical analysis.) Philip J. Davis refused to supply one to me when asked, giving a lengthy reason why it couldn't and shouldn't be done. But, the appendix of Trefethen and Bau's Numerical Linear Algebra is instructive here, especially the comment by Trefethen (if memory serves) that G.E. is a highly unusual algorithm in many ways. The definition shouldn't make or break on that one case. Linear eqs. are a special case, after all. I have several older books on my shelves entitled Numerical Functional Analysis. That's where many eventual numerical analysts came from way back when (including Davis, literally drafted into the USAF to do numerical aerodynamics). To my mind, it's applied functional analysis, and I like that definition for nostalgic reasons even though it no longer captures the field as well as it could. The current definition in this article is essentially the one used by Trefethen, save that he omits the 'approximate'. I would be fine with 'iterative methods'. There aren't enough exceptions to worry about, even though the G.E. exception is so prominent. JJL 19:52, 3 October 2006 (UTC)

comments/questions re numerical examples

They look great. I like the graphs and pics, too. I interpret the ellipses at the ends of the decimal expansions to possibly mean an infinite expansion. How are the examples of iteration generated? --Jtir 17:25, 1 October 2006 (UTC)

The decimal expansion is finite but long, so I put some ellipses.

I built the iteration in the following way. I want to compute sqrt(2) using a fixed point iteration. One way to do this is to write a fixed point iteration for a polynomial whose roots include sqrt(2). When you have a fixed point iteration x[k+1]=f(x[k]), x[k+1] is to the right of x[k] iff f(x[k])>x[k]. So I had to pick a polynomial f(x) that lies above the line y=x near sqrt(2) and such that f(sqrt(2))=sqrt(2). This way, if you start the iteration from the left of sqrt(2), it will move to the right and converge to sqrt(2). If you start to the right of sqrt(2), it will also move to the right and therefore diverge.

Complications arise when you also want the coefficients to be nice. Basically, I took g(x)=x^2-2, which has root sqrt(2), and then I picked f(x)=g^2+x. Clearly g^2 also has sqrt(2) for a root, and so f(sqrt(2))=sqrt(2). Because g^2 >= 0, you have f(x) >= x. Loisel 18:45, 1 October 2006 (UTC)
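A minimal sketch of the iteration described above (my own illustration, not from the article; the 100-step cap and the isfinite guard are arbitrary choices, and it runs in Matlab or Octave):

    f = @(x) (x^2 - 2)^2 + x;            % f(sqrt(2)) = sqrt(2), and f(x) >= x everywhere
    for x0 = [1.4 1.42]                  % start just left, then just right, of sqrt(2)
        x = x0;
        for k = 1:100
            x = f(x);
            if ~isfinite(x), break, end  % stop once the iterates blow up
        end
        fprintf('x0 = %.2f -> x = %g after %d iterations\n', x0, x, k)
    end

Starting at 1.4 the iterates creep up toward sqrt(2); starting at 1.42 they run off to infinity, as described above.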

Thanks, that's excellent. I'd like to put in a footnote(?) about the method so that the example is verifiable. What software are you using to compute the examples? --Jtir 19:53, 1 October 2006 (UTC)
Ok. I use Matlab. Loisel 20:07, 1 October 2006 (UTC)
How do editors feel about using a Matlab snippet together with exposition similar to the above to ensure verifiability and avoid any charge that the example is original research? Obviously the value of the example would be enhanced by doing this anyway. --Jtir 20:49, 1 October 2006 (UTC)
Not a good idea. Normal people don't have access to Matlab, and those who do are able to write the Matlab snippet themselves from the description. You have to be very careful about including code in publications because code has a tendency to add a lot of clutter without really helping anyone. Loisel 21:26, 1 October 2006 (UTC)
I agree that too much code in the wrong places would constitute clutter. BTW, I "don't have access to Matlab".  ;-) --Jtir 20:15, 3 October 2006 (UTC)

I definitely like the direction the page is moving! A little Matlab here doesn't bother me. JJL 19:53, 3 October 2006 (UTC)

regarding software

there was a comment in there about matlab's speed advantage over octave. i changed it to be more generic so as to apply to all the packages listed. i also tacked on a reference to an actual speed comparison (it originally came from the wiki page for R; dunno where they got it).

i was about to add a comment about comparing non-numerical abilities. for instance, matlab's add-on packages make it very attractive for certain specialized applications. in another instance, the graphical layout abilities of R far outstrip any other of the listed products.

but then i stopped myself and began to wonder whether we should really be in the business of pitting one software package against another. that is, imho, wikipedia is not a software comparison service. ("just the facts, ma'am.") does it sound reasonable to take out the speed comparison, too? Lunch 23:23, 3 October 2006 (UTC)

That section is looking good. A general discussion of the ways in which NA SW may be compared (speed, accuracy, flexibility, reliability, etc) is useful IMO. The article could generically describe Matlab's features, say, but not name Matlab explicitly. Further, a general discussion would not become dated so quickly. Sourced comparisons from credible, independent (i.e. not any vendor) sources appear to meet the sourcing policy: "articles ... must refer only to facts, assertions, theories, ideas, claims, opinions, and arguments that have already been published by reputable publishers." The benchmark site in the article compares a variety of SW and explains the methodology. Hostname reports that the registrant is at the "UMH, Numerical Ecology Laboratory", Mons, BE. So it seems both credible and independent.
BTW, could something about handheld scientific calculators be added to the discussion? They aren't strictly SW, but the section title could be broadened slightly. What about slide rules?  ;-) --Jtir 00:18, 4 October 2006 (UTC)

Table layout

I changed one table to an alternate layout. I'm not sure I like it, but the previous one is a little cluttered, as Tompw points out. Loisel 22:08, 7 October 2006 (UTC)

I saw it and can only say it is hard to decide. The horizontal arrangement has space for still more images, though. :-) --Jtir 22:47, 7 October 2006 (UTC)

created the Fixed point iteration article

I have created the Fixed point iteration article.
It is a stub with a link to Fixed point (mathematics).
I also started the Talk:Fixed point iteration page. --Jtir 14:34, 8 October 2006 (UTC)

External links

I would like to propose the following external link to a relevant scientific paper:

Izquierdo, Luis R. and Polhill, J. Gary (2006). Is Your Model Susceptible to Floating-Point Errors?. Journal of Artificial Societies and Social Simulation 9(4) <[1]>. This paper provides a framework that highlights the features of computer models that make them especially vulnerable to floating-point errors, and suggests ways in which the impact of such errors can be mitigated. —Preceding unsigned comment added by 80.103.1.204 (talk) 17:05, 4 November 2006

Please explain your proposal. What makes this particular paper so important? There are thousands of papers on numerical analysis and we can't link to all of them. -- Jitse Niesen (talk) 02:34, 5 November 2006 (UTC)
I share the concern. What is so special about this paper? There are many great papers on the subject. JJL 03:21, 5 November 2006 (UTC)

comments on fixed point iteration footnote

That looks good. There are no subscripts on the variables in the fixed point iteration footnote.
The rationale for the second square seemed fairly subtle and thus would be invaluable in the footnote. --Jtir 20:40, 7 October 2006 (UTC)

OK, I just noticed that the footnote says it's an equation, not an iteration --Jtir 20:57, 7 October 2006 (UTC).
Normally, the fixed point iteration article should explain how to generate fixed point iterations from an equation. Basically, you put the subscript k on one side of the equation, and the subscript k+1 on the other side. You are then supposed to check whether the iteration converges. This particular example has peculiar convergence properties for didactic purposes. Loisel 21:39, 7 October 2006 (UTC)
OK on the FPI article for the general method. This is what you said before that I thought was especially interesting:
  • "if you start the iteration from the left of sqrt(2), it will move to the right and converge to sqrt(2). If you start to the right of sqrt(2), it will also move to the right and therefore diverge."
The example itself doesn't put it as directly as you did here.
The footnote could say something to the effect "for the purposes of the example we use [this] equation because of [that]". It may be an instance of stating the obvious per the MOS. --Jtir 22:01, 7 October 2006 (UTC)
I don't see any need for a Matlab snippet anymore. If a reader wanted to reproduce the example and compare his results with the example, what might he expect, supposing he might or might not be using Matlab? --Jtir 22:13, 7 October 2006 (UTC)
To reproduce results that require less than 30 iterations (say), you could use one of:
  • A piece of paper, a pen and lots of grunt work
  • A piece of paper, a pen and a calculator, and less grunt work
  • An Excel spreadsheet
For more than 100 iterations, you would need any programming language such as
  • Matlab (~3 lines)
  • C (~30 lines)
  • Octave etc. might work, but for 1000000 iterations it might take a while. Recall I mentioned that Matlab is a couple of orders of magnitude faster than Octave for loops.
Loisel 22:45, 7 October 2006 (UTC)
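For reference, the ~3-line Matlab version mentioned in the list above might look like this (a sketch; the iteration count is an arbitrary choice, and it runs in Octave too):

    x = 1.4;                                  % try 1.42 to see the divergence
    for k = 1:1000, x = (x^2 - 2)^2 + x; end
    fprintf('%.15f\n', x)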
Maybe Newton could do 30 iterations on paper ...:-) Would the user get the same decimal expansion at each iteration as shown in the examples? (This may be another one of those stating-the-obvious cases.) --Jtir 23:04, 7 October 2006 (UTC)
Not an identical expansion, especially for the slow iteration. Loisel 01:17, 8 October 2006 (UTC)
The new version is clear and concise. I like that you used the inequalities. This now makes a nice example of a footnote going into more detail than what can be said in the main article. --Jtir 22:29, 7 October 2006 (UTC)

Had an idea to say: if we had instead used the function f(x) = x^2 + x - 2, which also has sqrt(2) as a solution, such-n-such would have happened. --Jtir 22:42, 7 October 2006 (UTC)

That iteration is not as interesting in this case because it diverges regardless of where you start, for different reasons. Because f(x) < x to the left of sqrt(2) and f(x) > x to the right of sqrt(2), sqrt(2) is a repelling fixed point. Because the slope is too steep at -sqrt(2), that's also a repelling fixed point. Loisel 22:53, 7 October 2006 (UTC)
I think the contrast between the behavior of the two functions is what is interesting. That's perfect for the [slowly growing] footnote. :-) --Jtir 23:10, 7 October 2006 (UTC)
I've seen attractive graphs illustrating the progress of fixed point iteration. They would obviously not be suitable for a footnote. :-) Maybe for the Fixed point iteration article? --Jtir 00:39, 8 October 2006 (UTC)
The iteration x[k+1] = x[k]^2 + x[k] - 2 is indeed less interesting, but I'd say that makes it more proper to be used as an example here. The more interesting iteration x[k+1] = (x[k]^2 - 2)^2 + x[k] is also more complicated, possibly confusing, and artificial; of course, it's good for the (as yet) imaginary fixed point iteration article.
Another thing: Are you sure the divergent iteration is said to be "numerically unstable"? It's indeed unstable (in the sense of dynamical systems), but personally I wouldn't call it numerically unstable, which makes me doubt that other people do this. -- Jitse Niesen (talk) 03:45, 8 October 2006 (UTC)
Summary: x[k+1] = x[k]^2 + x[k] - 2 is a bad example, and x[k+1] = (x[k]^2 - 2)^2 + x[k] is much better and is numerically unstable.
The first time I was introduced to numerical instability, it was the following example. Note that y'(t) ≈ (y(t+h) - y(t-h))/(2h), which leads to the midpoint rule of ODE integration y[k+1] = y[k-1] + 2h f(y[k]) for the system y' = f(y), which has local truncation error O(h^3) and therefore (one hopes) global error O(h^2).
One can easily implement this for the well-conditioned problem y' = -y (the solution is a constant times e^(-t)). In MATLAB, it looks like
    f = @(y) -y;                          % right-hand side of y' = f(y)
    h = 0.01;                             % step size
    y = [1, 1 + h*f(1)];                  % y(0), plus an Euler step for y(h)
    ts = [0, h];
    for k = 2:round(10/h)
        y(k+1) = 2*h*f(y(k)) + y(k-1);    % midpoint rule y[k+1] = y[k-1] + 2h f(y[k])
        ts(k+1) = ts(k) + h;
    end
    plot(y);
This integrates from t=0 to t=10 using a stepsize of 0.01, which is apparently very small, considering how nice this problem is. Unfortunately, if you do this, you get this picture.
The numerical solution is completely wrong; that's numerical instability, and it's a very famous example (the midpoint rule is unconditionally unstable: it is unstable no matter how small h is, and no matter how nice the RHS is.) But why does it happen? The way to analyze this is to plug in f(y) = -y in the iteration; one obtains the three-term recurrence y[k+1] = -2h y[k] + y[k-1], whose general solution is y[k] = C1 r1^k + C2 r2^k where r1 = -h + sqrt(h^2+1) and r2 = -h - sqrt(h^2+1). Regardless of h, you will always have |r2| = h + sqrt(h^2+1) > 1, so the solution is guaranteed to explode when C2 is nonzero. With proper initial data, C2 will be small, but it will never be zero, or will become nonzero due to roundoff.
Another way of checking that this iteration is unstable is to view the ODE solver as a fixed point iteration for the function F(Y) = AY where A is the matrix [-2h 1; 1 0], whose spectral radius is h + sqrt(h^2+1) > 1. Stated in this way, suddenly it doesn't sound like numerical instability at all, but rather it sounds like stupidity on the part of the designer. Who would expect this to work? The problem is, once you know the magic trick, all numerical instabilities seem like stupidity.
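A quick sanity check of that spectral radius claim (a sketch; any small h > 0 behaves the same):

    h = 0.01;
    A = [-2*h 1; 1 0];            % companion matrix of y[k+1] = -2h y[k] + y[k-1]
    max(abs(eig(A)))              % equals h + sqrt(h^2+1), about 1.01 here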
The crucial elements are: you state an algorithm which, for some reason, you believe will work, and you try it and it works a little bit. If it starts failing mysteriously when you twiddle it around, it's numerically unstable.
It is very hard to state an algorithm which will sound natural to the layperson, and certainly giving the midpoint rule would be much more complex than what I did in this article, requiring an understanding of ordinary differential equations.
The iteration I gave has many virtues:
  • If you know the fixed point iteration, it's obvious that sqrt(2) is a fixed point, so it's obvious why you would try this.
  • If you don't know the fixed point iteration, we have an example, starting from 1.4, where it converges to the right thing, so this establishes the credibility of the iteration.
  • We have a perturbation (starting from 1.42) where it fails to converge. This is akin to taking slightly wrong initial data for the midpoint rule.
  • It can be understood by anyone in high school.
The main difficulty with the iteration x[k+1] = x[k]^2 + x[k] - 2 is that it becomes impossible to establish the credibility of this algorithm to laypeople: there is no iteration that will converge.
Loisel 07:52, 8 October 2006 (UTC)
The midpoint rule is an excellent example, we both agree on that, but it would be nice to have an easier example. I'm interested in your "definition" of numerical instability (when algorithms should work but don't), because it agrees with how I think that the term is used, but I could not find a source for that definition when doing a quick hack on numerical stability. Do you know any? The same goes for my question (are you sure the divergent iteration is said to be "numerically unstable"); that was meant to be short for asking for a reference. I think this is especially important for such a nebulous concept as numerical stability.
I don't think that x[k+1] = (x[k]^2 - 2)^2 + x[k] is that obvious. When designing a fixed point iteration, you start with the equation you want to solve: x^2 = 2. From this, one is naturally (IMHO) led to x[k+1] = 2/x[k], which unfortunately does not work. Then, one can try taking the average (which yields Newton) or rewriting the equation as x = x^2 + x - 2. From this point of view, the iteration x[k+1] = x[k]^2 + x[k] - 2 is more credible than x[k+1] = (x[k]^2 - 2)^2 + x[k]. The credibility of fixed point iteration can be established quickly by doing the Newton iteration before the unstable iteration.
In the end, which iteration is used is not that important to me. I'm more interested in trying whether we can simplify the example. If we move Newton before the unstable iteration, I think we can remove the first half of the paragraph "numerically unstable algorithm" (the part which starts from 1.4). What do you think of that?
By the way, I like your examples / illustrations and I think they're a huge improvement, even though some may question the place of examples in an encyclopaedia. -- Jitse Niesen (talk) 12:16, 8 October 2006 (UTC)
I just realized I never answered this. Indeed, your suggestions for alternate iterations sound reasonable. For a definition of stability, Conte and de Boor says "...instability describes the sensitivity of a numerical process for the calculation of f(x) from x to the inevitable rounding errors committed during its execution in finite precision arithmetic." Loisel 17:59, 4 December 2006 (UTC)

That is a fantastic graphic and it would be perfect for the Fixed point iteration article, as would the associated discussion of it.
I favor the current example because it is easy to understand while illustrating an important phenomenon. The NA article does not need to explain how the example was designed. That can be done in the FPI article. A footnote can briefly explain why the example was chosen instead of the "simpler" alternative and link to the FPI article where the two can be compared in detail.

--Jtir 15:50, 8 October 2006 (UTC)

Well, actually, lemme just add one thing. The spectral radius of A is approximately 1+ch for some c, so if you take N=1/h integration steps (to integrate over [0,1]), the explosion is (1+ch)^N=(1+c/N)^N, which converges to exp(c) as h goes to zero. So some people consider that the midpoint rule is actually stable, because the integration rule does not explode more as you make h smaller.
An example that does explode more the smaller you make h is this. You start with y'(t) ≈ (2y(t+h) + 3y(t) - 6y(t-h) + y(t-2h))/(6h) to obtain the iteration y[k+1] = (-3y[k] + 6y[k-1] - y[k-2] + 6h f(y[k]))/2. You apply it to the test problem y' = -y and you obtain a fixed point iteration whose matrix has spectral radius 2.69 + 2.04h (rounded to two digits). So if you take N = 1/h steps to integrate over [0,1], your blow-up will be (2.69 + 2.04h)^N > 2.69^N, which goes to infinity very fast as h goes to zero.
That's not the example I gave because more people are familiar with the midpoint rule. I guess with the midpoint rule, you can take h sufficiently small, and be sufficiently careful with the initial condition, that it becomes usable. It's just really hard to do. Some people will still say that the midpoint rule is unconditionally unstable. Loisel 18:41, 8 October 2006 (UTC)
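A quick numerical check of the 2.69 + 2.04h figure (a sketch that assumes the derivative formula above; the characteristic polynomial comes from applying the iteration to y' = -y):

    h = 0.01;
    % characteristic polynomial of 2y[k+1] + (3+6h)y[k] - 6y[k-1] + y[k-2] = 0
    max(abs(roots([2, 3+6*h, -6, 1])))    % about 2.69 + 2.04h for small h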

Deletion of Examples

I feel that the inclusion of the examples on this page does not fit the tone or style of an encyclopedia. I realize they were added to help clarify the technical nature of Numerical Analysis and also provide the layperson with a concrete example. However, you will note that none of the subcategories of Mathematics have examples in this style. Also, since each example is for a subtopic of numerical analysis, I feel they might be more appropriate on those subpages. I realize that the deletion of content is a sensitive issue, but I feel that this deletion will improve the quality of the article. CMaes 05:03, 4 December 2006 (UTC)

If you see above, there are at least four of us (Jtir, JJL, Jitse Niesen, and myself) against one of you (I'm not counting Oleg Alexandrov, because he hasn't clearly said anything.) Therefore I am reverting your deletion. Loisel 17:42, 4 December 2006 (UTC)

Please, let's not say it as 'against'! I can see both sides here. I think the examples are valuable. The direct vs. iterative example makes the point in a way that words can't for someone who hasn't had calculus. The well-posedness example is also esp. valuable, I think. Yet, I understand the stylistic concern, and I do think we should take some care to match the rest of the math. articles. Some such pages do have examples, e.g. Spectral method (where I think the example is more obtrusive than the ones here, which are side-barred). I suppose the question is, to what extent is this page a general area like Applied Mathematics as opposed to a specific idea like interpolation (which also has an example)? I don't know the answer. I'm open-minded to moving some of the examples to subpages, but I'm not too enthusiastic about it. JJL 18:15, 4 December 2006 (UTC)

Numerical Analysis using Excel

I have built Excel files for doing numerical analysis using some information from the wiki pages. See [2]

219.74.237.29 02:52, 21 December 2006 (UTC)

Inaccurate definition

"Numerical analysis is the study of algorithms for the problems of continuous mathematics (as distinguished from discrete mathematics) that do not have an explicit, direct solution." Gaussian elimination is one of the most prominent algorithms in numerical analysis. It has nothing to do with continuous mathematics and the solution is direct. That's just one example. I think that the inaccuracy can be characterized by the definition including only methods which suffer from truncation error. It neglects methods which suffer from only round-off error.

The definition is certainly not supposed to include only methods suffering from truncation error. Gaussian elimination solves Ax = b where x is a vector of reals. This is a problem in continuous mathematics (x is chosen from a continuum) and not in discrete mathematics. The phrase "that do not have an explicit, direct solution" serves to distinguish numerical analysis from symbolic mathematics. However, I agree that the word "direct" is ill-chosen since Gaussian elimination is called a "direct method" as opposed to an "iterative method" like Gauss-Seidel. Do you have a proposal for a better definition? -- Jitse Niesen (talk) 13:25, 29 November 2005 (UTC)
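To make "direct" concrete, here is a minimal sketch of Gaussian elimination (my own illustration, not from the discussion; no pivoting, so it assumes nonzero pivots):

    A = [2 1; 1 3]; b = [3; 5];          % example system
    n = length(b);
    for k = 1:n-1                        % forward elimination
        for j = k+1:n
            m = A(j,k)/A(k,k);           % multiplier that zeroes entry (j,k)
            A(j,k:n) = A(j,k:n) - m*A(k,k:n);
            b(j) = b(j) - m*b(k);
        end
    end
    x = zeros(n,1);                      % back substitution
    for k = n:-1:1
        x(k) = (b(k) - A(k,k+1:n)*x(k+1:n))/A(k,k);
    end
    x

It produces x in a fixed, finite number of arithmetic steps; running it on this 2-by-2 system gives x = [0.8; 1.4], the same as A\b.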
Would it work to substitute closed-form solution? Otherwise, how about just removing "direct"? Explicit already gets across part of the idea. - Taxman Talk 13:45, 29 November 2005 (UTC)
I thought about replacing it by "closed-form", but I fear that that is also open to misinterpretation: for instance, computing the sine is also part of numerical analysis, I think, but expressions like sin(1) are generally considered to be closed form. I hadn't thought about just removing "direct". I like that, so I changed the definition, expecting that people who disagree will say so. -- Jitse Niesen (talk) 14:47, 29 November 2005 (UTC)
Yes, but in that sense the definition is wrong anyway, because numerical analysis includes evaluating polynomials which are clearly closed form. So perhaps it should say numerical analysis techniques are particularly valuable for evaluating problems where there are no closed form solutions, but the methods can also be useful in situations where there are. I couldn't think of a way to say that, so I'll let you have a stab at it. - Taxman Talk 14:58, 29 November 2005 (UTC)
How about getting really basic? Something about how we want analytic solutions. If we can't get them, then we go to approximate (with some control over error), and eventually to numerical (as a last resort). Additionally, aren't we really talking about solving potentially sophisticated problems using ONLY basic arithmetic (computing)? I'm just an undergrad, so I'm not qualified to contribute to the article yet, but this is how I see things. DB Nov 29 2005
That is a good point, and I was surprised to see that it is not mentioned in the article. It should definitely be mentioned somewhere. As you probably know, analytic formulas are not always preferable; sometimes, they are so complicated that it's hard to make sense of them. The relation between approximate / asymptotic and numerical results is even less straightforward. -- Jitse Niesen (talk) 13:35, 30 November 2005 (UTC)
So it could be said that we seek a practical analytic solution first. Failing that, we might pursue an approximate solution within a radius of convergence perhaps. Assuming this does not yield a practical solution, we attempt to find a numerical solution to describe our problem.
We have to assume that an exact solution exists, and that the exact solution depends continuously on any data (i.e. that the problem is well-posed), otherwise we have no convergence theorems. So why not just define NA as: numerical analysis is the study of algorithms for obtaining approximations to the exact (but usually unknown) solutions of problems in continuous mathematics? --Des-Downunder 02:31, 3 September 2006 (UTC)
What about the point about numerical solutions are found using only basic arithmetic? Is that a valid point? DB, Nov 30 2005
I believed the "basic arithmetic" part needed to be removed. Left as it was, it sounded as if numerical analysts are merely adding and subtracting numbers -- don't get confused here... this is not what we do in the profession. -- The preceding unsigned comment was added by 24.18.243.107 (talk • contribs) 07:55, 7 January 2006 (UTC)
I don't see your point. The basic steps in all algorithms are merely adding and subtracting numbers, and using these steps we solve complicated problems, don't we? I also don't like your edit which says that numerical analysis "is largely related to functional analysis". As I see it, the two main theoretical disciplines of maths used by numerical analysis are matrix theory and approximation theory, and matrix theory is usually not taken as a part of functional analysis. So I reverted your edit. -- Jitse Niesen (talk) 13:08, 7 January 2006 (UTC)

I have to agree with the anon at a certain point. There are better things to say in the introduction I think than that numerical analysis is done using addition and multiplication. While that's true, you could say in the same way that computer science is the study of things which are sequences of zeros and ones. So I would cut that part. Oleg Alexandrov (talk) 17:48, 7 January 2006 (UTC)

I see what you mean. It is indeed not that important by itself. However, the point is that we should distinguish in some way between numerical analysis and symbolic computation (see the above discussion). As far as I know, nobody has come up with a better way to achieve this than referring to these basic arithmetic operations. -- Jitse Niesen (talk) 21:58, 7 January 2006 (UTC)
I also believe "basic arithmetic... addition" is misleading and should be removed from the definition. This page should describe the core aspects of Numerical Analysis and it should be careful to distinguish itself from Numerical Methods. What is more, including "basic arithmetic... addition" is not only misleading, but incorrect. I am at a loss for words here... I think this should be clear to anyone who manages this entry; there are entire journals devoted to NA and these articles usually never even mention or include algorithms. One of the definitions I find useful is "Numerical Analysis is Functional Analysis with an emphasis on algorithms". The preceding unsigned comment was added by ExamplePuzzle (talk • contribs).
I don't like either the basic arithmetic or the functional analysis descriptions. But I don't have any suggestions. Oleg Alexandrov (talk) 09:33, 11 January 2006 (UTC)
Hey. I would not vote to use my "functional analysis with. . ." description on this page either. I was just trying to move the focus away from including "basic arithmetic". I will go ahead and remove it and then let other users read over it and offer their comments (this includes you :D).

I like the definition "Numerical Analysis is Functional Analysis with an emphasis on algorithms" (supra).

The view of NA as a bunch of "numerical recipes," devoid of any underlying theory, which may or may not work for mysterious reasons, is a terrible misrepresentation of the subject. Some people may use numerical analysis this way, but that doesn't mean that the subject is as hit-and-miss as some think! (No offence to the authors of the book by Press et al.)

Most importantly, the only mathematically rigorous definitions of Numerical stability, consistency, and convergence, and the proof of the Lax Equivalence theorem, all require at least some functional analysis. (I hope to add something more comprehensive to the Numerical stability Discussion section soon.)

Des-Downunder 04:41, 22 January 2007 (UTC)

Writing about numerical analysis in terms of functional analysis would make the text harder to understand I think. Not everybody has a good functional analysis background. Oleg Alexandrov (talk) 05:08, 22 January 2007 (UTC)
I agree. I certainly think of the subject as being numerical functional analysis but that's hard to bring out at this level. The current definition (Lloyd Trefethen's, I think) works for me. Part of the issue is distinguishing between numerical analysis, computational science (and engineering), scientific computation, and any of several other terms now in vogue. I commented on this here Talk:Numerical_analysis#definition_of_numerical_analysis_discussion_copied_from_Talk:Mathematical_analysis also.JJL 18:30, 22 January 2007 (UTC)
I'm also wondering whether functional analysis covers all of numerical analysis. Granted, it's used heavily in approximation theory and numerical PDEs, but where does functional analysis enter when studying Gaussian elimination (which is indeed a special case, as JJL observed), Newton's method or the Runge-Kutta method? -- Jitse Niesen (talk) 00:11, 23 January 2007 (UTC)
Well, functional analysis is the key idea in the backward stability of G.E., and analyzing N.M. as a fixed point iteration also qualifies. Runge-Kutta is, from one point of view, approximations of Taylor's series. Not all of these are very deep applications of the subject, perhaps, but they still fit the mold. Many numerical methods fit the over-arching theme (as I view it) of projecting an infinite-dim. operator on an infinite-dim. space to finite spaces one can work with, which is a Banach space model. So, functional analysis is always there...but less visible, it seems, every year. JJL 00:58, 23 January 2007 (UTC)

To amplify JJL's point, the usual algorithm for the Cholesky decomposition is just Gaussian elimination, modified to exploit the special structure of the matrices involved. (Cholesky decomposition applies to Hermitian, positive semi-definite matrices.)

While the proof of the existence of Cholesky decompositions, for the positive definite case, mainly uses matrix theory, the extension of the proof to the positive semi-definite case relies heavily on functional analysis.
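For concreteness, a sketch of that structured elimination for the positive definite case (my own illustration; the function name is hypothetical, and it computes an upper triangular R with A = R'*R):

    function R = cholesky_sketch(A)      % A symmetric positive definite
        n = size(A,1);
        R = zeros(n);
        for k = 1:n
            % diagonal entry: what is left of A(k,k) after the first k-1 eliminations
            R(k,k) = sqrt(A(k,k) - R(1:k-1,k)'*R(1:k-1,k));
            for j = k+1:n
                R(k,j) = (A(k,j) - R(1:k-1,k)'*R(1:k-1,j))/R(k,k);
            end
        end
    end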

Perhaps we need a new Theory of Numerical Analysis section that's separate from the general section. Does anyone like this suggestion of a two-layered approach?

Des-Downunder 03:16, 23 January 2007 (UTC)

I'm not sure if it's needed...but I've been singing this song since 1989 on sci.math and would enjoy seeing such an article whether it's needed or not! Too many of my colleagues think I program computers. At the Joint Meetings this month I had a pure mathematician I had just met explain to me that numerical analysis is very easy. I thought: Lax Equivalence Theorem, Kreiss Matrix Theorem, etc....it's most aggravating! Count me in on this. I was just discussing an ind. study from Theoretical Numerical Analysis: A Functional Analysis Framework (by Kendall Atkinson and Weimin Han) with an undergrad. student this afternoon. JJL 04:35, 23 January 2007 (UTC)
You seem to have a more inclusive view of functional analysis than I have. I would call Taylor series and norms and continuity on R^{n \times n} as used in Cholesky decomposition#Proof for positive semi-definite matrices just real analysis, which is part of (mathematical) analysis, but that's just a question of taste, I guess, and not terribly important. I'll have a look at the book by Atkinson and Han that JJL suggests, to satisfy my curiosity.
I'm not sure what is meant by "a new Theory of Numerical Analysis section", but theory should certainly not be ignored; it's much too beautiful to be kept secret! Incidentally, there is much room for improvement in the article on the Lax equivalence theorem. -- Jitse Niesen (talk) 12:48, 24 January 2007 (UTC)
I think we are largely in agreement on what is real vs. functional analysis. I was trying to get at that when I said Taylor series etc. were not "very deep applications of the subject" (i.e., barely qualifying as functional vs. real analysis). But to me, the key model is a Banach space, since I need norms to measure various errors. What happens in that space and the associated spaces is "functional" enough for me, and I do tend to view real analysis as a special case. I'm sure my colleagues with a more topological view of real analysis would disagree.
I think that a theory of n.a. section would be most useful as a single paragraph that basically says "yes, of course we prove things" and mentions ideas like functional analysis, then points to a separate article. I'm not convinced WP needs such an entry...but I do think it'd be a nice reference page to have for those who think numerical analysis is principally a list of specific numerical methods. But it's a pet peeve of mine, so I'm waiting to see if there's consensus among more objective folks that it's needed. I'll look at the Lax Equiv. article.
Getting off-track for a moment, the Atkinson/Han book is written at a level I can use with an advanced undergrad. who has taken my n.a. class and has had real analysis. Last time I used it to give a "functional analysis with a numerical analysis flavor" ind. study with a student who was headed to an applied math. Ph.D. program and who had missed our every-other-year undergrad. functional analysis course. It has the preliminary info. one needs for that before jumping into n.a. topics. It's definitely worth a look if you need a text like that; I still go to Stoer and Bulirsch first for specific results. JJL 13:47, 24 January 2007 (UTC)
As in previous discussions, we believe that the word 'approximate' does not belong in the initial definition. For more information on this topic, see Numerical Linear Algebra by Trefethen and Bau (first edition, page 323). Also, why does every page tend to begin with "Since the beginning of time...", or "Human kind has always...", or <insert your own grandiose phrase>? This page is no exception. I have my eye on the phrase 'For thousands of years, man...'. What do you think? —Preceding unsigned comment added by ExamplePuzzle (talkcontribs) 05:49, 7 December 2006 (UTC)

Improved layout?

I think I have improved the breakout boxes significantly, by removing one and making the rest of them much smaller. Is it better?

I'm thinking now that the prose in the main text would require work.

Loisel 03:09, 12 May 2007 (UTC)

Thanks! That looks much better.
I am seeing the General introduction header displaced to the center of the page.
There are some equations in the text that could be converted to HTML too.
The use of right-angle triangle is perfectly clear, but I would use right triangle. Triangle identifies several naming variants.
I have long been confused about the orientation of the triangle in the roof beam example. The first time I read it, I thought the hypotenuse was parallel to the ground. Now, I am not so sure.
--Jtir 10:04, 13 May 2007 (UTC)
I also think that the hypotenuse is parallel to the ground, but the expression "central support beam" is a bit odd in that case.
I've always had my doubts about the example. The text flows nicely, but somehow I doubt that it's extremely important for carpenters to know the square root of 2. Wouldn't they just take a beam which is somewhat too long and saw off the bit that sticks out? Even if it is important for them, carpenters would probably just measure it instead of using numerical analysis. -- Jitse Niesen (talk) 11:36, 13 May 2007 (UTC)
I changed the example and added a reference. Is it better? Loisel 19:41, 13 May 2007 (UTC)
Good idea — a square doesn't have to be oriented. The ref is fine — you got one before me.
Why did you remove the part of the note showing the use of the Pythagorean theorem? --Jtir 20:16, 13 May 2007 (UTC)
Unintentional. The text had to be changed so I deleted it, but I forgot to place a new one. Loisel 20:39, 13 May 2007 (UTC)
I'm very impressed that you found the CARPENTRY THEORY reference. -- Jitse Niesen (talk) 03:26, 14 May 2007 (UTC)
Heh. :) Oleg Alexandrov (talk) 04:05, 14 May 2007 (UTC)

TOCleft gives indentation issue

The TOCleft template at the top of the article gives an erroneous indentation of the first line when I am viewing it (I have disabled table of contents in my preferences). TOCright circumvents this bug. I am using the Monobook skin. --Berland 12:45, 18 May 2007 (UTC)

The rule seems to be only to use TOCleft (and, to a lesser degree, TOCright) when there is a reason for it. No reason is obvious to me at the moment, so I removed it. -- Jitse Niesen (talk) 13:05, 18 May 2007 (UTC)
I agree with the removal of TOCleft. Such TOCs simply move the text into a narrow column which is rather ugly. It is best if the TOC is the default one I think. Let the readers read the intro, then read the toc, and then read the article. That's how it should be, I think. Oleg Alexandrov (talk) 15:41, 18 May 2007 (UTC)
I agree too. But now there needs to be an image or example so the lead doesn't look so bare. An image of the Babylonian tablet would be ideal and it looks as though the copyright holder might allow its use. --Jtir 21:41, 18 May 2007 (UTC)

YBC 7289

Wow! I had a conversation with Bill Casselman and he graciously made Image:Ybc7289-bw.jpg available to us, but Jtir beat me to the punch. You, dude, have some sort of robotic automatic something!

Anyway, looks good.

Loisel 18:00, 25 May 2007 (UTC)

<g> I emailed Bill on Monday about it. It's a beautiful image and having the annotation is fantastic. Thank-you Bill! --Jtir 18:34, 25 May 2007 (UTC)

Layout

I raised the level of the section on errors and error propagation.

Henk Langeveld (talk) 12:22, 8 January 2008 (UTC)

Good.  --Lambiam 16:23, 8 January 2008 (UTC)

Errata

under notes, the Pythagoras example (note 3) should read ~4.45 vice 4.55, but I do not know how to edit in the correction--Billymac00 (talk) 04:35, 28 May 2008 (UTC)

  Fixed, thanks.  --Lambiam 17:13, 28 May 2008 (UTC)

Comment

"The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done."

It could be argued that computers evolved from the Hollerith machines, and thence from administrative purposes. Why does that sentence belong in an article about a mathematical research field? (Please move to the appropriate section in the talk page) 189.0.224.34 (talk) 20:10, 16 March 2009 (UTC)

Broken link

The following link seems to be broken.

— Preceding unsigned comment added by 91.65.66.229 (talk) 10:50, 24 March 2009 (UTC)

WHY did nobody catch this?!

SHOULD BE NOTED THAT THE BISECTION METHOD BELOW ISN'T EXACTLY WHAT IS DESCRIBED ON THE BISECTION PAGE

I think we should either remove the warning and make it so the table uses the exact method as described in the main article, or delete the whole table so we don't have a BIG warning, in all CAPS, in very big letters, on one of the top 500 articles about Math! A BIG STINKING WARNING ISN'T going to help anything. Someone call in a WikiFairy and a couple WikiGnomes. We might need them.

So what do you guys think?

A. Remove the warning and change the method.

B. Delete the table itself.

C. Keep the table as is.

A. I think we should remove the warning and change the method. Math321 (talk) 12:19, 21 June 2009 (UTC)

Scientific computing

As far as I know, numerical analysis is nowadays called scientific computing. It seems natural to me to use the modern term rather than the old term. Should this article be moved/redirected to scientific computing? --DanielJanzon 15:22, 3 November 2004 (UTC)

You don't have to do any "computing" to do numerical analysis. Gershgorin discs fit under numerical analysis, but not necessarily under scientific computing. Loisel 17:00, 3 Nov 2004 (UTC)
Exactly, the theoretical analysis side of it is separate, though closely related to the computing issue. - Taxman 18:17, 3 November 2004 (UTC)
Should scientific computing have its own article? At my (Swedish) university (Uppsala), all courses previously given in the field of numerical analysis are now regarded as courses in "science of computing", which I assumed to be synonymous with the English term scientific computing. Do you guys know what you are talking about or do you just assume that numerical analysis is not fully contained in scientific computing because it doesn't sound like that listening to the term "scientific computing"? A search on Amazon seems (to me) to favor your view. Sorry for the rambling, --DanielJanzon 13:09, 4 November 2004 (UTC)

The words are interchangeable to a degree, but yes I do know what I'm talking about. I'm doing a postdoc at university of Geneva, more or less in scientific computing and numerical analysis. Loisel 14:25, 4 November 2004 (UTC)

I can confirm what Loisel says. Numerical analysis refers more to the theoretical side (for instance, the Gershgorin discs mentioned above) and scientific computing more to the practical side (for instance, MPI = Message Passing Interface), but there is a large overlap. Some people indeed use scientific computing in a broad sense to include all of numerical analysis, but other people use numerical analysis in a broad sense to include all of scientific computing. Ask some professors in Uppsala if you don't believe us. -- Jitse Niesen 15:44, 4 November 2004 (UTC)
Since "numerical metehods" (NM) redirects to "numerical analysis" (NA) I think "scientific computing" (SC) should do that also. To me, NM and SC seem very close to each other (closer than NA and SC). For instance Hamming writes in "Numerical Methods for Scientist and Engineers" p. 4 "Numerical analysis seems to be the study in depth of a few, somewhat arbitrarily selected, topics and is carried out in a formal mathematical way devoid of relevance to the real world. Numerical methods, on the other hand, try to meet the need for methods to cope with the potentially infinite variety of problems that can arise in practice". A bit frenzied, and maybe not very up-to-date, but I still believe SC should redirect to NA if NM does. Am I right? --DanielJanzon 19:33, 5 November 2004 (UTC)
Yes, I think so too. -- Jitse Niesen 23:11, 5 November 2004 (UTC)

I thought numerical analysis was essentially aimed at approximating continuous mathematics. This is not obvious in the definition in the article. Someone competent in the matter might consider adding that to the article. --DanielJanzon 19:33, 5 November 2004 (UTC)

You're right. I think the article is generally not in a very good shape (I worked a bit on it myself, so hopefully nobody will take this personally), but I haven't found the energy to try and improve on it. Perhaps later. -- Jitse Niesen 23:11, 5 November 2004 (UTC)

The generation and propagation of errors

We should add the basic formulas for error calculation. Create a new page for it, or somehow include it in Wikipedia and link to it from the "error propagation" section.

http://phoenix.phys.clemson.edu/tutorials/errorp/index.html  —Preceding unsigned comment added by PAStheLoD (talkcontribs) 08:09, 10 November 2009 (UTC) 
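For what it's worth, the basic first-order formulas (a sketch, assuming small independent errors; df/dxi denotes a partial derivative) are:

    for y = f(x):             dy ≈ |f'(x)| dx
    for y = f(x1, ..., xn):   dy ≈ |df/dx1| dx1 + ... + |df/dxn| dxn

In particular, absolute errors add under addition and subtraction, and relative errors add under multiplication and division.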

Merge proposal - for section "The generation and propagation of errors"

I recommend merging the "The generation and propagation of errors" section of this article with the article on Discretization error forming an article which encompasses the topics discussed in both. Jwesley78 16:18, 23 March 2010 (UTC)

I also added Numerical stability to the merge recommendation. (The merge article might more appropriately be placed there.) Jwesley78 16:26, 23 March 2010 (UTC)

Personally I think merging is a bad idea. (1) The discretization error page emphasises discretization error should not be confused with round-off error or any problem related to floating point arithmetic, thereby declaring itself the wrong place for discussion of roundoff error; (2) numerical instability is such an important topic in algorithm design, and underlies so much computing work, that it shouldn't be a mere redirect to a term that, perhaps, is less widely-used in numerical computation texts? 149.155.96.5 (talk) 15:29, 19 April 2010 (UTC)

  •   Disagree! Completely merging the two articles is not right; they are different topics.--Ll0l00l (talk) 20:59, 26 April 2010 (UTC)
  •   Agree! However, numerical errors, especially those due to large step-size selection in numerical ODEs, are more often than not called "discretization errors". These are distinct from round-off errors, which occur more often when the step-size is too small. The term numerical error is too vague. I think they should be merged or a disambiguation page added.

— Preceding unsigned comment added by 129.118.30.87 (talkcontribs)

  •   Disagree I must agree that the merging is a bad idea. Although there is a logical argument for a merger, we need to keep in mind that the generation and propagation of error needs to be accessed from multiple levels of academic, industrial, and student research. The ability to access the information contained herein should not be hidden by merging it under a different title or location because it is more mathematically appropriate to discuss it elsewhere. If we are to spread knowledge we need to be cognizant of the fact that it may be needed by others who may not be as versed in mathematics as we are. If there is a stronger call for a disambiguation page we should add it. Otherwise my feeling is that it should stay as currently edited.

— Preceding unsigned comment added by Chiassons (talkcontribs)

  •   Disagree When one is interested in algorithms, one looks for numerical stability, not discretization error. Stability in itself is a more important notion, both practically and theoretically, than the discretization error which may logically be at its foundation. The proposal is a bit like "Merge the article on the limit of a function into the article Small number". Saaska (talk) 10:43, 2 August 2010 (UTC)
  •   Disagree Discretization error and Numerical stability pages should not be merged. The former term should be used for numerical questions related to the approximation of continuous problems. The latter can be, and is most precisely (see Higham, "Accuracy and Stability of Numerical Algorithms", 2nd ed., SIAM Press 2002) used in linear algebra, where there is no continuous problem being approximated. Certainly these two topics are related and should link each other as appropriate. Merging them would, however, destroy the usefulness of both pages in teaching numerical analysis, which is a relevant use of these technical pages.

— Preceding unsigned comment added by 137.229.25.210 (talkcontribs)

comment: Per a belated WP:SNOW, after a year this should be ended now. History2007 (talk) 23:38, 2 June 2011 (UTC)

  •   Disagree The two concepts are distinct: 'Numerical stability' refers to the behaviour of an algorithm, while 'Discretization error' refers to a source of inaccuracy in numerical computation. While discretization error can lead to instability, the two are different phenomena. Merging would be detrimental to both articles. 134.76.223.1 (talk) 18:55, 27 July 2011 (UTC)

Bisection method for finding the root of polynomial in MATLAB

    %AIM : "To find root of polynomial by bisection method"  
    clc
    disp('ROOT OF POLYNOMIAL EQUATION BY BISECTION METHOD')
    p = input('Enter the polynomial [] ');
    x1 = input('Enter the upper guess ');
    x2 = input('Enter the lower guess ');
    tic;
    a = polyval(p,x1);
    b = polyval(p,x2);
    y = a-b;
    i = 0;

if (a*b<0) while abs(y)>=0.001

    i = i+1;
    x = (x1+x2)/2;
    c = polyval(p,x);

if (a*c<0)

    x2 = x;

else

    x1 = x;

end

    a = polyval(p,x1);
    b = polyval(p,x2);
    y = a-b;

end

  disp('Root of the polynomial is x:');
    x
    n = -10:.5:10;
    q = polyval(p,n);
    plot(n,q);
    hold on;
    grid minor;
    ylabel('function values');
    xlabel('interval');

else

    disp('Enter the correct guesses!');
    disp('We can not find the root of perfect square,and even power of perfect square!');
    disp('Only one root should lie in interval of guesses!');

end

    disp('Number of iterations are:')
    i
    t = toc;
    disp(['time taken for execution : ',num2str(t),' sec.'])
     plot(x,c,'o');

OUTPUT:

    ROOT OF POLYNOMIAL EQUATION BY BISECTION METHOD
    Enter the polynomial [] [1 1 -2 -4]
    Enter the upper guess 1
    Enter the lower guess 2
    Root of the polynomial is x:

    x =

        1.6590

    Number of iterations are:

    i =

        14

    time taken for execution : 0.068207 sec.

 — Preceding unsigned comment added by Lokeshb.elec (talkcontribs) 18:08, 10 August 2011 (UTC)