Wikipedia:Reference desk/Archives/Mathematics/2007 June 2

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 2

Block Toeplitz Matrix Inversion

Does anyone have code in Matlab/C/C++/Fortran for inverting/solving block Toeplitz matrices that I can use? I have to solve a huge system of equations that has a block Toeplitz structure. deeptrivia (talk) 05:39, 2 June 2007 (UTC)[reply]

The question really is about inverting Toeplitz matrices, because you can use the formula at Invertible matrix to invert matrices with block structure.
I don't know much about this, but it appears that Levinson recursion may be what you're looking for. (Of course, it's just an algorithm, not code, and you'd have to adapt it for block matrices.) Kfgauss 18:46, 2 June 2007 (UTC)[reply]
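For reference, the blockwise inversion identity alluded to above, in the 2×2-block case and assuming that A and the Schur complement S = D − CA⁻¹B are invertible, reads
 \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1}B\,S^{-1}CA^{-1} & -A^{-1}B\,S^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{pmatrix}, \qquad S = D - CA^{-1}B.
Applied recursively, this reduces the inversion of a partitioned matrix to inversions and multiplications of its blocks, although it does not by itself exploit the Toeplitz structure.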
Sometimes web searches produce instant gratification, but do not always turn up high quality numerical code. Try a reliable source like netlib, which has a Toeplitz library described here. If that doesn't give satisfaction, cast a broader net with this guide and the handy GAMS. --KSmrqT 05:52, 3 June 2007 (UTC)[reply]
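If it helps, here is a minimal sketch of the scalar (non-block) case in Python. SciPy's solve_toeplitz routine is Levinson-based, so a small Toeplitz system can be solved roughly along these lines; the block case would need a block-Levinson variant or one of the libraries linked above.
 import numpy as np
 from scipy.linalg import solve_toeplitz
 
 # Solve T x = b, where the Toeplitz matrix T is defined by its first column c and first row r.
 c = np.array([4.0, 1.0, 0.5, 0.2])   # first column of T (example values)
 r = np.array([4.0, 1.5, 0.3, 0.1])   # first row of T (example values)
 b = np.array([1.0, 2.0, 3.0, 4.0])   # right-hand side (example values)
 
 x = solve_toeplitz((c, r), b)        # Levinson-type solve, O(n^2) work instead of O(n^3)
 
 # Sanity check against the explicitly formed dense matrix
 n = len(c)
 T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
 print(np.allclose(T @ x, b))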

One-tailed or two-tailed test of significance

Kindly excuse my ignorance. I found lots of material on the web, including wiki articles, but could infer very little from it. I want to see whether two variables are significantly correlated by running a Pearson correlation. However, I am not sure whether to use a one-tailed or a two-tailed test of significance. From what I learned from other sources, I got the impression that if previous studies have already established a directional relationship between the two variables (positive correlation or negative correlation), we should use a one-tailed test. I am not sure whether I have got that right. My pair of variables "should" be positively correlated according to previous studies. What should I do? Thanks. --Ilovenepal 06:12, 2 June 2007 (UTC)[reply]

It depends on your original hypothesis. If you want to test the hypothesis that X and Y differ, then it is likely that a two-tailed test would be best, while if you want to test the hypothesis X>Y or X<Y, then a one-tailed test is more appropriate. Since you are testing for correlation, my guess is that you want to test the hypothesis that X and Y are correlated against the null hypothesis that they are not correlated, so it would probably be a two-tailed test. Disclaimer: I hate statistics and my knowledge is A-level-standard only. x42bn6 Talk Mess 23:27, 2 June 2007 (UTC)[reply]
This is correct. The test used should not depend on what you expect or suspect to be the case based on hearsay or similar cases. The question is really: what is it you want to test for? If you want to test the hypothesis that there is a positive correlation against the null hypothesis that there is no positive correlation, then use a one-sided test. This would, for example, be the thing to do when testing the hypothesis that eating more carrots has a beneficial effect on night vision.  --LambiamTalk 00:53, 3 June 2007 (UTC)[reply]
I think there is some confusion here. The Pearson product-moment correlation coefficient is not a statistical test by itself - it is a measure of the sample correlation between a set of simultaneous measurements of two random variables. Under certain assumptions about the distribution of those random variables, it may also be a good estimate of the population correlation, and you can calculate the distribution of the population correlation given that you have a sample with a particular sample correlation. In this case, you can create a statistical test based on the sample correlation. If you frame your null hypothesis precisely, so that it says "the population correlation is between r1 and r2" or "the population correlation is at least r1", then you can calculate the probability that the null hypothesis is true, given the observed sample correlation. However, "significantly correlated" is not a sufficiently precise null hypothesis for this approach. Gandalf61 11:03, 3 June 2007 (UTC)[reply]
The null hypothesis here would be that the random variables are independent, and not correlated. This is a very precise hypothesis. The test statistic is the Pearson correlation coefficient. Its distribution under the null hypothesis is known for two independent variables having normal distributions, usually a fair assumption, and a simple transformation turns it into a statistic having Student's t-distribution. Basically, if r is the correlation coefficient, and there are n observations, use
 t = r \sqrt{\frac{n-2}{1-r^{2}}}
You can then test for a significant deviation of the test statistic from the expected value 0 under the null hypothesis using Student's t-test. If the test rejects the null hypothesis, you might reasonably call the correlation found "significant".  --LambiamTalk 23:11, 3 June 2007 (UTC)[reply]
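To make this concrete, here is a minimal sketch in Python with SciPy (the data are invented for illustration; scipy.stats.pearsonr reports the two-tailed p-value, and the by-hand version applies the t transformation above with n − 2 degrees of freedom):
 import numpy as np
 from scipy import stats
 
 # Invented paired observations of the two variables
 x = np.array([2.1, 3.4, 1.8, 4.0, 3.3, 2.7, 3.9, 2.2])
 y = np.array([1.9, 3.6, 2.0, 4.2, 3.0, 2.9, 4.1, 2.5])
 n = len(x)
 
 r, p_two_tailed = stats.pearsonr(x, y)        # sample correlation and two-tailed p-value
 
 # The same two-tailed p-value by hand, via the t transformation above
 t = r * np.sqrt((n - 2) / (1 - r**2))
 p_by_hand = 2 * stats.t.sf(abs(t), df=n - 2)
 
 # One-tailed p-value, for the hypothesis of a positive correlation
 p_one_tailed = stats.t.sf(t, df=n - 2)
 
 print(r, t, p_two_tailed, p_by_hand, p_one_tailed)
Whether the one-tailed or the two-tailed value is the right one to report depends on which hypothesis was fixed in advance, as discussed above.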
No, I don't think you could conclude that the correlation found was "significant". All that a positive result tells you here is that with 95% confidence (or whatever confidence level you select) there is a non-zero correlation between the variables. But the correlation could be, say, 0.1, which is hardly a significant correlation. So you could have a significant probability of a non-significant correlation! To establish a significant probability of a significant correlation requires a hypothesis that excludes "non-significant" correlations - "the population correlation is at least 0.5", for example, or "the population correlation is not between -0.5 and 0.5". Gandalf61 14:13, 4 June 2007 (UTC)[reply]
Calling a result "significant" is totally standard terminology in statistics. See our article on Statistical significance. It is always relative to a "significance level" (or "confidence level"). It is a technical term that does not imply significance in the common household sense. However, the use is sufficiently established that the technical meaning is also found in dictionary definitions.[1]  --LambiamTalk 08:47, 5 June 2007 (UTC)[reply]
Indeed, and when the OP used the term "significantly correlated" he obviously did not mean "statistically significant" - he was using "significant" in the "common household sense". This is the misunderstanding that I was trying to clear up. As the statistical significance article says: "A statistically significant difference" simply means there is statistical evidence that there is a difference; it does not mean the difference is necessarily large, important or significant in the usual sense of the word. Gandalf61 09:52, 5 June 2007 (UTC)[reply]
It may be obvious to you that the questioner did not mean "statistically significant", but it is not obvious to me. In fact, the questioner tells us that they want to test if two variables are significantly correlated, and in the next sentence that they are not sure whether to use one-tailed or two-tailed tests of significance for this. I take the part I emphasized by making it bold as an unmistakable sign that the intention of the questioner is to test for statistical significance.  --LambiamTalk 21:06, 5 June 2007 (UTC)[reply]
But a given value of the correlation coefficient is more significant (whatever that means) as n increases, so it has no absolute importance. In my view there is so much confusion and misunderstanding about significance testing that the whole reporting style should be changed. In the current case (given some assumptions on the nature of the distribution of the samples), what is being said is that, given no correlation between the variables, the result actually found would occur less often than (some smallish %, to be calculated) purely by chance. A conditional probability is involved, i.e. result given hypothesis. This says precisely nothing about the probability of the observed result being obtained given a different hypothesis, nor about the conditional probability the other way round (hypothesis given result, which seems to be most people's unrigorous understanding of significance testing).--86.132.166.138 19:01, 4 June 2007 (UTC)[reply]
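For what it is worth, a quick numerical illustration of that dependence on n (a sketch with invented numbers): with n = 500 paired observations, even a weak sample correlation of r = 0.1 comes out "statistically significant" at the usual 5% level.
 from scipy import stats
 
 # Invented example: a weak correlation, but a large sample
 r, n = 0.1, 500
 t = r * ((n - 2) / (1 - r**2)) ** 0.5      # roughly 2.24
 p = 2 * stats.t.sf(abs(t), df=n - 2)       # roughly 0.025, below the usual 0.05 threshold
 print(t, p)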

Simple language

The following is an answer to the question asked above: "what is mathematics?"

"The untrained man reads a paper on natural science and thinks: ‘Now why couldn't he explain this in simple language.’ He can't seem to realize that what he tried to read was the simplest possible language – for that subject matter. In fact, a great deal of natural philosophy is simply a process of linguistic simplification – an effort to invent languages in which half a page of equations can express an idea which could not be stated in less than a thousand pages of so-called ‘simple’ language." —Thon Taddeo in A Canticle for Leibowitz (Tamfang 01:52, 1 June 2007 (UTC))

Is there any book that tries to explain the content of a half-page-long paper on natural science or mathematics by using a thousand or many thousand pages of simple language that everyone can understand? I read somewhere that Nicolas Fatio de Duillier once tried to adapt Newton's Principia to simpler language. He said that the book would be a lot bigger, and that a lot more people would be able to understand it.

I can also remember an English dictionary that had a list of 2000 simple words, and all the definitions in the dictionary used only those 2000 words, so that anyone who understands those 2000 simple words can understand every word of the English language.

I would also like to know whether there are articles or books on trying to do that sort of thing (I don't know what it is called).

I think this question could also belong to the language and the science desks :-) A.Z. 19:08, 2 June 2007 (UTC)[reply]

I use such a dictionary actually. It's a very good one. However, you need to know a fair bit of English for such a dictionary to be useful; beginners are much more comfortable with bilingual ones. (It's Paul Procter (editor), Dictionary of Contemporary English, A Langenscheidt-Longman Dictionary. Longman Group Ltd., 1985, 7th printing.) – b_jonas 21:10, 2 June 2007 (UTC)[reply]
There are a lot of writers who try to explain complicated mathematical results to laymen (e.g. Ian Stewart, Martin Gardner), but I guess you're looking for a much grander, more systematic project. I don't know who the intended audience would be, since anybody willing to spend that much time reading books thousands of pages long would find it much easier to just learn the math in the traditional way. nadav (talk) 21:38, 2 June 2007 (UTC)[reply]
I would like to learn math, but I think the traditional way is too hard. If there were a complicated paper on mathematics, and a book with a lot of volumes and tens of thousands of pages with the sole purpose of translating that paper into simple language, I would read it, if I thought there would be a chance that, after reading it, I would become able to fully understand the complicated paper.
It's hard to find a good book to learn math. I never came across one that didn't use complicated language. Would you (all) be able to provide the names of books appropriate for someone to learn math from scratch? A.Z. 22:09, 2 June 2007 (UTC)[reply]
Ha Ha Ha. You can spend your entire life learning Mathematics and yet only learn 0.1% of all the Mathematics in Existence. And this is the truth. 211.28.130.164 23:13, 2 June 2007 (UTC)[reply]
Well, but I guess all mathematicians know some basic things that can be learned in some years. I guess there's a way to learn enough of the "language" so that one becomes able to understand most of the papers, even though one doesn't have time to learn about all of them before dying. A.Z. 23:20, 2 June 2007 (UTC)[reply]
I admire your enthusiasm. As a beginning at least, I would suggest What is Mathematics? and Concepts of Modern Mathematics, but it's an arbitrary choice. The latter one actually does a good job in explaining a few somewhat advanced topics in an approachable way. nadav (talk) 00:55, 3 June 2007 (UTC)[reply]
If there were any easy way, don't you think everyone would use it? A famous quotation from antiquity sums it up: "There is no royal road". The task is to build and explore worlds in the mind, and that takes time and effort. Some of these lands are well scouted, with convenient maps and tour guides — perhaps trails and comfort stations; others are primitive wilderness where only the hardiest explorers have been. We never know what we'll encounter, what hazards and what beauty. We want to tell others, but it's not so easy to do, and in the end there is no substitute for direct experience. --KSmrqT 06:17, 3 June 2007 (UTC)[reply]
Here is an attempt to explain Maxwell's equations in simple language. It's a good attempt - certainly better than I could manage - but there is still a lot of what mathematicians call "hand waving" in there. If you know the mathematics then the English version reads like a loose translation - you keep thinking "yes, sort of, but it's not exactly like that, and this bit is missing ...". A translation is better than nothing, but being able to understand the original language is always best. Gandalf61 11:37, 3 June 2007 (UTC)[reply]
I think there are two separate questions here. The first is, "How much technical language has been reworded into more verbose but more natural language without losing very much in the translation?" and the second is, "Is there any way to understand a concept without learning anything it's based on?" The answer to the first is that people who write books respond to demand, so there are lay-guides in precisely those areas that many lay people want to understand - fun things like puzzle solving, for instance, and high-profile things like relativity. In general, any area that lay people actually want to read about can be reworded with enough work into something that will satisfy them. It's worth keeping in mind, though, that natural language doesn't make the subject matter any simpler. Only oversimplifying it makes it simpler. That brings me to the second question, which I think is the one you're really asking, and which relates to Gandalf's response. Just because you don't need to learn new words (thanks to verbal gymnastics on the part of the author) doesn't mean you don't need to learn new concepts, possibly concepts that few people ever really wrap their head around. A meromorphic function by any other name would require just as much knowledge of complex analysis. Considering how many people have trouble with the existence of a square root of negative one in the first place, jargon may not be the problem. The best thing you could do, as someone suggested, is read the standard textbooks from Kindergarten up through those relating to the paper's subject matter. This would require reading at most two dozen textbooks, totalling maybe thirty thousand pages. If you actually take the time to understand each subject, in order, the language flows freely and you can advance to the next with confidence. This is the best plan for two reasons: standard textbooks are by definition easy to find, and anything that actually worked would have to be equivalent to them in content. What would these "lots of volumes" and "tens of thousands of pages" you're imagining contain? Since I'm assuming you don't know anything about the subject, and you're asking to understand it by the time you're through, it follows they'd have to cover in detail all background, context, and especially prerequisites. This would continue backwards until it met and merged with your final year of schooling. Whether it was written in French, natural English or Technobabble, it'd have to cover the same material. BTW, how much schooling in math have you had so far? Maybe we could suggest good textbooks at your level, and you could work up from there. Also mention what you're trying to learn about - if it's in graph theory, for instance, calculus isn't a prerequisite, but for differential topology (by definition) it is. Black Carrot 12:43, 3 June 2007 (UTC)[reply]
BTW, I don't want to give the impression that it will actually take two dozen textbooks to explain something to you. Most stuff you'd care about, it wouldn't take nearly that much. Let's take Gandalf's suggestion as an example. I'll assume for the sake of argument that you've been through Arithmetic and Algebra and have a good grasp of both. You'd need to understand the basic units of information involved (vectors), so you'd need a course in that. No amount of rewording would get around the need to understand the mechanics involved. That's one textbook. It could follow immediately after Algebra; they're pretty similar. You'd probably want vector fields too, along with their context, say another textbook. (I'm ballparking, with fairly generous overestimation.) You'd need Physics, of course, say the first two years, giving two more textbooks. That should carry you all the way into Electricity and Magnetism, which would give you the background for this and, depending on the depth and quality of the course, may even cover these equations. Before that you'd need some calculus, since even introductory physics makes use of it. (Hell, even the equation for a falling body uses the power rule.) Let's say two more textbooks to cover that, up to an introduction to multivariate calculus. And let's toss in two more for good measure, in case there's something I missed. Maybe you don't know any trigonometry, for instance, which both vectors and calculus make use of. I believe that adds up to eight, covering the background to the subject fairly amply. Add another book to explain the theory itself in as much detail as you like, and we have a total of maybe 10,000 pages. Since I guarantee you can handle every one of those courses (taken in the correct order with a healthy dose of patience), that should satisfy the requirements - several books, 10,000 pages, explaining half a page of compact symbology to a layman. Would you like some suggestions on the particular books you'll be reading? It's very important to choose the right ones; there's a wide range of quality on the market. Black Carrot 13:25, 3 June 2007 (UTC)[reply]
The thing with learning mathematics is that it's a bit like building a skyscraper. You don't try building the whole thing without ever taking your feet off the ground. You build a bit, and then you stand on it, and that lets you build the next bit. Then you can stand on that, and build the next bit above it. In the same way, learning mathematics by breaking it all down into "simple language" is a hopeless task. Higher level mathematics is just too intricate to express interesting statements in terms of the fundamental building blocks. The key to understanding maths, instead, is to lift yourself up so things that once seemed incomprehensible can be expressed in terms of things you now understand. Then you can use those new ideas to understand yet more complicated ideas, and so forth.
As a simple example, consider multiplication of natural numbers (a.k.a. positive whole numbers). Now, multiplication can be defined as repeated addition (4 x 3 = 4 + 4 + 4), and you understand addition fine. But you don't want to be trying to think about all multiplication situations in that way. You don't want to calculate 7 x 5 x 2 x 100 by working out 2+2+2...+2 with a hundred twos, then adding that to another 2+2+2+...+2, then doing that again until you've done it five times, and then doing that whole thing seven times over and adding the results. Once you have a solid conceptual grasp of multiplication, you don't need to think about it in terms of addition any more. Sometimes you do anyway, of course, depending on the situation; but there are times when you want to use multiplication as a building block for more advanced concepts, and if you kept trying to understand everything in terms of addition you'd go completely insane. The secret to understanding how a pocket calculator works is not to consider it as a grand sum of protons, neutrons and electrons, but rather to understand how those simple ideas give rise to intermediate concepts like electric current, conductors, and insulators, then to electronic transistors and liquid crystal displays, and so forth. Maelin (Talk | Contribs) 10:12, 5 June 2007 (UTC)[reply]
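To put that building-block point in code (a toy sketch, not part of the discussion above): multiplication can literally be defined as repeated addition, but once you trust it, you use it directly instead of unfolding it back into additions.
 def mul(a, b):
     """Multiplication of natural numbers, defined as repeated addition."""
     total = 0
     for _ in range(b):
         total += a
     return total
 
 # Works as a definition, but nobody wants to think this way for 7 x 5 x 2 x 100:
 print(mul(mul(mul(7, 5), 2), 100))   # 7000, with every multiplication step done by repeated addition
 print(7 * 5 * 2 * 100)               # 7000, using multiplication itself as the building block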