Wikipedia:Reference desk/Archives/Mathematics/2008 January 31

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 31


∫ operator


I have looked at the page on the operator ∫, but I don't really understand it. Would someone please explain the function of ∫ in layman's terms? I think it means integral or something, but I don't know. Zrs 12 (talk) 02:43, 31 January 2008 (UTC)[reply]

Take a look at integral, but the easiest way to describe it might be "area under the curve". Or to give you a better idea of some of its uses, maybe it would be helpful to know that position is the integral of velocity, and velocity is the integral of acceleration? - Rainwarrior (talk) 04:07, 31 January 2008 (UTC)[reply]
I didn't find a page about the operator itself, could you provide a link? Anyway, the symbol is used for two related, but conceptually different things. The first is for the antiderivative, and the second is for the definite integral. I'll try to explain the second first:
The definite integral can be thought of as a summation process, resulting in the area under a curve, as stated by Rainwarrior. You have limits of integration, denoted by little symbols (say a and b) below and above the ∫. You divide the x-axis in the interval a..b into tiny pieces, and calculate the sum of the products of each tiny x-axis piece multiplied by the corresponding height of the curve. As you reduce the length of each piece of the x-axis, the sum will converge to a well-defined number, provided the function is "well-behaved". This gives you your area, which may have a physical interpretation as stated by Rainwarrior.
The antiderivative of a function f(x) is written as ∫f(x)dx, without the little symbols below and above the ∫. The reason the same symbol is used lies in the fundamental theorem of calculus: once you know the antiderivative of a function, say ∫f(x)dx = F(x) + C, where C is a constant term, you can calculate the area under the curve by simple subtraction - the integral from a to b is F(b)-F(a). However, not all functions have closed-form antiderivatives. --NorwegianBlue talk 16:25, 31 January 2008 (UTC)[reply]
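To make the summation picture above concrete, here is a minimal numerical sketch in Python (the function f(x) = x² and the interval [0, 1] are arbitrary choices for illustration, not anything from the thread):

```python
# Left Riemann sums of f(x) = x**2 over [0, 1], compared with the value
# F(1) - F(0) = 1/3 given by the antiderivative F(x) = x**3 / 3.

def riemann_sum(f, a, b, n):
    """Sum of f(x) * dx over n equal pieces of [a, b] (left endpoints)."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

f = lambda x: x ** 2
F = lambda x: x ** 3 / 3  # an antiderivative of f

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(f, 0.0, 1.0, n))
print("F(1) - F(0) =", F(1.0) - F(0.0))  # the sums converge to this, 1/3
```

As the pieces shrink, the sums (0.285, 0.32835, 0.3328…, 0.33328…) approach 1/3, which is exactly the subtraction F(b)-F(a) described above.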
Note that area under the curve only really makes sense when you're talking about a curve defined on and into the real numbers. It's a good way to start thinking about integration, but be aware that integration can be generalised to many different spaces. -mattbuck 16:52, 31 January 2008 (UTC)[reply]
Thank you all for your input. I went to this [[1]] link and found it was fairly helpful as it provided graphs (there is also a link on this page to derivatives). However, I have one more question. What is the purpose for defining these? Why would anyone need to calculate the area under the curve or get a derivative of a function? Zrs 12 (talk) 20:27, 31 January 2008 (UTC)[reply]
To answer the side question... the integration symbol is a Unicode character, U+222B, which in UTF-8 is e2 88 ab, which leads to http://en.wikipedia.org/wiki/%E2%88%AB being the article about the symbol itself. (I suppose you could make a proper link by putting that single character inside double brackets, but I have bad luck inserting fancy characters into wikipedia articles) --tcsetattr (talk / contribs) 21:28, 31 January 2008 (UTC)[reply]
In response to User:Zrs 12 - Why would anyone need to calculate the area under the curve or get a derivative of a function? - the answer is Calculus and, in particular, Calculus#Applications. Just about every field in physics, and many other sciences, has a set of differential equations that describe some kind of behaviour, and without the ability to differentiate and integrate functions those equations can't be solved. Some examples are Maxwell's equations, which underpin electromagnetism and hence make your computer work, Newton's law of universal gravitation, which tells you how things fall down and how planets orbit each other, the Einstein field equations of General relativity, which do the same thing but better, and everything in Fluid dynamics#Equations of fluid dynamics and aerodynamics, which let ships sail and planes fly. Admittedly these are all more complicated than just an "area under the curve", but they all come from differentiation and integration.
For a simpler example, as User:Rainwarrior said above, differentiation can take you from distance to velocity, and from velocity to acceleration, and integration can take you in the opposite direction. Confusing Manifestation(Say hi!) 22:28, 31 January 2008 (UTC)[reply]
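A tiny sketch of that in Python (the position function s(t) = t² is an arbitrary illustrative choice): differencing the position samples approximates the velocity 2t, and summing the velocity samples back up recovers the position.

```python
# Position s(t) = t**2 sampled on [0, 1]; its finite differences
# approximate the velocity v(t) = 2t, and summing v * dt recovers s(1).
dt = 0.001
ts = [i * dt for i in range(1001)]                 # t = 0, 0.001, ..., 1
s = [t ** 2 for t in ts]                           # position samples
v = [(s[i + 1] - s[i]) / dt for i in range(1000)]  # forward differences

print(v[500])                    # ≈ 1.0, i.e. v(0.5) = 2 * 0.5
print(sum(vi * dt for vi in v))  # ≈ 1.0, i.e. s(1) - s(0)
```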
Integration is also important in probability theory, and thus in statistics and hence all the social sciences. Algebraist 14:52, 1 February 2008 (UTC)[reply]
The page integral should be helpful. Ftbhrygvn (talk) 05:37, 5 February 2008 (UTC)[reply]

Determinants


I know that the determinant is a multiplicative map, meaning that det(AB)=det(A)det(B). I also know that det(A+B) is not necessarily the same as det(A)+det(B). Is there a special case where det(A+B)=det(A)+det(B)? What are the restrictions on matrices A and B in this case? A Real Kaiser (talk) 04:47, 31 January 2008 (UTC)[reply]

This is certainly not a general case, but if A and B share a nontrivial eigenspace with eigenvalue 0, then we have det(A+B)=0=0+0=det(A)+det(B). For example, this holds if A and B represent linear transformations with ker A ∩ ker B ≠ {0}.
Also, if A = -B and the dimension of your matrices is odd, then det(A) = -det(B) and you have det(A+B)=det(0)=0=det(A)+det(B). Probably these are not the kind of examples you were looking for. Tesseran (talk) 09:40, 31 January 2008 (UTC)[reply]
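Both special cases are easy to spot-check numerically; here is a minimal sketch using numpy (the particular matrices are arbitrary choices for illustration):

```python
# Spot-check of the two special cases where det(A+B) = det(A) + det(B).
import numpy as np

# Case 1: a shared nontrivial kernel. Both matrices (and hence their sum)
# kill the vector (0, 0, 1), so every determinant below is 0.
A = np.array([[1., 2., 0.], [3., 4., 0.], [5., 6., 0.]])
B = np.array([[7., 1., 0.], [2., 9., 0.], [4., 4., 0.]])
print(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))

# Case 2: A = -B in odd dimension, so det(A) = -det(B) and A + B = 0.
B = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]])
A = -B
print(np.linalg.det(A + B), np.linalg.det(A) + np.linalg.det(B))
```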
det(A) and det(B) are essentially the volumes of the parallelepipeds whose edges are the rows (or columns) of the matrices. So you need the parallelepiped built from the summed vectors to have volume equal to the sum of the volumes of the two original parallelepipeds. You can probably construct some examples with good enough use of algebra, but there doesn't seem to be an easy answer.--Fangz (talk) 16:06, 31 January 2008 (UTC)[reply]

What about if the matrices A and B are (real) orthogonal matrices (of any size n×n)? Actually, I am trying to prove that if A and B are real orthogonal matrices and det(A)=-det(B), then A+B is singular. We can also use the fact that |det(A)|=|det(B)|=1. I was thinking about showing that det(A+B)=0, which would be really easy if det(A+B)=det(A)+det(B)=det(A)-det(A)=0. If the determinant is not additive in this case, how can I prove the above proposition? A Real Kaiser (talk) 17:17, 31 January 2008 (UTC)[reply]

The fact that |det(A)| = |det(B)| = 1 shouldn't matter: if det(A+B) = 0 then det(λA+λB) = λ^n det(A+B) = 0 for all scalars λ. —Ilmari Karonen (talk) 20:25, 31 January 2008 (UTC)[reply]
I don't think determinants are a good way of solving this problem. Remember that an orthogonal matrix has determinant +1 for a rotation, and -1 for a non-rotation (e.g. a reflection).--Fangz (talk) 01:01, 1 February 2008 (UTC)[reply]
While considering the geometric interpretation is often a good trick for linear algebra problems (at least as motivation of a proof, if not for the proof itself), what is the geometric interpretation of addition of matrices? I may be missing something obvious, but I can't see one... --Tango (talk) 22:28, 1 February 2008 (UTC)[reply]
What we want is, given arbitrary rotations (=det 1 orthog operators) A and B (A a rotation, B a -rotation), there exists a vector v s.t. A(v)=-B(v). In other words, we want a v s.t. v=-A⁻¹Bv. -A⁻¹B is just an arbitrary rotation/-rotation (depending on dimension), so we want all rotations (n odd)/det -1 orthogonal maps (n even) to have an eigenvalue 1. The rest is fairly easy (using the fact that orthogonal matrices are diagonable over C). Algebraist 23:03, 1 February 2008 (UTC)[reply]
I just wrote that as I thought it, and realised only at the end that it was in no way geometric. Oops. Looks like I'm incurably tainted with algebra. Algebraist 23:04, 1 February 2008 (UTC)[reply]
I'm not following you - I don't see how that proves the required theorem. For a start, the theorem says det(A)=-det(B), so one is a rotation and one a rotation+reflection; secondly, the theorem says it's over the reals, and you've explicitly mentioned doing it over C (in the real case, an eigenvector with eigenvalue one would correspond to an axis of rotation, which I seem to remember reading only exists for n=3 - although there could be more than one for n>3, and I'm pretty sure there are none for n=2). Lastly, Av=Bv => A-B is singular, not A+B (this error compensates for the det=-1 vs det=1 error for odd numbers of dimensions, but n odd was not given in the theorem). --Tango (talk) 23:20, 1 February 2008 (UTC)[reply]
Sorry, I got slightly very confused in my typing. The argument works, I just typed it erroneously. And the fact that I'm using C is not a problem; I can give the details if you want. Algebraist 23:46, 1 February 2008 (UTC)[reply]
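For what it's worth, the proposition itself is easy to spot-check numerically while the algebraic details are being sorted out. A minimal sketch using numpy (generating random orthogonal matrices via QR is just one convenient choice):

```python
# For random real orthogonal A, B with det(A) = -det(B), check that
# A + B is singular, i.e. det(A + B) ≈ 0, in several dimensions.
import numpy as np

def random_orthogonal(n, rng):
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))  # normalize column signs

rng = np.random.default_rng(0)
for n in (2, 3, 4, 5):
    A = random_orthogonal(n, rng)
    B = random_orthogonal(n, rng)
    if np.linalg.det(A) * np.linalg.det(B) > 0:
        B[0] = -B[0]                # flipping a row negates det(B)
    print(n, np.linalg.det(A + B))  # ≈ 0 every time, odd or even n
```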
I'll take your word for it. The question that I'd like an answer to (because it's really bugging me now) is what the geometric interpretation of addition of matrices is. Matrix multiplication is just composition of linear maps, but what is addition? Any ideas? --Tango (talk) 00:15, 2 February 2008 (UTC)[reply]
Well, I suppose you'd have to base it on the geometric interpretation of vector addition, but it's not very nice in any case. Indeed I arrived at my proof by the thought process 'Matrix addition is horrible. Let's get multiplication involved.' Algebraist 00:19, 2 February 2008 (UTC)[reply]
Yeah, I guess so, but I still can't really see it. You're adding a vector to itself, but transforming each copy in some way... I can't see what significance that has. Getting rid of the addition by moving one term onto the other side of the equals sign is a good trick, though. (Of course, it only works if the sum is singular, otherwise you just get a different addition.)--Tango (talk) 01:08, 2 February 2008 (UTC)[reply]
In response to Tango's question: matrix addition is just pointwise addition of linear maps. Let f and g be linear maps from a vector space V to a vector space W. Then we can define the linear map f + g from V to W by (f + g)(x) = f(x) + g(x). The coordinate representation of this is just matrix addition. I would hardly call it 'horrible'. -- Fropuff (talk) 00:42, 2 February 2008 (UTC)[reply]
That's just a restatement of the definition of matrix addition, really. Restated in the same terms, the question becomes: What is the geometric interpretation of pointwise addition of linear maps? A linear map is basically a transformation (stretching, squashing, twisting, rotating, reflecting, whatever) - what does it mean to add two transformations together? --Tango (talk) 01:08, 2 February 2008 (UTC)[reply]

(undent) Would you find it easier to geometrically interpret the mean of two vectors? The sum, after all, is simply twice the mean (and, in particular, the sum of two vectors is zero if and only if their mean is also zero). In terms of geometric operations, to apply the mean of two transformations to a figure, just apply each of the original transformations to the figure and then average them together — i.e. map each point in the original figure to the point halfway between its images under each original transformation. —Ilmari Karonen (talk) 01:34, 2 February 2008 (UTC)[reply]
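A minimal numeric sketch of that averaging picture (the rotation and scaling below are arbitrary illustrative choices):

```python
# The mean of two linear maps sends a point to the midpoint of its
# two images; the sum sends it to twice that midpoint.
import numpy as np

R = np.array([[0., -1.], [1., 0.]])  # rotation by 90 degrees
S = np.array([[2., 0.], [0., 2.]])   # scaling by 2
p = np.array([1., 1.])

print((R @ p + S @ p) / 2)  # midpoint of the two images: [0.5 1.5]
print(((R + S) / 2) @ p)    # the mean map gives the same point
print((R + S) @ p)          # the sum gives twice it: [1. 3.]
```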

Thank you! That's an excellent way to look at it. --Tango (talk) 13:46, 2 February 2008 (UTC)[reply]

Predicate logic


Express the following statement as a well formed formula in predicate logic:

"There is a natural number which is a divisor of no natural number except itself."

Here's what I've put:

∃x ∀y (x|y ↔ y = x)

I just wanted to see if it was valid. Damien Karras (talk) 20:43, 31 January 2008 (UTC)[reply]

(ec) It strikes me that you're assuming that every natural number is a divisor of itself. That's true, of course (at least, if you adopt the convention that zero divides zero, which I do), but it's not a purely logical truth. --Trovatore (talk) 20:50, 31 January 2008 (UTC)[reply]
(off-topic) Are there really people who have 0 not dividing 0? How bizarre. Algebraist 21:09, 31 January 2008 (UTC)[reply]
I don't think so. We have
∃x (∀y (y ≠ x → ¬(x|y)) ∧ x|x)
≡ ∃x (∀y (x|y → y = x) ∧ x|x)
≡ ∃x ∀y (x|y ↔ y = x),
which is a faithful translation. There may be a hidden assumption underlying the way the phrase is formulated (i.e. the choice of the word 'except'), but it doesn't appear in the formula. Morana (talk) 21:36, 31 January 2008 (UTC)[reply]
I don't agree that your last equivalent is a faithful translation. "No natural number except itself" does not imply that "itself" is included; it simply fails to exclude it. --Trovatore (talk) 21:57, 31 January 2008 (UTC)[reply]
I don't know, your interpretation seems unusual. I tried reformulating the given sentence in all the ways I could think of, but I cannot possibly understand it like you do. Can you justify this interpretation? This is more an issue of grammar than logic. In any case this is an interesting illustration of the many ambiguities of natural language. Morana (talk) 00:13, 1 February 2008 (UTC)[reply]
Seems straightforward to me. When I say "no natural number" unmodified, that means none at all. If I say "except" I'm weakening that claim. Just weakening it, not making any new claim.
Example: If I say "all eighteen-year-olds in Elbonia enter the military, except for the women", I am not wrong if it turns out that some women actually do join the military. --Trovatore (talk) 06:28, 1 February 2008 (UTC)[reply]
Did you notice the negation? I can't run a poll right now, but I argue that if you say "no eighteen-year-olds enter the military, except for the women", any normal (i.e. non-mathematician) person would think that you just claimed that women enter the military.
Do you have any kids? If you do, tell them "you cannot take any candy, except for the blue ones", and tell me if they don't jump on the blue ones. Morana (talk) 09:11, 1 February 2008 (UTC)[reply]
At the very least the formulation in natural language is ambiguous. If I say "No one except idiots will buy that", I don't claim that idiots will buy that.  --Lambiam 10:56, 1 February 2008 (UTC)[reply]

I agree with Trovatore's interpretation. From "no eighteen-year-olds enter the military, except for the women", a normal person would draw the inference that there exist women entering, since otherwise you could have made a stronger statement in a simpler form. However, a valid interpretation would be: I know that no guys enter the military, but I don't know about the women. "you cannot take any candy, except for the blue ones" means that there is no rule saying the kids can't take the blue candy, so of course they take it. I write the statement as ∃x ∀y (x|y → y = x), using Trovatore's interpretation of except. Taemyr (talk) 10:59, 1 February 2008 (UTC)[reply]
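For reference, here are the two readings debated above, side by side in LaTeX (x|y abbreviates "x divides y", with x and y ranging over the natural numbers):

```latex
% Trovatore/Taemyr reading: "except" merely weakens the universal claim.
\exists x\, \forall y\, ( x \mid y \rightarrow y = x )
% Morana reading: "except" also asserts the exception, i.e. that x divides x.
\exists x\, \forall y\, ( x \mid y \leftrightarrow y = x )
```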

Regression analysis question


Given that X (X-bar) = 3, Sx = 3.2, Y (Y-bar) = 2, and S1.7 = 3.2, and r = -0.7, is the least squares regression line Ŷ (y-hat) = 3.12-0.37x, or something else? I think I'm right (it seems right) but I want to check. 128.227.206.220 (talk) 22:22, 31 January 2008 (UTC)[reply]

It's not exactly right, as Y ≠ 3.12 - 0.37X. Further, X = 3, Sx = 3.2 gives the number of observations as the non-integral 3.2/3, and what is the meaning of S1.7? Nor does mixing x, X, y and Y help. At least one of us is deeply confused. 86.132.166.138 (talk) 11:58, 1 February 2008 (UTC)[reply]
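For what it's worth: if the puzzling "S1.7 = 3.2" in the question was meant to read Sy = 1.7 (the sample standard deviation of Y - an assumption, not something the question states), then the asker's line comes out exactly as claimed. A minimal sketch in Python under that assumption:

```python
# Least-squares line from summary statistics, assuming
# x_bar = 3, s_x = 3.2, y_bar = 2, s_y = 1.7 (assumed), r = -0.7.
x_bar, s_x = 3.0, 3.2
y_bar, s_y = 2.0, 1.7
r = -0.7

slope = r * s_y / s_x              # b = r * s_y / s_x  = -0.371875
intercept = y_bar - slope * x_bar  # a = y_bar - b * x_bar = 3.115625
print(f"y_hat = {intercept:.2f} {slope:+.2f} x")  # y_hat = 3.12 -0.37 x
```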

Infinite set containing itself


Is it possible for an infinite set to contain itself? For example, would {{{...{{{}}}...}}} be considered a valid set?

As an unrelated question, how many sets are there? I know it's an infinite number, but which infinite number? — Daniel 23:52, 31 January 2008 (UTC)[reply]

In standard set theory (the study of the von Neumann universe) it is not possible for a set to be an element of itself. There are alternative set theories in which it is -- see anti-foundation axiom. The sets form a proper class (that is, there are more sets than can be measured by any transfinite cardinal number). --Trovatore (talk) 00:09, 1 February 2008 (UTC)[reply]
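A sketch of the standard Russell-style argument behind that last claim (that no set can contain every set):

```latex
% For any set S, separation gives the set  R = { x in S : x not in x }.
% If R were in S, then by R's definition
%   R \in R \iff (R \in S \wedge R \notin R) \iff R \notin R,
% a contradiction. Hence R \notin S: no set has every set as a member,
% so the sets form a proper class.
\[
  R = \{\, x \in S : x \notin x \,\}, \qquad
  R \in S \;\Rightarrow\; (R \in R \Leftrightarrow R \notin R).
\]
```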
It is the possibility of a set containing itself that led to Russell's paradox, which in turn led to the formal axioms of set theory. Confusing Manifestation(Say hi!) 05:38, 1 February 2008 (UTC)[reply]
No, I don't really agree with you there. The Russell paradox was the fruit of conflating the notion of intensional class with the notion of extensional set. It's probably true that historically the antinomies led to the axiomatizations, but it is quite possible to have a non-axiomatic conception of set that avoids them. That non-axiomatic conception also avoids sets that are their own elements, but that isn't the reason it avoids the antinomies. --Trovatore (talk) 06:13, 1 February 2008 (UTC)[reply]
Well, the set of all sets has cardinality greater than that of the real numbers, ℵ_1, in fact it would be greater than that of the power set of the real numbers, ℵ_2. I suppose then you could take sets which are in the power set of the power set of the real numbers, so you get ℵ_3, etc etc etc. If you assume the continuum hypothesis is true, and that ℵ_{n+1} = 2^{ℵ_n}, then the cardinality of the set of all sets would I suppose be ℵ_{ℵ_0}, though I expect it would be larger. -mattbuck 11:54, 2 February 2008 (UTC)[reply]
See Trovatore's answer: in conventional set theory (eg ZF), the class of all sets is not a set, and thus does not have a cardinality at all. Algebraist 14:28, 2 February 2008 (UTC)[reply]
Oh, and ℵ_{ℵ_0} doesn't make much sense. I suspect you mean ℵ_ω. Also, you're assuming (some cases of) the generalised continuum hypothesis, not just CH itself. Algebraist 15:18, 2 February 2008 (UTC)[reply]
Mmm, actually ℵ_{ℵ_n} is a pretty common notation, though it's true I haven't seen it very often for n=0. You could write ℵ_ω, but for whatever reason, people usually don't, in my experience. --Trovatore (talk) 01:12, 5 February 2008 (UTC)[reply]
To the OP: by the way, your notation {{{...{{{}}}...}}} is somewhat ambiguous. It could mean a set A whose only element is A, or it could mean (for example) a set A such that A={{A}} but A ≠ {A}. Both these (and much more) can exist in the absence of the axiom of foundation. Algebraist 15:23, 2 February 2008 (UTC)[reply]
I don't know anything about foundations (of mathematics), but is it really possible to have A = {{A}} but A ≠ {A}? In what sense can they be different? -- BenRG (talk) 14:02, 3 February 2008 (UTC)[reply]
The usual sense - that one has an element not in the other. I don't know enough about ZF sans foundation to support Algebraist's claim, but I can say that the scenario is not immediately inconsistent. A is different from {A} because A's unique element, {A}, is different from {A}'s unique element, A (yep, sounds circular. But again, I am only showing internal consistency here). -- Meni Rosenfeld (talk) 14:19, 3 February 2008 (UTC)[reply]
The most common anti-foundation axiom, due to Aczel ("every directed graph has a unique decoration") implies that if A={{A}}, then A={A}. Otherwise you would have two different decorations of the graph that has a top element and then just descends infinitely in a straight line.
However, a more ontologically expansive axiom, due to Boffa ("every injective decoration of a transitive subgraph of an extensional graph extends to an injective decoration of the whole graph", or something like that) implies that there is a pair A={B}, B={A}, but where A and B are different. This axiom is consistent with ZF. I did some work on this a long time ago but unfortunately never published it. --Trovatore (talk) 22:22, 3 February 2008 (UTC)[reply]
Um, of course I mean it's consistent with ZF minus the axiom of foundation. Obviously it contradicts the axiom of foundation. --Trovatore (talk) 18:35, 4 February 2008 (UTC)[reply]