Wikipedia:Reference desk/Archives/Mathematics/2008 April 2

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 2

Infinities

I know infinity is pretty big, and you can't get bigger. But what if you raised infinity to the power infinity? What happens then? —Preceding unsigned comment added by 79.76.250.241 (talk) 00:52, 2 April 2008 (UTC)[reply]

Well... there are different infinities. It's also rather deceptive. For instance, say we start with the counting numbers (1,2,3,etc). Now, it's quite clear that these are a strict subset of the integers (that is, all whole numbers, positive or negative). But we can label the integers by the counting numbers. Let the 1st integer be 0, the 2nd be 1, the 3rd be -1, the 4th be 2, the 5th be -2, etc. Then it is clear that we can just keep on counting to infinity, and that thus they have the same cardinality - essentially, there are the same number of each. We can do the same with the rationals. However, we cannot do this for the real numbers - see Cantor's diagonalization argument. This infinity is much bigger. Welcome to maths. -mattbuck (Talk) 01:14, 2 April 2008 (UTC)[reply]
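The labelling mattbuck describes can be made completely explicit. Here is a small sketch in Python (just an illustration of the back-and-forth counting, not a proof of anything):

 def nth_integer(n):
     """Integer labelled by the counting number n = 1, 2, 3, ..."""
     # 1 -> 0, 2 -> 1, 3 -> -1, 4 -> 2, 5 -> -2, ...
     if n == 1:
         return 0
     return n // 2 if n % 2 == 0 else -(n // 2)

 print([nth_integer(n) for n in range(1, 10)])  # [0, 1, -1, 2, -2, 3, -3, 4, -4]

Every integer appears exactly once in this list, which is what it means for the two sets to have the same cardinality.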

Your question has already been treated by Georg Cantor in the 19th century. Generally, the concept of "infinity" has two different meanings: infinite ordinal number, and infinite cardinal number. If you mean "infinite ordinal number", then what you call "infinity to the power infinity" is traditionally symbolized ω^ω. See here for a deeper discussion on ω^ω, as well as on its full meaning. A parallel concept of the desired power exists also for the infinite cardinal number. Eliko (talk) 01:28, 2 April 2008 (UTC)[reply]

I think it's a bit misleading to say that infinity has two meanings, since that suggests it has only two meanings. For instance, the phrase "infinity to the infinity" makes sense as a limit, in which context it's unambiguously infinity. On the other hand, in the nonstandard reals infinity to the infinity is different from both infinity and any interpretation in terms of cardinality or ordinality of sets. Black Carrot (talk) 08:45, 2 April 2008 (UTC)[reply]
When I wrote "two meanings" I meant "at least two meanings". Anyway, thank you for your comment. Eliko (talk) 13:03, 2 April 2008 (UTC)[reply]
Sorry, Kakarot, but that wouldn't be "infinity to the infinity", but rather "infinity to the infinity'th" or "[...] infinity'th power." Or you could be right as you stand. I don't really know. Anyway, here's a picture of what's being dealed with here:
∞^∞ flaminglawyerc 21:32, 2 April 2008 (UTC)[reply]
"infinity to the infinity" and "infinity to the infinity'th (power)" sound like different ways of saying exactly the same thing to me. What distinction are you trying to draw? --Tango (talk) 00:42, 3 April 2008 (UTC)[reply]
It is linguistic nitpicking: just like we don't vocalize the written expression 3^3 as "three to the three", the fiery jurist objects to voicing ∞^∞ as "infinity to the infinity". We do say "three to the power three". The intention of Black Carrot was clear enough. To the nitpicker: the past participle of the verb to deal is dealt. More importantly, recasting the question as ∞^∞ completely ignores the earlier contributions pointing out that there are several notions of "infinity" in mathematics, for most of which the symbol ∞ is not appropriate.
In how many ways can we make a sequence of length 6 if each element of the sequence is picked from a set with 10 members (for example the set of the numbers 0 through 9)? The answer is 10^6. In general, there are N^L different sequences of length L if each element belongs to a given set of size N. So an interpretation of "infinity to the power infinity" is: the number of different sequences of infinite length, each of whose elements belongs to a given set of infinite size.
In this interpretation we are talking about cardinalities, which measure set sizes. The simplest kind of infinite sequence is one whose elements can be indexed by natural numbers, and the simplest infinite set is again that of the natural numbers. So we are talking about the size of the set of infinite integer sequences with only nonnegative elements. The "size" (cardinality) of the natural numbers is denoted by ℵ0 (pronounced like "Aleph 0"), the smallest infinite cardinal. So then the question can be formulated as: how big is ℵ0^ℵ0?
For finite sequences, a somewhat surprising result (surprising until you get used to it) is that the length does not make a difference:
ℵ0^2 = ℵ0^3 = ℵ0^4 = ... = ℵ0.
Once you are used to this, the next surprising thing is that in the limit it does make a difference:
ℵ0^ℵ0 = ℵ1 > ℵ0.
ℵ0^ℵ0 is also the size of the set of real numbers, and is therefore known as the cardinality of the continuum. It is also denoted by 2^ℵ0 or ℶ1. --Lambiam 08:32, 3 April 2008 (UTC)[reply]
Did it become standard to assume the continuum hypothesis when I wasn't paying attention? Matthew Auger (talk) 21:09, 3 April 2008 (UTC)[reply]
While we don't say "three to the three", we do say "e to the x", not "e to the x-th". Both ways are clearly used in some contexts. I think in the context of infinity, "infinity to the infinity" is the clearest way to phrase it. Exactly what it means depends very much on what you mean by "infinity". (And Matthew is right - you probably shouldn't assume the continuum hypothesis without saying you're doing so.) --Tango (talk) 21:17, 3 April 2008 (UTC)[reply]
Goody! I like linguistic quibbles. My rationalization would be that adding a -th suffix assumes we're dealing with the ordinal properties of infinity, which I was explicitly trying to move away from. I was treating infinity as an ideal element and exponentiation as a notational convenience, referring to an abstract binary operation. So, it was quite acceptable to phrase it that way. Black Carrot (talk) 09:39, 4 April 2008 (UTC)[reply]
I do say "three to the three".--196.209.177.241 (talk) 11:53, 5 April 2008 (UTC)[reply]

Prime ideals in a ring

I'm struggling to prove something about prime ideals for an assignment: Given R, a nontrivial commutative ring with unity, use Zorn's lemma to show that the set of prime ideals has minimal elements with respect to inclusion, and that any prime ideal contains at least one such minimal prime ideal.

I've managed to show the existence of minimal prime ideals, but I'm struggling to figure out how I can show that every prime ideal contains a minimal prime ideal. I tried doing it by contradiction (assume that every prime ideal contains a smaller, nontrivial prime ideal) but I can't see any problems with that assumption that would give me a contradiction. The "turtles all the way down" notion seems a little counterintuitive, but not actually contradictory. Can anybody give me some pointers? Thanks, Maelin (Talk | Contribs) 01:34, 2 April 2008 (UTC)[reply]

Well, if R is also an integral domain, (0) is a minimal prime ideal. So that case is simple; then assume you have zero divisors, which gives you something else to work with that may or may not be useful together with what else you have. GromXXVII (talk) 11:03, 2 April 2008 (UTC)[reply]
What about applying Zorn’s lemma kind of backwards? That is, viewing the partial ordering as reverse inclusion (containment) instead of inclusion. Then if each chain has an “upper bound”, which in this case is a prime ideal contained in all the prime ideals of the chain, Zorn’s lemma gives you precisely that every prime ideal contains a “maximal” prime ideal, which in this case is a minimal prime ideal. I’m not sure how to show that there is such an “upper bound” though, or if it even exists, because in general the intersection of prime ideals is not a prime ideal. GromXXVII (talk) 11:33, 2 April 2008 (UTC)[reply]
The intersection of a decreasing chain of prime ideals is prime, though, so it's indeed an application of Zorn's lemma. Bikasuishin (talk) 12:22, 2 April 2008 (UTC)[reply]
As a matter of interest, how did you manage to prove the existence of minimal prime ideals without proving the second part? To me they seem to both require essentially the same application of Zorn. Algebraist 14:54, 2 April 2008 (UTC)[reply]
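To spell out the application Bikasuishin is pointing at: fix a prime ideal P and let Σ be the set of prime ideals of R contained in P, partially ordered by reverse inclusion. Σ is nonempty, since P ∈ Σ. If Q is the intersection of a chain in Σ, then Q is an ideal contained in P, and it is prime: suppose ab ∈ Q but a ∉ Q, and pick a member Q' of the chain with a ∉ Q'; then b ∈ Q' since Q' is prime, and for any other member Q'' of the chain either Q'' ⊇ Q' (so b ∈ Q'') or Q'' ⊆ Q' (so a ∉ Q'' and primeness gives b ∈ Q''); hence b ∈ Q. So every chain in Σ has an upper bound, Zorn's lemma applies, and a maximal element of Σ is exactly a minimal prime ideal contained in P.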

Tennis superiority

This should be slightly more complicated than it seems.

A tennis tournament has every participant playing every other participant. No ties are allowed. A player A is superior if for every other player B, either A beats B or there is a third player C such that A beats C and C beats B. Prove that if there is only one superior player then he or she beats every other player. Reywas92Talk 01:37, 2 April 2008 (UTC)[reply]

Here are a few steps towards the goal: Assume that A is the only superior player, but A' beats A. Then for any player B in the tournament, either A' beats A beats B, or there is another player C(B) such that A' beats A beats C(B) beats B (the (B) to explicitly state that the choice of C depends on the choice of B). From there, if you can prove that A' must be superior, you reach a contradiction. Confusing Manifestation(Say hi!) 03:12, 2 April 2008 (UTC)[reply]
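Before writing the proof, it can be reassuring to confirm the statement by brute force over all small tournaments; a sketch (the helper names are made up, and "superior" is exactly the property defined above):

 from itertools import combinations, product

 def superior(a, beats, n):
     """A is superior: for every B, A beats B or A beats some C who beats B."""
     return all(a == b or beats[a][b] or
                any(beats[a][c] and beats[c][b] for c in range(n))
                for b in range(n))

 def claim_holds(n):
     pairs = list(combinations(range(n), 2))
     for outcome in product([0, 1], repeat=len(pairs)):
         beats = [[False] * n for _ in range(n)]
         for (i, j), w in zip(pairs, outcome):
             beats[i][j], beats[j][i] = bool(w), not w
         sup = [a for a in range(n) if superior(a, beats, n)]
         if len(sup) == 1 and not all(beats[sup[0]][b] for b in range(n) if b != sup[0]):
             return False
     return True

 print(all(claim_holds(n) for n in range(2, 6)))  # True for tournaments of up to 5 players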

Algebra

Okay, I got help with this question here, but I really don't get it.

Given any three natural numbers, show that there are two of them, a and b, such that a^3b-b^3a is divisible by thirty. I only understood up to the even/odd part. Thanks, 76.248.244.196 (talk) 01:42, 2 April 2008 (UTC)[reply]

a^3 b - b^3 a = a b (a - b) (a + b), and to be divisible by 30 you need it to have factors of 2, 3 and 5. You say you understand the odd/even bit, so let's move on to divisibility by 3. When Keenan talked about modulo arithmetic, what he meant was describing numbers in terms of their remainder when divided by various numbers. In the case of "modulo 2", there are two types of number - odd and even. In the case of "modulo 3", there are three - multiples of three, numbers one more than a multiple of three, and numbers one less than a multiple of three. We can write these as 3n, 3n + 1 and 3n - 1 respectively, where n is an arbitrary whole number.
This may be the long way of doing it, but consider all three cases for each of a and b, and what they mean for the factors a, b, a - b and a + b. For example:
If a = 3n + 1 and b = 3m - 1, then neither is divisible by 3, but a + b = 3n + 1 + 3m - 1 = 3n + 3m = 3(n + m), which is. If you go through all nine combinations, you will eventually see that in each case, at least one of a, b, a - b and a + b will be divisible by 3.
The "divisible by 5" part is what requires the pigeonhole principle. Again, for the sake of enlightenment, go the long way and consider all of the possibilities modulo 5 (so let a = 5n, 5n + 1, 5n + 2, 5n + 3 and 5n + 4, or if you're a fan of symmetry, 5n, 5n + 1, 5n + 2, 5n - 1 and 5n - 2 is equivalent). You'll see that only some combinations are guaranteed to make one of the factors divisible by 5, so what you have to prove is that, given three numbers, there are always 2 that will fall into one of those combinations (for example, if at least one of them is divisible by 5 you're home and hosed). Hope that helps, Confusing Manifestation(Say hi!) 02:57, 2 April 2008 (UTC)[reply]
Thanks a lot! I fully understand now, but it will be a little difficult to put all the possibilites in a proof. Thanks, 76.248.244.196 (talk) 20:40, 2 April 2008 (UTC)[reply]
Listing all possible cases is considered a valid, if slightly ugly, method of proof. For an extreme case, check out the Four color theorem. Also, like I said, my method goes the long way but there are several shortcuts along the way depending on how comfortable you get with the modular arithmetic - for a start, you can simplify things by simply stating at various points "modulo 5" and then drop out all the 5n terms. And the 5 x 5 table in the last step comes down to a single statement involving one trivial case and one where there are two possible categories for the pigeonhole principle. Confusing Manifestation(Say hi!) 22:59, 2 April 2008 (UTC)[reply]
For showing that 5 can be made a factor by choosing a and b: consider the remainders modulo 5 of the three numbers, giving three numbers from the set {0,1,2,3,4}. If any of the three is 0, we are done (because of the factor ab). If any two are the same, we are also done (because of the factor a−b). Otherwise, we have a subset of three different numbers of the set {1,2,3,4}; only one of those four is missing. We need to show that we can pick a & b such that the final factor a+b is divisible by 5. If the missing fourth number is 1 or 4, take those giving remainders 2 & 3, otherwise take those giving 1 & 4. To complete the proof, show that for any pair of numbers a & b the number a^3 b − b^3 a is a multiple of 6.  --Lambiam 09:09, 3 April 2008 (UTC)[reply]
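Since a^3 b − b^3 a mod 30 only depends on the residues of a and b mod 30, the whole claim can also be verified mechanically; a quick sketch:

 from itertools import combinations, product

 def some_pair_works(triple):
     """Does some pair a, b from the triple give a^3*b - b^3*a divisible by 30?"""
     return any((a**3 * b - b**3 * a) % 30 == 0 for a, b in combinations(triple, 2))

 # Checking every triple of residues mod 30 covers every choice of three naturals.
 print(all(some_pair_works(t) for t in product(range(30), repeat=3)))  # True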

Geometry

In triangle ABC, draw angle bisectors AD and CE, where D is on BC and E is on AB. If angle B is 60 degrees, show that AC=CD+AE.

I've figured out that if the intersection of AD and CE is F, then ∠CFA and ∠EFD are 120 degrees and ∠DFC and ∠EFA are 60 degrees, but I'm not sure what's next. I'm reposting from here, but I'm not sure how to use the law of cosines on it, as I don't know any sides. Thanks, 76.248.244.196 (talk) 02:21, 2 April 2008 (UTC)[reply]

Also draw the third bisector. What point does it go through (other than B)? Figure out the sizes of some new angles that have appeared now. Do you spot any congruent triangles?  --Lambiam 09:21, 3 April 2008 (UTC)[reply]

Distribution of a maximum of multivariate normal

Does anyone know (or know of a reference on) the distribution of max(X) if X is a multivariate normal random vector with various means (it can be assumed to have constant, i.e. σ², variance)? Simulation tells me it is gamma-like, but I haven't been able to prove it. Any ideas? Thanks, --TeaDrinker (talk) 02:28, 2 April 2008 (UTC)[reply]

Unless I have made a mistake or misunderstood your question, this is a straightforward calculation:
P(max_i X_i ≤ x) = ∏_i P(X_i ≤ x) = ∏_i Φ((x − μ_i)/σ)
(d/dx) P(max_i X_i ≤ x) = ∑_i (1/σ) Φ′((x − μ_i)/σ) ∏_{j≠i} Φ((x − μ_j)/σ)
It's possible that this is harder if the covariance matrix is not diagonal. -- Meni Rosenfeld (talk) 15:08, 2 April 2008 (UTC)[reply]
It is more complicated if the components are not independent. By the way, a standard notation for the pdf Φ' is φ.  --Lambiam 09:27, 3 April 2008 (UTC)[reply]
Thanks! I tried this approach, but didn't see any slick simplifications (which I was hoping for--the iid case, of course simplifies nicely, but perhaps this one is just irreducibly ugly). Thanks again, --TeaDrinker (talk) 16:50, 2 April 2008 (UTC)[reply]
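For reference, the independent-components formula is easy to sanity-check against simulation; a sketch using numpy/scipy (the example means and σ are arbitrary):

 import numpy as np
 from scipy.stats import norm

 mu = np.array([0.0, 1.0, 2.5])   # arbitrary example means
 sigma = 1.0
 x = 1.8

 cdf_formula = np.prod(norm.cdf((x - mu) / sigma))   # product of the marginal CDFs

 rng = np.random.default_rng(0)
 samples = rng.normal(mu, sigma, size=(200_000, len(mu))).max(axis=1)
 cdf_simulated = (samples <= x).mean()

 print(cdf_formula, cdf_simulated)   # should agree to a couple of decimal places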

Square digit chain

As in this problem. Why can't there be other loops? Can someone show me a proof that all possible chains will end in 1 or 89? 70.162.25.53 (talk) 03:43, 2 April 2008 (UTC)[reply]

It is not overly clever, but note that if you start with a four digit number, the largest number you can get is 324, and in fact, any number with four or more digits must decrease (and does so very rapidly until there are only three digits). Thus every number eventually becomes a number smaller than 325. It is therefore sufficient to simply check all the numbers 2 through 324 to verify that 1 and 89 are the only possible outcomes. It is not my area of expertise but I don't see a slick solution. --TeaDrinker (talk) 05:03, 2 April 2008 (UTC)[reply]
Actually, the bound can be brought down to 162. To see why, consider any x ≤ 324: the largest digit-square sum you will find is 166 (from 299). Then consider any number less than 166: the one that produces the largest digit-square sum is 99, not 159, since 81 + 81 = 162 versus 1 + 25 + 81 = 107. So with that you can refine the bound to 162. Another hint: you can replace 89 by any of the other numbers after the first 89 in the sequence they provided, because those numbers are part of the loop. Additionally, any number whose chain reaches any number in that loop will go to 89, and to every other number in that loop. A math-wiki (talk) 06:30, 2 April 2008 (UTC)[reply]
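The finite check that these bounds reduce the problem to takes only a few lines; a sketch:

 def step(n):
     return sum(int(d) ** 2 for d in str(n))

 def settles_at_1_or_89(n):
     seen = set()
     while n not in (1, 89):
         if n in seen:      # fell into some other loop; the claim would be false
             return False
         seen.add(n)
         n = step(n)
     return True

 # Every larger number eventually drops into this range, so this settles it.
 print(all(settles_at_1_or_89(n) for n in range(1, 325)))  # True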
Going beyond squares, we can generalise this problem to iterating the sum of any power of a number's digits, and indeed generalise further to any base. Using similar arguments to the above, we can show that there is always some N such that the set of integers {1...N} is mapped to a subset of itself, and any integer greater than N is mapped to a strictly smaller integer. So all chains eventually enter the region {1...N} and once inside they never leave it. The long-term behaviour of any chain must therefore be to settle into a loop or to reach a fixed point (which is only a loop of length 1). Loops and fixed points can then be found by a finite number of trials.
Fixed points of these "powers of digits" functions are called narcissistic numbers. Considering sums of cubes of digits, for example, there are four fixed points (as well as the trivial fixed point 1), two loops of length 2, and two loops of length 3 [1]. Gandalf61 (talk) 11:21, 2 April 2008 (UTC)[reply]
Another loop is 0 → 0 (or is 0 no longer a number?)  --Lambiam 09:39, 3 April 2008 (UTC)[reply]
But no other number has a sequence that reaches zero, so it's kind of useless. — Kieff | Talk 10:18, 3 April 2008 (UTC)[reply]
Most true facts in mathematics are "kind of useless". The issue was that the statement "EVERY starting number will eventually arrive at 1 or 89" is false. Unless I made a mistake, the answer to the question "How many starting numbers below ten million will arrive at 89?", counting only nonnegative integers as starting numbers, is 8581146. This is completely useless knowledge, and only an idiot or a mathematician would pose that question.  --Lambiam 11:29, 3 April 2008 (UTC)[reply]
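That count takes only a minute or so to reproduce; a sketch (it classifies each number by its value after one step, which is at most 7 × 81 = 567 for numbers below ten million):

 def step(n):
     return sum(int(d) ** 2 for d in str(n))

 def arrives_at_89(n):
     while n not in (1, 89):
         n = step(n)
     return n == 89

 small = {k: arrives_at_89(k) for k in range(1, 7 * 81 + 1)}
 print(sum(small[step(n)] for n in range(1, 10**7)))   # compare with the figure above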
So the ratio of numbers that arrive at 89 to numbers that arrive at 1 is approximately 8:1, and the 89 loop also has length 8. That's interesting ... Gandalf61 (talk) 22:59, 3 April 2008 (UTC)[reply]
The ratio is much closer to 6:1.  --Lambiam 23:27, 3 April 2008 (UTC)[reply]
Ah, yes, 6:1. I wonder if that ratio converges as the upper limit increases ... Gandalf61 (talk) 08:31, 4 April 2008 (UTC)[reply]
Apparently, the behaviour of the ratio is rather erratic. See OEIS:A068571 which gives the number of happy numbers (numbers which arrive at 1) less than or equal to 10n for n=1 to 21. An optimist might see signs of convergence to some proportion around 12%, but I wouldn't put money on it ! Gandalf61 (talk) 10:36, 4 April 2008 (UTC)[reply]

(exdent) I'm pessimistic about there being a limit. Defining H(n) to be the fraction of happy numbers ≤ 10n, I find that H(43) = 0.1607595665888606352231265289855421563588180, while H(233) starts off like 0.13206866.... These are local extrema in the sequence.  --Lambiam 22:55, 4 April 2008 (UTC)[reply]

Area and Volume Ratios

I'm having trouble understanding Area and Volume ratios as the teacher didn't fully explain it at the time... does anyone have a basic summary of it? —Preceding unsigned comment added by Devol4 (talkcontribs) 06:46, 2 April 2008 (UTC)[reply]

A given 3 dimensional shape will have a surface area and a volume. The surface area to volume ratio is the surface area divided by the volume. That ratio is smallest for a fixed volume or surface area if the shape is a sphere. Does any of that help? If not, what specifically is confusing you? --Tango (talk) 12:39, 2 April 2008 (UTC)[reply]
Could it be about the fact that scaling a figure by a factor k will scale its area by a factor k2 and its volume by a factor of k3 ? -- Xedi (talk) 19:18, 2 April 2008 (UTC)[reply]
For a simple example, if you have a cube whose sides have a length of 1 cm, then its surface area A = 6 cm², and its volume V = 1 cm³. Then the ratio A/V = 6 cm²/1 cm³ = 6 cm⁻¹. If instead we take a block of dimensions 0.5 cm × 1 cm × 2 cm, A = 7 cm² while V = 1 cm³ as before, so now A/V = 7 cm⁻¹.
We also have an article Surface area to volume ratio, which however is not about the maths but about the significance in physical chemistry and biology.  --Lambiam 09:58, 3 April 2008 (UTC)[reply]
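The arithmetic above, and the k²-versus-k³ scaling Xedi mentions, in a couple of lines (a sketch):

 def area_and_volume(a, b, c):
     """Surface area and volume of an a x b x c box."""
     return 2 * (a*b + b*c + c*a), a * b * c

 print(area_and_volume(1, 1, 1))      # (6, 1): A/V = 6 per cm
 print(area_and_volume(0.5, 1, 2))    # (7.0, 1.0): A/V = 7 per cm
 k = 3
 A, V = area_and_volume(k, k, k)      # scale the unit cube by k
 print(A / 6, V / 1)                  # 9.0 and 27.0: area scales by k**2, volume by k**3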

In 5-Card Draw Poker, what are the odds of a royal flush without a wild card when 2 are in play?

A friend and I were playing 5-card draw Poker, using 2 joker cards as wild cards. This was in addition to the standard 52-card deck. We each drew 3 cards on our turns, and I ended up with a royal flush (A,K,Q,J,10 all of the same suit), without a wild card. I don't recall what my friend ended up with, so I hope it doesn't matter much. I do remember that he didn't get a wild card either. I know the odds of a royal flush are about 650,000 to 1, but that's without a draw, in which case the odds are higher. So, i've been wondering ever since what odds I overcame to draw this hand without getting one of the 2 wild cards that were in play. Thanks so much to whoever can answer this for me. MoeJade (talk) 09:22, 2 April 2008 (UTC)[reply]

Because of the draw, there isn't a unique answer - it depends on your decision making process. I would think the decision making process that would give the highest odds of a royal flush (but would probably not be a good strategy in practice) would be to keep only cards 10 or higher, and only cards of the suit that would leave you with the most cards. If we assume that strategy, we can calculate the odds. It's not a quick calculation though, and I don't have time right now. If no-one else tackles it before I find time, I'll give it a go. My method would be to calculate it separately in the cases of drawing no cards, 1 card, up to 5 cards, and weight each of them by the chance of that case occurring. --Tango (talk) 12:32, 2 April 2008 (UTC)[reply]

I was afraid of that. Unfortunately, I have only a basic understanding of statistics, so I quickly became lost trying to determine the odds. Don't spend too much time on this problem; I was hoping it wouldn't be too complicated, but if it is, just give me a ballpark range if you can. MoeJade (talk) 14:41, 2 April 2008 (UTC)[reply]

How does "very unlikely" suit you? While it's going to be more likely than getting it straight away, it will still be very unlikely. We can find a lower bound by finding the chance of getting a royal flush when dealt 10 cards (since that's the most cards that can be involved in one hand of 5-card draw, so you can only get a royal flush if there is one somewhere in those 10 cards). That's an easier calculation. There are 4 choices of Ace, and once that's decided, the next 4 cards are determined, you can then have anything as the remaining 5, and all of those can appear in any order. So that's (4*1*1*1*1*(10!/5!)*47*46*45*44*43)/(52!/(52-10)!)=0.00039, about 1 in 2578. So, we know the odds are somewhere between about 650,000 to 1 and 2500 to 1. Probably closer to the higher number, but it depends on your strategy. --Tango (talk) 17:43, 2 April 2008 (UTC)[reply]
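That lower bound is quick to confirm with a direct combination count (a sketch; like the calculation above it ignores the two jokers, and the small inclusion-exclusion term covers hands that happen to contain two complete royal flushes at once):

 from math import comb

 # 10-card hands containing a given royal flush: C(47, 5); there are 4 royal flushes,
 # and C(4, 2) = 6 hands consist of two complete royal flushes, counted twice.
 p = (4 * comb(47, 5) - comb(4, 2)) / comb(52, 10)
 print(p, 1 / p)   # about 0.00039, i.e. roughly 1 in 2600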
If you have four 8's and a 9, will you throw all 5 cards away? If you don't, the probability of a royal flush is 0. If you do, you increase that probability. Someone who's trying for a royal flush at all costs will have a slightly better chance of making one than someone who's using a sensible strategy. --tcsetattr (talk / contribs) 20:47, 2 April 2008 (UTC)[reply]
Exactly. The strategy I described above should maximise your chances of getting a royal flush, but it's a pretty stupid strategy in a real game. Trying to calculate the odds for a realistic strategy is pretty much impossible - you would have to define that strategy precisely, and that's going to be very complicated. --Tango (talk) 21:04, 2 April 2008 (UTC)[reply]
When you say "complicated," how complicated do you mean? I would like to, even if it takes me a couple weeks to memorize, know how to get a royal flush. I could pwn some n00bs in online poker... flaminglawyerc 21:20, 2 April 2008 (UTC)[reply]
If you're going to play draw poker by whatever strategy maximizes your probability of getting a natural royal flush, let me just say that I'd love to have you in my game :-) --Trovatore (talk) 22:42, 2 April 2008 (UTC)[reply]
I said "realistic strategy", I didn't say "strategy that will always get a royal flush". Such a strategy is obviously impossible. --Tango (talk) 00:37, 3 April 2008 (UTC)[reply]
It was a joke. The strategy is not very realistic for someone who does not want to lose most of the time.  --Lambiam 11:36, 3 April 2008 (UTC)[reply]

The original poster only asked for an estimate of the odds. One thing that matters but wasn't specified is how many cards you are allowed to replace on the draw. To start with I'll assume that at most 3 can be replaced. (Even if you allow more, what you really want is the odds of getting a natural royal flush after drawing no more than 3 cards, right? Because that's a memorable aspect of the event.)

If the original hand had a pair or any higher combination, or any wild card, you'd keep it and therefore would be unable to improve your hand to a royal flush on the draw. I'll assume that in the absence of such a combination you would keep any holding that might be part of a royal flush, and discard anything else. (This is slightly wrong because you might keep a partial flush or partial straight; and also if your hand has two such combinations, like A-K of spades and Q-J of hearts, you might keep the wrong one — you keep A-K of spades only to draw A-K-10 of hearts — but these cases are pretty unlikely as long as you have to keep two cards, and won't affect the estimate much.)

Based on these assumptions we want the probability that 8 specific cards from the deck will include a natural royal flush, but the first 5 of them will not include a pair, straight, flush, or joker. I will make a further simplification and say that any 5 of them must not include a pair, straight, flush, or joker — since the cases where you will draw 3 cards are much more common than the cases where you will draw 2 or less, this won't affect the odds much. (That is, say the first 5 cards are two clubs and A-K-Q of spades; you keep the spades and draw J-10 of spades. The 8th card is the king of hearts. You'll never see it, but my simplification treats this case the same as if you got the king of hearts in the first five cards in place of one of the clubs and therefore would never draw enough cards to fill the straight flush.)

Okay, now, there are 54C8 = 1,040,465,790 combinations possible for 8 cards. Of these how many are suitable? First, there are 4 possible royal flushes, and for each one there are 47C3 = 16,215 combinations of other cards none of which are jokers. The first other card has a probability of about 15/47 that it will pair with a card in the royal flush, so 32/47 that it won't. Since we're only doing a rough estimate, we can pretend the probability is the same for the second and third other cards, and say that (32/47)³ of the cases are suitable. The case where the other three cards complete a natural flush has only 8C3 = 56 possibilities and can practically be ignored; for a straight it is not much higher, and I'll ignore that case too. So my estimate of the probability is (4 × 16,215 × (32/47)³) / 1,040,465,790 or say 1/50,000. Or in the form of odds, 50,000 to 1.

E&OE and HTH. :-)

--Anonymous, 00:27 UTC, April 5, 2008.
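For anyone who would rather simulate than count, the go-for-broke strategy discussed above is easy to code up (a rough sketch; two jokers in the deck, at most three cards drawn, jokers never kept since we are after a natural royal flush; because the event is rare you need a lot of trials for a stable estimate):

 import random

 RANKS = list(range(2, 15))            # 11 = J, 12 = Q, 13 = K, 14 = A
 SUITS = 'shdc'
 DECK = [(r, s) for r in RANKS for s in SUITS] + [('joker', 1), ('joker', 2)]
 ROYAL = set(range(10, 15))

 def natural_royal_after_draw(max_draw=3):
     deck = DECK[:]
     random.shuffle(deck)
     hand, rest = deck[:5], deck[5:]
     # Keep every 10-through-A card of whichever suit gives the most of them.
     best = max(SUITS, key=lambda s: sum(1 for r, t in hand if t == s and r in ROYAL))
     kept = [(r, t) for r, t in hand if t == best and r in ROYAL]
     drawn = rest[:min(5 - len(kept), max_draw)]
     final = kept + drawn
     return {r for r, t in final if t == best and r in ROYAL} == ROYAL

 trials = 2_000_000
 hits = sum(natural_royal_after_draw() for _ in range(trials))
 print(hits, hits / trials)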

statistics

Hi, I am a teacher trying to determine the validity of test questions. I understand what validity is. My question is what percent of content must be measured in order to make a test valid. For example, if there are 100 vocabulary words and a test has only one question about them, then the test could not be accurately measuring knowledge. If there are ten questions about them, does that 10% represent enough of a statistical population to ensure the validity of the testing device? Is there a minimum percentage needed to do so? Thank you. 206.219.72.83 (talk) 15:20, 2 April 2008 (UTC) I am not sure what symbols to show?[reply]

Do you really mean validity? I believe you mean precision and/or accuracy. Accuracy is not the issue if you picked the question(s) randomly: even with one question, if I knew (say) 80 of the 100 answers, the average proportion of right answers would be 0.80, as I would get it right 80% of the time, and wrong 20% of the time. But this alludes to the real problem: precision. For one question, your assessment of me will be off by at least 20% all of the time. But if you quantify that (lack of) precision correctly, your results can still be valid, even if they are not useful.
You may wish to look at the binomial distribution article for a followup. Baccyak4H (Yak!) 18:00, 2 April 2008 (UTC)[reply]
I interpreted it as a question about statistical significance. See sample size - that article should get you started, at least. --Tango (talk) 18:46, 2 April 2008 (UTC)[reply]
Assuming all questions are equally difficult, and the student scores P correct answers out of a total of N randomly selected questions, the best estimate for the fraction of items known to the student in the total universe of possible questions is P/N. However, this is only an estimate. Let c be a measure of confidence we want to achieve, say c = 0.95. If the true fraction of items known to the student is x, the probability of observing P or more correct answers out of N is:
∑_{k=P}^{N} C(N,k) x^k (1 − x)^(N−k)
As x gets smaller, this probability gets smaller, but once it goes below (1 − c)/2 the observed score has become unlikely, which means that such a small x is implausible. Define xlow as the value of x for which the probability equals (1 − c)/2.
Likewise, the probability of observing P or fewer correct answers out of N is:
∑_{k=0}^{P} C(N,k) x^k (1 − x)^(N−k)
As x gets larger, this gets smaller, and eventually 0 (unless P = N). Define xhigh as the value of x for which this probability equals (1 − c)/2. With some confidence we can say that x is in the range from xlow to xhigh. This depends on N, P, and c (but not on the size of the universe of questions). Given fixed N and c, the worst case (the widest range) is when P is about one half of N.
If N is fairly large and P = N/2, we may approximate the two probabilities above by
Φ((x − 1/2)/σ)
and
Φ((1/2 − x)/σ)
where
σ = √(x(1 − x)/N)
and Φ is the cumulative distribution function of the standard normal distribution.
How large N should be depends on how high c must be, and how narrow the range [xlow, xhigh] is required to be.
Mathematically this is not easy to handle, but since the critical area is around x = 1/2, we can slightly simplify things by just putting
σ = 1/(2√N).
A 2-sided confidence interval for c = 0.95 means that xhigh − xlow is about 4σ. Suppose that we require this width to be at most 1/10. Then the simplified formula for σ tells us that N must be at least 400.  --Lambiam 12:37, 3 April 2008 (UTC)[reply]
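The interval [xlow, xhigh] can also be computed exactly from the binomial tails, without the normal approximation; a sketch in Python (bisection on x, relying on the tails being monotone in x):

 from math import comb

 def tail_ge(P, N, x):
     """Probability of P or more correct answers out of N if the true fraction is x."""
     return sum(comb(N, k) * x**k * (1 - x)**(N - k) for k in range(P, N + 1))

 def tail_le(P, N, x):
     """Probability of P or fewer correct answers out of N if the true fraction is x."""
     return sum(comb(N, k) * x**k * (1 - x)**(N - k) for k in range(0, P + 1))

 def solve(f, target, lo=0.0, hi=1.0, steps=60):
     """Find x with f(x) = target, assuming f is increasing on [lo, hi]."""
     for _ in range(steps):
         mid = (lo + hi) / 2
         if f(mid) < target:
             lo = mid
         else:
             hi = mid
     return (lo + hi) / 2

 def interval(P, N, c=0.95):
     a = (1 - c) / 2
     x_low = solve(lambda x: tail_ge(P, N, x), a)        # tail_ge grows with x
     x_high = solve(lambda x: -tail_le(P, N, x), -a)     # tail_le shrinks with x
     return x_low, x_high

 print(interval(8, 10))     # interval for a student who scores 8 out of 10
 print(interval(200, 400))  # width about 0.1, matching the N >= 400 rule of thumb above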
One other factor to consider is that you must give students enough time to complete the test (even the slowest students). It always seemed to me that tests in school put way too much emphasis on doing things quickly and not nearly enough emphasis on doing them correctly. In the real world, getting the correct answer 70% of the time is not a "passing grade". On the other hand, in the real world you're often expected to double check your work, and timed tests don't seem to encourage this. StuRat (talk) 18:48, 4 April 2008 (UTC)[reply]

Afraid of sets

Since I realised that not every carelessly defined set is acceptable, I have been afraid that I will break something, and I don't know enough to understand the relevant axiom in ZFC (if those are even the axioms to use!). When I can figure out a bijection between my proposed set and something I know is a set, it's OK I guess, but how about "the set of all functions from X to Y", for two sets X and Y? What are your suggestions for figuring out what is an acceptable set and what is not? Thanks. —Bromskloss (talk) 16:39, 2 April 2008 (UTC)[reply]

ZFC is as good an axiomatization of set theory as any other, and I think the axioms guaranteeing the possibility to construct certain sets are easy enough to understand. Basically there are a few legal operations you can perform on sets, such as power sets, subsets, binary unions and Cartesian products. The collection of functions from X to Y is a subset of the power set 𝒫(X × Y) and thus a set. -- Meni Rosenfeld (talk) 16:57, 2 April 2008 (UTC)[reply]
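In more detail: every ordered pair (x, y) = {{x}, {x, y}} lies in 𝒫(𝒫(X ∪ Y)), so X × Y is a set by the union, power set and subset (separation) axioms; one more application of power set gives 𝒫(X × Y); and separation then carves out {f ∈ 𝒫(X × Y) : f is a function from X to Y}, which is the set of functions in question.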
Ah, a subset of 𝒫(X × Y). Of course! Thanks. Speaking of ZFC being as good as any other, what difference does the choice of axioms make? Is ZFC the one everyone is using today? With other axioms, would the resulting mathematics be similar at all? Would it still work for physics? Bromskloss (talk) 17:07, 2 April 2008 (UTC)[reply]
The choice of axioms can make a lot of difference - they basically define what a set is. You define a set differently and it will behave differently. ZFC is pretty widespread these days. There are a few people working on alternative axioms (eg New Foundations). The axiom of choice (the C part) isn't always used (it's independent of ZF, that is, all the other axioms, so you can choose to include it or not - it usually is, though, in my experience). The things you might think are sets which actually aren't are called proper classes - they're usually very big (for an appropriate and vague definition of "big", at least) collections, like "all sets". A set of functions between two sets is quite small by comparison, and is a perfectly fine set. Most things you'll come across work perfectly well as sets - just stay away from anything like "the set of all sets", and you should be fine. --Tango (talk) 17:23, 2 April 2008 (UTC)[reply]
In practice I think a lot of working mathematicians use NBG. It's lighter than ZFC, and the set/class distinction is good enough for pretty much anything non set-theorists are ever going to do. If you're not doing set theory you just need to be able to steer clear of the most glaring pitfalls/paradoxes, and sets and classes, or a fixed universe of sets, or any similar light-weight type theory is good enough for that. -- Leland McInnes (talk) 18:05, 2 April 2008 (UTC)[reply]
In practice, working mathematicians don't use a formal axiomatic system at all. They work as Platonists, whether they claim to be or not. --Trovatore (talk) 18:09, 2 April 2008 (UTC)[reply]
A fair call - ultimately I simply meant that the sets vs. classes distinction that NBG uses is the only point that I've seen crop up in non-set-theorist mathematics with regard to set-theoretic paradoxes. Thus working mathematicians take NBG's sets and classes and pretty much ignore the rest of axiomatic set theory.
As a side note, I'm more of a structuralist and never could entirely understand the platonists. I would prefer and welcome a category or topos theoretic foundation. -- Leland McInnes (talk) 01:13, 3 April 2008 (UTC)[reply]
I was using "Platonist" as an (admittedly imprecise) shorthand for "realist", which certainly includes structuralists -- in fact structuralism is the main branch of realism in contemporary foundational philosophy, in the sense of considering isomorphic structures to be the same, except when you need to distinguish between the way they're embedded in some larger structure. Note that structuralism is already a sufficiently strong form of realism to guarantee that, for example, the continuum hypothesis has a well-defined truth value. --Trovatore (talk) 02:39, 3 April 2008 (UTC)[reply]
That depends on the school of structuralism in question. The realist structuralists in mathematical philosophy, while hurdling the representation issue, still put themselves in what seems to me to be an untenable position. Count me in the same school as Bell, Mac Lane and McLarty I guess. -- Leland McInnes (talk) 12:29, 3 April 2008 (UTC)[reply]
Not directly, sure, but working mathematicians build on what's gone before, and that will be built on particular axioms (perhaps not historically, but someone will have formalised it at some point). Generally, mathematicians use whatever axioms are needed to do what they're trying to do (eg. if you need AC to prove your theorem, then you assume AC, otherwise you ignore it completely). The various (common) choices of axioms don't usually contradict each other in the kind of work most mathematicians do, so people don't generally worry about it. --Tango (talk) 18:36, 2 April 2008 (UTC)[reply]
"Someone will have formalized it at some point". Yes. But that does not mean that it is "built on axioms". The axioms are extracted from the mathematics, not the other way around. Oh, it's true that there's an interplay between them; the picture of the von Neumann hierarchy did not really come clear until after Zermelo's formalization, though in retrospect we can see that it was inherent in Cantor's work all along. --Trovatore (talk) 18:40, 2 April 2008 (UTC)[reply]
But until something is formalised, you can't be sure it's right (even then, Godel gets in the way, but let's not complicate things any further). While a lot of maths was (and, I guess, still is) invented/discovered without a formal axiomatic approach, it's only since it's been formalised that we've been able to be sure we're not talking nonsense (for example, see Russell's Paradox). So, while the maths itself may not be built on axioms, the proof that the maths is correct is. Whether you choose to distinguish between the two is a matter of definition of the word "built", I suppose. --Tango (talk) 18:54, 2 April 2008 (UTC)[reply]
You can never be "sure" it's right. Only "pretty sure". Essentially mathematics is an empirical science, subject to the limitations of human knowledge like any other; the differences are quantitative rather than qualitative. --Trovatore (talk) 19:02, 2 April 2008 (UTC)[reply]
Are you referring to Godel? If so, I already dismissed him as an unnecessary complication to this discussion. If not, then I disagree. By my understanding, assuming you're working in a consistent and complete framework, a mathematical proof is absolute. There is no need for any kind of experimentation. --Tango (talk) 19:18, 2 April 2008 (UTC)[reply]
Well, I could point out that it's a bit cavalier to dismiss as "an unnecessary complication" a proof that the thing you're assuming (depending, I guess, on what precisely you mean by a "framework") doesn't exist. But in a sense you're right; it is an unnecessary complication, because the error of Euclidean foundationalism should have been clear even in Euclid's time. It's an infinite regress -- you claim that your results are not subject to error because they are derived from your alleged foundation, but how do you know that the foundation itself, or your method of derivation, is not subject to error? --Trovatore (talk) 19:51, 2 April 2008 (UTC)[reply]
It's extremely cavalier, but you can't get bogged down in the details in every discussion. Our axioms are as good as it's possible to get them, so most of the time, it's easiest just to assume they are right. --Tango (talk) 20:55, 2 April 2008 (UTC)[reply]
That's fine by itself. But it's not very convincing as part of an argument that formalization is some sort of absolute defense against error. My position is that the objects of study (natural numbers, real numbers, functions, sets) are the primary notion, and the axioms that describe them are the subordinate concept. You don't have to agree with that, but making the axioms primary on the grounds of certainty is not going to work. --Trovatore (talk) 21:15, 2 April 2008 (UTC)[reply]
How certain it is isn't really relevant. Without a formal axiomatic approach, you have very little certainty at all. You can't really prove anything about natural numbers until you know what a natural number is - the naive approach of "they're the numbers you count with - a 2 year old child could do it!" is just as dangerous as naive set theory and could well lead to equally paradoxical results. While we can't be completely certain we've got it right, what certainty we can have comes from the axiomatic approach. You can do an awful lot of maths without worrying about rigour, but there's always the chance you'll trip up like Russell did. An axiomatic approach doesn't eliminate that danger, but it gets pretty close. --Tango (talk) 23:44, 2 April 2008 (UTC)[reply]
No, the axiomatic approach is pretty much irrelevant to the question of eliminating the danger. Note that Russell's paradox was discovered in a completely formalized version of set theory, due to Frege, who thought he was formalizing informal Cantorian set theory but may well have been wrong about that (Wang Hao thought so -- it may depend on which moment in Cantorian thought you choose to look at). At best, axiomatics provides a way of, as it were, "localizing" the risk, saying, "if there's an inconsistency, it should show up here". When an apparent inconsistency shows up in informal argument, that's a useful thing to have. But confidence in the correctness of the results is fundamentally not about axiomatics. --Trovatore (talk) 00:04, 3 April 2008 (UTC)[reply]
Oh, I should say: irrelevant, except insofar as the formalization helps to find the errors. That is arguably what happened in the Frege-Russell case -- when Frege formalized his notion of set (which he thought was the same as Cantor's but he was probably wrong about that), he made it easier for Russell to find the error in the concept. This is a useful aspect of formalization. But the fact that a deductive system is formal, by itself, does not provide any extra confidence whatsoever in its results. --Trovatore (talk) 02:19, 3 April 2008 (UTC)[reply]
Certainly, a formal system can be just as wrong as any other system. However, it is possible to prove the consistency of a formal system (you need to use a different system to do it, of course, so it's far from perfect), it isn't possible to do so for an informal system. It's all just moving the risk around, as you say, but that can be quite handy. --Tango (talk) 12:43, 3 April 2008 (UTC)[reply]
It's an interesting thing to do for lots of reasons, but increasing confidence in the results is not really one of them, because to prove the consistency of a formal system you need to use a stronger theory (one even more "likely" to be wrong). Tango, I really think you have the wrong idea about formal systems; you've let yourself be taken in by the errors of the formalists. Formal systems are more an object of study than they are a tool for deriving conclusions. --Trovatore (talk) 15:56, 3 April 2008 (UTC)[reply]

Are the ZFC axioms sufficient to rule out the possibility of a self-referential set, i.e., a set that contains itself as an element? -GTBacchus(talk) 17:50, 2 April 2008 (UTC)[reply]

Yes. It follows from the axiom of foundation that there is no such set. --Trovatore (talk) 17:58, 2 April 2008 (UTC)[reply]
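Spelled out: if some set x satisfied x ∈ x, apply foundation to {x}. It must have an element disjoint from {x}, and its only element is x, so x ∩ {x} = ∅. But x ∈ x and x ∈ {x}, so x lies in x ∩ {x}, a contradiction. Hence no set is an element of itself.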

From the discussion above, I seem to be able to extract that the most commonly discussed axiomatic set theories will eventually lead to the same mathematics, once you get past the most fundamental levels. I.e., most concepts in one theory would have its counterpart in another. They would all be useful for physics. Is that correctly understood? Is it possible that some completely different theory would lead to useful mathematics that is out of reach of what we are using now? —Bromskloss (talk) 19:43, 2 April 2008 (UTC)[reply]

Yeah, I think all the common axiomatic systems would agree on anything used in physics (as long as you can construct the real numbers in all of them, you're pretty much sorted, I would think). As for your last question - probably depends on what you mean by "useful". Different choices of axioms will certainly lead to new and interesting maths. Whether it will have any uses, it's hard to say, but people seem to be able to find real world uses for the strangest of mathematical concepts (which are generally created because a mathematician was curious as to what would happen, so thought "why not find out?", and just gave it a go), so I think there's a very good chance someone will make use of any maths we can come up with. --Tango (talk) 20:55, 2 April 2008 (UTC)[reply]

Open loop transfer function

In my syllabus we've got the study of control systems, and it says that the open-loop transfer function for a normal feedback system is G(s)*H(s), even if it is not a unity feedback system. I don't understand how that makes sense - open loop means that H(s) is disconnected from the system, yet how is it being considered? All my textbooks use G(s)*H(s), and I asked my professor; he could not come up with a satisfactory answer.

The example control system which I'm talking about is this - http://classes.engr.arizona.edu/ame558/public_html/topic1/bd2.gif That, incidentally, seems to be the "standard" control system - almost all formulae and theorems are derived keeping that control system in mind.

Thanks! --RohanDhruva (talk) 19:50, 2 April 2008 (UTC)[reply]

"Open-loop", in this case, does not refer to the situation where you control your system without a feedback. Instead, you should unplug the feedback signal from the circle (which calculates the difference) and consider the signal path from the output of that circle, through G(s) and H(s), to the unplugged end. That gives G(s)H(s) (or GH, as it seems to be written in the picture). They actually define it just like that, as G(s)H(s), on one of the accompanying pages. —Bromskloss (talk) 20:37, 2 April 2008 (UTC)[reply]
OK, if you did send a signal in after unplugging the feedback from the difference circle, you would be controlling the system without a feedback, but that's not the point. —Bromskloss (talk) 20:47, 2 April 2008 (UTC)[reply]
Thanks Bromskloss. I still don't understand the "logic" behind it being defined as G(s)H(s). In open loop, you mean to say, the feedback signal is not connected to a summing point; instead, it acts as a point to take the output from? If so, what happens to the actual output signal? The wikipedia page for Closed loop pole defines "Open loop transfer function" as -- The open-loop transfer function is equal to the product of all transfer function blocks in the forward path in the block diagram. Would that clarify anything? --RohanDhruva (talk) 02:10, 3 April 2008 (UTC)[reply]
I agree with you that it does not seem obvious that G(s)H(s) would be of any interest, but I think it can be useful when studying the properties of the whole feedback system. I don't remember exactly how, though. In the meantime, the output signal just sits there. —Bromskloss (talk) 07:03, 3 April 2008 (UTC)[reply]
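One way to see why the product keeps appearing even though the loop is closed: the closed-loop transfer function of the standard diagram is G/(1 + GH), so G(s)H(s) is exactly what sits in the denominator and what the stability tests get applied to. A small sketch with made-up example blocks:

 import sympy as sp

 s = sp.symbols('s')
 G = 1 / (s * (s + 2))     # made-up forward-path transfer function
 H = 1 / (s + 5)           # made-up feedback transfer function

 open_loop = sp.simplify(G * H)
 closed_loop = sp.simplify(G / (1 + G * H))   # output/reference for the standard diagram

 print(open_loop)
 print(closed_loop)
 # The closed-loop poles are the roots of 1 + G(s)H(s) = 0, which is why the
 # "open-loop" product matters even when the loop is closed.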

Dilemna!

I don't know how to write this using <math> tags, but I'll do my best to explain it. If you can, please write it in those "magic words."

The problem is:

There's a magic thing in algebra. Pick any positive whole number. Square it. Add [twice the original number]. Add 1. Now find the [square root] of that number. Subtract the original number, and the answer is 1.

Now that's fine and dandy. Even the reasoning isn't too hard.

√(x² + 2x + 1) − x = 1

Find quadratic roots...

√((x + 1)²) − x = 1

Meaning...

(x + 1) − x = 1

Meaning...

1 = 1

That's fine and dandy. But if you don't find the quadratic roots - ah! A dilemma. See:

√(x² + 2x + 1) − x = 1

We have to square every term in order to get rid of the [square root sign]. So:

x² + 2x + 1 − x² = 1

Take out the zero pairs.

2x + 1 = 1

Subtract...

2x = 0, so x = 0

But alas! The number you are told to pick has to be a positive whole number. And zero isn't positive (or negative). So it shouldn't work. But try as hard as you like; as long as x is a positive whole number, it works. Can someone please explain this? flaminglawyerc 20:50, 2 April 2008 (UTC)[reply]

It's just an algebraic error: (√(x² + 2x + 1) − x)² is not the same as (x² + 2x + 1) − x². --Tango (talk) 21:01, 2 April 2008 (UTC)[reply]
(Edit conflict.) (Fixed <math> notation.) You've made an error going from √(x² + 2x + 1) − x = 1 to x² + 2x + 1 − x² = 1. Note that (√(x² + 2x + 1) − x)² does not simplify to x² + 2x + 1 − x². —Bromskloss (talk) 21:04, 2 April 2008 (UTC)[reply]
(edit conflict squared) "We have to square every term in order to get rid of the [square root sign]." Ummmm, no, it doesn't work that way ... Go ahead and add x to both sides in place of that step, then square both sides (and most decidedly not "every term"). --LarryMac | Talk 21:08, 2 April 2008 (UTC)[reply]

Oh, so you would square the sides. I thought it was every term. I get it now. (it was kind of new territory for me, since I am only taking Algebra 1) flaminglawyerc 21:13, 2 April 2008 (UTC)[reply]
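Written out, the route LarryMac describes is:

 √(x² + 2x + 1) − x = 1
 √(x² + 2x + 1) = x + 1      (add x to both sides)
 x² + 2x + 1 = (x + 1)²      (square both sides; fine, since x + 1 > 0)

The last line is an identity, true for every positive whole number x, so no restriction on x appears and the trick works exactly as advertised.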

can u plz solve this?

(x+1)^10 —Preceding unsigned comment added by 64.229.54.4 (talk) 21:35, 2 April 2008 (UTC)[reply]

It's equal to (x+1)(x+1)(x+1)(x+1)(x+1)(x+1)(x+1)(x+1)(x+1)(x+1) --Carnildo (talk) 22:39, 2 April 2008 (UTC)[reply]
Check out Binomial expansion. And that can't be solved, because it isn't an equation, but it can, like I had linked, be expanded. Confusing Manifestation(Say hi!) 22:49, 2 April 2008 (UTC)[reply]


Thanks!!... it was easy:P...thanks! —Preceding unsigned comment added by 64.229.54.4 (talk) 23:24, 2 April 2008 (UTC)[reply]

(x+1)^10 = x^10 + 10x^9 + 45x^8 + 120x^7 + 210x^6 + 252x^5 + 210x^4 + 120x^3 + 45x^2 + 10x + 1 --wj32 t/c 06:14, 3 April 2008 (UTC)[reply]
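If you want a machine to do the bookkeeping, a computer algebra system expands it directly; a sketch:

 import sympy as sp
 from math import comb

 x = sp.symbols('x')
 print(sp.expand((x + 1)**10))
 print([comb(10, k) for k in range(11)])   # the coefficients are the binomial coefficients C(10, k)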