Wikipedia:Reference desk/Archives/Mathematics/2012 February 19

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 19

Geometric algebra question: degrees of freedom to a k-blade

I am well aware that for a geometric algebra over the reals, the number of (real) degrees of freedom of a k-vector is given by the appropriate binomial coefficient, i.e. n choose 2 is the number of real numbers required to specify a general 2-vector in a geometric algebra over an n-dimensional real space. However, the number of real numbers required to specify a 2-blade is clearly often less than this: every pair of vectors can be specified by 2n real numbers, and any 2-blade can be expressed as the exterior product of 2 vectors, so a 2-blade cannot represent more degrees of freedom than that, yet the space of 2-vectors is larger for n > 5. So, how many degrees of freedom are required to specify a general k-blade?--Leon (talk) 14:55, 19 February 2012 (UTC)

For a general k-vector, it's n choose k (see exterior algebra). The subvariety of k-blades is (projectively) the Grassmannian of k-dimensional subspaces of the n-dimensional space and has dimension k(n-k) (projectively) or k(n-k)+1 (nonprojectively). Sławomir Biały (talk) 17:11, 19 February 2012 (UTC)
The number of degrees of freedom for picking 2 vectors would be n^2, not 2n, so it's not less than the binomial coefficient. This corresponds (sort of) to the dimension of the tensor space being n^2; the space of 2-vectors is a quotient of this.--RDBury (talk) 21:22, 19 February 2012 (UTC)
But the question isn't how many degrees of freedom are involved in picking two vectors. A 2-blade is the wedge product of two vectors (not the tensor product). The question is equivalent to asking how many degrees of freedom there are in picking two-dimensional subspaces (projectively; add one for a scale). In R^3, for instance, this is equal to 2+1: a two-dimensional subspace is determined by its unit normal, plus one degree of freedom for scaling. Similarly, a k-blade is the wedge product of k vectors. These are in one-to-one correspondence with a trivial line bundle over a Grassmannian. See exterior algebra for a discussion. Sławomir Biały (talk) 02:06, 20 February 2012 (UTC)
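To make the two counts concrete, here is a quick Python sketch (an illustration, not part of the original discussion) comparing the dimension n choose k of the space of all k-vectors with the k(n-k)+1 degrees of freedom of the k-blades inside it:

```python
from math import comb

# Dimension of the space of k-vectors vs. degrees of freedom of k-blades.
for n in range(2, 7):
    for k in range(1, n):
        full = comb(n, k)         # dim of the k-vectors over R^n
        blades = k * (n - k) + 1  # Grassmannian dimension, plus one for scale
        print(f"n={n} k={k}: k-vectors {full}, k-blades {blades}")

# For k = 1 (and likewise k = n-1) every k-vector is a blade, and the
# two counts agree.
assert comb(5, 1) == 1 * (5 - 1) + 1
# The first strict gap: n = 4, k = 2 gives 6 components but only 5
# degrees of freedom.
assert comb(4, 2) == 6 and 2 * (4 - 2) + 1 == 5
```

The n = 4, k = 2 entry is exactly the smallest case where not every 2-vector is a 2-blade.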
Sławomir, I think this is a pretty useful result in general, since blades serve a significant role in GA. Do you know of a suitable reference, so that it could be added perhaps to Geometric_algebra#Representation_of_subspaces or some similar section? — Quondum 06:17, 20 February 2012 (UTC)
I don't know of any references that focus on geometric algebra, but for Grassmannians (including the result about dimension) a good book is: Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-05059-9, MR 1288523. The proof of the dimensionality is actually straightforward. Take k vectors, wedge them together as v_1 ∧ v_2 ∧ ... ∧ v_k, and perform elementary column operations on them (factoring the pivots out) until the top k × k block consists of elementary basis vectors of R^k. The wedge product is then parametrized by the product of the pivots and the lower (n − k) × k block. Sławomir Biały (talk) 11:29, 20 February 2012 (UTC)
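As a numeric check of the smallest interesting case (my own sketch, not from the discussion): a 2-blade v ∧ w in R^4 has 6 components, but they always satisfy the Plücker relation, which cuts the count down to 2(4 − 2) + 1 = 5 degrees of freedom:

```python
import random

def wedge2(v, w):
    """Components p[(i, j)] = v[i]*w[j] - v[j]*w[i] of the 2-blade v ∧ w."""
    n = len(v)
    return {(i, j): v[i] * w[j] - v[j] * w[i]
            for i in range(n) for j in range(i + 1, n)}

random.seed(0)
v = [random.uniform(-1, 1) for _ in range(4)]
w = [random.uniform(-1, 1) for _ in range(4)]
p = wedge2(v, w)
assert len(p) == 6   # 4 choose 2 components

# Plücker relation: satisfied by every 2-blade in R^4, so the six
# components carry only 2*(4-2)+1 = 5 degrees of freedom.
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
assert abs(rel) < 1e-12
```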
Thanks for the detail; I've taken the liberty of copying the reference and your explanation into Blade (geometry) in the interim. Later, we could copy it to the Geometric algebra article. — Quondum 13:59, 20 February 2012 (UTC)
It looked to me like there were two questions: first, how many degrees of freedom are there in picking a blade; and second, how can this number be greater than 2n if that's the degrees of freedom in picking the vectors individually. I was answering the second question.--RDBury (talk) 09:17, 20 February 2012 (UTC)
There was one question, and I (think I) knew that a k-blade could not have dimension greater than kn, for an underlying basis consisting of n basis vectors. Anyway, thank you, Slawomir, you have answered my question completely! As for adding the result to the geometric algebra article, I have several textbooks on the matter that don't answer the question (otherwise I wouldn't have had to ask), and given that these are fairly "standard" textbooks on GA, I must conclude that it may be difficult to find a reference on GA specifically that is suitable. Is there a name for this result? If so, I can have a glance through Google Scholar to find a citation.--Leon (talk) 12:04, 20 February 2012 (UTC)
I didn't really understand the last part of the question, and it looked like you were replying to me instead of the original poster. (The reply did seem strange to me.) Sorry for my confusion. Sławomir Biały (talk) 11:29, 20 February 2012 (UTC)
No problem, confusion often abounds with text-based dialog, and I was a bit confused myself.--RDBury (talk) 13:41, 20 February 2012 (UTC)

describe conditions that would allow you to martingale

Here is an example of a system that allows you to martingale. Let's simplify roulette to a game of red, black, or green, with red and black each having an equal and nearly 50% chance of occurring (e.g. 48% each), and green being the house's edge (e.g. 4%). Green pays off only slightly better than red or black (4x, 10x, whatever), and red or black pay off at an "even" rate (ten dollars gets you twenty plus change, whatever makes it an even bet at a 48% chance of winning, and zero if you lose).


A casino feels that roulette players are put off by excessive strings of reds or blacks, so it tweaks its random number generator to work thusly: rather than picking each red/black/green outcome in succession based on a fixed probability, it randomly allots 12 red tokens, 12 black tokens, and 1 green token into a block of 25, doles out those 25 results in succession, then repeats the random allotment. This has the advantage that, for example, nobody could bet green 50 times in succession without a payoff, nor would it be common for someone joining the table to observe that out of 20 bets on red, only three or four pay off. These would otherwise be common occurrences.
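The scheme described above is easy to simulate; here is a Python sketch (an illustration; the token counts are the ones assumed in the post):

```python
import random

def block_roulette(n_blocks, seed=None):
    """Deal outcomes in shuffled blocks of 12 red, 12 black, 1 green."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_blocks):
        block = ['R'] * 12 + ['B'] * 12 + ['G']
        rng.shuffle(block)
        outcomes.extend(block)
    return outcomes

def longest_run(seq, symbol):
    """Length of the longest run of `symbol` in `seq`."""
    best = cur = 0
    for s in seq:
        cur = cur + 1 if s == symbol else 0
        best = max(best, cur)
    return best

spins = block_roulette(1000, seed=1)
# A run of reds (or blacks) can never exceed 24: at most 12 at the end
# of one block plus 12 at the start of the next.
assert longest_run(spins, 'R') <= 24 and longest_run(spins, 'B') <= 24
assert spins.count('G') == 1000   # exactly one green per block of 25
```

In a genuinely independent sequence, runs of 25+ equal colors would eventually occur; here they are impossible, which is exactly the "memory" exploited below.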

Naturally, although this "seems" more random to observers or players, it is in fact less random. This is so because, rather than being a random walk, the system in some sense has memory.

It is hard to game, as it is difficult to know where the block boundaries are. To really have confidence in a bet on red, you would have to join and then observe 24 blacks in a row: 12 at the end of one block and 12 at the beginning of the next. Then you could be sure the halfway point of that run was a block boundary, and you would know that the rest of the current block consists of reds and greens. You can wait for the green to pass if you want a bet on red to pay off with certainty.

All right, then: it's a bit difficult to game, but nevertheless eminently possible. The reason is obvious: given sufficient observation, and given the nature of the system, there are points at which a bet has positive expectation.

Now suppose that some stock on the stock market were literally cyclic (like a sine wave) and you had some guarantee of this. Naturally, if you have a guarantee that the price will reach 0 again, any bet placed below zero has a positive expectation (you can just hold the stock until it gets back to 0). Of course, the time value of money may be such that this is not a worthwhile investment. So we must consider stocks guaranteed to follow sine waves, and guaranteed to do so with high frequency. For best results, with high amplitude as well, though of course you can repeat smaller bets by buying low and selling high.

Thus, were any stock to follow a high-frequency sine wave, we would be guaranteed to make money by placing bets with positive expectation.
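The buy-low, sell-high argument can be sketched directly (a toy illustration with made-up numbers: midline 100, amplitude 10, period 20 ticks):

```python
import math

# A hypothetical, perfectly sinusoidal price around a midline of 100.
def price(t, amplitude=10.0, period=20.0, mid=100.0):
    return mid + amplitude * math.sin(2 * math.pi * t / period)

# Strategy: buy whenever the price dips below the midline, sell on the
# next return to (or above) the midline. On a guaranteed sine wave,
# every completed round trip closes at a profit.
cash, held, buy_price = 0.0, False, 0.0
for t in range(1, 200):
    p = price(t)
    if not held and p < 100.0:
        held, buy_price = True, p
    elif held and p >= 100.0:
        cash += p - buy_price   # sold at/above the midline, bought below it
        held = False

assert cash > 0   # each closed trade is strictly profitable
```

The point is only that the guarantee makes every such trade a positive-expectation bet; no claim is made about real stocks.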


Returning to games: I would like a generalization, in as general terms as possible, of the necessary and sufficient conditions a system must meet to allow an observer to join and make bets with positive expectation. I gave two systems above: a stock roughly following a sine wave in price, and a casino that doles out roulette results by allotting them in chunks instead of randomly one after the other. I would like a generalization that extends to every system that can be gamed, together with a way to prove or disprove that in such a system an observer can place bets with probability 1 of a positive payoff and probability 0 of a negative payoff; and we are talking about a single bet. (So that someone with this proof who is infinitely risk-averse will still place the bet, provided he or she is a good mathematician and believes the premises or conditions are really a correct model of the system in question.) --80.99.254.208 (talk) 15:14, 19 February 2012 (UTC)

What you're describing is basically the martingale characterization of algorithmic randomness. In this case, the system must not be computably random.--121.74.109.179 (talk) 20:15, 19 February 2012 (UTC)
Thanks for the link; feel free to make my ramblings leading up to it smaller. Could you tell me what it means to be "computably random", and what general strategies exist for deciding this? Could you give me a formalism that I could apply to arbitrary systems to determine whether they are "computably random"?... --80.99.254.208 (talk) 20:32, 19 February 2012 (UTC)


What I mean here is that I would like to be able to describe the properties of a system, and then apply whatever you tell me to decide whether I can ever game it as described. Of course, my assumptions are very important, but so are the tools I'm asking you for in order to be able to follow through on deciding the consequences of those assumptions. Thanks for anything you might have along these lines. --80.99.254.208 (talk) 20:35, 19 February 2012 (UTC)
Unfortunately, no. The definition of computable randomness is basically "can't be gamed". So I'd have a hard time giving you an alternate way to recognize it. A good rule of thumb, though, is to consider the law of large numbers; is every sequence of outputs possible? In the roulette case, 49 blacks isn't possible, so you know right away that the system can be beaten.--121.74.109.179 (talk) 21:16, 19 February 2012 (UTC)
To get back to the hypothetical roulette game: actually, it would be relatively easy to game. The probability that a green follows another green would drop from 1 in 25 to 1 in 625, and it's certain that someone would notice. Knowing this, even without knowing where the boundaries are, you'd just have to wait for a green to appear and bet on a color other than green on the next roll to beat the house advantage. This is a variation of what card counters do in blackjack; in fact it would be much simpler than card counting.--RDBury (talk) 21:42, 19 February 2012 (UTC)
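The collapse of the green-after-green rate is easy to confirm by simulation (an illustration; the 12/12/1 blocks of 25 are the scheme described above):

```python
import random

rng = random.Random(42)
spins = []
for _ in range(40_000):   # deal 40,000 blocks of 25 spins each
    block = ['R'] * 12 + ['B'] * 12 + ['G']
    rng.shuffle(block)
    spins.extend(block)

after_green = [b for a, b in zip(spins, spins[1:]) if a == 'G']
gg_rate = after_green.count('G') / len(after_green)

# Unconditionally, greens are exactly 1/25 = 4% of all spins...
assert abs(spins.count('G') / len(spins) - 1 / 25) < 1e-9
# ...but a green can follow a green only across a block boundary, so
# the conditional rate collapses (to about 1/625 under this scheme).
assert gg_rate < 0.01
```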

followup question

In this hypothetical roulette game, after joining at a random moment, how many turns would you have to wait on average before you were 100% sure of the boundary? 84.2.147.177 (talk) 15:06, 20 February 2012 (UTC)

A number n is a possible boundary if the preceding 25 outcomes contain exactly 12 reds, 12 blacks, and 1 green. The probability of this happening by chance can be computed; see multinomial distribution. It is 25!/(12! 12! 1!) × (12/25)^12 × (12/25)^12 × (1/25) ≈ 0.06. So the first 50 outcomes contain one true boundary and, with about 6% probability, one additional false boundary. The true boundary is reproduced after the next 25 outcomes, while the false one is eliminated with 94% probability. Bo Jacoby (talk) 06:53, 21 February 2012 (UTC).
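The ~6% figure is the multinomial probability 25!/(12! 12! 1!) × (12/25)^24 × (1/25), which can be reproduced directly (a quick check, not part of the discussion):

```python
from math import comb

# Probability that a 25-outcome window shows exactly 12 reds, 12 blacks
# and 1 green, if each slot were drawn independently with
# p_R = p_B = 12/25 and p_G = 1/25.
coeff = comb(25, 12) * comb(13, 12)    # = 25!/(12! * 12! * 1!)
p = coeff * (12 / 25) ** 12 * (12 / 25) ** 12 * (1 / 25)
print(round(p, 4))   # about 0.06
```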
Sorry if I misunderstand you, but I am not interested in confidence levels, only in 100% confidence. Let me give you an example. If you join and the first two results are greens, you have just gained, in two turns, 100% certitude of where the boundary is. (It's in between the two: the first green is at the end of the previous group of 25, and the second is the first of the next group of 25 results.) So this is ONE case where you reach 100% certitude. This case has length 2.
And so whenever you join, there is some length at which you reach 100% certitude of the boundaries.
What is the average of all of these lengths? (In other words, once I have joined, I have an average wait time of x turns before I can become 100% sure of where the boundaries are. In practice it could turn out that I reach certitude after 2 turns, but before I have seen a single result, what is my expected wait for 100% certitude?) --80.99.254.208 (talk) 10:59, 21 February 2012 (UTC)
I do understand your question, but I did not provide the complete answer; I merely provided a step. After two results you know the answer (2) with probability 1/25^2 = 0.0016, so the product 2 × 0.0016 = 0.0032 contributes to the average length. After three results you know the answer (3) with probability 24/25^3 = 0.001536, and the product 3 × 0.001536 = 0.004608 contributes to the average length. The calculations become increasingly complicated, so it is tempting to make some approximations. The probability that you know the answer by n = 25 is very low, and the probability that you know the answer by n = 50 is very high, so the average is somewhere in between. Bo Jacoby (talk) 15:19, 21 February 2012 (UTC).
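The remaining average can be estimated by Monte Carlo (my own sketch, not from the thread). A candidate boundary phase is ruled out as soon as one of its 25-spin windows would need more than 12 reds, 12 blacks, or 1 green; the wait ends when a single phase survives:

```python
import random

QUOTA = {'R': 12, 'B': 12, 'G': 1}

def block_stream(rng, n_blocks):
    """Deal n_blocks shuffled blocks of 12 red, 12 black, 1 green."""
    out = []
    for _ in range(n_blocks):
        block = ['R'] * 12 + ['B'] * 12 + ['G']
        rng.shuffle(block)
        out.extend(block)
    return out

def spins_until_certain(rng):
    """Join mid-stream; return spins observed until one phase remains."""
    obs = block_stream(rng, 8)[rng.randrange(25):]
    counts = {p: {'R': 0, 'B': 0, 'G': 0} for p in range(25)}
    alive = set(range(25))        # p = spins left in the current block
    for t, o in enumerate(obs):
        for p in list(alive):
            c = counts[p]
            c[o] += 1
            if any(c[x] > QUOTA[x] for x in QUOTA):
                alive.discard(p)  # window would exceed a token quota
            elif (t + 1 - p) % 25 == 0:
                counts[p] = {'R': 0, 'B': 0, 'G': 0}  # boundary: reset
        if len(alive) == 1:
            return t + 1
    return None                   # unresolved within 8 blocks (rare)

rng = random.Random(7)
waits = [w for w in (spins_until_certain(rng) for _ in range(2000)) if w]
avg = sum(waits) / len(waits)
print(round(avg, 1))
```

The estimate can be compared with the n = 25 to n = 50 bracket suggested above; note short waits (such as the length-2 double-green case) do occur and pull the average down.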

f(1/x)

If I have the graph of a function f(x), what happens to the graph when I turn the function into f(1/x)? 190.24.187.123 (talk) 15:39, 19 February 2012 (UTC)

First off, I'd think of the fixed points of the transformation, that is, where x = 1/x. These occur at x = 1 and x = -1, and at exactly those points the graph will be the same. As the transformation on the x coordinate is continuous and differentiable in those regions, the region around each fixed point before the transformation will still be around it afterwards. However, since x -> 1/x flips the ordering (0.99 < 1 before, but 1/0.99 = 1.0101... > 1 after), the graph will likewise be flipped around the points x = 1 and x = -1. So if the graph is increasing through x = 1 before, afterwards it will be decreasing, and vice versa.
The other place to look is at discontinuities. The behavior around x = 0 will be interesting. Points just to the right of zero will be thrown far right, toward positive infinity, and points just to the left of zero will be thrown far left, toward negative infinity (the point x = 0 itself drops off the graph). As the transformation is self-inverse, the reverse also happens: points near positive and negative infinity are brought in toward zero. Since the transformation on the x coordinate is continuous and differentiable on each side, from very near zero out to positive or negative infinity, you can envision the transformation as a flipping and stretching.
In summary: take each of the positive and negative halves of the graph, flip them around +1 and -1 as appropriate, and then stretch or squash them to fit their new ranges (non-uniformly, so the most stretching/squashing happens near zero and infinity). Given the non-uniformity of the scaling, and the fact that the infinities don't stay out at infinity, it's difficult to visualize exactly what the resulting graph will look like, but that's the general idea. -- 67.40.215.173 (talk) 18:57, 19 February 2012 (UTC)
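The fixed points and the ordering flip are easy to check numerically (a sketch; the sample function f is an arbitrary choice):

```python
# An arbitrary increasing sample function to transform.
f = lambda x: x**3 + x
g = lambda x: f(1 / x)   # the graph of f with x replaced by 1/x

# Fixed points of x -> 1/x: the two graphs agree at x = 1 and x = -1.
assert g(1) == f(1) and g(-1) == f(-1)

# The ordering flips around the fixed point x = 1: f increases through
# x = 1, so g decreases through it.
assert f(0.99) < f(1.01) and g(0.99) > g(1.01)

# Reciprocal pairs of inputs swap: g at 0.5 equals f at 2.
assert g(0.5) == f(2.0)
```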