Wikipedia:Reference desk/Archives/Mathematics/2009 August 4

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 4

Pseudoscalar

Whilst this is supposedly my area of expertise, I feel I need some advice regarding this article. As you can see from the edit history, I removed the Clifford algebra section, not because it is wrong per se, but because it was inconsistent with the rest of the article. However, despite a Google Scholar search, I'm still not too clear on what a pseudoscalar is, or whether there really are two distinct definitions.

If we are to take the statement that "The pseudoscalar is the top-grade element of a Clifford algebra" as truth, then it is wrong to state that it must commute with all other elements and change sign under parity inversion. Both properties hold if the underlying vector space has an odd number of basis vectors, but neither holds if that number is even.
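
Concretely, writing e_i for orthonormal generators (so $e_i e_j = -e_j e_i$ for $i \neq j$ and $e_i^2 = 1$), a quick low-dimensional check: in Cl(2) the top-grade element $I = e_1 e_2$ anticommutes with vectors, since $e_1 I = e_1 e_1 e_2 = e_2$ but $I e_1 = e_1 e_2 e_1 = -e_1 e_1 e_2 = -e_2$; in Cl(3) the top-grade element $I = e_1 e_2 e_3$ commutes with them, since moving $e_1$ through $e_2 e_3$ costs two sign flips: $I e_1 = e_1 e_2 e_3 e_1 = (-1)^2 e_1 e_1 e_2 e_3 = e_1 I$. And under parity, $e_i \mapsto -e_i$, the top-grade element of an n-dimensional algebra picks up a factor $(-1)^n$, so its sign flips exactly when n is odd.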

Unfortunately, some of the literature that states one definition seems to suggest that the other is equally general. Thus I am unsure how to proceed.

Given that only odd-dimensional space possesses anything like a pseudoscalar as currently defined in the article, I am tentatively suggesting scrapping the current content and replacing it with the Clifford algebra definition (the bit I removed, perhaps wrongly). I am also questioning the article's "low" status on the importance scale, given the number of references in physics publications.

This paper seems to introduce the commuting and parity-inversion properties, but only in passing, for a geometry in which they hold. [1]

Please may I have some advice? --Leon (talk) 10:21, 4 August 2009 (UTC)

I don't know that much about Clifford algebras, but the definition of a pseudoscalar as a top-grade element makes sense. Top-grade elements form a 1-dimensional space and are invariant under rotations, so they "behave like scalars." The Clifford algebra page also corroborates that definition. In even dimensions it's true that the sign of a top-grade element doesn't change when you flip everything, but I don't think there is any element that behaves like what you're asking for (besides 0). Can you link a source that makes these claims about the properties of pseudoscalars that isn't specifically talking about 3D? Rckrone (talk) 17:32, 4 August 2009 (UTC)
Sorry, I simply explained badly in the last paragraph. What I mean is that the current definition of a pseudoscalar (an element that is scalar-like but flips under parity inversion) only holds for odd-dimensional space, and thus perhaps should be replaced with the top-grade element of a Clifford algebra definition. The fact that the pseudoscalar has the sign-change property in odd-dimensional spaces should be mentioned, especially since we're usually considering 3D space.

Please let me know what you think. --Leon (talk) 18:11, 4 August 2009 (UTC)

AHH, regarding your last question: no. But my point is that those properties appear to be given as defining pseudoscalars, in texts that make no mention of Clifford algebras. --Leon (talk) 18:14, 4 August 2009 (UTC)

What I'm asking is: can you give an example of somewhere those properties are used as a general definition? For even dimensions, there are no non-trivial elements that satisfy that definition. I think you're right that the general definition should be given as a top-grade element of a Clifford algebra, although for physics it's not really necessary to get into that stuff. Rckrone (talk) 18:25, 4 August 2009 (UTC)

Showing a function is right continuous

Let $f$ be Lebesgue measurable. Define the function $\omega$ on $\mathbb{R}$ by

$\omega(y) = m(\{x : f(x) > y\}),$

where m is Lebesgue measure.

(a) Prove that $\omega$ is right continuous.

I have been working on this for a while, even looking at solutions from others. I used the fact that $\omega$ is right continuous at y if, and only if, for every monotone decreasing sequence $\{y_n\}$ which converges to y, we have $\lim_{n\to\infty} \omega(y_n) = \omega(y)$. So, we just let $\{y_n\}$ be any such sequence and define

$E_n = \{x : f(x) > y_n\}, \qquad E = \{x : f(x) > y\}.$

So, what I want to show is $\lim_{n\to\infty} m(E_n) = m(E)$. The solution I am looking at has

$\lim_{n\to\infty} m(E_n) = m\left(\lim_{n\to\infty} E_n\right) = m(E)$

and this confuses me. First of all, my textbook never dealt with limits of sets, so although the right equality seems like it should be true, I am not sure. Obviously, the E_n are getting larger and larger and are getting closer and closer to E. But this is not rigorous. And I'm not sure why it is okay (if it is) to move the limit inside the measure. Any ideas? Thanks. StatisticsMan (talk) 16:42, 4 August 2009 (UTC)

Measures are indeed continuous from below in the sense required here. The proof is a simple application of countable additivity. Algebraist 16:52, 4 August 2009 (UTC)
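
For reference, a sketch of that standard continuity-from-below argument: given $E_1 \subseteq E_2 \subseteq \cdots$ with $E = \bigcup_n E_n$, set $A_1 = E_1$ and $A_n = E_n \setminus E_{n-1}$ for $n \geq 2$. The $A_n$ are disjoint, with $E_n = \bigcup_{k=1}^{n} A_k$ and $E = \bigcup_{k=1}^{\infty} A_k$, so countable additivity gives

$m(E) = \sum_{k=1}^{\infty} m(A_k) = \lim_{n\to\infty} \sum_{k=1}^{n} m(A_k) = \lim_{n\to\infty} m(E_n).$
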
Cool, thanks. I have seen it with decreasing sequences; I don't remember it for increasing ones. StatisticsMan (talk) 17:15, 4 August 2009 (UTC)
I was wondering: this function ω seems pretty special. Does it happen to have a name that anyone knows of? StatisticsMan (talk) 15:36, 5 August 2009 (UTC)
If f were a probability density function, then ω would be the associated complementary cumulative distribution function. I don't know if anyone uses such terminology more generally. Algebraist 18:44, 5 August 2009 (UTC)
Don't you mean that ω(y) = Pr(f > y) for f regarded as a random variable? --Tardis (talk) 22:05, 5 August 2009 (UTC)

Solving a system of matrix equations

I have a set of equations of the following form: X_i = M_1 M_2 ⋯ M_k, where each M_j is one of the matrices A, B and C; that is, the product on the right hand side is always a finite sequence of the matrices A, B and C. I know the values of all matrices X_i, and I want to solve these for A, B and C. I'm trying to find the right brand of mathematics to deal with this situation, but whatever I google for turns up methods for solving systems of equations with the help of matrices, rather than solving matrix equations like these. I have the following questions. If anybody can point me in the right direction, I'd be much obliged:

  1. Assuming all matrices are n×n, and the right hand side always contains a fixed number of matrices, how can I tell whether the problem is over- or underdetermined?
  2. If there are enough equations for the problem to be overdetermined, is it likely that an analytical least squares solution can be found?
  3. What kind of numerical method would be appropriate for this kind of problem? I'm using a kind of genetic algorithm at the moment, but I expect there are less general methods.
  4. Is there a way to transform a set of equations like this to the form Ax = b or some other general form?

Thank you. risk (talk) 17:52, 4 August 2009 (UTC)

Well, if somebody has a good clue to this problem, I'm curious to learn it too. In the meanwhile, my obvious remark is that if you have some more information (on the data X_i, or on the form of the equations) things may be different and easier. For instance, if the X_i commute, it could be natural to look for A, B, C commuting too, which changes a lot. In general, a possible approach is maybe to study the map taking (A, B, C) into (X_1, X_2, ...) from the point of view of differential geometry and nonlinear analysis, and apply some principle for global invertibility or surjectivity of differentiable mappings (like e.g. Hadamard's theorem: "a C^1 map f : E → E with invertible differential satisfying |Df(x)^{-1}| ≤ a|x| + b is a global diffeomorphism"). As to the last point, I would say not: I don't see how one could reduce such a nonlinear problem to linearity. --pma (talk) 21:42, 4 August 2009 (UTC)

A further remark: the toy version of your problem, "solve U = AB, V = BA for A and B", already shows that there are nontrivial necessary conditions for solvability. Assuming that U or V is non-singular (hence both), the equations imply that U and V are similar matrices (U = AVA^{-1}). This can be checked from their spectral theory, and of course it is also a sufficient condition. Also, some material possibly related to your problem comes up when googling "(system) of equations in groups", "equations in matrix groups" and similar combinations. --pma (talk) 08:45, 5 August 2009 (UTC)
Thank you very much. That's already given me a lot to digest. I'll start reading up on your suggestions. I hadn't really realized that the problem was nonlinear. Can you explain how you determined that it isn't linear?
I can give you a little more information on the equations. The full set contains n^d equations, each of the form X_s = Y_{s_1} Y_{s_2} ⋯ Y_{s_d}, where s = (s_1, ..., s_d) is a sequence with elements in [1, n]. So we have an equation for every such sequence, where X_s is always known, and the Y's in the right hand side aren't. The matrices can be assumed to be non-singular. (To make it even more interesting, the end result will be an algorithm where n and d are variables.) Do you have any ideas what numerical approaches people would normally use for this kind of problem? I'm using a kind of genetic algorithm at the moment, but I'm sure something more elegant is possible. risk (talk) 13:16, 6 August 2009 (UTC)
I think I should also point out that I'm really looking for more of a least squares solution than an analytical solution. It may be possible to use only the n equations of the form X_{(i,...,i)} = Y_i^d to derive a unique answer, but if the system doesn't have a unique solution, I would need the error to be distributed evenly over all equations. risk (talk) 13:35, 6 August 2009 (UTC)
FWIW, there is a least squares procedure called seemingly unrelated regression (SUR) that generates the solution to a system of equations when the stochastic term is correlated across equations. Each equation can be represented in matrix form, and so SUR solves the system of matrix equations. Wikiant (talk) 13:46, 6 August 2009 (UTC)
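
For readers wanting to experiment, here is a minimal numerical sketch of the least-squares formulation discussed above (not from the thread: the optimizer choice, the variable names, and the toy index sequences are all illustrative assumptions). It flattens the unknown matrices into one parameter vector and minimizes the summed squared Frobenius residuals with scipy's general-purpose least_squares solver:

    import numpy as np
    from scipy.optimize import least_squares

    n, k = 3, 3                                   # matrix size; number of unknowns (A, B, C)
    words = [(0, 1), (1, 2), (2, 0), (0, 1, 2)]   # illustrative index sequences
    X = [np.random.randn(n, n) for _ in words]    # known left-hand sides (dummy data here)

    def residuals(theta):
        Y = theta.reshape(k, n, n)                # unpack the flat parameter vector into A, B, C
        res = []
        for word, Xi in zip(words, X):
            P = np.eye(n)
            for s in word:                        # build the product Y[s1] @ Y[s2] @ ... @ Y[sd]
                P = P @ Y[s]
            res.append((P - Xi).ravel())          # Frobenius residual for this equation
        return np.concatenate(res)

    fit = least_squares(residuals, np.random.randn(k * n * n))
    A, B, C = fit.x.reshape(k, n, n)

With random data this will generally only find a local least-squares fit, which is exactly the "error distributed over all equations" behaviour asked for above, not a guaranteed exact solution.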

What angle do you need to get a certain length of an arc?

Hi there, fellas!

I had nothing to do today, so in my boredom I decided to develop a cool little graphical app in Java. The thing is, there's a whole lot of arcs and circles in it, and trigonometry wasn't my star subject in high school, and at this point I'm going insane with all the sines, cosines and arc-tangents. So now I have a little problem which is probably ridiculously easy for anyone with an actual brain, but I would really appreciate some help with it.

The problem is that I want to draw an arc of a circle where I can decide exactly how long the arc will be. So this is the problem: say you want to draw an arc that's P pixels long, on a circle that has a radius of R pixels. What is the angle (in radians or degrees) of that arc?

As I said, it's probably very easy, but my brain is getting very, very tired, and it's a subject I'm not too comfortable with to begin with. 83.250.236.75 (talk) 21:17, 4 August 2009 (UTC)

You know the circumference of a circle of that radius...right? You also know that a pie-wedge taken out of that circle has an arc that's the same fraction of the circumference as the angle at the center is a fraction of 360 degrees. So for an arc of length 'L' on a circle of radius 'R', you need an angle of: A = L * 360 / ( 2 * PI * R ) (in degrees, of course). In radians, the 2*pi's cancel nicely so you get the much more elegant A = L / R. SteveBaker (talk) 21:56, 4 August 2009 (UTC)
Right, I figured it was something simple like that :) Thanks! 83.250.236.75 (talk) 22:07, 4 August 2009 (UTC)
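
In code, Steve's radian form is a one-liner (a sketch; the function and argument names are made up for illustration):

    import math

    def arc_angle(arc_length, radius, in_degrees=False):
        """Angle subtended by an arc of the given length on a circle of the given radius."""
        theta = arc_length / radius              # radians: theta = L / R
        return math.degrees(theta) if in_degrees else theta

    arc_angle(50, 100)        # 0.5 radians
    arc_angle(50, 100, True)  # about 28.6 degrees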


(ec) Maybe it is safer to redirect this to the science RD, specifically to Steve Baker. Oh, but he's already here! Anyway, isn't the number of pixels in a segment, or in an arc, dependent on its position and not only on its length? Assuming the arc starts from the X-axis and goes counter-clockwise, and that it is within an angle α not larger than π/2 radians, I imagine that the number of pixels in it is the same as in its vertical projection on the Y-axis. So, if the ratio s := (no. of pixels in the arc)/(no. of pixels in the horizontal radius) is less than sqrt(2)/2, my guess is α = arcsin(s). Analogously, compare a short arc starting from the Y-axis with its horizontal projection. In general, I guess one reduces to these cases by adding/subtracting arc segments. --84.220.119.55 (talk) 22:19, 4 August 2009 (UTC)
As I said, I'm not terribly good at this, but Steve's solution worked perfectly. The arcs became the right lengths with his simple formula. 83.250.236.75 (talk) 00:59, 5 August 2009 (UTC)
Yeah - it's really dangerous having me on the math desk...I have this annoying tendency to want to give answers that people can actually understand. Being a computer graphics geek - I understood perfectly that you meant to use the word "pixel" as a unit of measurement (like "inch" or "centimeter") and not in the sense of "a picture element". Indeed, if you had wanted to count the number of picture elements along your arc, life would be difficult and painful and we'd CERTAINLY have to send all of the Mathematicians out to buy more pencils then lock the doors after them. It's a tough question to answer because a lot would depend on which of the 30+ known ways of drawing a circle you were actually using. That's because they tend to hit different numbers of pixels. What '84 is trying to tell us (using "math") is that counting the pixels is non-trivial. Consider that a thin straight (diagonal) line drawn from (x1,y1) to (x2,y2) hits the exact same number of pixels as a much shorter horizontal line drawn from (x1,0) to (x2,0) or a vertical line drawn from (0,y1) to (0,y2) - whichever is greater. Hence, the number of pixels required to draw a circle is something rather complicated. On an infinitely high resolution display, the number of pixels would simply be four times the radius of the circle - but we don't have infinitely high resolution displays - and that makes the answer horribly complex - and (as '84 does not comprehend) critically dependent on which algorithm you use for drawing your circle. Fortunately, I happened to be here asking about 4D cross-products so none of this ugliness proved necessary!  :-P SteveBaker (talk) 13:25, 5 August 2009 (UTC)
Anyway, I was really thinking of you as the most qualified person to answer the OP, and when I saw you there I was glad of my perspicacity. Sure, I suspected that the answer depends on the algorithm, and I opted for the most obvious to me  :-P )))) and these parentheses represent sound waves  ;-) --84.221.69.165 (talk) 22:50, 5 August 2009 (UTC)

4D analog of a 3D cross-product.

I have a function that takes a 2D vector and returns me another 2D vector that's at right angles to it. I have the standard cross-product function that takes two 3D vectors and returns me another 3D vector that's at right angles to both of the other two...what's the equivalent thing in four dimensions? I presume it needs three 4D vectors as input...but I'm having a mental block as to how to mush them together! (Oh - and is there a general name for these functions?) SteveBaker (talk) 21:50, 4 August 2009 (UTC)

The cross product only exists in 3D. The closest thing in higher dimensions is the wedge product. I've never studied it, though, so I'm not sure if it can be used to do what you want. However, you could easily use the dot product to get a system of simultaneous equations to solve. (Just call the desired vector (x,y,z,w) and dot it with each of the vectors you want it to be orthogonal to.) --Tango (talk) 22:20, 4 August 2009 (UTC)
(edit conflict - verbose answer) In 4D (assuming it's just a normal extension of 3D with another right angle 'squeezed in' somewhere unimaginable), the cross product of 2 vectors is a plane (since there is a whole flat plane that is at right angles to both vectors).
Taking that plane, and then applying the third vector to it, gives a single vector (or 2 vectors pointing in opposite directions, to be pedantic) - just as when you took a vector in 2D and got another vector at right angles to it.
So yes, in 4D, 3 (non-collinear) vectors define a fourth vector that is at right angles to all three.
To expand - it's easy to count dimensions (aka degrees of freedom). Curiously, it's an extension of this that allows us to show that in 4D one can rotate a vector around a plane: 4 (dimensions) = 2 (dimensional constraints given by the plane) + 2 (degrees of freedom remaining) - thus a vector constrained to rotate about the plane in 4D has 2 degrees of freedom - just the right amount for rotation.
(By the way, the maths is dead simple - just linear algebra and sines/cosines - even easier still if you've got an automatic linear equation solver to do the donkey work)
So to summarise - to derive the formula - first find the equation of the plane P that satisfies the conditions
V1 is perpendicular to the plane
V2 is perpendicular to the plane
Then use your equation for the right angle in 2D and apply it to the plane with the 3rd vector as a constraint - to give the answer (a vector). 83.100.250.79 (talk) 22:24, 4 August 2009 (UTC)


See Levi-Civita symbol.

2D: $y_i = \sum_j \epsilon_{ij} a_j$

3D: $y_i = \sum_{j,k} \epsilon_{ijk} a_j b_k$

4D: $y_i = \sum_{j,k,l} \epsilon_{ijkl} a_j b_k c_l$

The 2D formula is shorthand for

$y_1 = a_2$
$y_2 = -a_1$

The 3D formula is shorthand for

$y_1 = a_2 b_3 - a_3 b_2$
$y_2 = a_3 b_1 - a_1 b_3$
$y_3 = a_1 b_2 - a_2 b_1$

Bo Jacoby (talk) 22:26, 4 August 2009 (UTC).

In my undergraduate multivariate calculus course (which was like Calc 4), our book had a few exercises that gave an analog for any finite dimension, with a subtitle "Cross products in R^n". As you guessed, you need N−1 vectors in N dimensions, and you do it exactly like a cross product. That is, you take a matrix, put e_1, e_2, ..., e_n (instead of i, j, k) in the top row, and put the N−1 vectors in the rest of the rows. Then, you compute the determinant. The exercises are to show it has various properties. Most are obvious from properties of determinants. But one is that the vector you obtain is orthogonal to all N−1 vectors. My book is Vector Calculus by Susan Jane Colley. My edition is likely not the newest any more, but it's a few exercises at the end of Section 1.6, "Some n-dimensional Geometry", in my edition. StatisticsMan 01:13, 5 August 2009 (UTC)
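
That recipe transcribes directly into a few lines (a sketch with illustrative names: the formal determinant is expanded along its top row, so component i is the signed i-th cofactor):

    import numpy as np

    def cross_nd(vectors):
        """Generalized cross product: n-1 vectors in R^n -> a vector orthogonal to all of them."""
        V = np.asarray(vectors, dtype=float)      # shape (n-1, n)
        n = V.shape[1]
        return np.array([(-1) ** i * np.linalg.det(np.delete(V, i, axis=1))
                         for i in range(n)])

    vs = np.random.randn(3, 4)                    # three vectors in R^4
    print(vs @ cross_nd(vs))                      # ~ [0, 0, 0]: orthogonal to every input
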
I think that Tango is correct about the wedge product thing, but incorrect about the number of vectors required (still two). It's important to remember that oriented line segments (vectors) are dual to oriented plane segments (bivectors, which are what you actually get from a cross product of vectors, though you don't realize it!) in R^{3} but not R^{4}. The wedge product is constructed from the Clifford algebra. The lack of duality is manifest in that the wedge of two vectors from R^{4} spans a six-dimensional vector space as opposed to a three-dimensional one. Might I ask your purpose in knowing about this? I may be able to recommend some resources. --Leon (talk) 13:10, 5 August 2009 (UTC)
OK - the reason is kinda complicated & messy. <sigh> When you multiply a bunch of nice simple rotation matrices together on some real-world device such as a graphics processor, you get roundoff error. That error isn't particularly important in itself because there is typically a degree of interactivity that tends to correct any total error in the angle rotated through. However, after enough times through the multiplication pipeline, the matrices start to become non-orthonormal...which tends to result in all sorts of nasty skews and scalings that are pretty undesirable. This is a common enough problem in computer graphics and we've learned to "repair" 3x3 rotation matrices that seem to be a little the worse for wear by treating the three rows as vectors, normalizing them and then recalculating the third row to make it be orthogonal to the other two. This results in a matrix that may be rotating by slightly incorrect angles - but at least there is no skew or scale going on. You doubtless have "better" ways to do this - but before you leap in with them, I'll point out that I'm doing this a bazillion times a second on graphics hardware that's very efficient at things like normalizing and doing cross products and the like - and sucks badly at doing any of the things you were just about to suggest!
Well, now I need to do the same thing with some 4D rotation matrices...so I need an analogous function that (given three rows of the matrix) allows me to calculate the fourth row to be orthogonal to the other three. SteveBaker (talk) 13:55, 5 August 2009 (UTC)
I understand. I'm really sorry, but I don't have too many suggestions. I do have one, though. Whilst you gave the graphics processor merely as an example of a real world device, if that's what you want to program, I'll ask a follow-up question: do you ever use quaternions? They're very common in computer graphics. I won't rant about what I'm suggesting unless (a) you already know about them and (b) are interested in how they can be cleanly extended to 4-D rotations; they are very easy to correct. As it happens, I'm (kind of) studying the field myself, so let me know if this sounds helpful... --Leon (talk) 15:14, 5 August 2009 (UTC)
Yes, we use quaternions quite extensively - but (to get technical about it) mostly on the CPU, not on the GPU. The main central processing unit (CPU) in the computer...the Pentium chip or whatever...is a totally general purpose machine and it's very easy to program it to toss quaternions around. But the GPU (graphical processing unit) on the graphics pipeline isn't at all general-purpose. It has a rather unique and specialised architecture that allows it to process 2D, 3D and 4D vectors and matrices with enormous speed...but they don't have any functions specialised for handling quaternions - so we tend to convert everything to matrices just before handing them over to the GPU. That's not to say that it's impossible to handle quaternions in the GPU - just that it's slow and inconvenient. I haven't thought much about using a 4D quaternion (a quinternion?!) - but it would presumably comprise something like a 4D vector plus a rotation around that vector...and that would require 5 numbers. Sadly, the GPU is highly optimised for handling 4 numbers in a group...it would fare badly if forced to toss around 'quinternions'. This is one of those situations where what looks good in abstract mathematics fails miserably when plopped into the middle of a video game that has to do this a million times a second! Also, the use of true 4D geometry is limited to a few niche applications - it's not something we do a lot of. SteveBaker (talk) 15:35, 5 August 2009 (UTC)
It looks like you may be able to help me more than I can help you! However, I'm not suggesting a quinternion at all, simply ordinary 4-D quaternions interpreted slightly differently. Given your hint that this is for use in a video game, am I at all correct in suggesting you're using this in the context of projective geometry, describing translations as rotations about points infinitely (well, very, anyway) far away? If so, here's a specific link. [2]
And now my question for you: can you provide me with links detailing the maths functions native to typical consumer-grade GPUs? Thanks. --Leon (talk) 16:05, 5 August 2009 (UTC)
P.S. Even if my guess at your interest in this is wrong, the link may be helpful nonetheless. The absolute inefficiency of the quaternionic approach is lower when used for 4-D than for 3-D. --Leon (talk) 16:05, 5 August 2009 (UTC)
There are no quinternions. You can only have powers of two - 1D, 2D, 4D, 8D or 16D - and they get progressively less and less well behaved (you may be able to have 32D, but they would be so badly behaved as to be useless). See Cayley–Dickson construction. If you started with quaternions and tried to add one more square root of −1, you would find that more would naturally arise as products of existing ones (in the same way that, if you add just one such root to the complex numbers, you end up with the quaternions). I'm curious, how do you handle multiplying 4D rotations? I've never done much with them, but I know they aren't as well behaved as their 3D counterparts - you don't necessarily get a rotation if you compose two 4D rotations. --Tango (talk) 18:07, 5 August 2009 (UTC)
Read the paper I linked above! Incidentally, I'm not sure about your final statement; indeed, I think it's wrong, though I'll add that I don't put too much weight on that opinion as I'm not a geometer. SO(4) is a group, and for your final statement to be true, SO(4) mustn't be the group of rotations in R^{4}. ALSO, Mr SteveBaker: whilst I'd advocate at least skimming the aforementioned paper, I know a few people at work who may be able to advise...I'll write here if I have anything else to offer. --Leon (talk) 18:29, 5 August 2009 (UTC)
There is some uncertainty as to which things ought to be called 'rotations' in higher dimensions. Some use the term for everything in SO(4), while Tango seems to be using it to mean only the so-called simple rotations. Algebraist 18:40, 5 August 2009 (UTC)
Sorry, yes, I assumed Steve was talking about simple rotations but I now realise he never actually said that. --Tango (talk) 19:39, 5 August 2009 (UTC)
True. Open suggestion to all who have commented in ways that do not really address the asker's needs: can those sections be moved elsewhere? I'm not an experienced Wikipedian, so please alert me if this is not a reasonable request. --Leon (talk) 18:47, 5 August 2009 (UTC)
The usual rule on the ref desks is that we don't go off on tangents until the OP has an answer to their original question. Steve has received lots of answers, but only he can say if he's received one that actually helps. So, Steve, do we need to keep working or can we play now? --Tango (talk) 19:39, 5 August 2009 (UTC)
Suggestion for SteveBaker: with an orthogonal matrix, the rows (and columns, obviously) are orthogonal vectors. So, use the Gram-Schmidt Process to "orthogonalize" your matrices. Hope this is helpful. --Leon (talk) 18:43, 5 August 2009 (UTC)

Hi Steve. It seems as if I did not explain my solution properly; I'm sorry. The 2D problem is: you have a vector $a = (a_1, a_2)$. We want a vector $y = (y_1, y_2)$ such that a and y are at right angles. One solution is $y = (a_2, -a_1)$. They are at right angles because $a_1 y_1 + a_2 y_2 = a_1 a_2 - a_2 a_1 = 0$. The solution is written like this: $y_i = \sum_j \epsilon_{ij} a_j$, where $\epsilon_{12} = 1$ and $\epsilon_{11} = \epsilon_{22} = 0$. The epsilon is antisymmetric: if you swap two indexes the sign is changed, so $\epsilon_{21} = -\epsilon_{12} = -1$. The 3D problem is: you have two vectors a and b. We want a vector y such that a and y are at right angles and such that b and y are at right angles. One solution is: $y_i = \sum_{j,k} \epsilon_{ijk} a_j b_k$, where $\epsilon_{123} = 1$ and epsilon is antisymmetric: if you swap two indexes the sign is changed. The 4D problem is: you have three vectors a, b and c. We want a vector y such that a and y are at right angles, such that b and y are at right angles, and such that c and y are at right angles. One solution is: $y_i = \sum_{j,k,l} \epsilon_{ijkl} a_j b_k c_l$, where $\epsilon_{1234} = 1$ and epsilon is antisymmetric. Proof: $\sum_i a_i y_i = \sum_{i,j,k,l} \epsilon_{ijkl} a_i a_j b_k c_l = 0$, because epsilon is antisymmetric in i and j while $a_i a_j$ is symmetric, so a and y are at right angles. Similarly $\sum_i b_i y_i = 0$ and $\sum_i c_i y_i = 0$. Bo Jacoby (talk) 19:35, 5 August 2009 (UTC).

You know - I can't actually tell whether you answered my question or not! I have to read all the linked documents...which is a slow process because I'm not a mathematician - and Wikipedia math articles are completely reader-hostile to anyone who doesn't already understand what they are saying! All I really wanted was a nice simple set of expressions...like a cross-product...that, given three 4D vectors would give me back a fourth that's at right angles to all of the other three. To pick an obvious example, if I handed it (1,0,0,0), (0,1,0,0) and (0,0,1,0) - it would give me back (0,0,0,1) (or maybe (0,0,0,-1)). I was kinda hoping there would be some nice, simple linear algebra to do that. Bo Jacoby's post that starts: "See Levi-Civita symbol" came close...but left the 4D case unexpanded...since I have yet to learn what the heck a Levi-Civita symbol is and somehow expand the expression for the 4D case, I'm not quite there yet. If someone could expand that one for me - then I think I'd be done and outta your hair! I was aware of the Gram-Schmidt Process - but it's not very nice to do inside a teeny-tiny graphics chip! For a 3x3 matrix, it suffices to say something like (from memory):
  • normalize row 1
  • normalize row 2
  • take the cross-product of row1 and 2 and stuff that into row3
  • normalize row 3
  • take the cross-product of row1 and 3 and stuff that into row2
Now all three rows are unit vectors and they are all at right angles...so the matrix is now well-behaved - although possibly rotating by a fractionally different amount than it was before...and on a modern graphics chip - you can do that sequence in 5 machine-code instructions! I can easily imagine a 4D analog of that...if there were a 4D analog for a cross product. SteveBaker (talk) 20:31, 5 August 2009 (UTC)
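
A rough numpy sketch of that 4D analog follows (an illustration with made-up names, not GPU code). It rebuilds only the fourth row from the other three, which is the specific operation asked for above, and it assumes the first three rows are still nearly orthogonal; a fuller repair would also re-derive one of them, as in the 3x3 recipe. The sign flip at the end keeps the matrix a rotation rather than a reflection:

    import numpy as np

    def cross4(a, b, c):
        # Cofactor expansion of the determinant with the basis vectors in the
        # top row: returns a vector orthogonal to a, b and c.
        V = np.array([a, b, c], dtype=float)
        return np.array([(-1) ** i * np.linalg.det(np.delete(V, i, axis=1))
                         for i in range(4)])

    def repair_rotation4(M):
        """Re-orthonormalize a drifting 4x4 rotation matrix (sketch)."""
        M = np.array(M, dtype=float)
        for i in range(3):
            M[i] /= np.linalg.norm(M[i])          # normalize rows 1-3
        r4 = cross4(M[0], M[1], M[2])             # row orthogonal to the other three
        r4 /= np.linalg.norm(r4)
        M[3] = r4 if r4 @ M[3] >= 0 else -r4      # keep the original orientation
        return M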

Aha!

There is such a product, yes.

Basically, you take the following determinant:

$\begin{vmatrix} e_1 & e_2 & e_3 & e_4 \\ a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \end{vmatrix}$

where the $e_i$ are the basis vectors.

Does this make sense? --Leon (talk) 20:43, 5 August 2009 (UTC)

$\begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}$

That's what a cross-product determinant looks like in three dimensions; I've included it for clarity, so that you can see the similarity. --Leon (talk) 20:53, 5 August 2009 (UTC)

Leon's answer is the one you're looking for, but for what it's worth, you can represent a 4D rotation with a pair of unit quaternions (six degrees of freedom in total), as described in Quaternions and spatial rotation#Pairs of unit quaternions as rotations in 4D space (read the rest of the article too). SO(4) has six dimensions, and in general, SO(n) has n(n−1)/2 dimensions. It's only in three dimensions that the dimension of the rotation group matches the dimension of the space, and only in three dimensions that the notion of a rotational axis makes sense (in general rotations happen in planes, not around axes). Angular momentum in four-dimensional space can't be described by a 4D vector because it has six components, and the same goes for other pseudovectors, like the magnetic field. The wedge product is the correct generalization in those cases, but you don't need to worry about that unless you're doing 4D Newtonian physics. The Levi-Civita expression $y_i = \sum_{j,k,l} \epsilon_{ijkl} a_j b_k c_l$ is just a fancy way of writing the matrix determinant that Leon gave above. Are you sure quaternion arithmetic is inefficient on GPUs? The quaternion product of (s, v) and (t, w) is (st − v·w, sw + tv + v×w), where s and t are scalars, v and w are 3D vectors, and · and × are the 3D dot and cross product. If the GPU has built-in support for the cross product then I'd expect that to be comparable in speed to a matrix-matrix multiply. -- BenRG (talk) 21:46, 5 August 2009 (UTC)
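
That scalar/vector product formula translates directly into code (a sketch with made-up names, storing a quaternion as [s, x, y, z]):

    import numpy as np

    def quat_mul(q, r):
        # (s, v)(t, w) = (s*t - v.w,  s*w + t*v + v x w)
        s, v = q[0], np.asarray(q[1:], dtype=float)
        t, w = r[0], np.asarray(r[1:], dtype=float)
        return np.concatenate(([s * t - v @ w], s * w + t * v + np.cross(v, w)))

Under the convention of the linked section, a 4D point packed into a quaternion p is then rotated by a pair of unit quaternions (l, r) as quat_mul(quat_mul(l, p), r).
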
I think every single suggestion given in this section has actually been the same idea just with different notation, so they should all be equally efficient. (The quaternions idea might be slightly different, but simultaneous equations, Levi-Civita symbols and matrix determinants are all exactly the same thing.) --Tango (talk) 23:27, 5 August 2009 (UTC)
Wonderful! I think I have it now. Thanks everyone! SteveBaker (talk) 01:12, 6 August 2009 (UTC)

Good! Ask questions on the talk pages of the reader-hostile math articles. Wikipedia editors try to write reader-friendly articles. The 4D Levi-Civita symbol $\epsilon_{ijkl}$ is the sign of the permutation ijkl. The sign of an even permutation is +1 while the sign of an odd permutation is −1, and the sign is 0 if ijkl is not a permutation of 1234. The 12 even permutations are 1234 1342 1423 2143 2314 2431 3124 3241 3412 4132 4213 4321, and the 12 odd permutations are 1243 1324 1432 2134 2341 2413 3142 3214 3421 4123 4231 4312. So the 4D formula

$y_i = \sum_{j,k,l} \epsilon_{ijkl} a_j b_k c_l$

is shorthand for

$y_1 = a_2 b_3 c_4 - a_2 b_4 c_3 - a_3 b_2 c_4 + a_3 b_4 c_2 + a_4 b_2 c_3 - a_4 b_3 c_2$
$y_2 = -a_1 b_3 c_4 + a_1 b_4 c_3 + a_3 b_1 c_4 - a_3 b_4 c_1 - a_4 b_1 c_3 + a_4 b_3 c_1$
$y_3 = a_1 b_2 c_4 - a_1 b_4 c_2 - a_2 b_1 c_4 + a_2 b_4 c_1 + a_4 b_1 c_2 - a_4 b_2 c_1$
$y_4 = -a_1 b_2 c_3 + a_1 b_3 c_2 + a_2 b_1 c_3 - a_2 b_3 c_1 - a_3 b_1 c_2 + a_3 b_2 c_1$

This is the nice simple set of expressions that, given three 4D vectors, gives you back a fourth that's at right angles to all of the other three. Bo Jacoby (talk) 08:52, 6 August 2009 (UTC).
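
As a final check, the expansion above transcribes directly and reproduces Steve's example with the second of the two signs he anticipated (a sketch; the names are illustrative):

    import numpy as np

    def cross4_explicit(a, b, c):
        # The expanded 4D formula above, component by component.
        a1, a2, a3, a4 = a
        b1, b2, b3, b4 = b
        c1, c2, c3, c4 = c
        return np.array([
             a2*b3*c4 - a2*b4*c3 - a3*b2*c4 + a3*b4*c2 + a4*b2*c3 - a4*b3*c2,
            -a1*b3*c4 + a1*b4*c3 + a3*b1*c4 - a3*b4*c1 - a4*b1*c3 + a4*b3*c1,
             a1*b2*c4 - a1*b4*c2 - a2*b1*c4 + a2*b4*c1 + a4*b1*c2 - a4*b2*c1,
            -a1*b2*c3 + a1*b3*c2 + a2*b1*c3 - a2*b3*c1 - a3*b1*c2 + a3*b2*c1,
        ])

    # The first three basis vectors map to (0, 0, 0, -1).
    print(cross4_explicit((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)))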