Wikipedia:Reference desk/Archives/Mathematics/2008 February 19



February 19

Singular Value Decomposition & Hermitian Matrices

This is a question posed in my class, and the teacher himself could not figure it out either. The question is: given the singular value decomposition $A = USV^*$ of an $m \times m$ square matrix $A$, where $*$ represents the complex conjugate transpose of a matrix, is there anything that we can say about the eigenvalue decomposition (diagonalization) of the $2m \times 2m$ Hermitian matrix $B = \begin{bmatrix} 0 & A^* \\ A & 0 \end{bmatrix}$? Can $B$'s eigenvalue decomposition be written in terms of $U$, $S$, and $V$? I have also tried a few numerical examples in MATLAB, and it appears to me that the two are completely unrelated. Any help would be appreciated. A Real Kaiser (talk) 06:10, 19 February 2008 (UTC)

If $A = USV^*$, then $A^* = VS^*U^*$. As $S$ is self-adjoint, we get $A^* = VSU^*$ as well. Now compute:
$$\begin{bmatrix} 0 & A^* \\ A & 0 \end{bmatrix} = \begin{bmatrix} 0 & VSU^* \\ USV^* & 0 \end{bmatrix} = \begin{bmatrix} V & 0 \\ 0 & U \end{bmatrix} \begin{bmatrix} 0 & S \\ S & 0 \end{bmatrix} \begin{bmatrix} V & 0 \\ 0 & U \end{bmatrix}^*$$
Moving the unitaries (one can check that those block matrices on the left and right of $\begin{bmatrix} 0 & S \\ S & 0 \end{bmatrix}$ are unitary) to the right gives the singular value decomposition. You have the absolute values of the eigenvalues at this point, but I'm not sure exactly what you mean by eigenvalue decomposition. Do you mean diagonalizing $B$? J Elliot (talk) 07:13, 19 February 2008 (UTC)
The eigenvalue decomposition goes similarly. For motivation, find the eigenvalue decomposition of [0,1;1,0] (taking A to be the 1×1 identity matrix). A=USV*, so A*=VSU*, AV=US, A*U=VS, so [0,A*;A,0][V;U] = [VS;US], so [V;U] is an "eigenvector" with eigenvalues S; similarly [0,A*;A,0][V;-U] = [-VS;US], so [V;-U] is an "eigenvector" with eigenvalues -S. Since our matrix is Hermitian, we want our eigenbasis to be unitary, so we'll divide the eigenvectors by their overall norm, sqrt(2). Putting it all together gives:
$$B = \begin{bmatrix} 0 & A^* \\ A & 0 \end{bmatrix} = \left( \tfrac{1}{\sqrt{2}} \begin{bmatrix} V & V \\ U & -U \end{bmatrix} \right) \begin{bmatrix} S & 0 \\ 0 & -S \end{bmatrix} \left( \tfrac{1}{\sqrt{2}} \begin{bmatrix} V & V \\ U & -U \end{bmatrix} \right)^*$$
So the eigenvalues of B are plus or minus the singular values of A. JackSchmidt (talk) 15:53, 19 February 2008 (UTC)

Thanks guys, that makes a lot more sense. But I still have a follow-up question. You have shown that [V;U] and [V;-U] are eigenvectors with eigenvalues S and -S, but how do we know that those are the only eigenvalues? What if S^2 or -2S^5 also turn out to be eigenvalues of our matrix B? How can we conclude that S and -S are the ONLY eigenvalues? A Real Kaiser (talk) 23:53, 19 February 2008 (UTC)

Sorry, I spoke too informally. U and V are actually m × m matrices, and S is a diagonal matrix with m values. [V,V;U,-U] has full rank (because, up to the factor 1/sqrt(2), it is unitary), so its columns are actually 2m independent eigenvectors for B, and S,-S gives all 2m eigenvalues. The informal language was just to indicate how block matrices can simplify things. JackSchmidt (talk) 00:58, 20 February 2008 (UTC)
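For anyone who wants to reproduce this numerically (the original poster mentioned MATLAB experiments), here is a minimal NumPy sketch; the size m and the random complex test matrix are arbitrary choices for illustration:

```python
import numpy as np

m = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

# SVD: A = U @ diag(s) @ Vh, with Vh = V*.
U, s, Vh = np.linalg.svd(A)
V = Vh.conj().T

# B = [0, A*; A, 0] is Hermitian.
B = np.block([[np.zeros((m, m)), A.conj().T],
              [A, np.zeros((m, m))]])

# The eigenvalues of B should be exactly +s and -s.
eigs = np.sort(np.linalg.eigvalsh(B))
expected = np.sort(np.concatenate([s, -s]))
print(np.allclose(eigs, expected))            # True

# Explicit diagonalization: B = Q @ diag(s, -s) @ Q*,
# with Q = (1/sqrt(2)) [V, V; U, -U].
Q = np.block([[V, V], [U, -U]]) / np.sqrt(2)
D = np.diag(np.concatenate([s, -s]))
print(np.allclose(Q @ D @ Q.conj().T, B))     # True
print(np.allclose(Q @ Q.conj().T, np.eye(2 * m)))  # Q is unitary
```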

fitting a conic

I want to fit a general parabola (of unknown size and orientation) roughly to a given set of points in the plane. My first thought was to seek the coefficients that minimize the sum of the squares of $(ax_i + by_i)^2 + cx_i + dy_i + e$; to make the problem more linear, I then sought to settle for a general conic, $Ax^2 + Bxy + Cy^2 + Dx + Ey + 1 = 0$; but then it occurs to me that this penalizes those curves that go near the origin.

My next idea is to consider the family of cones tangent to some plane; I'm not sure what to minimize.

Anyone know a better way? —Tamfang (talk) 06:24, 19 February 2008 (UTC)

Minimize the sum of squares of $Ax_i^2 + Bx_iy_i + Cy_i^2 + Dx_i + Ey_i + F$ using some other normalizing condition than $F = 1$ (which excludes curves through the origin). Bo Jacoby (talk) 07:50, 19 February 2008 (UTC).
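For concreteness, here is a minimal NumPy sketch of one such alternative normalization (a standard choice picked for illustration; the reply above does not specify one): require the coefficient vector $(A, B, C, D, E, F)$ to have unit norm. The minimizer of the sum of squares under that constraint is the right singular vector of the design matrix belonging to its smallest singular value.

```python
import numpy as np

def fit_conic_unit_norm(x, y):
    """Fit A x^2 + B xy + C y^2 + D x + E y + F = 0 subject to
    ||(A, ..., F)|| = 1, via the right singular vector of the design
    matrix for the smallest singular value."""
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vh = np.linalg.svd(M)
    return Vh[-1]          # coefficients (A, B, C, D, E, F)

# Points on y = x^2, which passes through the origin: the F = 1
# normalization could not represent this curve, but unit norm can.
x = np.array([0.0, 1.0, -1.0, 2.0, -2.0])
y = x**2
print(fit_conic_unit_norm(x, y))   # ~ multiple of (1, 0, 0, 0, -1, 0)
```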
You could try an iterative approach. Define for brevity
$$F(x, y) = Ax^2 + Bxy + Cy^2 + Dx + Ey + 1$$
and
$$F_i = F(x_i, y_i).$$
Given estimates for the coefficients A, B, etcetera, you can determine the distance $e_i$ of each point $(x_i, y_i)$ to the curve determined by $F(x, y) = 0$. If we give weight $w_i$ to the term $F_i^2$ in a weighted sum of squares, we want the weighted square to come out like $e_i^2$, which suggests setting
$$w_i = \frac{e_i^2}{F_i^2}$$
as the weights for a next iteration.
Instead of determining the values $e_i$ exactly, which is computationally expensive, you can approximate them by using the linear approximation
$$F(x + \Delta x,\, y + \Delta y) \approx F(x, y) + \frac{\partial F}{\partial x}\,\Delta x + \frac{\partial F}{\partial y}\,\Delta y.$$
Applied to the point $(x_i, y_i)$, we write this as
$$F(x_i + \Delta x,\, y_i + \Delta y) \approx F_i + F_{x,i}\,\Delta x + F_{y,i}\,\Delta y,$$
where $F_{x,i}$ and $F_{y,i}$ are the partial derivatives of $F$ evaluated at $(x_i, y_i)$. The least value of $(\Delta x)^2 + (\Delta y)^2$ for which the right-hand side can vanish, which provides an estimate for $e_i^2$, is then given by
$$e_i^2 \approx \frac{F_i^2}{F_{x,i}^2 + F_{y,i}^2}.$$
So for the next iteration you can then use the weights
$$w_i = \frac{1}{F_{x,i}^2 + F_{y,i}^2}.$$
Since the value being inverted can become arbitrarily small and even vanish, you must exercise caution not to make this numerically unstable, and put limits on the size of the weights.  --Lambiam 08:49, 21 February 2008 (UTC)
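A minimal NumPy sketch of this reweighting scheme, assuming the conic is normalized with constant term 1 as above (the test parabola, iteration count, and weight cap are arbitrary illustrative choices):

```python
import numpy as np

def fit_conic_reweighted(x, y, iterations=10, w_max=1e8):
    """Fit A x^2 + B xy + C y^2 + D x + E y + 1 = 0 by least squares,
    iteratively reweighting each point by 1/(Fx^2 + Fy^2)."""
    M = np.column_stack([x * x, x * y, y * y, x, y])  # unknowns A..E
    rhs = -np.ones_like(x)       # move the constant 1 to the right side
    w = np.ones_like(x)          # first pass: unweighted
    for _ in range(iterations):
        sw = np.sqrt(w)
        coeffs, *_ = np.linalg.lstsq(M * sw[:, None], rhs * sw, rcond=None)
        A, B, C, D, E = coeffs
        Fx = 2 * A * x + B * y + D       # dF/dx at each point
        Fy = B * x + 2 * C * y + E       # dF/dy at each point
        # Cap the weights at w_max, per the caveat above.
        w = 1.0 / np.maximum(Fx**2 + Fy**2, 1.0 / w_max)
    return coeffs

# Noisy points near y = x^2 + 3 (kept away from the origin, since the
# constant term is fixed at 1).  Expect roughly (1/3, 0, 0, 0, -1/3).
rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 50)
y = x**2 + 3 + 0.05 * rng.standard_normal(x.size)
print(fit_conic_reweighted(x, y))
```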

Rationalising Surds

I was going over some of my notes and I tried this one, but my answer isn't the same as in the book. I'm not sure what I'm doing wrong.
Rationalise the denominator:
$$\frac{1}{\sqrt{5} - \sqrt{7}}$$
$$\frac{1}{\sqrt{5} - \sqrt{7}} \times \frac{\sqrt{5} + \sqrt{7}}{\sqrt{5} + \sqrt{7}} = \frac{\sqrt{5} + \sqrt{7}}{5 - 7} = \frac{\sqrt{5} + \sqrt{7}}{-2}$$
$$= -\frac{\sqrt{5} + \sqrt{7}}{2}$$
Kingpomba (talk) 11:26, 19 February 2008 (UTC)

Looks fine to me, and a quick calculator check confirms your answer. Could also be written as $\frac{\sqrt{5} + \sqrt{7}}{-2}$ or $-\frac{1}{2}\left(\sqrt{5} + \sqrt{7}\right)$. What does your book say? Gandalf61 (talk) 11:47, 19 February 2008 (UTC)

It says: $\frac{1}{2}\left(\sqrt{5} + \sqrt{7}\right)$. (Hmm, I got a different answer on paper, and I understand how the 1/2 thing works. I guess typing it out on Wikipedia helped me solve it. Well, cheers anyway. =] Kingpomba (talk) 11:57, 19 February 2008 (UTC).)

Are you sure that's what it says? There should be a minus sign in front, shouldn't there? --Tango (talk) 12:54, 19 February 2008 (UTC)
Here's a quick sanity check: 5 < 7, so sqrt(5) < sqrt(7) (by the properties of the square root), and hence sqrt(5) - sqrt(7) < 0. Thus the denominator of the original fraction is negative, meaning the whole fraction is negative. Confusing Manifestation (Say hi!) 23:47, 19 February 2008 (UTC)
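A tiny Python cross-check of the rationalisation above (using the fraction as reconstructed in the question):

```python
import math

# The original fraction and the rationalised form should agree,
# and both should be negative, as the sanity check above notes.
original = 1 / (math.sqrt(5) - math.sqrt(7))
rationalised = -(math.sqrt(5) + math.sqrt(7)) / 2
print(original)                              # ~ -2.44
print(math.isclose(original, rationalised))  # True
```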