Wikipedia:Reference desk/Archives/Mathematics/2013 March 30

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 30

Incorrect proof of boundary property

Hi, I know that in $R$, given a set $A$ that is equal to a ball of radius $r$ around a point $x$, i.e. $A = B_r(x)$, the boundary $\partial A = \{ y : d(x,y) = r \}$. I know in the case of the general metric space $(X, d)$ with an open ball $A$ it is not the case that $\partial A = \{ y \in X : d(x,y) = r \}$, but I cannot see where my proof below explicitly assumes that the metric space is $R$.

Since  ,  ,

 ,

 ,

 ,

 ,

 ,

so  ,

Help very much appreciated.

Neuroxic (talk) 10:02, 30 March 2013 (UTC)

Enlighten me please… what does your italic capital "R" letter denote? A metric ring, I guess? Incnis Mrsi (talk) 11:23, 30 March 2013 (UTC)
Oh, I should have said the real numbers. I tried typing in \mathbb{R} but it didn't work so I just went with plain R. Neuroxic (talk) 12:07, 30 March 2013 (UTC)
Do you have a counterexample? You could try your method with a concrete example to see where it goes wrong. --Salix (talk): 18:20, 30 March 2013 (UTC)
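One standard example of that sort (whether or not it is the one intended here) is the discrete metric on a set $X$ with at least two points:

$d(x,y) = 0$ if $x = y$, and $d(x,y) = 1$ if $x \neq y$, so that $B_1(x) = \{x\}$.

Here $\{x\}$ is both open and closed, so $\partial B_1(x) = \emptyset$, while $\{ y \in X : d(x,y) = 1 \} = X \setminus \{x\}$ is non-empty; any proof of the equality must therefore break down somewhere when applied to this space.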
The statement
 ,
does not imply
 ,
since   is a weaker condition than  . Sławomir Biały (talk) 18:31, 30 March 2013 (UTC)

Treating linear differential operators like matrices

Matrices are linear transformations from $\mathbb{R}^n$ to $\mathbb{R}^m$. Differential operators like $\frac{d}{dx}$ are also linear transformations, this time from differentiable functions into integrable functions. Does it make sense to speak of matrix properties of linear differential operators, like the "determinant" or "transpose" or things like that?

I was experimenting and noticed this example. If you restrict your set of functions to polynomials of some degree n (in this example I will take n = 2):

The derivative operator $\frac{d}{dx}$ sends the quadratic

$ax^2 + bx + c$

to

$2ax + b$,

essentially it sends the coefficients

$a \mapsto 0$

$b \mapsto 2a$

$c \mapsto b$

and the matrix

$D = \begin{pmatrix} 0 & 0 & 0 \\ 2 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$

does exactly the same thing when acting on the vector $(a, b, c)^{\mathrm T}$.

This would suggest $\det\left(\frac{d}{dx}\right) = \det D = 0$ and $\left(\frac{d}{dx}\right)^{\mathrm T} = D^{\mathrm T}$.

Is there a way to do this in general?
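One general recipe for the polynomial case, as a minimal numerical sketch (assuming NumPy; the helper name derivative_matrix is just an illustrative choice), is to build the matrix of $\frac{d}{dx}$ on polynomials of degree at most n directly from the rule $x^k \mapsto k x^{k-1}$:

 import numpy as np
 def derivative_matrix(n):
     # matrix of d/dx on polynomials of degree <= n, with coefficient vectors
     # ordered (a_n, ..., a_1, a_0), matching the n = 2 example above
     D = np.zeros((n + 1, n + 1))
     for k in range(1, n + 1):            # x^k differentiates to k * x^(k-1)
         D[n - k + 1, n - k] = k
     return D
 D = derivative_matrix(2)
 print(D)                                 # [[0 0 0], [2 0 0], [0 1 0]]
 print(D @ np.array([3.0, 5.0, 7.0]))     # 3x^2 + 5x + 7  ->  [0, 6, 5], i.e. 6x + 5

Powers of this matrix give higher derivatives (D @ D represents the second derivative on the same space), and det(D) = 0 because the top-degree coefficient is always sent to zero.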

150.203.115.98 (talk) 12:55, 30 March 2013 (UTC)

You can define the transpose as the formal adjoint of the differential operator. The determinant usually needs regularization before it is well-defined. See functional determinant. Sławomir Biały (talk) 13:08, 30 March 2013 (UTC)
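Concretely, with the usual $L^2$ inner product $\langle f, g \rangle = \int f(x)\, g(x)\, dx$ and boundary terms assumed to vanish (for example, compactly supported or periodic functions), integration by parts gives

$\left\langle \frac{d}{dx} f, g \right\rangle = \int f'(x)\, g(x)\, dx = -\int f(x)\, g'(x)\, dx = \left\langle f, -\frac{d}{dx} g \right\rangle,$

so the formal adjoint ("transpose") of $\frac{d}{dx}$ is $-\frac{d}{dx}$.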
  • As a pointer, the derivative is commonly treated as an infinite-dimensional matrix operating on a Hilbert space or a similar infinite-dimensional space of functions. It is not invertible, though. Looie496 (talk) 14:51, 30 March 2013 (UTC)
    • Of course, the key reason that the derivative matrix is not invertible is that the derivative maps the constant term to zero. But you can define a pseudo-inverse anti-derivative straightforwardly, exactly like a matrix pseudo-inverse, that will get all the other terms right.
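For the 3×3 example above, a minimal numerical sketch (assuming NumPy) shows that the Moore–Penrose pseudo-inverse of the derivative matrix is exactly the antiderivative with zero constant term:

 import numpy as np
 D = np.array([[0., 0., 0.],
               [2., 0., 0.],
               [0., 1., 0.]])             # d/dx on (a, b, c) <-> ax^2 + bx + c
 P = np.linalg.pinv(D)                    # Moore-Penrose pseudo-inverse
 print(P @ np.array([0., 6., 5.]))        # 6x + 5  ->  [3. 5. 0.], i.e. 3x^2 + 5x + 0
 print(P @ D)                             # diag(1, 1, 0): every term except the
                                          # constant is recovered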
The key concept here is basis function. You have used (1, x, x², ...) above, but there are lots of other choices you could have made -- for example the Fourier basis (1, cos(x), sin(x), cos(2x), sin(2x), ...); or various families of orthogonal polynomials; or a set of regularly spaced boxcar functions; or a set of cubic spline polynomials; or a set of wavelet functions. Each set of basis functions can be particularly useful in particular applications. Once you have chosen your set of basis functions, you can then represent any function of your original space as rather a big vector.
The mathematicians further up-thread have jumped straight to the infinite-dimensional case. But in engineering maths and in mathematical physics we're often quite happy with (or, at any rate, may very often have to make do with) a finite number of basis functions, exactly as you were doing above, though of course you get different coefficients in your derivative matrix, depending on which set of basis functions you use.
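For instance (a minimal sketch, assuming NumPy), in the truncated Fourier basis (1, cos x, sin x, cos 2x, sin 2x) the same operator d/dx gets a quite different matrix:

 import numpy as np
 # d/dx in the basis (1, cos x, sin x, cos 2x, sin 2x):
 # cos(kx) -> -k sin(kx),  sin(kx) -> k cos(kx)
 D_fourier = np.array([[0,  0, 0,  0, 0],
                       [0,  0, 1,  0, 0],    # new cos(x) coefficient
                       [0, -1, 0,  0, 0],    # new sin(x) coefficient
                       [0,  0, 0,  0, 2],    # new cos(2x) coefficient
                       [0,  0, 0, -2, 0]])   # new sin(2x) coefficient
 # f(x) = 3 + 2 cos x + 5 sin 2x  ->  f'(x) = -2 sin x + 10 cos 2x
 print(D_fourier @ np.array([3, 2, 0, 0, 5]))   # [ 0  0 -2 10  0]

Note that this matrix is skew-symmetric (its transpose is its negative), which is the finite-dimensional counterpart of the formal adjoint of d/dx being -d/dx, mentioned above.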
Using a finite number of basis functions to approximate a set of continuous equations is called the Galerkin method. It's also the basis of finite element analysis, used e.g. to predict in a computer the vibration modes of an airliner or a Formula 1 car (or whatever). Or in signal processing it's how you think about and design digital filters. And in physics, it's very heavily used in quantum mechanics. (You may remember that the first "modern" form of quantum mechanics was Heisenberg's matrix mechanics -- which initially was rather a mystery. But Hilbert asked Heisenberg whether there was a differential equation they could be related to. Heisenberg didn't take the hint. But if he had, it's entirely possible he might have beaten Schrödinger to the Schrödinger equation. In the end it was Dirac who showed how the two systems were equivalent, in just the sort of way you've written out above, and that synthesis has been the bedrock of quantum mechanics ever since.)
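In its very simplest form (a toy sketch, assuming NumPy; real finite-element codes use local basis functions and sparse matrices), the Galerkin recipe for -u'' = f on [0, π] with u(0) = u(π) = 0 and a sine basis turns the differential equation into a small diagonal linear system for the coefficients:

 import numpy as np
 N = 8                                     # number of sine basis functions
 x = np.linspace(0.0, np.pi, 2001)
 dx = x[1] - x[0]
 f = x * (np.pi - x)                       # a sample right-hand side
 k = np.arange(1, N + 1)
 # project f onto the basis: f_k = (2/pi) * integral of f(x) sin(kx) dx
 f_k = np.array([2.0 / np.pi * np.sum(f * np.sin(kk * x)) * dx for kk in k])
 # in this basis -d^2/dx^2 is just diag(k^2), so the Galerkin system A c = f_k is diagonal
 A = np.diag(k.astype(float) ** 2)
 c = np.linalg.solve(A, f_k)               # coefficients of the approximate solution
 u = np.sin(np.outer(x, k)) @ c            # u(x) ~ sum_k c_k sin(kx), solving -u'' = f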
So if you look inside, for example, a big numerical weather forecasting installation, you'll basically find the entire world's weather represented as a big vector, which all the differential operators act on like a big matrix. So, having defined your basis, you can think of the operator that maps today's weather forward to tomorrow as essentially again a very big matrix. You can then use standard matrix techniques like singular value decomposition to see which vectors are least stable when you apply that matrix -- i.e. the directions in which a little change added today will be blown up by the largest amount to give the biggest possible change tomorrow. That's basically how the ECMWF chooses which perturbations to run for Ensemble forecasting -- in this case, the unstable vector found by the SVD may typically correspond to the explosive formation of an entire weather system.
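In miniature (a sketch, assuming NumPy; here M is just a random stand-in for whatever linearised today-to-tomorrow map is actually in use), that computation looks like:

 import numpy as np
 rng = np.random.default_rng(0)
 M = rng.standard_normal((5, 5))           # toy linearised "today -> tomorrow" map
 U, s, Vt = np.linalg.svd(M)
 v1 = Vt[0]                                # leading right singular vector: the
                                           # perturbation direction amplified most
 print(s[0])                               # the largest amplification factor
 print(np.linalg.norm(M @ v1))             # equals s[0], since ||M v1|| = sigma_1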
So in short, yes, it's no coincidence that you can represent differential operators by matrices; and this has huge relevance in the real world. Jheald (talk) 17:02, 30 March 2013 (UTC)
You can also consider the linear space spanned by cos(x) and sin(x), or simply the one-dimensional complex vector space of functions A exp(i x). Then the differential operator is equivalent to a rotation by 90 degrees. You then do have an inverse, and it's also clear what the square root of the differential operator should be. You can then generalize this and define fractional powers of the differential operator, or indeed any analytic function of it. Count Iblis (talk) 18:20, 30 March 2013 (UTC)
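Numerically (a minimal sketch, assuming NumPy), on span{cos x, sin x} with coefficient vectors (p, q) standing for p cos x + q sin x:

 import numpy as np
 D = np.array([[0.0, 1.0],
               [-1.0, 0.0]])               # d/dx on (p, q) <-> p cos x + q sin x:
                                           # a rotation by 90 degrees
 print(D @ np.array([1.0, 0.0]))           # cos x -> -sin x, i.e. (0, -1)
 print(np.linalg.inv(D))                   # the inverse rotation: the antiderivative
 R = np.array([[1.0, 1.0],
               [-1.0, 1.0]]) / np.sqrt(2.0)   # a 45-degree rotation
 print(R @ R)                              # equals D, so R is a square root of d/dx
                                           # on this two-dimensional space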