Wikipedia:Reference desk/Archives/Mathematics/2011 November 28

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 28

Series solutions of a differential equation

Hi, I've been stuck on this question for the past few hours: Let  . This has a regular singular point at  .

  1. Find the indicial equation  , with roots   and  .
  2. Find the series solutions:
    1.  
    2.  

I've figured out (using an indicial equation formula) that:   and   for n ≥ 1, and   and   for n ≥ 1. Therefore:  and  .

I then tried to find a recurrence relation for  :   which is undefined at n = 1... Where do my errors lie and what can I do to fix them?

I thank you greatly for your help. 216.221.38.254 (talk) 08:27, 28 November 2011 (UTC)[reply]

I don't know about the p's and q's, but I think you made a mistake in the recursion. I get, for the   term,
 , or
 
Arthur Rubin (talk) 19:19, 2 December 2011 (UTC)[reply]

Recipe for from-scratch logarithm table

What is the algorithm for creating a logarithm table from scratch (that is, from absolute basics; here, just the concepts of numbers and arithmetic)? 20.137.18.53 (talk) 13:08, 28 November 2011 (UTC)[reply]

Choose a base, say 10, and compute the powers 10⁰ = 1, 10¹ = 10, 10² = 100. The reverse of this is a logarithm table. That is not satisfactory, because the only entries in the table are 1, 10, 100. Then choose another base closer to one, say 1.01, and compute the successive powers
 n    1.01^n
 0    1
 1    1.01
 2    1.0201
 3    1.0303
 4    1.0406
 5    1.05101
 6    1.06152
 7    1.07214
 8    1.08286
 9    1.09369
10    1.10462

and so on. The reverse of this table is a base-1.01 logarithm table. The price paid for keeping the method this basic is a great deal of work. Bo Jacoby (talk) 13:40, 28 November 2011 (UTC).[reply]
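The recipe above can be sketched in a few lines of Python. The function name and the number of steps are illustrative, not from the original; the point is that only repeated multiplication is needed to build the table.

```python
# Sketch of the method described above: tabulate successive powers of a
# base close to 1 (here 1.01), then read the table "in reverse" as a
# base-1.01 logarithm table. Only multiplication is used to build it.
def build_log_table(base=1.01, steps=11):
    table = []          # list of (exponent, base**exponent) pairs
    value = 1.0
    for exponent in range(steps):
        table.append((exponent, value))
        value *= base   # one multiplication per entry
    return table

table = build_log_table()
n, x = table[10]
print(n, round(x, 5))   # last entry: 10 1.10462, matching the table above
```

Rebasing (Briggs's idea) then only needs one division per entry: log10(x) = log_1.01(x) / log_1.01(10), and log_1.01(10) can itself be read off a long enough table.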

Essentially, this is what John Napier did, using a base of 1 − 10⁻⁷. Logarithms of intermediate values can be estimated by interpolation. Only "basic" arithmetic is required. Henry Briggs had the idea of re-basing tables of logarithms to use a base of 10. Gandalf61 (talk) 14:14, 28 November 2011 (UTC)[reply]
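The interpolation step mentioned above can be illustrated with the same kind of power table. This sketch (details assumed: a base-1.01 table extended past 10, simple linear interpolation) estimates log10(2) using nothing beyond arithmetic.

```python
import math

# Build a base-1.01 power table long enough to cover values up to 10.
base = 1.01
powers = [1.0]
while powers[-1] < 10.0:
    powers.append(powers[-1] * base)

def log_from_table(x):
    # Find n with powers[n] <= x < powers[n+1], then interpolate
    # linearly between the two table entries.
    n = max(i for i, p in enumerate(powers) if p <= x)
    frac = (x - powers[n]) / (powers[n + 1] - powers[n])
    return n + frac     # approximate log base 1.01 of x

# Re-base to 10 by dividing two table lookups (Briggs's idea).
log10_2 = log_from_table(2.0) / log_from_table(10.0)
print(log10_2)          # close to math.log10(2) = 0.30103...
```

With entries spaced only about 1% apart, the linear interpolation error is far smaller than the spacing, which is why tables built this way were usable in practice.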

Closed-form solution

Does this set of recursive equations have a corresponding set of closed-form solutions?

 
 
 

--Melab±1 16:30, 28 November 2011 (UTC)[reply]

Consider the matrix
 
and the column
 
Then the equations are
 
and the solution is
 
So
 
is the general solution. Bo Jacoby (talk) 16:37, 28 November 2011 (UTC).[reply]
I don't doubt that you are right, but I don't see how a matrix solves the equation. --Melab±1 17:32, 28 November 2011 (UTC)[reply]
Could they possibly be presented in this form:
 
 
 
--Melab±1 17:36, 28 November 2011 (UTC)[reply]
The matrix is by far the most compact way of writing it. It also lends itself nicely to generalising for arbitrary coefficients in your equations. To find solutions for a particular n, just do the n multiplications of the matrix; then you get   etc. in terms of your starting values by multiplying   with  . If the vector   happens to be an eigenvector of the matrix M, however, then raising M to the power n can be replaced by taking the nth power of the corresponding eigenvalue... but you won't really get anything more compact than that. Icthyos (talk) 20:49, 28 November 2011 (UTC)[reply]
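The coefficients of the original system did not survive in this archive, so the matrix M and starting vector below are made-up stand-ins; the mechanics are the same for any linear system of this shape. The sketch checks that iterating the recurrence n times agrees with a single multiplication by Mⁿ.

```python
import numpy as np

# Hypothetical 3x3 coefficient matrix and starting values (the real
# ones from the question are not recoverable from this archive).
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x0 = np.array([1.0, 2.0, 3.0])

# Iterating the recurrence n times...
n = 5
x_iter = x0.copy()
for _ in range(n):
    x_iter = M @ x_iter

# ...is the same as multiplying the starting vector by M^n once.
x_power = np.linalg.matrix_power(M, n) @ x0
print(np.allclose(x_iter, x_power))   # True
```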
Well it could be made easier by computing the eigenvalues and eigenvectors. Dmcq (talk) 20:54, 28 November 2011 (UTC)[reply]
Could it possibly work if two of the variables were in the same term? --Melab±1 22:17, 28 November 2011 (UTC)[reply]
What do you mean by this? It's a bit unclear. Icthyos (talk) 23:27, 28 November 2011 (UTC)[reply]
I meant like if you had a term in one of the equations like  . --Melab±1 15:21, 29 November 2011 (UTC)[reply]
Then the equations would no longer be linear, and none of the linear algebra techniques suggested in this section would really work (at least, not globally). Gandalf61 (talk) 15:44, 29 November 2011 (UTC)[reply]
I probably should have expanded. The matrix is probably diagonalizable, as the eigenvalues are probably all different, i.e. it can be put in the form M = P⁻¹DP where D has only its diagonal entries set, so that Mⁿ = P⁻¹DⁿP. With this, Dⁿ is simply a diagonal matrix with each diagonal element raised to the power n. This will all give you a nice closed form. The only (!) downside is that the eigenvalues probably involve some nasty cube roots, so this is really only useful if you are happy with a straight numeric approximation rather than a great big cube root in there. Dmcq (talk) 00:11, 29 November 2011 (UTC)[reply]
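The diagonalisation step can be sketched the same way, again with a made-up matrix since the original coefficients are not recoverable here. Note that numpy's `eig` uses the convention M = P D P⁻¹ with eigenvectors as the columns of P, so Mⁿ = P Dⁿ P⁻¹.

```python
import numpy as np

# Stand-in matrix (not the original system's coefficients).
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

w, P = np.linalg.eig(M)   # eigenvalues w may be complex
n = 7

# M^n computed directly, and via the eigendecomposition: raising M to
# the power n reduces to raising each eigenvalue to the power n.
Mn_direct = np.linalg.matrix_power(M, n)
Mn_eig = P @ np.diag(w ** n) @ np.linalg.inv(P)

# For a real matrix the imaginary parts cancel, leaving real entries.
print(np.allclose(Mn_direct, Mn_eig))   # True
```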
The eigenvalues and eigenvectors are here at Wolfram Alpha. You can use these to get a closed form solution without using any matrix, but your closed form will involve some very nasty constants like this:   (You'll notice this is imaginary, but when combined with the other similarly complicated parts of the solution you magically get real numbers.) Hopefully you don't actually need to know what the closed form solution is, only that it exists. In that case the answer is "yes" (if you allow cube roots in your "closed form"). Staecker (talk) 00:22, 29 November 2011 (UTC)[reply]
No, this isn't right. The numbers' distances from zero grow each time. Those equations I gave you were like this originally:
 
 
 
and
 
 
 ,
which I used to get:
 
 
 .
--Melab±1 15:15, 29 November 2011 (UTC)[reply]
Check your working. I get:
 
 
 .
Gandalf61 (talk) 15:44, 29 November 2011 (UTC)[reply]

Dmcq's solution is, using Staecker's numbers:

 

where

 

is a diagonal matrix of eigenvalues, such that

 

and where the columns of

 

are eigenvectors, and where

 

is the inverse matrix to P. Bo Jacoby (talk) 02:17, 29 November 2011 (UTC).[reply]

Putting together everything that was said so far, the (numerically approximate) closed-form solution is
 
 
 
-- Meni Rosenfeld (talk) 10:29, 29 November 2011 (UTC)[reply]