Wikipedia:Reference desk/Archives/Mathematics/2018 June 15

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 15

Generalizing Markov chains of a certain type to the continuous case

Consider Markov chains where the transition matrix is a doubly stochastic matrix. Is there any natural generalization of this to continuous-time and/or continuous-state-space stochastic processes?

In particular, Markov chains of this type with transition matrix $M$ have the following property:

$H(pM) \geq H(p)$ for all probability distributions $p$ over the states,

where $H$ denotes the Shannon entropy. In English, the entropy of the probability distribution vector over the states never decreases as the chain evolves.

Is there a family of stochastic processes which generalizes this to the continuous case and preserves (the equivalent of) this property? PeterPresent (talk) 06:02, 15 June 2018 (UTC)
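For anyone who wants to see the discrete property numerically, here is a minimal sketch (the doubly stochastic matrix M and the starting distribution p below are arbitrary illustrative choices, not taken from the question):

import numpy as np

def shannon_entropy(p):
    # Shannon entropy -sum p_i * log(p_i), skipping zero entries
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A doubly stochastic matrix: every row and every column sums to 1.
M = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])

p = np.array([0.7, 0.2, 0.1])   # arbitrary initial distribution
for step in range(5):
    print(step, shannon_entropy(p))
    p = p @ M                   # one step of the chain: p -> pM

Each printed entropy should be at least as large as the one before it.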

It's been a couple of days, and while I don't know enough about the subject to give a definitive answer, I do think it's an interesting question that deserves a response. First, from what I'm reading, the definition of the entropy of a continuous distribution is open to debate; the simplest version is Shannon's Differential entropy, but there is also the more complex but more rigorous Limiting density of discrete points put forward by Jaynes. In any case, it seems simpler to tackle the continuous-time/discrete-state-space variation first.
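As a quick illustration of the simpler definition, here is a sketch checking the differential entropy $h(f) = -\int f \ln f$ numerically against the known closed form $\tfrac{1}{2}\ln(2\pi e \sigma^2)$ for a normal density (the Gaussian example, sigma, and the grid are my own choices):

import numpy as np

sigma = 1.5
x = np.linspace(-12, 12, 200001)
f = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

h_numeric = -np.trapz(f * np.log(f), x)            # h(f) = -integral of f*ln(f)
h_exact = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(h_numeric, h_exact)                          # these should agree closely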
It's clear that, whatever variation you're talking about, if entropy is to be non-decreasing then the process must leave the distribution with maximum entropy unchanged. For a discrete state space the maximum entropy is attained by the uniform distribution, so the process must leave the uniform distribution unchanged. For discrete time (a Markov chain), this amounts to saying that $DJ = J$, where $J$ is the vector of all ones, and this is just another way of saying that $D$ is doubly stochastic, given that $D$ is already stochastic. Another necessary condition is that small variations away from the uniform distribution don't get larger. For discrete time this is guaranteed by the fact that a stochastic matrix has no eigenvalues with absolute value greater than 1. A simple model of continuous time is:
$\frac{dP}{dt} = AP$
where $P$ is a column vector representing the probability distribution and $A$ is a matrix whose columns sum to 0, i.e. $JA = 0$ (reading $J$ as a row vector). The solution to the differential equation is:
$P(t) = e^{At}P(0)$
and if this is to make sense as a probability distribution, all the entries of $e^{At}$ must be between 0 and 1. If the process is to preserve the uniform distribution as required, then in addition $AJ = 0$. In this case $e^{At}$ is a doubly stochastic matrix, and by appealing to the discrete case the entropy is non-decreasing.
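A small sketch of this continuous-time model, assuming the generator A below (an arbitrary symmetric choice whose rows and columns both sum to 0) and using SciPy's matrix exponential:

import numpy as np
from scipy.linalg import expm

# Rows and columns of A both sum to 0, so exp(At) is doubly stochastic
# for every t >= 0 and the entropy of P(t) should be non-decreasing.
A = np.array([[-1.0,  0.5,  0.5],
              [ 0.5, -1.0,  0.5],
              [ 0.5,  0.5, -1.0]])

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

P0 = np.array([0.8, 0.15, 0.05])   # arbitrary initial distribution
for t in [0.0, 0.5, 1.0, 2.0, 4.0]:
    Pt = expm(A * t) @ P0          # P(t) = e^{At} P(0)
    print(t, shannon_entropy(Pt))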
I think another generalization that might be worth looking into is to drop the assumption that all states are equally probable a priori, maybe via a variation on conditional entropy. If the process has a distribution which is the limit no matter the initial state, then perhaps this should be taken into account in the definition of the entropy of the system. None of this seems to cover Brownian motion though, and you would think it would have increasing entropy by thermodynamics; I'm not sure what that would mean formally. Again, not really my area, so hopefully this makes sense, and sorry if I'm reinventing the wheel here. --RDBury (talk) 13:51, 17 June 2018 (UTC)
Thank you for your response. I guess continuous time is easier than continuous state space because there are ways to define $M^t$ when $t$ is a real number. PeterPresent (talk) 14:45, 17 June 2018 (UTC)
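For instance, one standard meaning of a real matrix power is the fractional matrix power via the matrix logarithm. A sketch (the symmetric matrix M is my own choice, picked to have positive eigenvalues so the principal square root is real; in general the fractional power of a doubly stochastic matrix need not have non-negative entries, so this only illustrates one possible definition):

import numpy as np
from scipy.linalg import fractional_matrix_power

# A symmetric doubly stochastic matrix with positive eigenvalues.
M = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

M_half = fractional_matrix_power(M, 0.5)       # one meaning of M^(1/2)
print(M_half @ M_half)                         # recovers M up to rounding
print(M_half.sum(axis=0), M_half.sum(axis=1))  # row/column sums stay 1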
Maybe the next step should be a discrete but infinite state space, as in a random walk. There seems to be a lot of room for exploration here, but I have no idea whether it's all been done, whether it's an area of current research, or whether it's wide open; or it could be that I'm wrong and it's just a dead end. --RDBury (talk) 14:19, 18 June 2018 (UTC)

Use of gauge functions in perturbation methods

I am learning perturbation methods for solving nonlinear vibration problems, and I am unable to understand the use of gauge functions properly. What does the term "is order of", or the O (big oh) symbol, actually mean? Is it the same as the order of magnitude of a number? For example, the book I am referring to says sin(ε) is of order ε as ε → 0. What does this actually mean? And what is the difference between the O (big oh) and o (small oh) symbols? Can anybody explain with a few examples? — Preceding unsigned comment added by Anildubey.sbp (talk • contribs) 14:22, 15 June 2018 (UTC)

See here: Big O notation. 2A02:C7D:B3A8:B900:34F8:8E73:7194:15C (talk) 14:32, 15 June 2018 (UTC)
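As a numerical illustration of the sin(ε) example from the question (my own sketch): sin(ε) is O(ε) as ε → 0 because the ratio sin(ε)/ε stays bounded (in fact it tends to 1), and for the same reason sin(ε) is not o(ε), which would require that ratio to tend to 0. It is, however, o(1), since sin(ε) itself tends to 0.

import math

# Watch sin(eps) shrink like eps while sin(eps)/eps tends to 1.
for eps in [0.1, 0.01, 0.001, 0.0001]:
    print(eps, math.sin(eps), math.sin(eps) / eps)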