Markov renewal process

In probability and statistics, a Markov renewal process (MRP) is a random process that generalizes the notion of Markov jump processes. Other random processes, such as Markov chains, Poisson processes and renewal processes, can be derived as special cases of MRPs.

Definition

An illustration of a Markov renewal process

Consider a state space $S$ and a set of random variables $(X_n, T_n)$, where the $T_n$ are the jump times and the $X_n$ are the associated states in the Markov chain (see figure). Let $\tau_n = T_n - T_{n-1}$ denote the inter-arrival times. Then the sequence $(X_n, T_n)$ is called a Markov renewal process if

$$\begin{aligned}
&\Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid (X_0, T_0), (X_1, T_1), \ldots, (X_n = i, T_n)) \\
&\quad = \Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid X_n = i) \quad \text{for all } n \ge 1,\ t \ge 0,\ i, j \in S
\end{aligned}$$
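The defining property above can be turned directly into a simulation: given the current state $X_n = i$, the pair $(\tau_{n+1}, X_{n+1})$ is drawn using only $i$, never the earlier history. The transition probabilities and holding-time distributions below are illustrative assumptions (and, for simplicity, the holding time and the next state are drawn independently given the current state, which is one special case of the general joint kernel):

```python
import random

# Illustrative kernel (assumed for this sketch): given the current state i,
# the next state and the inter-arrival time are sampled using only i,
# never the earlier history -- the defining MRP property.
P = {0: [(0, 0.3), (1, 0.7)],          # Pr(X_{n+1} = j | X_n = i)
     1: [(0, 0.6), (1, 0.4)]}
holding = {0: lambda: random.expovariate(2.0),   # tau_{n+1} given X_n = 0
           1: lambda: random.uniform(0.5, 1.5)}  # tau_{n+1} given X_n = 1

def simulate_mrp(x0, n_steps, seed=42):
    """Return a realisation [(X_0, T_0), (X_1, T_1), ...] of the MRP."""
    random.seed(seed)
    x, t = x0, 0.0
    path = [(x, t)]
    for _ in range(n_steps):
        t += holding[x]()                         # tau_{n+1} = T_{n+1} - T_n
        states, probs = zip(*P[x])
        x = random.choices(states, weights=probs)[0]
        path.append((x, t))
    return path
```

For example, `simulate_mrp(0, 5)` yields six `(state, time)` pairs with strictly increasing jump times.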

Relation to other stochastic processes

1. If we define a new stochastic process $Y_t := X_n$ for $t \in [T_n, T_{n+1})$, then the process $Y_t$ is called a semi-Markov process. Note the main difference between an MRP and a semi-Markov process is that the former is defined as a sequence of state–time pairs, whereas the latter is the actual random process that evolves over time, and any realisation of the process has a defined state at any given time. The entire process is not Markovian, i.e. memoryless, as it would be in a continuous-time Markov chain (CTMC); instead, the process is Markovian only at the specified jump instants. This is the rationale behind the name semi-Markov.[1][2][3] (See also: hidden semi-Markov model.)
2. A semi-Markov process (defined in point 1 above) in which all the holding times are exponentially distributed is a CTMC. In other words, if the inter-arrival times are exponentially distributed, and if the waiting time in a state and the next state reached are independent, we have a CTMC.
$$\begin{aligned}
&\Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid (X_0, T_0), (X_1, T_1), \ldots, (X_n = i, T_n)) \\
&\quad = \Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid X_n = i) \\
&\quad = \Pr(X_{n+1} = j \mid X_n = i)(1 - e^{-\lambda_i t}), \quad \text{for all } n \ge 1,\ t \ge 0,\ i, j \in S,\ i \ne j
\end{aligned}$$
3. The sequence $X_n$ in the MRP is a discrete-time Markov chain (DTMC). In other words, if the time variables are ignored in the MRP equation, we end up with a DTMC.
$$\Pr(X_{n+1} = j \mid X_0, X_1, \ldots, X_n = i) = \Pr(X_{n+1} = j \mid X_n = i) \quad \text{for all } n \ge 1,\ i, j \in S$$
4. If the sequence of $\tau$s is independent and identically distributed, and if its distribution does not depend on the state $X_n$, then the process is a renewal process. So, if the states are ignored and we have a chain of iid times, then we have a renewal process.
$$\Pr(\tau_{n+1} \le t \mid T_0, T_1, \ldots, T_n) = \Pr(\tau_{n+1} \le t) \quad \text{for all } n \ge 1,\ t \ge 0$$
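Points 1 and 3 above can be illustrated on a concrete realisation: the semi-Markov process $Y_t$ is read off by locating which interval $[T_n, T_{n+1})$ contains $t$, and the embedded DTMC is obtained by simply discarding the jump times. The sample path used here is an arbitrary assumption for illustration:

```python
import bisect

def semi_markov_state(path, t):
    """Y_t = X_n for t in [T_n, T_{n+1}), given path = [(X_0, T_0), ...]."""
    times = [T for _, T in path]
    n = bisect.bisect_right(times, t) - 1   # index of last jump at or before t
    return path[n][0]

def embedded_chain(path):
    """Ignore the jump times: the state sequence X_0, X_1, ... is a DTMC."""
    return [x for x, _ in path]

# An arbitrary example realisation (X_n, T_n):
path = [(0, 0.0), (1, 1.2), (0, 2.5), (1, 3.0)]
print(semi_markov_state(path, 2.0))   # the state held over [1.2, 2.5) is 1
print(embedded_chain(path))           # [0, 1, 0, 1]
```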
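The factorisation in point 2 can be checked by Monte Carlo: with an exponential holding time of rate $\lambda_i$ and a next state drawn independently of that holding time, the empirical frequency of $\{\tau \le t,\, X_{n+1} = j\}$ should match $\Pr(X_{n+1} = j \mid X_n = i)(1 - e^{-\lambda_i t})$. The rate and transition probability below are arbitrary assumptions:

```python
import math
import random

random.seed(1)
lam_i = 2.0      # assumed rate lambda_i of the exponential holding time in state i
p_ij = 0.7      # assumed Pr(X_{n+1} = j | X_n = i)
t = 0.4
N = 200_000

hits = 0
for _ in range(N):
    tau = random.expovariate(lam_i)        # holding time in state i
    goes_to_j = random.random() < p_ij     # next state, drawn independently of tau
    if tau <= t and goes_to_j:
        hits += 1

empirical = hits / N
theoretical = p_ij * (1 - math.exp(-lam_i * t))
# empirical and theoretical agree up to Monte Carlo error
```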
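Point 4 reduces to the classical renewal setup: a single stream of iid inter-arrival times, together with the count $N(t)$ of renewals up to time $t$. The inter-arrival sampler below is an assumption for illustration:

```python
import random

def renewal_jump_times(sample_tau, n, seed=7):
    """Jump times T_1 < ... < T_n of a renewal process whose iid
    inter-arrival times are drawn by sample_tau()."""
    random.seed(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += sample_tau()       # iid, independent of any state or history
        times.append(t)
    return times

def n_renewals(times, t):
    """N(t): number of renewals in [0, t]."""
    return sum(1 for T in times if T <= t)

# With deterministic unit gaps the jump times are 1, 2, 3, ...:
times = renewal_jump_times(lambda: 1.0, 5)
print(n_renewals(times, 3.5))   # -> 3
```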