Renewal theory is the branch of probability theory that generalizes the Poisson process for arbitrary holding times. Applications include calculating the best strategy for replacing worn-out machinery in a factory (an example below) and comparing the long-term benefits of different insurance policies.
Renewal processes

The renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the positive integers (usually starting at zero) which has independent identically distributed holding times at each integer $i$ (exponentially distributed) before advancing (with probability 1) to the next integer: $i+1$. In a compound Poisson process, the jump size need not be from $i$ to $i+1$ but is itself a random variable, and those jump sizes are independent and identically distributed. The holding times must have a memoryless exponential distribution if the number of jumps in each time interval is to have a Poisson distribution with expected value proportional to the length of the interval.
In a renewal process, the holding times need not have a memoryless distribution; rather, the process loses its memory only when one holding period ends and the next begins. That means the conditional probability distribution of the future of the process, given the past, is the same every time such a "renewal" occurs, and thus does not depend on the past. Note, however, that the independence and identical distribution (IID) property of the holding times is retained.
Let $S_1, S_2, S_3, \ldots$ be a sequence of positive independent identically distributed random variables. We refer to the random variable $S_i$ as the "$i$-th" holding time, and write

$$\mu = \operatorname{E}[S_i]$$

for the expectation of $S_i$; we assume $0 < \mu < \infty$.

Define for each $n > 0$:

$$J_n = \sum_{i=1}^{n} S_i,$$

each $J_n$ referred to as the "$n$-th" jump time, and the intervals

$$[J_n, J_{n+1}]$$

being called renewal intervals.

Then $(X_t)_{t \geq 0}$ is given by the random variable

$$X_t = \sup\{\, n : J_n \leq t \,\} = \sum_{n=1}^{\infty} \mathbb{I}_{\{J_n \leq t\}},$$

where $\mathbb{I}_{\{J_n \leq t\}}$ is the indicator function of the event $\{J_n \leq t\}$. $X_t$ represents the number of jumps that have occurred by time $t$, and $(X_t)_{t \geq 0}$ is called a renewal process.
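The definitions above can be sketched in a short simulation (illustrative code, not from the original article; the function name and the choice of Uniform(0, 2) holding times are assumptions): holding times $S_i$ are sampled, jump times $J_n$ accumulated, and $X_t$ counts the jumps with $J_n \leq t$.

```python
import random

def renewal_count(t, holding_time_sampler, rng):
    """Count X_t: the number of jump times J_n that land at or before t."""
    total, count = 0.0, 0
    while True:
        total += holding_time_sampler(rng)  # add the next holding time S_i
        if total > t:
            return count
        count += 1

rng = random.Random(0)
# Uniform(0, 2) holding times have mean mu = 1, so E[X_t] is roughly t / mu.
counts = [renewal_count(100.0, lambda r: r.uniform(0.0, 2.0), rng)
          for _ in range(2000)]
print(sum(counts) / len(counts))  # close to 100 (= t / mu)
```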
If one considers events occurring at random times, one may choose to think of the holding times as the random time elapsed between two consecutive events. For example, if the renewal process is modelling the number of breakdowns of different machines, then the holding time represents the time between one machine breaking down and the next breaking down.
Renewal-reward processes

Let $W_1, W_2, \ldots$ be a sequence of IID random variables (rewards) satisfying

$$\operatorname{E}|W_i| < \infty.$$

Then the random variable

$$Y_t = \sum_{i=1}^{X_t} W_i$$

is called a renewal-reward process. Note that unlike the $S_i$, each $W_i$ may take negative values as well as positive values.

The random variable $Y_t$ depends on two sequences: the holding times $S_1, S_2, \ldots$ and the rewards $W_1, W_2, \ldots$ These two sequences need not be independent. In particular, $W_i$ may be a function of $S_i$.
In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions.
An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as $S_i$. Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" $W_i$ are the successive (random) financial losses/gains resulting from successive eggs ($i = 1, 2, 3, \ldots$) and $Y_t$ records the total financial "reward" at time $t$.
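A renewal-reward process can be sketched in the same way (illustrative code; the reward rule $W_i = 10 S_i - 3$ is an arbitrary assumption chosen to show that $W_i$ may be a function of $S_i$ and may take negative values):

```python
import random

def reward_process(t, rng):
    """Simulate Y_t: accumulate one reward per completed renewal up to time t."""
    clock, total_reward = 0.0, 0.0
    while True:
        s = rng.uniform(0.0, 2.0)       # holding time S_i ~ Uniform(0, 2), mu = 1
        clock += s
        if clock > t:
            return total_reward
        total_reward += 10.0 * s - 3.0  # reward W_i = 10 * S_i - 3 (can be negative)

rng = random.Random(1)
samples = [reward_process(500.0, rng) for _ in range(200)]
# Long-run reward rate Y_t / t should be near E[W_1] / E[S_1] = (10 * 1 - 3) / 1 = 7.
print(sum(samples) / len(samples) / 500.0)
```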
Properties of renewal processes and renewal-reward processes
We define the renewal function as the expected value of the number of jumps observed up to some time $t$:

$$m(t) = \operatorname{E}[X_t].$$
The elementary renewal theorem
The renewal function satisfies

$$\lim_{t \to \infty} \frac{1}{t} m(t) = \frac{1}{\mu}.$$
To prove the elementary renewal theorem, it is sufficient to show that $\left[\frac{X_t}{t} ; t \geq 0\right]$ is uniformly integrable.

To do this, consider some truncated renewal process where the holding times are defined by $\overline{S_n} = a\,\mathbb{I}\{S_n > a\}$, where $a$ is a point such that $0 < F(a) = \operatorname{P}(S_n \leq a) < 1$; such a point exists for all non-deterministic renewal processes. This new renewal process $(\overline{X_t})$ is an upper bound on $(X_t)$ and its renewals can only occur on the lattice $\{na : n \in \mathbb{N}\}$. Furthermore, the number of renewals at each lattice time is geometrically distributed. So we have

$$X_t \leq \overline{X_t} \leq \sum_{i=1}^{\lfloor t/a \rfloor + 1} G_i,$$

where the $G_i$ are IID geometric random variables; hence $\operatorname{E}\!\left[\left(\frac{X_t}{t}\right)^{2}\right]$ is bounded uniformly in $t \geq 1$, which gives the required uniform integrability.
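The limit in the elementary renewal theorem can be checked by Monte Carlo estimation of $m(t)$ at increasing $t$ (an illustrative sketch; the Uniform(0, 3) holding-time distribution, giving $\mu = 1.5$, and the run counts are assumptions):

```python
import random

def estimate_m(t, n_runs, rng):
    """Monte Carlo estimate of the renewal function m(t) = E[X_t]
    for Uniform(0, 3) holding times, so mu = 1.5."""
    total = 0
    for _ in range(n_runs):
        clock, count = 0.0, 0
        while True:
            clock += rng.uniform(0.0, 3.0)
            if clock > t:
                break
            count += 1
        total += count
    return total / n_runs

rng = random.Random(2)
for t in (10.0, 100.0, 1000.0):
    print(t, estimate_m(t, 500, rng) / t)  # ratios tend toward 1/mu = 2/3
```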
The elementary renewal theorem for renewal-reward processes
We define the reward function:

$$g(t) = \operatorname{E}[Y_t].$$

The reward function satisfies

$$\lim_{t \to \infty} \frac{1}{t} g(t) = \frac{\operatorname{E}[W_1]}{\operatorname{E}[S_1]}.$$
The renewal equation
The renewal function satisfies

$$m(t) = F_S(t) + \int_0^t m(t-s)\, f_S(s)\, ds,$$

where $F_S$ is the cumulative distribution function of $S_1$ and $f_S$ is the corresponding probability density function.
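The renewal equation can be solved numerically by discretizing the convolution. A sketch (the Riemann-sum scheme, step size, and the choice of Exp(2) holding times are assumptions): for exponential holding times with rate $\lambda$ the exact solution is $m(t) = \lambda t$, which the scheme should approach as the grid is refined.

```python
import math

def solve_renewal_equation(f, F, T, n):
    """Numerically solve m(t) = F(t) + integral_0^t m(t - s) f(s) ds
    on a grid of step h = T / n, using a simple Riemann sum for the integral."""
    h = T / n
    m = [0.0] * (n + 1)           # m[0] = m(0) = 0
    for i in range(1, n + 1):
        t = i * h
        conv = sum(m[i - j] * f(j * h) * h for j in range(1, i + 1))
        m[i] = F(t) + conv
    return m

lam = 2.0
f = lambda s: lam * math.exp(-lam * s)   # Exp(2) density
F = lambda t: 1.0 - math.exp(-lam * t)   # Exp(2) CDF
m = solve_renewal_equation(f, F, T=5.0, n=2000)
print(m[-1])  # approaches the exact value lam * T = 10 as the grid is refined
```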
Proof of the renewal equationEdit
- We may iterate the expectation about the first holding time:
- But by the Markov property
- as required.
The processes $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ satisfy:

- $\lim_{t \to \infty} \frac{1}{t} X_t = \frac{1}{\operatorname{E}[S_1]}$ almost surely (strong law of large numbers for renewal processes)
- $\lim_{t \to \infty} \frac{1}{t} Y_t = \frac{\operatorname{E}[W_1]}{\operatorname{E}[S_1]}$ almost surely (strong law of large numbers for renewal-reward processes)
First consider $(X_t)$. By definition we have:

$$J_{X_t} \leq t \leq J_{X_t + 1}$$

for all $t \geq 0$, and so

$$\frac{J_{X_t}}{X_t} \leq \frac{t}{X_t} \leq \frac{J_{X_t + 1}}{X_t}$$

for all $t \geq 0$.

Now since $0 < \operatorname{E}[S_1] < \infty$ we have:

$$X_t \to \infty$$

as $t \to \infty$ almost surely (with probability 1). Hence:

$$\frac{J_{X_t}}{X_t} = \frac{1}{X_t} \sum_{i=1}^{X_t} S_i \to \operatorname{E}[S_1]$$

almost surely (using the strong law of large numbers); similarly:

$$\frac{J_{X_t + 1}}{X_t} = \frac{X_t + 1}{X_t} \cdot \frac{J_{X_t + 1}}{X_t + 1} \to 1 \cdot \operatorname{E}[S_1] = \operatorname{E}[S_1]$$

almost surely.

Thus (since $\frac{t}{X_t}$ is sandwiched between the two terms)

$$\lim_{t \to \infty} \frac{1}{t} X_t = \frac{1}{\operatorname{E}[S_1]}$$

almost surely.

Next consider $(Y_t)$. We have

$$\frac{1}{t} Y_t = \frac{X_t}{t} \cdot \frac{1}{X_t} \sum_{i=1}^{X_t} W_i \to \frac{\operatorname{E}[W_1]}{\operatorname{E}[S_1]}$$

almost surely (using the first result and using the law of large numbers on $(W_i)$).
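The sandwich in the proof can be observed on a single long sample path (an illustrative sketch; the Exp(0.5) holding times, giving $\mu = 2$, and the horizon are assumptions):

```python
import random

def sandwich_terms(t, rng):
    """Return (J_{X_t} / X_t, t / X_t, J_{X_t + 1} / X_t) for one sample path
    with Exp(0.5) holding times, so mu = 2."""
    jumps = [0.0]
    while jumps[-1] <= t:                 # extend until the first jump past t
        jumps.append(jumps[-1] + rng.expovariate(0.5))
    x_t = len(jumps) - 2                  # number of jump times J_n <= t
    return jumps[x_t] / x_t, t / x_t, jumps[x_t + 1] / x_t

rng = random.Random(3)
lo, mid, hi = sandwich_terms(100_000.0, rng)
print(lo <= mid <= hi)  # True: t / X_t is sandwiched between the two ratios
print(mid)              # all three terms are near mu = 2 for large t
```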
The inspection paradox
A curious feature of renewal processes is that if we wait some predetermined time t and then observe how large the renewal interval containing t is, we should expect it to be typically larger than a renewal interval of average size.
Mathematically, the inspection paradox states: for any t > 0 the renewal interval containing t is stochastically larger than the first renewal interval. That is, for all x > 0 and for all t > 0:

$$\operatorname{P}(S_{X_t + 1} > x) \geq \operatorname{P}(S_1 > x) = 1 - F_S(x),$$

where $F_S$ is the cumulative distribution function of the IID holding times $S_i$.
Proof of the inspection paradox
Observe that the last jump-time before $t$ is $J_{X_t}$, and that the renewal interval containing $t$ is $S_{X_t + 1}$. Then

$$\begin{aligned}
\operatorname{P}(S_{X_t + 1} > x) &= \int_0^\infty \operatorname{P}(S_{X_t + 1} > x \mid J_{X_t} = s)\, f_{J_{X_t}}(s)\, ds \\
&= \int_0^\infty \operatorname{P}(S_{X_t + 1} > x \mid S_{X_t + 1} > t - s)\, f_{J_{X_t}}(s)\, ds \\
&= \int_0^\infty \frac{1 - F_S(\max\{x, t - s\})}{1 - F_S(t - s)}\, f_{J_{X_t}}(s)\, ds \\
&\geq \int_0^\infty \bigl(1 - F_S(x)\bigr)\, f_{J_{X_t}}(s)\, ds = \operatorname{P}(S_1 > x),
\end{aligned}$$

since $1 - F_S(\max\{x, t - s\}) \geq (1 - F_S(x))(1 - F_S(t - s))$ for all values of $s$, because both $x$ and $t - s$ are less than or equal to $\max\{x, t - s\}$.
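The paradox is easy to see by simulation (an illustrative sketch; the Uniform(0, 2) holding times and the inspection time are assumptions): the mean interval has length $\operatorname{E}[S] = 1$, but the interval straddling a fixed time $t$ averages close to the length-biased value $\operatorname{E}[S^2]/\operatorname{E}[S] = 4/3$.

```python
import random

def interval_containing(t, rng):
    """Length of the renewal interval S_{X_t + 1} that contains time t,
    for Uniform(0, 2) holding times."""
    clock = 0.0
    while True:
        s = rng.uniform(0.0, 2.0)
        if clock + s > t:     # this interval straddles t
            return s
        clock += s

rng = random.Random(4)
n = 20000
inspected = sum(interval_containing(50.0, rng) for _ in range(n)) / n
print(inspected)  # near E[S^2]/E[S] = 4/3, larger than the mean interval E[S] = 1
```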
Superposition

The superposition of independent renewal processes, or superimposed renewal process, is not generally a renewal process, but it can be described within a larger class of processes called the Markov-renewal processes. However, the cumulative distribution function of the first inter-event time in the superposition process is given by

$$R(t) = 1 - \sum_{k=1}^{K} \frac{\alpha_k}{\sum_{l=1}^{K} \alpha_l} \bigl(1 - R_k(t)\bigr) \prod_{j=1,\, j \neq k}^{K} \alpha_j \int_t^{\infty} \bigl(1 - R_j(u)\bigr)\, du,$$

where $R_k(t)$ and $\alpha_k > 0$ are the cumulative distribution function of the inter-event times and the arrival rate of process $k$.
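As a sanity check of this formula (an illustrative sketch, assuming the stationary-superposition form $R(t) = 1 - \sum_k \frac{\alpha_k}{\sum_l \alpha_l}(1 - R_k(t)) \prod_{j \neq k} \alpha_j \int_t^\infty (1 - R_j(u))\,du$): when every component is a Poisson process with rate $\alpha_k$, so $R_k(t) = 1 - e^{-\alpha_k t}$ and the integral term equals $e^{-\alpha_k t}$, the superposition is itself Poisson and the formula must collapse to $1 - e^{-(\sum_k \alpha_k)t}$.

```python
import math

def superposition_cdf(t, rates):
    """First inter-event-time CDF of a superposition of independent Poisson
    processes, evaluated term by term from the general formula."""
    total = sum(rates)
    acc = 0.0
    for k, a_k in enumerate(rates):
        surv_k = math.exp(-a_k * t)            # 1 - R_k(t)
        tail = 1.0
        for j, a_j in enumerate(rates):
            if j != k:
                tail *= math.exp(-a_j * t)     # a_j * integral_t^inf (1 - R_j(u)) du
        acc += (a_k / total) * surv_k * tail
    return 1.0 - acc

rates, t = [0.5, 1.5, 2.0], 0.7
# The formula should agree with the Poisson CDF 1 - exp(-(sum of rates) * t).
print(abs(superposition_cdf(t, rates) - (1.0 - math.exp(-sum(rates) * t))) < 1e-9)  # True
```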
Example applications

Example 1: use of the strong law of large numbers
Eric the entrepreneur has n machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails with replacement cost €2600; alternatively he may replace a machine at any time while it is still functional at a cost of €200.
What is his optimal replacement policy?
The lifetime of the n machines can be modeled as n independent concurrent renewal-reward processes, so it is sufficient to consider the case $n = 1$. Denote this process by $(Y_t)_{t \geq 0}$. The successive lifetimes $S$ of the replacement machines are independent and identically distributed, so the optimal policy is the same for all replacement machines in the process.
If Eric decides at the start of a machine's life to replace it at time $0 < t < 2$, but the machine happens to fail before that time, then the lifetime $S$ of the machine is uniformly distributed on $[0, t]$ and thus has expectation $\frac{t}{2}$. Since the lifetime is uniform on $[0, 2]$, the machine fails before the planned replacement time with probability $\frac{t}{2}$. So the overall expected lifetime of the machine is:

$$\operatorname{E}[S] = \frac{t}{2} \cdot \frac{t}{2} + \left(1 - \frac{t}{2}\right) t = t - \frac{t^2}{4}$$

and the expected cost $W$ per machine is:

$$\operatorname{E}[W] = \frac{t}{2} \cdot 2600 + \left(1 - \frac{t}{2}\right) 200 = 1200t + 200.$$

So by the strong law of large numbers, his long-term average cost per unit time is:

$$\frac{Y_t}{t} \simeq \frac{\operatorname{E}[W]}{\operatorname{E}[S]} = \frac{1200t + 200}{t - \frac{t^2}{4}};$$

then differentiating with respect to $t$:

$$\frac{\partial}{\partial t} \frac{1200t + 200}{t - \frac{t^2}{4}} = \frac{\left(t - \frac{t^2}{4}\right) 1200 - (1200t + 200)\left(1 - \frac{t}{2}\right)}{\left(t - \frac{t^2}{4}\right)^2};$$

this implies that the turning points satisfy:

$$0 = \left(t - \frac{t^2}{4}\right) 1200 - (1200t + 200)\left(1 - \frac{t}{2}\right) = 300t^2 + 100t - 200,$$

that is, $3t^2 + t - 2 = (3t - 2)(t + 1) = 0$.
We take the only solution t in [0, 2]: t = 2/3. This is indeed a minimum (and not a maximum) since the cost per unit time tends to infinity as t tends to zero, meaning that the cost is decreasing as t increases, until the point 2/3 where it starts to increase.
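A quick numerical check of this example (an illustrative sketch using the expressions derived above, expected lifetime $t - \frac{t^2}{4}$ and expected cost $1200t + 200$ per cycle; the grid step is an arbitrary choice):

```python
def avg_cost_per_unit_time(t):
    """Long-run average cost E[W] / E[S] (euros per year) for replacement
    age t in (0, 2): E[S] = t - t^2/4, E[W] = 1200 t + 200."""
    return (1200.0 * t + 200.0) / (t - t * t / 4.0)

# Scan a fine grid of replacement ages and pick the cheapest.
candidates = [i / 1000.0 for i in range(1, 2000)]
best = min(candidates, key=avg_cost_per_unit_time)
print(best)  # grid minimum lands near t = 2/3 of a year
```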
See also

- Campbell's theorem (probability)
- Compound Poisson process
- Continuous-time Markov process
- Little's lemma
- Palm–Khintchine theorem
- Poisson process
- Queueing theory
- Residual time
- Ruin theory
- Semi-Markov process
References

- Çinlar, Erhan (1969). "Markov Renewal Theory". Advances in Applied Probability. 1 (2): 123–187. JSTOR 1426216.
- Lawrence, A. J. (1973). "Dependency of Intervals Between Events in Superposition Processes". Journal of the Royal Statistical Society. Series B (Methodological). 35 (2): 306–315. JSTOR 2984914. formula 4.1
- Choungmo Fofack, Nicaise; Nain, Philippe; Neglia, Giovanni; Towsley, Don. "Analysis of TTL-based Cache Networks". Proceedings of 6th International Conference on Performance Evaluation Methodologies and Tools. Retrieved Nov 15, 2012.
- Cox, David (1970). Renewal Theory. London: Methuen & Co. p. 142. ISBN 0-412-20570-X.
- Doob, J. L. (1948). "Renewal Theory From the Point of View of the Theory of Probability" (PDF). Transactions of the American Mathematical Society. 63 (3): 422–438. doi:10.2307/1990567. JSTOR 1990567.
- Wang, Wanli; Schulz, Johannes H. P.; Deng, Weihua; Barkai, Eli (2018). "Renewal theory with fat-tailed distributed sojourn times: Typical versus rare". Phys. Rev. E. 98 (4): 042139. arXiv:1809.05856. doi:10.1103/PhysRevE.98.042139.
- Smith, Walter L. (1958). "Renewal Theory and Its Ramifications". Journal of the Royal Statistical Society, Series B. 20 (2): 243–302. JSTOR 2983891.