
Renewal processes

Introduction

The renewal process is a generalization of the compound Poisson process. In essence, the Poisson process is a continuous-time Markov process on the positive integers (usually starting at zero) which has independent identically distributed holding times at each integer $i$ (exponentially distributed) before advancing (with probability 1) to the next integer: $i+1$. In a compound Poisson process, the jump size need not be from $i$ to $i+1$ but is itself a random variable, and those random variables are independent and identically distributed. The holding times must have a memoryless (exponential) distribution if the number of jumps in each time interval is to have a Poisson distribution with expected value proportional to the length of the interval.

In a renewal process, the holding times need not have a memoryless distribution; rather, the process loses its memory only when one holding period ends and the next begins. That means the conditional probability distribution of the future of the process, given the past, is the same every time such a "renewal" occurs, and thus does not depend on the past. Note, however, that the independence and identical distribution (IID) property of the holding times is retained.
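
To make the contrast concrete, here is a minimal simulation sketch (not part of the article; the rate $\lambda$, the normal jump-size distribution and the horizon $T$ are arbitrary illustrative choices) of a compound Poisson path built from exponential holding times, together with a check that the number of jumps in $[0, T]$ has mean close to $\lambda T$.

```python
# Sketch: compound Poisson process via exponential (memoryless) holding times.
# Parameters below are illustrative choices, not taken from the article.
import numpy as np

rng = np.random.default_rng(0)
lam, T = 2.0, 10.0                       # jump rate and time horizon

def compound_poisson_path(lam, T, rng):
    """Return (jump times, running totals) of one path on [0, T]."""
    times, totals, t, value = [], [], 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)  # exponential holding time
        if t > T:
            break
        value += rng.normal(1.0, 0.5)    # IID jump size, not necessarily +1
        times.append(t)
        totals.append(value)
    return np.array(times), np.array(totals)

# The number of jumps in [0, T] should be Poisson-distributed with mean lam * T.
counts = [len(compound_poisson_path(lam, T, rng)[0]) for _ in range(2000)]
print(np.mean(counts), "jumps on average vs", lam * T)
```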

Formal definition

 
Figure: Sample evolution of a renewal process with holding times $S_i$ and jump times $J_n$.

Let $S_1, S_2, S_3, \ldots$ be a sequence of positive independent identically distributed random variables such that

$$0 < \mathbb{E}[S_i] < \infty.$$

We refer to the random variable $S_i$ as the "$i$-th" holding time.

$\mathbb{E}[S_i]$ is the expectation of $S_i$.

Define for each $n > 0$:

$$J_n = S_1 + S_2 + \cdots + S_n = \sum_{i=1}^{n} S_i,$$

each $J_n$ is referred to as the "$n$-th" jump time, and the intervals

$$[J_{n-1}, J_n] \qquad (\text{with } J_0 = 0)$$

are called renewal intervals.

Then $(X_t)_{t \geq 0}$ is given by the random variable

$$X_t = \sum_{n=1}^{\infty} \mathbb{I}_{\{J_n \leq t\}} = \sup\{\, n : J_n \leq t \,\},$$

where $\mathbb{I}_{\{J_n \leq t\}}$ is the indicator function

$$\mathbb{I}_{\{J_n \leq t\}} = \begin{cases} 1, & \text{if } J_n \leq t, \\ 0, & \text{otherwise.} \end{cases}$$

$X_t$ represents the number of jumps that have occurred by time $t$, and $(X_t)_{t \geq 0}$ is called a renewal process.
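
The definition above translates directly into a short simulation. The sketch below is my own illustration (the Uniform(0, 2) holding-time distribution and the chosen value of $t$ are arbitrary): it accumulates holding times into jump times $J_n$ and returns the number of jump times not exceeding $t$, i.e. $X_t$.

```python
# Sketch: the renewal counting variable X_t from the definition above.
# The Uniform(0, 2) holding times are an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(1)

def renewal_X(t, holding_sampler, rng):
    """Count the jump times J_n = S_1 + ... + S_n with J_n <= t."""
    n, J = 0, 0.0
    while True:
        J += holding_sampler(rng)   # next holding time S_{n+1}
        if J > t:
            return n                # X_t = sup{ n : J_n <= t }
        n += 1

uniform_holding = lambda r: r.uniform(0.0, 2.0)   # E[S_i] = 1
print(renewal_X(10.0, uniform_holding, rng))      # number of jumps by t = 10
```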

Interpretation

If one considers events occurring at random times, one may choose to think of the holding times $\{S_i : i \geq 1\}$ as the random time elapsed between two consecutive[1] events. For example, if the renewal process is modelling the breakdowns of different machines, then the holding time represents the time between one machine breaking down and the next machine breaking down.

Renewal-reward processes

 
Figure: Sample evolution of a renewal-reward process with holding times $S_i$, jump times $J_n$ and rewards $W_i$.

Let $W_1, W_2, \ldots$ be a sequence of IID random variables (rewards) satisfying

$$\mathbb{E}|W_i| < \infty.$$

Then the random variable

$$Y_t = \sum_{i=1}^{X_t} W_i$$

is called a renewal-reward process. Note that unlike the $S_i$, each $W_i$ may take negative values as well as positive values.

The random variable $Y_t$ depends on two sequences: the holding times $S_1, S_2, \ldots$ and the rewards $W_1, W_2, \ldots$ These two sequences need not be independent. In particular, $W_i$ may be a function of $S_i$.
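
The following sketch (again my own illustration, with arbitrary distributional choices) simulates one value of a renewal-reward process in which each reward is a deterministic function of the corresponding holding time, underlining that the two sequences need not be independent.

```python
# Sketch: a renewal-reward process Y_t in which W_i depends on S_i,
# here W_i = -5 * S_i (a running cost proportional to the holding time).
import numpy as np

rng = np.random.default_rng(2)

def renewal_reward_Y(t, rng):
    J, Y = 0.0, 0.0
    while True:
        S = rng.uniform(0.0, 2.0)   # holding time S_i ~ Uniform(0, 2)
        if J + S > t:
            return Y                # only rewards with J_i <= t are collected
        J += S
        Y += -5.0 * S               # reward W_i tied to the same S_i

print(renewal_reward_Y(100.0, rng))
```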

Interpretation

In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" $W_1, W_2, \ldots$ (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions.

An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as $S_i$. Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" $W_i$ are the successive (random) financial losses/gains resulting from successive eggs ($i = 1, 2, 3, \ldots$) and $Y_t$ records the total financial "reward" at time $t$.

Properties of renewal processes and renewal-reward processes

We define the renewal function as the expected value of the number of jumps observed up to some time $t$:

$$m(t) = \mathbb{E}[X_t].$$

The elementary renewal theorem

The renewal function satisfies

$$\lim_{t \to \infty} \frac{1}{t} m(t) = \frac{1}{\mathbb{E}[S_1]}.$$

Proof

The strong law of large numbers for renewal processes implies

$$\lim_{t \to \infty} \frac{X_t}{t} = \frac{1}{\mathbb{E}[S_1]} \quad \text{almost surely}.$$

To prove the elementary renewal theorem, it is sufficient to show that $\left( \frac{X_t}{t} \right)_{t > 0}$ is uniformly integrable.

To do this, consider the truncated renewal process whose holding times are defined by $\overline{S}_n = a \, \mathbb{I}_{\{S_n > a\}}$, where $F_S$ denotes the cumulative distribution function of the holding times and $a$ is a point such that $0 < F_S(a) < 1$; such a point exists for every non-deterministic renewal process. This new renewal process $\overline{X}_t$ is an upper bound on $X_t$ and its renewals can only occur on the lattice $\{na : n \in \mathbb{N}\}$. Furthermore, the number of renewals at each lattice point is geometric with parameter (success probability) $1 - F_S(a)$. So we have

$$\overline{X}_t \leq \sum_{i=1}^{\lfloor t/a \rfloor + 1} G_i, \qquad G_i \overset{\text{iid}}{\sim} \operatorname{Geometric}\bigl(1 - F_S(a)\bigr),$$

so that $\mathbb{E}\bigl[\overline{X}_t^{\,2}\bigr] \leq c_1 t + c_2 t^2$ for some constants $c_1, c_2$, and hence $\sup_{t \geq 1} \mathbb{E}\bigl[(X_t/t)^2\bigr] < \infty$, which gives the required uniform integrability.
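
As a rough numerical illustration of the theorem (mine, not the article's; holding times Uniform(0, 2), so $\mathbb{E}[S_1] = 1$, and the values of $t$ and the number of runs are arbitrary), one can estimate $m(t)$ by Monte Carlo and compare $m(t)/t$ with $1/\mathbb{E}[S_1]$:

```python
# Sketch: Monte Carlo check of the elementary renewal theorem.
# Uniform(0, 2) holding times, so 1 / E[S_1] = 1.
import numpy as np

rng = np.random.default_rng(3)
t, runs = 200.0, 500

def count_jumps(t, rng):
    n, J = 0, 0.0
    while True:
        J += rng.uniform(0.0, 2.0)
        if J > t:
            return n
        n += 1

m_t = np.mean([count_jumps(t, rng) for _ in range(runs)])  # estimate of m(t)
print(m_t / t, "should be close to", 1.0)
```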

The elementary renewal theorem for renewal-reward processes

We define the reward function:

$$g(t) = \mathbb{E}[Y_t].$$

The reward function satisfies

$$\lim_{t \to \infty} \frac{1}{t} g(t) = \frac{\mathbb{E}[W_1]}{\mathbb{E}[S_1]}.$$
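
An analogous Monte Carlo check (my own illustration; $S_i \sim$ Uniform(0, 2) and $W_i \sim$ Normal(3, 1), so the limit is $3/1 = 3$) can be made for the reward function:

```python
# Sketch: Monte Carlo check of the elementary renewal theorem for
# renewal-reward processes, g(t)/t -> E[W_1]/E[S_1].
import numpy as np

rng = np.random.default_rng(4)
t, runs = 200.0, 500

def reward_at(t, rng):
    J, Y = 0.0, 0.0
    while True:
        S = rng.uniform(0.0, 2.0)        # E[S_1] = 1
        if J + S > t:
            return Y
        J += S
        Y += rng.normal(3.0, 1.0)        # E[W_1] = 3

g_t = np.mean([reward_at(t, rng) for _ in range(runs)])  # estimate of g(t)
print(g_t / t, "should be close to", 3.0)
```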

The renewal equation

The renewal function satisfies

$$m(t) = F_S(t) + \int_0^t m(t - s)\, f_S(s)\, ds,$$

where $F_S$ is the cumulative distribution function of $S_1$ and $f_S$ is the corresponding probability density function.

Proof of the renewal equation

We may iterate the expectation about the first holding time:

$$m(t) = \mathbb{E}[X_t] = \mathbb{E}\bigl[\mathbb{E}[X_t \mid S_1]\bigr] = \int_0^\infty \mathbb{E}[X_t \mid S_1 = s]\, f_S(s)\, ds.$$

But by the Markov property

$$\mathbb{E}[X_t \mid S_1 = s] = \mathbb{I}_{\{t \geq s\}} \bigl( 1 + \mathbb{E}[X_{t - s}] \bigr).$$

So

$$m(t) = \int_0^t \bigl( 1 + m(t - s) \bigr) f_S(s)\, ds = F_S(t) + \int_0^t m(t - s)\, f_S(s)\, ds,$$

as required.
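
The renewal equation can also be used computationally. The sketch below (my own discretisation, not from the article) solves it on a uniform grid with a simple right-endpoint quadrature rule and compares the result with the exact renewal function of a Poisson process, for which $m(t) = \lambda t$.

```python
# Sketch: numerical solution of m(t) = F(t) + integral_0^t m(t - s) f(s) ds
# on a grid, checked against the Poisson case where m(t) = lam * t.
import numpy as np

lam, T, h = 1.0, 10.0, 0.001
ts = np.arange(0.0, T + h, h)
F = 1.0 - np.exp(-lam * ts)     # CDF of Exponential(lam) holding times
f = lam * np.exp(-lam * ts)     # corresponding density

m = np.zeros_like(ts)
for k in range(1, len(ts)):
    # Right-endpoint rule: only already-computed values m[0..k-1] are needed.
    m[k] = F[k] + h * np.dot(m[k - 1::-1], f[1:k + 1])

print(m[-1], "vs exact value", lam * T)
```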

Asymptotic properties

$(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ satisfy

$$\lim_{t \to \infty} \frac{1}{t} X_t = \frac{1}{\mathbb{E}[S_1]} \qquad \text{(strong law of large numbers for renewal processes)}$$

$$\lim_{t \to \infty} \frac{1}{t} Y_t = \frac{\mathbb{E}[W_1]}{\mathbb{E}[S_1]} \qquad \text{(strong law of large numbers for renewal-reward processes)}$$

almost surely.

Proof

First consider $(X_t)_{t \geq 0}$. By definition we have

$$J_{X_t} \leq t \leq J_{X_t + 1}$$

for all $t \geq 0$ and so

$$\frac{J_{X_t}}{X_t} \leq \frac{t}{X_t} \leq \frac{J_{X_t + 1}}{X_t}$$

for all $t \geq 0$.

Now since $0 < \mathbb{E}[S_i] < \infty$ we have

$$X_t \to \infty$$

as $t \to \infty$ almost surely (with probability 1). Hence

$$\frac{J_{X_t}}{X_t} = \frac{S_1 + S_2 + \cdots + S_{X_t}}{X_t} \to \mathbb{E}[S_1]$$

almost surely (using the strong law of large numbers); similarly

$$\frac{J_{X_t + 1}}{X_t} = \frac{J_{X_t + 1}}{X_t + 1} \cdot \frac{X_t + 1}{X_t} \to \mathbb{E}[S_1] \cdot 1 = \mathbb{E}[S_1]$$

almost surely.

Thus (since $t / X_t$ is sandwiched between the two terms)

$$\frac{1}{t} X_t \to \frac{1}{\mathbb{E}[S_1]}$$

almost surely.

Next consider $(Y_t)_{t \geq 0}$. We have

$$\frac{1}{t} Y_t = \frac{X_t}{t} \cdot \frac{1}{X_t} \sum_{i=1}^{X_t} W_i \to \frac{1}{\mathbb{E}[S_1]} \cdot \mathbb{E}[W_1]$$

almost surely (using the first result and using the law of large numbers on $(W_i)$).
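
Both limits can be observed along a single long simulated path. In the sketch below (my own illustration; $S_i \sim$ Uniform(0, 2) and $W_i \sim$ Normal(3, 1), horizon arbitrary), $X_t/t$ and $Y_t/t$ are compared with $1/\mathbb{E}[S_1] = 1$ and $\mathbb{E}[W_1]/\mathbb{E}[S_1] = 3$.

```python
# Sketch: strong laws of large numbers along one long realisation.
import numpy as np

rng = np.random.default_rng(5)
t = 200_000.0

J, X, Y = 0.0, 0, 0.0
while True:
    S = rng.uniform(0.0, 2.0)       # E[S_1] = 1
    if J + S > t:
        break
    J += S
    X += 1
    Y += rng.normal(3.0, 1.0)       # E[W_1] = 3

print(X / t, "should be close to", 1.0)
print(Y / t, "should be close to", 3.0)
```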

The inspection paradox

A curious feature of renewal processes is that if we wait some predetermined time t and then observe how large the renewal interval containing t is, we should expect it to be typically larger than a renewal interval of average size.

Mathematically the inspection paradox states: for any t > 0 the renewal interval containing t is stochastically larger than the first renewal interval. That is, for all x > 0 and for all t > 0:

$$\mathbb{P}(S_{X_t + 1} > x) \geq \mathbb{P}(S_1 > x) = 1 - F_S(x),$$

where $S_{X_t + 1}$ is the length of the renewal interval containing $t$ and $F_S$ is the cumulative distribution function of the IID holding times $S_i$.

Proof of the inspection paradox

 
Figure: The renewal interval determined by the random point t (shown in red) is stochastically larger than the first renewal interval.

Observe that the last jump-time before $t$ is $J_{X_t}$, and that the renewal interval containing $t$ is $S_{X_t + 1}$. Then

$$\begin{aligned}
\mathbb{P}(S_{X_t + 1} > x)
&= \int_0^\infty \mathbb{P}(S_{X_t + 1} > x \mid J_{X_t} = s)\, f_{J_{X_t}}(s)\, ds \\
&= \int_0^\infty \mathbb{P}(S_{X_t + 1} > x \mid S_{X_t + 1} > t - s)\, f_{J_{X_t}}(s)\, ds \\
&= \int_0^\infty \frac{\mathbb{P}(S_{X_t + 1} > x,\; S_{X_t + 1} > t - s)}{\mathbb{P}(S_{X_t + 1} > t - s)}\, f_{J_{X_t}}(s)\, ds \\
&= \int_0^\infty \frac{1 - F_S(\max\{x, t - s\})}{1 - F_S(t - s)}\, f_{J_{X_t}}(s)\, ds \\
&\geq \int_0^\infty \bigl(1 - F_S(x)\bigr)\, f_{J_{X_t}}(s)\, ds \\
&= 1 - F_S(x) = \mathbb{P}(S_1 > x),
\end{aligned}$$

since both $\frac{1 - F_S(x)}{1 - F_S(t - s)}$ (the value of the integrand when $t - s \leq x$) and $1$ (its value when $t - s > x$) are greater than or equal to $1 - F_S(x)$ for all values of $s$.
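
The paradox is easy to observe by simulation. The sketch below (my illustration; Uniform(0, 2) holding times, a fixed inspection time and a few values of x, all arbitrary choices) estimates the survival function of the renewal interval containing t and compares it with $1 - F_S(x) = 1 - x/2$.

```python
# Sketch: empirical check of the inspection paradox with Uniform(0, 2)
# holding times; P(S_{X_t + 1} > x) should dominate P(S_1 > x) = 1 - x/2.
import numpy as np

rng = np.random.default_rng(6)
t, runs = 10.0, 20_000

def interval_containing_t(t, rng):
    J = 0.0
    while True:
        S = rng.uniform(0.0, 2.0)
        if J + S > t:          # this holding time covers the inspection time t
            return S
        J += S

lengths = np.array([interval_containing_t(t, rng) for _ in range(runs)])
for x in (0.5, 1.0, 1.5):
    print(x, np.mean(lengths > x), ">=", 1.0 - x / 2)
```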

Superposition

The superposition of independent renewal processes, or superimposed renewal process, is not generally a renewal process, but it can be described within a larger class of processes called the Markov-renewal processes.[2] However, the cumulative distribution function of the first inter-event time in the superposition process is given by[3]

$$R(t) = 1 - \sum_{k=1}^{K} \frac{\alpha_k}{\sum_{l=1}^{K} \alpha_l} \bigl(1 - R_k(t)\bigr) \prod_{j=1,\, j \neq k}^{K} \alpha_j \int_t^{\infty} \bigl(1 - R_j(u)\bigr)\, du,$$

where $R_k(t)$ and $\alpha_k > 0$ are the CDF of the inter-event times and the arrival rate of process $k$.[4]
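
For concreteness, the sketch below (my own illustration, unrelated to the cited formula; the two holding-time distributions and the horizon are arbitrary) superposes two independent renewal processes by merging their jump times and prints the sample serial correlation of consecutive inter-event times, which tends to zero for a genuine renewal process but need not for the superposition.

```python
# Sketch: superposition of two independent renewal processes by merging
# their jump times; consecutive inter-event times of the merged stream
# are generally dependent, unlike those of a renewal process.
import numpy as np

rng = np.random.default_rng(7)
T = 5000.0

def jump_times(holding_sampler, T, rng):
    times, J = [], 0.0
    while True:
        J += holding_sampler(rng)
        if J > T:
            return np.array(times)
        times.append(J)

merged = np.sort(np.concatenate([
    jump_times(lambda r: r.uniform(0.0, 2.0), T, rng),   # process 1
    jump_times(lambda r: r.uniform(0.5, 1.5), T, rng),   # process 2
]))
gaps = np.diff(merged)              # inter-event times of the superposition
print(np.corrcoef(gaps[:-1], gaps[1:])[0, 1])
```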

Example applications

Example 1: use of the strong law of large numbers

Eric the entrepreneur has n machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails with replacement cost €2600; alternatively he may replace a machine at any time while it is still functional at a cost of €200.

What is his optimal replacement policy?

Solution

The lifetime of the n machines can be modeled as n independent concurrent renewal-reward processes, so it is sufficient to consider the case n = 1. Denote this process by $(Y_t)_{t \geq 0}$. The successive lifetimes S of the replacement machines are independent and identically distributed, so the optimal policy is the same for all replacement machines in the process.

If Eric decides at the start of a machine's life to replace it at time 0 < t < 2 but the machine happens to fail before that time, then (because the lifetime is uniform on [0, 2]) the machine fails before time t with probability t/2, and in that case its lifetime S is uniformly distributed on [0, t] and thus has expectation 0.5t. So the overall expected lifetime of the machine is:

$$\mathbb{E}[S] = \frac{t}{2} \cdot \frac{t}{2} + \left(1 - \frac{t}{2}\right) t = t - \frac{t^2}{4},$$

and the expected cost W per machine is:

$$\mathbb{E}[W] = \frac{t}{2}\, 2600 + \left(1 - \frac{t}{2}\right) 200 = 200 + 1200\, t.$$

So by the strong law of large numbers, his long-term average cost per unit time is:

$$\frac{\mathbb{E}[W]}{\mathbb{E}[S]} = \frac{200 + 1200\, t}{t - t^2/4};$$

then differentiating with respect to t:

$$\frac{d}{dt} \left( \frac{200 + 1200\, t}{t - t^2/4} \right) = \frac{1200 \left( t - \frac{t^2}{4} \right) - (200 + 1200\, t) \left( 1 - \frac{t}{2} \right)}{\left( t - \frac{t^2}{4} \right)^2};$$

this implies that the turning points satisfy:

$$0 = 1200 \left( t - \frac{t^2}{4} \right) - (200 + 1200\, t) \left( 1 - \frac{t}{2} \right) = 300\, t^2 + 100\, t - 200,$$

and thus

$$3 t^2 + t - 2 = (3t - 2)(t + 1) = 0.$$

We take the only solution t in [0, 2]: t = 2/3. This is indeed a minimum (and not a maximum), since the cost per unit time tends to infinity as t tends to zero: the cost decreases as t increases up to 2/3, after which it increases.
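
A quick numerical double-check of this solution (mine, not part of the article) is to evaluate the long-run cost per unit time on a fine grid over (0, 2) and locate its minimum, which should sit at t ≈ 2/3 with a minimal cost of about €1800 per year.

```python
# Sketch: grid search for the minimiser of the long-run cost per unit time
# (200 + 1200 t) / (t - t^2 / 4) over 0 < t < 2.
import numpy as np

t = np.linspace(0.001, 1.999, 200_000)
cost = (200 + 1200 * t) / (t - t**2 / 4)
print("t* ≈", t[np.argmin(cost)], "(exact: 2/3)")
print("minimal cost per year ≈", cost.min())
```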


References

  1. ^ English grammar
  2. ^ Çinlar, Erhan (1969). "Markov Renewal Theory". Advances in Applied Probability. Applied Probability Trust. 1 (2): 123–187. JSTOR 1426216.
  3. ^ Lawrence, A. J. (1973). "Dependency of Intervals Between Events in Superposition Processes". Journal of the Royal Statistical Society. Series B (Methodological). 35 (2): 306–315. JSTOR 2984914. formula 4.1
  4. ^ Choungmo Fofack, Nicaise; Nain, Philippe; Neglia, Giovanni; Towsley, Don. "Analysis of TTL-based Cache Networks". Proceedings of 6th International Conference on Performance Evaluation Methodologies and Tools. Retrieved Nov 15, 2012.