
Stochastic scheduling concerns scheduling problems involving random attributes, such as random processing times, random due dates, random weights, and stochastic machine breakdowns. Major applications arise in manufacturing systems, computer systems, communication systems, logistics and transportation, machine learning, etc.

The objectives of stochastic scheduling problems can be regular, such as minimizing the total flowtime, the makespan, or the total tardiness cost of missing the due dates; or irregular, such as minimizing both earliness and tardiness costs of completing the jobs, or the total cost of scheduling tasks under the likely arrival of a disastrous event such as a severe typhoon.[1]

The performance of such systems, as evaluated by a regular performance measure or an irregular performance measure, can be significantly affected by the scheduling policy adopted to prioritize over time the access of jobs to resources. The goal of stochastic scheduling is to identify scheduling policies that can optimize the objective.

Stochastic scheduling problems can be classified into three broad types: problems concerning the scheduling of a batch of stochastic jobs, multi-armed bandit problems, and problems concerning the scheduling of queueing systems.[2] Models of these three types are usually studied under the assumption of complete information, in the sense that the probability distributions of the random variables involved are known in advance. When such distributions are not fully specified and there are multiple competing distributions to model the random variables of interest, the problem is said to involve incomplete information. The Bayesian method has been applied to treat stochastic scheduling problems with incomplete information.

Scheduling of a batch of stochastic jobs

In this class of models, a fixed batch of n jobs with random processing times, whose distributions are known, have to be completed by a set of m machines to optimize a given performance objective.

The simplest model in this class is the problem of sequencing a set of n jobs on a single machine to minimize the expected weighted flowtime. Job processing times are independent random variables with general distributions, with mean E[p_i] for job i. Admissible policies must be nonanticipative (scheduling decisions are based on the system's history up to and including the present time) and nonpreemptive (processing of a job must proceed uninterruptedly to completion once started).

Let w_i denote the cost rate incurred per unit time in the system for job i, and let C_i denote its random completion time. Let Π denote the class of all admissible policies, and let E_π denote expectation under policy π. The problem can be stated as

    min_{π ∈ Π} E_π [ w_1 C_1 + w_2 C_2 + ... + w_n C_n ].
The optimal solution in the special deterministic case is given by the Shortest Weighted Processing Time rule of Smith:[3] sequence jobs in nonincreasing order of the priority index w_i/p_i. The natural extension of Smith's rule, which sequences jobs in nonincreasing order of the index w_i/E[p_i], is also optimal for the above stochastic model.[4]
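The optimality of this index rule can be checked by brute force on a small instance. A minimal sketch (with hypothetical data) exploits the fact that, for any fixed nonpreemptive sequence, the expected weighted flowtime depends on the processing times only through their means:

```python
from itertools import permutations

def expected_weighted_flowtime(order, mean_proc, weights):
    """E[sum_i w_i C_i] for a fixed nonpreemptive sequence on one machine.
    By linearity of expectation, only the mean processing times matter."""
    total, elapsed = 0.0, 0.0
    for i in order:
        elapsed += mean_proc[i]          # E[C_i] = sum of means of jobs up to i
        total += weights[i] * elapsed
    return total

# Hypothetical data: mean processing times E[p_i] and cost rates w_i for 4 jobs.
mean_proc = [3.0, 1.0, 4.0, 2.0]
weights   = [1.0, 2.0, 1.0, 3.0]

# Smith's rule, stochastic version: nonincreasing w_i / E[p_i].
wspt = sorted(range(4), key=lambda i: -weights[i] / mean_proc[i])

# Brute-force check against all 24 sequences.
best = min(expected_weighted_flowtime(o, mean_proc, weights)
           for o in permutations(range(4)))
```

On this instance the index order achieves the minimum over all permutations, in agreement with the rule.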

In general, the rule that assigns higher priority to jobs with shorter expected processing time is optimal for the flowtime objective under any of the following conditions: when all job processing time distributions are exponential;[5] when all jobs share a common general processing time distribution with a nondecreasing hazard rate function;[6] or when job processing time distributions are stochastically ordered.[7]

Multi-armed bandit problems

Multi-armed bandit models form a particular type of optimal resource allocation (usually concerning the assignment of processing time), in which a number of machines or processors are to be allocated to serve a set of competing projects (termed arms). In the typical framework, the system consists of a single machine and a set of stochastically independent projects, which contribute random rewards, continuously or at certain discrete time points, when they are served. The objective is to maximize the expected total discounted reward over all dynamically revisable policies.[1]

The first version of the multi-armed bandit problem was formulated in the area of sequential design by Robbins (1952).[8] Little essential progress followed for two decades, until Gittins and his collaborators made celebrated advances in Gittins (1979),[9] Gittins and Jones (1974),[10] Gittins and Glazebrook (1977),[11] and Whittle (1980)[12] under Markov and semi-Markov settings. In this early model, each arm is modeled by a Markov or semi-Markov process in which the time points of state transitions are decision epochs. At each epoch the machine picks an arm to serve and receives a reward given as a function of the current state of the arm being processed. The solution is characterized by allocation indices, one per state, that depend only on the states of the individual arms. These indices are known as Gittins indices, and the optimal policies are usually called Gittins index policies.
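For a Bernoulli arm whose unknown success probability carries a Beta posterior, the Gittins index can be approximated numerically via the retirement formulation: the index is the constant per-step retirement rate at which retiring and continuing are equally attractive. The sketch below is illustrative, not from the cited literature; the function name, truncation horizon, and tolerances are arbitrary choices, and the finite horizon makes the result an approximation:

```python
def gittins_index_bernoulli(a, b, gamma=0.9, horizon=60, tol=1e-4):
    """Approximate Gittins index (per-step reward rate) of a Bernoulli arm
    whose unknown success probability has a Beta(a, b) posterior, with
    discount factor gamma.  Uses the retirement formulation: the index is
    the retirement rate lam at which retiring and continuing are equally
    attractive.  The finite horizon truncates the dynamic program."""
    def value(a, b, lam, depth, memo):
        retire = lam / (1.0 - gamma)     # value of retiring forever at rate lam
        if depth == 0:
            return retire
        key = (a, b, depth)
        if key not in memo:
            p = a / (a + b)              # posterior mean success probability
            cont = p * (1.0 + gamma * value(a + 1, b, lam, depth - 1, memo)) \
                 + (1.0 - p) * gamma * value(a, b + 1, lam, depth - 1, memo)
            memo[key] = max(retire, cont)
        return memo[key]

    lo, hi = a / (a + b), 1.0            # index lies between posterior mean and 1
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if value(a, b, lam, horizon, {}) > lam / (1.0 - gamma) + 1e-12:
            lo = lam                     # continuing still beats retiring
        else:
            hi = lam
    return 0.5 * (lo + hi)
```

The index of a fresh arm (Beta(1, 1)) exceeds its posterior mean of 0.5, reflecting the exploration value of sampling, and it drops after an observed failure.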

Soon after the seminal paper of Gittins, the extension to the branching bandit problem, which models stochastic arrivals (also known as the open bandit or arm-acquiring bandit problem), was investigated by Whittle (1981).[13] Other extensions include the restless bandit models formulated by Whittle (1988),[14] in which each arm evolves restlessly according to two different mechanisms (an idle fashion and a busy fashion), and the models with switching costs or delays of Van Oyen et al. (1992),[15] who showed that no index policy is optimal when switching between arms incurs costs or delays.

Scheduling of queueing systems

Models in this class are concerned with the problems of designing optimal service disciplines in queueing systems, where the jobs to be completed arrive at random epochs over time, instead of being available at the start. The main class of models in this setting is that of multiclass queueing networks (MQNs), widely applied as versatile models of computer communications and manufacturing systems.

The simplest types of MQNs involve scheduling a number of job classes at a single server. As in the two model categories discussed previously, simple priority-index rules have been shown to be optimal for a variety of such models.

More general MQN models involve features such as changeover times for changing service from one job class to another (Levy and Sidi, 1990),[16] or multiple processing stations, which provide service to corresponding nonoverlapping subsets of job classes. Due to the intractability of such models, researchers have aimed to design relatively simple heuristic policies which achieve a performance close to optimal.

Stochastic scheduling with incomplete information

Most studies of stochastic scheduling models have been established under the assumption of complete information, in the sense that the probability distributions of the random variables involved, such as the processing times and the machine up/downtimes, are completely specified a priori.

However, there are circumstances where the information is only partially available. Examples of scheduling with incomplete information can be found in environmental clean-up,[17] project management,[18] petroleum exploration,[19] sensor scheduling in mobile robots,[20] and cycle time modeling,[21] among many others.

As a result of incomplete information, there may be multiple competing distributions to model the random variables of interest. An effective approach, developed by Cai et al. (2009)[22] to tackle this problem, is based on Bayesian information updating. It identifies each competing distribution with a realization of a random variable, say Θ. Initially, Θ has a prior distribution based on historical information or assumptions (which may be non-informative if no historical information is available). Information on Θ is then updated as realizations of the random variables are observed. A key concern in decision making is how to utilize the updated information to refine and enhance the decisions. When the scheduling policy is static, in the sense that it does not change over time, optimal sequences are identified to minimize the expected discounted reward and stochastically minimize the number of tardy jobs under a common exponential due date.[22] When the scheduling policy is dynamic, in the sense that it can be adjusted during the process based on up-to-date information, a posterior Gittins index is developed to find the optimal policy that minimizes the expected discounted reward within the class of dynamic policies.[22]
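The Bayesian update step itself is straightforward. A minimal sketch (hypothetical data; Θ ranges over two candidate exponential models of a processing time) computes the posterior over competing distributions from observed realizations:

```python
import math

def posterior_over_models(prior, rates, observations):
    """Bayesian update of the belief over competing exponential models of a
    processing time.  prior[k] = P(Theta = k); rates[k] is the rate of the
    exponential distribution under model k.  Returns the posterior
    P(Theta = k | observations), computed in log space for numerical stability."""
    log_post = [math.log(p) for p in prior]
    for x in observations:
        for k, lam in enumerate(rates):
            log_post[k] += math.log(lam) - lam * x   # exponential log-density
    m = max(log_post)
    w = [math.exp(v - m) for v in log_post]
    s = sum(w)
    return [v / s for v in w]

# Hypothetical: two competing models of a job's processing time,
# exponential with mean 1.0 (rate 1.0) versus mean 5.0 (rate 0.2).
prior = [0.5, 0.5]
rates = [1.0, 0.2]
post = posterior_over_models(prior, rates, [4.2, 5.7, 3.9])
```

After three long observed processing times, nearly all posterior mass shifts to the long-mean model; a dynamic policy would feed such updated beliefs back into its scheduling decisions.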


  1. ^ a b Cai, X.Q.; Wu, X.Y.; Zhou, X. (2014). Optimal Stochastic Scheduling. Springer US. pp. 49, 95. ISBN 978-1-4899-7405-1.
  2. ^ Nino-Mora, J. (2009). "Stochastic Scheduling". In Floudas, C.; Pardalos, P. (eds.). Encyclopedia of Optimization. US: Springer. pp. 3818–3824. ISBN 978-0-387-74758-3.
  3. ^ Smith, Wayne E. (1956). "Various optimizers for single-stage production". Naval Research Logistics Quarterly. 3: 59–66.
  4. ^ Rothkopf, Michael (1966). "Scheduling with random service times". Management Science. 12 (9): 707–713.
  5. ^ Weiss, Gideon; Pinedo, Michael (1980). "Scheduling tasks with exponential service times on non-identical processors to minimize various cost functions". Journal of Applied Probability. 17 (1): 187–202.
  6. ^ Weber, Richard R. (1982). "Scheduling jobs with stochastic processing requirements on parallel machines to minimize makespan or flowtime". Journal of Applied Probability. 19 (1): 167–182.
  7. ^ Weber, Richard; Varaiya, P.; Walrand, J. (1986). "Scheduling jobs with stochastically ordered processing times on parallel machines to minimize expected flowtime". Journal of Applied Probability. 23 (3): 841–847.
  8. ^ Robbins, H. (1952). "Some aspects of the sequential design of experiments". Bulletin of the American Mathematical Society. 58 (5): 527–535.
  9. ^ Gittins, J.C. (1979). "Bandit processes and dynamic allocation indices (with discussion)". Journal of the Royal Statistical Society, Series B. 41: 148–164.
  10. ^ Gittins, J.C.; Jones, D. (1974). "A dynamic allocation index for the sequential design of experiments". In Gani, J.; et al. (eds.). Progress in Statistics. Amsterdam: North Holland.
  11. ^ Gittins, J.C.; Glazebrook, K.D. (1977). "On Bayesian models in stochastic scheduling". Journal of Applied Probability. 14: 556–565.
  12. ^ Whittle, P. (1980). "Multi-armed bandits and the Gittins index". Journal of the Royal Statistical Society, Series B. 42 (2): 143–149.
  13. ^ Whittle, P. (1981). "Arm-acquiring bandits". The Annals of Probability. 9 (2): 284–292.
  14. ^ Whittle, P. (1988). "Restless bandits: Activity allocation in a changing world". Journal of Applied Probability. 25: 287–298.
  15. ^ van Oyen, M.P.; Pandelis, D.G.; Teneketzis, D. (1992). "Optimality of index policies for stochastic scheduling with switching penalties". Journal of Applied Probability. 29 (4): 957–966.
  16. ^ Levy, H.; Sidi, M. (1990). "Polling systems: applications, modeling, and optimization". IEEE Transactions on Communications. 38 (10): 1750–1760.
  17. ^ Lee, S.I.; Kitanidis, P.K. (1991). "Optimal estimation and scheduling in aquifer remediation with incomplete information". Water Resources Research. 27: 2203–2217.
  18. ^ Gardoni, P.; Reinschmidt, K. F.; Kumar, R. (2007). "A probabilistic framework for Bayesian adaptive forecasting of project progress". Computer-Aided Civil and Infrastructure Engineering. 22: 182–196.
  19. ^ Glazebrook, K.D.; Boys, R.J. (1995). "A class of Bayesian models for optimal exploration". Journal of the Royal Statistical Society, Series B. 57: 705–720.
  20. ^ Gage, A.; Murphy, R.R. (2004). "Sensor scheduling in mobile robots using incomplete information via Min-Conflict with Happiness". IEEE Transactions on Systems, Man, and Cybernetics, Part B. 34: 454–467.
  21. ^ Chen, C.Y.I.; Ding, Q.; Lin, B.M.T. (2004). "A concise survey of scheduling with time dependent processing times". European Journal of Operational Research. 152: 1–13.
  22. ^ a b c Cai, X.Q.; Wu, X.Y.; Zhou, X. (2009). "Stochastic scheduling subject to breakdown-repeat breakdowns with incomplete information". Operations Research. 57 (5): 1236–1249.