In operations research, the Big M method is a method of solving linear programming problems using the simplex algorithm. The Big M method extends the simplex algorithm to problems that contain "greater-than" constraints. It does so by introducing artificial variables for those constraints and penalizing them in the objective with a large constant, M, so that they cannot appear in any optimal solution if one exists.

Algorithm

The simplex algorithm is the original and still one of the most widely used methods for solving linear maximization problems. However, to apply it, the origin (all variables equal to 0) must be a feasible point. This condition is satisfied only when all the constraints (except non-negativity) are less-than constraints with a nonnegative constant on the right-hand side. The Big M method introduces surplus and artificial variables to convert all inequalities into that form. The "Big M" refers to a large number associated with the artificial variables, represented by the letter M.

The steps in the algorithm are as follows:

  1. Multiply each inequality constraint by −1 where necessary to ensure that the right-hand side is positive.
  2. If the problem is of minimization, transform to maximization by multiplying the objective by −1.
  3. For any greater-than constraints, introduce surplus si and artificial variables ai (as shown below).
  4. Choose a large positive value M and introduce a term in the objective of the form −M multiplying the artificial variables.
  5. For less-than or equal constraints, introduce slack variables si so that all constraints are equalities.
  6. Solve the problem using the usual simplex method.

For example, x + y ≤ 100 becomes x + y + s1 = 100, whilst x + y ≥ 100 becomes x + y − s1 + a1 = 100. The artificial variables must be driven to 0 in the final solution; to achieve this, the objective to be maximised is rewritten to include a penalty of −M times the sum of all the artificial variables. The usual simplex row operations are then applied to obtain the final solution, as in the sketch below.
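As a rough illustration of the transformation, the following Python sketch builds the standard-form system for a small sample problem and hands it to a generic LP solver rather than a hand-coded simplex. The objective 2x + 3y, the constraints x + y ≤ 100 and x ≥ 10, the variable ordering and the choice M = 10^6 are illustrative assumptions, not part of the article's example.

import numpy as np
from scipy.optimize import linprog

# Illustrative problem (assumed data):
#   maximize 2x + 3y   subject to   x + y <= 100,   x >= 10,   x, y >= 0
# Big M standard form, variable order [x, y, s1, s2, a1]:
#   x + y + s1       = 100   (slack s1 for the <= constraint)
#   x      - s2 + a1 = 10    (surplus s2 and artificial a1 for the >= constraint)
#   maximize 2x + 3y - M*a1, i.e. minimize -2x - 3y + M*a1
M = 1e6  # the "big" penalty; it must dominate the original objective coefficients

c = np.array([-2.0, -3.0, 0.0, 0.0, M])            # minimization form
A_eq = np.array([[1.0, 1.0, 1.0,  0.0, 0.0],
                 [1.0, 0.0, 0.0, -1.0, 1.0]])
b_eq = np.array([100.0, 10.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5, method="highs")
x, y, s1, s2, a1 = res.x
print(f"x = {x:.1f}, y = {y:.1f}, artificial a1 = {a1:.1e}")   # a1 is (numerically) 0
print(f"maximum of 2x + 3y = {2*x + 3*y:.1f}")                 # 290.0

Here the artificial variable ends at zero, as required; if it stayed positive for such a large M, that would signal that the original constraints cannot be satisfied (see below).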

The value of M must be chosen sufficiently large so that no artificial variable remains positive in an optimal solution of the augmented problem whenever the original problem is feasible.

For a sufficiently large M, the optimal solution contains an artificial variable at a positive value (i.e. in the basis) if and only if the original problem is infeasible.
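As a minimal sketch of that diagnostic, the same kind of transformation is applied below to a deliberately infeasible sample problem (x ≤ 2 together with x ≥ 5); the data, variable names and M = 10^6 are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

# Illustrative infeasible problem (assumed data):
#   maximize x   subject to   x <= 2   and   x >= 5
# Big M standard form, variable order [x, s1, s2, a1]:
#   x + s1           = 2
#   x      - s2 + a1 = 5
#   minimize -x + M*a1
M = 1e6
c = np.array([-1.0, 0.0, 0.0, M])
A_eq = np.array([[1.0, 1.0,  0.0, 0.0],
                 [1.0, 0.0, -1.0, 1.0]])
b_eq = np.array([2.0, 5.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4, method="highs")
a1 = res.x[3]
print(f"artificial variable a1 = {a1:.1f}")   # stays at 3.0, the smallest possible violation
print("original problem infeasible" if a1 > 1e-7 else "original problem feasible")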

However, the a priori selection of an appropriate value for M is not trivial. A way to overcome the need to specify the value of M is described in [1].

Other usage

When used in the objective function, the Big M method sometimes refers to formulations of linear optimization problems in which violations of a constraint or set of constraints are associated with a large positive penalty constant, M.
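For instance, a soft constraint can be modelled with a nonnegative violation variable that is penalized by M in the objective. The sketch below (all data and names are illustrative assumptions) keeps x ≤ 8 as a hard constraint and treats x ≥ 10 as soft, so the model stays feasible and simply reports the smallest possible violation.

from scipy.optimize import linprog

# Illustrative soft-constraint formulation (assumed data):
#   minimize x + M*v   subject to   x <= 8,   x + v >= 10,   x >= 0,   v >= 0
# where v measures by how much the soft constraint x >= 10 is violated.
M = 1e6
c = [1.0, M]                      # objective coefficients for [x, v]
A_ub = [[1.0, 0.0],               # x        <= 8
        [-1.0, -1.0]]             # -(x + v) <= -10, i.e. x + v >= 10
b_ub = [8.0, -10.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
x, v = res.x
print(f"x = {x:.1f}, violation v = {v:.1f}")   # x = 8.0, v = 2.0: the soft constraint
                                               # is violated by the minimum amount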

When used in the constraints themselves, one common use of Big M is to enforce equality of two variables only when a certain binary variable takes on one value, while leaving the variables "open" (unrelated to each other) when the binary variable takes on its opposite value. One instance of this is as follows: for a sufficiently large M and a binary variable z (0 or 1), the constraints

x − y ≤ M(1 − z)
y − x ≤ M(1 − z)

ensure that when z = 1, then x = y. Otherwise, when z = 0, then |x − y| ≤ M, indicating that the variables x and y can have any values so long as the absolute value of their difference is bounded by M (hence the need for M to be "large enough").
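A minimal check of this behaviour, in pure Python with illustrative values of M, x, y and z (this is only a feasibility test of the two constraints, not a full mixed-integer model):

# Big M linking constraints:  x - y <= M*(1 - z)  and  y - x <= M*(1 - z)
M = 1000.0  # assumed bound; must exceed any difference |x - y| the model allows

def satisfies_big_m(x, y, z):
    """True if (x, y, z) satisfies both linking constraints."""
    return x - y <= M * (1 - z) and y - x <= M * (1 - z)

print(satisfies_big_m(7.0, 7.0, 1))     # True:  z = 1 forces x == y
print(satisfies_big_m(7.0, 9.0, 1))     # False: any difference is ruled out when z = 1
print(satisfies_big_m(7.0, 900.0, 0))   # True:  z = 0 allows |x - y| up to M
print(satisfies_big_m(7.0, 2000.0, 0))  # False: |x - y| exceeds M, so this M
                                        #        was not chosen "large enough"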

References

  1. Cococcioni, Marco; Fiaschi, Lorenzo (2021). "The Big-M method with the numerical infinite M". Optimization Letters. 15 (1): 2455–2468. doi:10.1007/s11590-020-01644-6. hdl:11568/1061259.