Sequential minimal optimization

Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support-vector machines (SVM). It was invented by John Platt in 1998 at Microsoft Research.[1] SMO is widely used for training support vector machines and is implemented by the popular LIBSVM tool.[2][3] The publication of the SMO algorithm in 1998 generated considerable excitement in the SVM community, as previously available methods for SVM training were much more complex and required expensive third-party QP solvers.[4]

Sequential minimal optimization
Class: Optimization algorithm for training support vector machines
Worst-case performance: O(n³)

Optimization problem

Consider a binary classification problem with a dataset \((x_1, y_1), \ldots, (x_n, y_n)\), where \(x_i\) is an input vector and \(y_i \in \{-1, +1\}\) is a binary label corresponding to it. A soft-margin support vector machine is trained by solving a quadratic programming problem, which is expressed in the dual form as follows:

\[
\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j K(x_i, x_j)\, \alpha_i \alpha_j,
\]

subject to:

\[
0 \le \alpha_i \le C, \quad \text{for } i = 1, 2, \ldots, n,
\]
\[
\sum_{i=1}^{n} y_i \alpha_i = 0,
\]

where C is an SVM hyperparameter and \(K(x_i, x_j)\) is the kernel function, both supplied by the user; and the variables \(\alpha_i\) are Lagrange multipliers.
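For illustration, the sketch below evaluates this dual objective and checks its constraints for a given vector of multipliers, assuming a precomputed kernel matrix; the function names and the NumPy-array interface are illustrative conveniences, not part of Platt's paper.

```python
import numpy as np

def dual_objective(alpha, y, K):
    """Value of the SVM dual objective for multipliers alpha.

    alpha : (n,) array of Lagrange multipliers
    y     : (n,) array of labels in {-1, +1}
    K     : (n, n) precomputed kernel matrix with K[i, j] = K(x_i, x_j)
    """
    # sum_i alpha_i  -  1/2 * sum_{i,j} y_i y_j K(x_i, x_j) alpha_i alpha_j
    v = y * alpha
    return alpha.sum() - 0.5 * v @ K @ v

def is_feasible(alpha, y, C, tol=1e-8):
    """Check the box constraint 0 <= alpha_i <= C and the equality constraint sum_i y_i alpha_i = 0."""
    return bool(np.all(alpha >= -tol) and np.all(alpha <= C + tol)
                and abs(y @ alpha) <= tol)
```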

Algorithm

SMO is an iterative algorithm for solving the optimization problem described above. SMO breaks this problem into a series of smallest possible sub-problems, which are then solved analytically. Because of the linear equality constraint involving the Lagrange multipliers \(\alpha_i\), the smallest possible problem involves two such multipliers. Then, for any two multipliers \(\alpha_1\) and \(\alpha_2\), the constraints are reduced to:

\[
0 \le \alpha_1, \alpha_2 \le C,
\]
\[
y_1 \alpha_1 + y_2 \alpha_2 = k,
\]

and this reduced problem can be solved analytically: one needs to find the minimum of a one-dimensional quadratic function. Here \(k\) is the negative of the sum over the remaining terms of the equality constraint, which is fixed in each iteration.
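A minimal sketch of that analytic two-variable step is shown below, in the clipped-update form used in Platt's paper; the prediction errors E1 and E2, the kernel values, and the function name smo_pair_update are illustrative and assumed to be supplied or defined by the caller.

```python
def smo_pair_update(a1, a2, y1, y2, E1, E2, K11, K12, K22, C):
    """Analytically optimize a pair of multipliers (a1, a2).

    E1, E2 are the current prediction errors f(x_i) - y_i; K11, K12, K22
    are the kernel values K(x1, x1), K(x1, x2), K(x2, x2).
    Returns the new (clipped) pair.
    """
    # Bounds L, H keep the pair on the segment defined by
    # y1*a1 + y2*a2 = const and 0 <= a1, a2 <= C.
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    if L == H:
        return a1, a2  # no room to move along the constraint line

    eta = K11 + K22 - 2.0 * K12  # curvature of the 1-D quadratic objective
    if eta <= 0:
        return a1, a2  # degenerate case; Platt's paper handles it separately

    # Unconstrained minimum along the constraint line, then clip to [L, H].
    a2_new = a2 + y2 * (E1 - E2) / eta
    a2_new = min(H, max(L, a2_new))

    # Adjust a1 so the equality constraint stays satisfied.
    a1_new = a1 + y1 * y2 * (a2 - a2_new)
    return a1_new, a2_new
```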

The algorithm proceeds as follows:

  1. Find a Lagrange multiplier \(\alpha_1\) that violates the Karush–Kuhn–Tucker (KKT) conditions for the optimization problem.
  2. Pick a second multiplier \(\alpha_2\) and optimize the pair \((\alpha_1, \alpha_2)\).
  3. Repeat steps 1 and 2 until convergence.

When all the Lagrange multipliers satisfy the KKT conditions (within a user-defined tolerance), the problem has been solved. Although this algorithm is guaranteed to converge, heuristics are used to choose the pair of multipliers so as to accelerate the rate of convergence. This is critical for large data sets, since there are \(n(n-1)\) possible choices for \(\alpha_i\) and \(\alpha_j\).
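The sketch below shows the overall structure of steps 1–3, reusing the smo_pair_update sketch above. The KKT check uses the usual tolerance test, but the second multiplier is chosen at random rather than by Platt's error-cache heuristic and the bias update is simplified, so this is an illustration of the loop structure rather than a faithful implementation.

```python
import numpy as np

def smo_train(K, y, C, tol=1e-3, max_passes=10):
    """Simplified SMO outer loop on a precomputed kernel matrix K.

    Picks the second multiplier at random instead of using Platt's
    second-choice heuristic; intended only to illustrate steps 1-3.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    alpha = np.zeros(n)
    b = 0.0
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            # Prediction error for example i under the current model.
            E_i = (alpha * y) @ K[:, i] + b - y[i]
            # Step 1: does alpha_i violate the KKT conditions (within tol)?
            if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                # Step 2: pick a second multiplier j != i (here: at random).
                j = np.random.choice([k for k in range(n) if k != i])
                E_j = (alpha * y) @ K[:, j] + b - y[j]
                a_i, a_j = smo_pair_update(alpha[i], alpha[j], y[i], y[j],
                                           E_i, E_j, K[i, i], K[i, j], K[j, j], C)
                if a_i != alpha[i]:
                    # Keep the bias roughly consistent with the updated pair.
                    b -= (E_i
                          + y[i] * (a_i - alpha[i]) * K[i, i]
                          + y[j] * (a_j - alpha[j]) * K[i, j])
                    alpha[i], alpha[j] = a_i, a_j
                    changed += 1
        # Step 3: repeat until no multiplier changes for several consecutive passes.
        passes = passes + 1 if changed == 0 else 0
    return alpha, b
```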

The first approach to splitting large SVM learning problems into a series of smaller optimization tasks was proposed by Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik.[5] It is known as the "chunking algorithm". The algorithm starts with a random subset of the data, solves this sub-problem, and iteratively adds examples which violate the optimality conditions. One disadvantage of this algorithm is that it must solve QP problems that scale with the number of support vectors. On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.[1]

In 1997, E. Osuna, R. Freund, and F. Girosi proved a theorem which suggests a whole new set of QP algorithms for SVMs.[6] By virtue of this theorem, a large QP problem can be broken down into a series of smaller QP sub-problems. A sequence of QP sub-problems that always add at least one violator of the Karush–Kuhn–Tucker (KKT) conditions is guaranteed to converge. The chunking algorithm obeys the conditions of the theorem, and hence will converge.[1] The SMO algorithm can be considered a special case of the Osuna algorithm, where the size of the optimization is two and both Lagrange multipliers are replaced at every step with new multipliers that are chosen via good heuristics.[1]

The SMO algorithm is closely related to a family of optimization algorithms called Bregman methods or row-action methods. These methods solve convex programming problems with linear constraints. They are iterative methods where each step projects the current primal point onto each constraint.[1]

References

  1. Platt, John (1998). "Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines" (PDF). CiteSeerX 10.1.1.43.4376.
  2. Chang, Chih-Chung; Lin, Chih-Jen (2011). "LIBSVM: A library for support vector machines". ACM Transactions on Intelligent Systems and Technology. 2 (3). doi:10.1145/1961189.1961199. S2CID 961425.
  3. Zanni, Luca (2006). "Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems" (PDF).
  4. Rifkin, Ryan (2002). Everything Old is New Again: a Fresh Look at Historical Approaches in Machine Learning (Ph.D. Thesis). Massachusetts Institute of Technology. p. 18. hdl:1721.1/17549.
  5. Boser, B. E.; Guyon, I. M.; Vapnik, V. N. (1992). "A training algorithm for optimal margin classifiers". Proceedings of the fifth annual workshop on Computational learning theory - COLT '92. p. 144. CiteSeerX 10.1.1.21.3818. doi:10.1145/130385.130401. ISBN 978-0897914970. S2CID 207165665.
  6. Osuna, E.; Freund, R.; Girosi, F. (1997). "An improved training algorithm for support vector machines". Neural Networks for Signal Processing [1997] VII. Proceedings of the 1997 IEEE Workshop. pp. 276–285. CiteSeerX 10.1.1.392.7405. doi:10.1109/NNSP.1997.622408. ISBN 978-0-7803-4256-9. S2CID 5667586.