Convex optimization

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms,[1] whereas mathematical optimization is in general NP-hard.[2][3][4]

Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design,[5] data analysis and modeling, finance, statistics (optimal experimental design),[6] and structural optimization, where the approximation concept has proven to be efficient.[7][8] With recent advancements in computing and optimization algorithms, convex programming is nearly as straightforward as linear programming.[9]

Definition

A convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set. A function $f$ mapping some subset of $\mathbb{R}^n$ into $\mathbb{R} \cup \{\pm\infty\}$ is convex if its domain is convex and for all $\theta \in [0, 1]$ and all $x, y$ in its domain, the following condition holds: $f(\theta x + (1 - \theta) y) \leq \theta f(x) + (1 - \theta) f(y)$. A set $S$ is convex if for all members $x, y \in S$ and all $\theta \in [0, 1]$, we have that $\theta x + (1 - \theta) y \in S$.
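These definitions can be spot-checked numerically. A minimal sketch in Python (the function and sample points are chosen arbitrarily for illustration), testing the defining inequality for the convex function $f(x) = x^2$:

    import numpy as np

    # Spot-check the convexity inequality f(tx + (1-t)y) <= t f(x) + (1-t) f(y)
    # for f(x) = x^2 at randomly sampled points and mixing weights.
    f = lambda x: x ** 2
    rng = np.random.default_rng(0)
    for _ in range(1000):
        x, y = rng.uniform(-10, 10, size=2)
        t = rng.uniform(0, 1)
        assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-9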

Concretely, a convex optimization problem is the problem of finding some $x^\ast \in C$ attaining

$$\inf \{ f(x) : x \in C \},$$

where the objective function $f : \mathcal{D} \subseteq \mathbb{R}^n \to \mathbb{R}$ is convex, as is the feasible set $C$.[10][11] If such a point exists, it is referred to as an optimal point or solution; the set of all optimal points is called the optimal set. If $f$ is unbounded below over $C$ or the infimum is not attained, then the optimization problem is said to be unbounded. Otherwise, if $C$ is the empty set, then the problem is said to be infeasible.[12]
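For instance (illustrative examples, not from the cited sources): the one-dimensional problem of attaining $\inf \{ x : x \in \mathbb{R} \}$ is unbounded, since the objective decreases without bound, while a problem with feasible set $C = \{ x : x \leq -1 \text{ and } x \geq 1 \} = \emptyset$ is infeasible.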

Standard form

A convex optimization problem is in standard form if it is written as

$$\begin{aligned} &\underset{x}{\operatorname{minimize}} && f(x) \\ &\operatorname{subject\ to} && g_i(x) \leq 0, \quad i = 1, \dots, m \\ &&& h_i(x) = 0, \quad i = 1, \dots, p, \end{aligned}$$

where $x \in \mathbb{R}^n$ is the optimization variable, the function $f : \mathbb{R}^n \to \mathbb{R}$ is convex, $g_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, m$, are convex, and $h_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, p$, are affine.[12] This notation describes the problem of finding $x \in \mathbb{R}^n$ that minimizes $f(x)$ among all $x$ satisfying $g_i(x) \leq 0$, $i = 1, \dots, m$ and $h_i(x) = 0$, $i = 1, \dots, p$. The function $f$ is the objective function of the problem, and the functions $g_i$ and $h_i$ are the constraint functions.

The feasible set $C$ of the optimization problem consists of all points $x \in \mathcal{D}$ satisfying the constraints. This set is convex because $\mathcal{D}$ is convex, the sublevel sets of convex functions are convex, affine sets are convex, and the intersection of convex sets is convex.[13]

A solution to a convex optimization problem is any point $x \in C$ attaining $\inf \{ f(x) : x \in C \}$. In general, a convex optimization problem may have zero, one, or many solutions.[citation needed]

Many optimization problems can be equivalently formulated in this standard form. For example, the problem of maximizing a concave function $f$ can be re-formulated equivalently as the problem of minimizing the convex function $-f$. The problem of maximizing a concave function over a convex set is commonly called a convex optimization problem.[citation needed]
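As a concrete illustration, here is a minimal sketch in Python using the CVXPY package (listed under Implementations below); the data and constraints are chosen arbitrarily, and the second pair of problems demonstrates the concave-maximization reformulation:

    import cvxpy as cp
    import numpy as np

    x = cp.Variable(3)

    # Standard form: convex objective f, convex inequality g_1(x) <= 0,
    # affine equality h_1(x) = 0.
    problem = cp.Problem(
        cp.Minimize(cp.sum_squares(x)),       # f(x) = ||x||_2^2 (convex)
        [cp.norm(x, 1) - 2 <= 0,              # g_1(x) = ||x||_1 - 2 (convex)
         np.ones(3) @ x - 1 == 0],            # h_1(x) = 1^T x - 1 (affine)
    )
    problem.solve()

    # Maximizing a concave function equals minimizing its convex negation:
    p_max = cp.Problem(cp.Maximize(cp.min(x)), [cp.sum(x) == 1]).solve()
    p_min = cp.Problem(cp.Minimize(-cp.min(x)), [cp.sum(x) == 1]).solve()
    assert abs(p_max - (-p_min)) < 1e-6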

Properties

The following are useful properties of convex optimization problems:[14][12]

  • every local minimum is a global minimum;
  • the optimal set is convex;
  • if the objective function is strictly convex, then the problem has at most one optimal point.

These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.[citation needed]
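The first property follows directly from the definition of convexity (a standard argument, sketched here): if $x \in C$ were a local minimum but some $y \in C$ satisfied $f(y) < f(x)$, then for every $\theta \in (0, 1]$,

$$f(\theta y + (1 - \theta) x) \leq \theta f(y) + (1 - \theta) f(x) < f(x),$$

and since $\theta y + (1 - \theta) x \in C$ can be made arbitrarily close to $x$ by taking $\theta$ small, $x$ would not be a local minimum.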

Applications

Ben-Haim and Elishakoff[15] (1990) and Elishakoff et al.[16] (1994) applied convex analysis to model uncertainty.

Many problem classes, among them linear programs (LP), quadratic programs (QP), second-order cone programs (SOCP), semidefinite programs (SDP), and cone programs (CP), are convex optimization problems or can be reduced to convex optimization problems via simple transformations:[12][17]

[Figure: A hierarchy of convex optimization problems. (LP: linear program, QP: quadratic program, SOCP: second-order cone program, SDP: semidefinite program, CP: cone program.)]

Convex optimization has practical applications in many fields.[18][19]

Lagrange multipliers

Consider a convex minimization problem given in standard form by a cost function $f(x)$ and inequality constraints $g_i(x) \leq 0$ for $1 \leq i \leq m$. Then the domain $\mathcal{X}$ is:

$$\mathcal{X} = \left\{ x \in X : g_1(x), \ldots, g_m(x) \leq 0 \right\}.$$

The Lagrangian function for the problem is

$$L(x, \lambda_0, \lambda_1, \ldots, \lambda_m) = \lambda_0 f(x) + \lambda_1 g_1(x) + \cdots + \lambda_m g_m(x).$$

For each point $x$ in $X$ that minimizes $f$ over $X$, there exist real numbers $\lambda_0, \lambda_1, \ldots, \lambda_m$, called Lagrange multipliers, that satisfy these conditions simultaneously:

  1. $x$ minimizes $L(y, \lambda_0, \lambda_1, \ldots, \lambda_m)$ over all $y \in X$,
  2. $\lambda_0, \lambda_1, \ldots, \lambda_m \geq 0$, with at least one $\lambda_k > 0$,
  3. $\lambda_1 g_1(x) = \cdots = \lambda_m g_m(x) = 0$ (complementary slackness).

If there exists a "strictly feasible point", that is, a point $z$ satisfying

$$g_1(z), \ldots, g_m(z) < 0,$$

then the statement above can be strengthened to require that $\lambda_0 = 1$.

Conversely, if some $x$ in $X$ satisfies (1)–(3) for scalars $\lambda_0, \ldots, \lambda_m$ with $\lambda_0 = 1$, then $x$ is certain to minimize $f$ over $X$.
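These conditions can be verified by hand on a small instance. A minimal sketch in Python (the problem, the grid-based check, and the tolerance are chosen purely for illustration): minimize $f(x) = x^2$ subject to $g_1(x) = 1 - x \leq 0$, whose minimizer $x = 1$ admits the multipliers $\lambda_0 = 1$ and $\lambda_1 = 2$:

    import numpy as np

    # Problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
    # Claimed optimum: x = 1 with multipliers lam0 = 1, lam1 = 2.
    f, g = lambda x: x ** 2, lambda x: 1 - x
    L = lambda y, lam0, lam1: lam0 * f(y) + lam1 * g(y)

    x, lam0, lam1 = 1.0, 1.0, 2.0

    # (1) x minimizes L(., lam0, lam1): spot-check against a grid of y values.
    ys = np.linspace(-5, 5, 10001)
    assert all(L(x, lam0, lam1) <= L(y, lam0, lam1) + 1e-12 for y in ys)

    # (2) Nonnegativity, with at least one multiplier positive.
    assert lam0 >= 0 and lam1 >= 0 and max(lam0, lam1) > 0

    # (3) Complementary slackness: lam1 * g(x) = 0.
    assert lam1 * g(x) == 0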

Algorithms

Unconstrained convex optimization can be easily solved with gradient descent (a special case of steepest descent) or Newton's method, combined with line search for an appropriate step size; these can be mathematically proven to converge quickly, especially the latter method.[20] Convex optimization with linear equality constraints can also be solved using KKT matrix techniques if the objective function is a quadratic function (which generalizes to a variation of Newton's method, which works even if the point of initialization does not satisfy the constraints), but can also generally be solved by eliminating the equality constraints with linear algebra or by solving the dual problem.[20] Finally, convex optimization with both linear equality constraints and convex inequality constraints can be solved by applying an unconstrained convex optimization technique to the objective function plus logarithmic barrier terms.[20] (When the starting point is not feasible, that is, does not satisfy the constraints, this is preceded by so-called phase I methods, which either find a feasible point or show that none exists. Phase I methods generally consist of reducing the search in question to yet another convex optimization problem.[20])
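A minimal sketch of these ideas in Python (illustrative only, with arbitrary data; a practical implementation would use Newton steps and proper stopping criteria): gradient descent with backtracking line search on a smooth convex objective, followed by a logarithmic-barrier treatment of the inequality constraints $x_i \geq 0$:

    import numpy as np

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

    def gradient_descent(f, grad, x, alpha=0.25, beta=0.5, tol=1e-8, iters=10_000):
        """Gradient descent with backtracking line search (Armijo condition)."""
        for _ in range(iters):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            t = 1.0
            while f(x - t * g) > f(x) - alpha * t * (g @ g):
                t *= beta  # backtrack until sufficient decrease
            x = x - t * g
        return x

    # Unconstrained: minimize the smooth convex f(x) = ||Ax - b||^2.
    f = lambda x: np.sum((A @ x - b) ** 2)
    grad = lambda x: 2 * A.T @ (A @ x - b)
    x_unc = gradient_descent(f, grad, np.zeros(5))

    # Inequality constraints x_i >= 0 via a logarithmic barrier: minimize
    # t*f(x) - sum_i log(x_i) for increasing t, starting strictly feasible.
    x = np.ones(5)
    for t in [1.0, 10.0, 100.0, 1000.0]:
        fb = lambda x, t=t: t * f(x) - np.log(x).sum() if np.all(x > 0) else np.inf
        gb = lambda x, t=t: t * grad(x) - 1.0 / x
        x = gradient_descent(fb, gb, x)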

Convex optimization problems can also be solved by contemporary methods such as bundle methods, subgradient methods, and interior-point methods:[21][22][23]

Subgradient methods can be implemented simply and so are widely used.[24] Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but takes a time average of the primal variables.[citation needed]
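A minimal sketch of a basic subgradient method in Python (illustrative only, with arbitrary data and a diminishing step-size rule; the dual subgradient and drift-plus-penalty methods apply the same kind of iteration to a dual problem): minimize the nonsmooth convex function $f(x) = \lVert Ax - b \rVert_1$:

    import numpy as np

    # Minimize f(x) = ||Ax - b||_1 by the subgradient method with
    # diminishing step sizes alpha_k = 1 / (k + 1).
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((30, 5)), rng.standard_normal(30)

    f = lambda x: np.abs(A @ x - b).sum()
    x = np.zeros(5)
    x_best, f_best = x, f(x)
    for k in range(5000):
        g = A.T @ np.sign(A @ x - b)   # a subgradient of f at x
        x = x - g / (k + 1)            # diminishing step size
        if f(x) < f_best:              # subgradient steps need not decrease f,
            x_best, f_best = x, f(x)   # so track the best iterate found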

Implementations

Convex optimization and related algorithms have been implemented in the following software programs:

Program | Language | Description | FOSS? | Ref
CVX | MATLAB | Interfaces with SeDuMi and SDPT3 solvers; designed to only express convex optimization problems. | Yes | [25]
CVXMOD | Python | Interfaces with the CVXOPT solver. | Yes | [25]
CVXPY | Python | Python-embedded modeling language for convex optimization problems. | Yes | [26]
CVXR | R | R-embedded modeling language for disciplined convex optimization. | Yes | [27]
YALMIP | MATLAB | Interfaces with CPLEX, GUROBI, MOSEK, SDPT3, SeDuMi, CSDP, SDPA, PENNON solvers; also supports integer and nonlinear optimization, and some nonconvex optimization. Can perform robust optimization with uncertainty in LP/SOCP/SDP constraints. | Yes | [25]
LMI lab | MATLAB | Expresses and solves semidefinite programming problems (called "linear matrix inequalities"). | No | [25]
LMIlab translator | | Transforms LMI lab problems into SDP problems. | Yes | [25]
xLMI | MATLAB | Similar to LMI lab, but uses the SeDuMi solver. | Yes | [25]
AIMMS | | Can do robust optimization on linear programming (with MOSEK to solve second-order cone programming) and mixed integer linear programming. Modeling package for LP + SDP and robust versions. | No | [25]
ROME | | Modeling system for robust optimization. Supports distributionally robust optimization and uncertainty sets. | Yes | [25]
GloptiPoly 3 | MATLAB | Modeling system for polynomial optimization. | Yes | [25]
SOSTOOLS | | Modeling system for polynomial optimization. Uses SDPT3 and SeDuMi. Requires Symbolic Computation Toolbox. | Yes | [25]
SparsePOP | | Modeling system for polynomial optimization. Uses the SDPA or SeDuMi solvers. | Yes | [25]
CPLEX | | Supports primal-dual methods for LP + SOCP. Can solve LP, QP, SOCP, and mixed integer linear programming problems. | No | [25]
CSDP | C | SDP solver; supports primal-dual methods for LP + SDP. Interfaces available for MATLAB, R, and Python. Parallel version available. | Yes | [25]
CVXOPT | Python | Supports primal-dual methods for LP + SOCP + SDP. Uses Nesterov-Todd scaling. Interfaces to MOSEK and DSDP. | Yes | [25]
MOSEK | | Supports primal-dual methods for LP + SOCP. | No | [25]
SeDuMi | MATLAB, MEX | Solves LP + SOCP + SDP using primal-dual methods. | Yes | [25]
SDPA | C++ | Solves LP + SDP using primal-dual methods. Parallelized and extended-precision versions are available. | Yes | [25]
SDPT3 | MATLAB, MEX | Solves LP + SOCP + SDP using primal-dual methods. | Yes | [25]
ConicBundle | | General-purpose code for LP + SOCP + SDP. Uses a bundle method. Special support for SDP and SOCP constraints. | Yes | [25]
DSDP | | General-purpose code for LP + SDP. Uses a dual interior point method. | Yes | [25]
LOQO | | General-purpose code for SOCP, which it treats as a nonlinear programming problem. | No | [25]
PENNON | | General-purpose code. Uses an augmented Lagrangian method, especially for problems with SDP constraints. | No | [25]
SDPLR | | General-purpose code. Uses low-rank factorization with an augmented Lagrangian method. | Yes | [25]
GAMS | | Modeling system for linear, nonlinear, mixed integer linear/nonlinear, and second-order cone programming problems. | No | [25]
Optimization Services | | XML standard for encoding optimization problems and solutions. | | [25]

Extensions

Extensions of convex optimization include the optimization of biconvex, pseudo-convex, and quasiconvex functions. Extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity, also known as abstract convex analysis.[citation needed]

Notes

  1. ^ a b Nesterov & Nemirovskii 1994
  2. ^ Murty, Katta; Kabadi, Santosh (1987). "Some NP-complete problems in quadratic and nonlinear programming". Mathematical Programming. 39 (2): 117–129. doi:10.1007/BF02592948. hdl:2027.42/6740. S2CID 30500771.
  3. ^ Sahni, S. (1974). "Computationally related problems". SIAM Journal on Computing. 3: 262–279.
  4. ^ Pardalos, Panos M.; Vavasis, Stephen A. (1991). "Quadratic programming with one negative eigenvalue is NP-hard". Journal of Global Optimization. 1 (1): 15–22.
  5. ^ Boyd & Vandenberghe 2004, p. 17
  6. ^ Christensen & Klarbring, chpt. 4.
  7. ^ Boyd & Vandenberghe 2004
  8. ^ Schmit, L.A.; Fleury, C. 1980: Structural synthesis by combining approximation concepts and dual methods. J. Amer. Inst. Aeronaut. Astronaut 18, 1252-1260
  9. ^ Boyd & Vandenberghe 2004, p. 8
  10. ^ Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1996). Convex analysis and minimization algorithms: Fundamentals. p. 291. ISBN 9783540568506.
  11. ^ Ben-Tal, Aharon; Nemirovskiĭ, Arkadiĭ Semenovich (2001). Lectures on modern convex optimization: analysis, algorithms, and engineering applications. pp. 335–336. ISBN 9780898714913.
  12. ^ a b c d Boyd & Vandenberghe 2004, chpt. 4
  13. ^ Boyd & Vandenberghe 2004, chpt. 2
  14. ^ Rockafellar, R. Tyrrell (1993). "Lagrange multipliers and optimality" (PDF). SIAM Review. 35 (2): 183–238. CiteSeerX 10.1.1.161.7209. doi:10.1137/1035044.
  15. ^ Ben-Haim, Y.; Elishakoff, I. (1990). Convex Models of Uncertainty in Applied Mechanics. Amsterdam: Elsevier Science Publishers.
  16. ^ Elishakoff, I.; Lin, Y.K.; Zhu, L.P. (1994). Probabilistic and Convex Modeling of Acoustically Excited Structures. Amsterdam: Elsevier Science Publishers.
  17. ^ Agrawal, Akshay; Verschueren, Robin; Diamond, Steven; Boyd, Stephen (2018). "A rewriting system for convex optimization problems" (PDF). Control and Decision. 5 (1): 42–60. arXiv:1709.04494. doi:10.1080/23307706.2017.1397554. S2CID 67856259.
  18. ^ a b c d e Boyd, Stephen; Diamond, Steven; Zhang, Junzi; Agrawal, Akshay. "Convex Optimization Applications" (PDF). Retrieved 12 Apr 2021.
  19. ^ a b c Malick, Jérôme (2011-09-28). "Convex optimization: applications, formulations, relaxations" (PDF). Retrieved 12 Apr 2021.
  20. ^ a b c d Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved 12 Apr 2021.
  21. ^ For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle) and the textbooks by Ruszczyński, Bertsekas, and Boyd and Vandenberghe (interior point).
  22. ^ Nesterov, Yurii; Nemirovskii, Arkadii (1994). Interior-Point Polynomial Algorithms in Convex Programming. Society for Industrial and Applied Mathematics. ISBN 978-0898715156.
  23. ^ Peng, Jiming; Roos, Cornelis; Terlaky, Tamás (2002). "Self-regular functions and new search directions for linear and semidefinite optimization". Mathematical Programming. 93 (1): 129–171. doi:10.1007/s101070200296. ISSN 0025-5610. S2CID 28882966.
  24. ^ Bertsekas
  25. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z Borchers, Brian. "An Overview Of Software For Convex Optimization" (PDF). Archived from the original (PDF) on 2017-09-18. Retrieved 12 Apr 2021.
  26. ^ "Welcome to CVXPY 1.1 — CVXPY 1.1.11 documentation". www.cvxpy.org. Retrieved 2021-04-12.
  27. ^ "Disciplined Convex Optimiation - CVXR". www.cvxgrp.org. Retrieved 2021-06-17.

References

  • Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton University Press.
  • Schmit, L.A.; Fleury, C. 1980: Structural synthesis by combining approximation concepts and dual methods. J. Amer. Inst. Aeronaut. Astronaut 18, 1252-1260
