PROPT[1] is a MATLAB optimal control software platform for solving applied optimal control problems (formulated with ODEs or DAEs) and parameter estimation problems.
| Developer(s) | Tomlab Optimization Inc. |
| --- | --- |
| Stable release | 7.8 / December 16, 2011 |
| Operating system | TOMLAB - OS Support |
| Type | Technical computing |
| License | Proprietary |
| Website | PROPT product page |
The platform was developed by MATLAB Programming Contest winner Per Rutquist in 2008. The most recent version adds support for binary and integer variables as well as an automated scaling module.
Description
PROPT is a combined modeling, compilation and solver engine, built upon the TomSym modeling class, for generating and solving highly complex optimal control problems. PROPT uses a pseudospectral collocation method (with Gauss or Chebyshev points) to solve optimal control problems: the solution takes the form of a polynomial, and this polynomial satisfies the DAE and the path constraints at the collocation points.
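The collocation points and the constant differentiation matrices defined on them are standard ingredients of pseudospectral methods rather than anything specific to PROPT. As a PROPT-independent sketch of the idea, the following MATLAB snippet builds the classical Chebyshev-Gauss-Lobatto points and differentiation matrix on [-1, 1] and differentiates a smooth function with near machine-precision accuracy (the variable names N, x and D are illustrative):
% Chebyshev-Gauss-Lobatto points and differentiation matrix on [-1, 1]
% (standard construction from the spectral-methods literature; an illustration
% of the kind of constant matrices a pseudospectral method precomputes,
% not PROPT's internal code).
N  = 16;
x  = cos(pi*(0:N)'/N);                      % N+1 collocation points
c  = [2; ones(N-1,1); 2] .* (-1).^((0:N)'); % boundary weights with alternating signs
X  = repmat(x, 1, N+1);
dX = X - X';
D  = (c*(1./c)') ./ (dX + eye(N+1));        % off-diagonal entries
D  = D - diag(sum(D, 2));                   % diagonal entries: each row sums to zero
% Differentiating the polynomial interpolant of a smooth function is
% spectrally accurate: the error below is close to machine precision.
f      = exp(x);
maxErr = max(abs(D*f - exp(x)))
Matrices of this kind are constant for a given set of points, which is why they can be precomputed once per phase, as noted in the first item of the list below.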
In general PROPT has the following main functions:
- Computation of the constant matrices used for the differentiation and integration of the polynomials used to approximate the solution to the Trajectory optimization problem.
- Source transformation to turn user-supplied expressions into MATLAB code for the cost function and constraint function that are passed to a Nonlinear programming solver in TOMLAB. The source transformation package TomSym automatically generates first- and second-order derivatives (a minimal illustration follows this list).
- Functionality for plotting and computing a variety of information for the solution to the problem.
- Automatic detection of the following:
  - Linear and quadratic objective.
  - Simple bounds, linear and nonlinear constraints.
  - Non-optimized expressions.
- Integrated support for non-smooth[2] (hybrid) optimal control problems.
- Module for automatic scaling of difficult space-related problems.
- Support for binary and integer variables, controls or states.
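A minimal sketch of the symbolic differentiation performed by TomSym is shown below; it assumes TomSym's derivative function and is meant purely as an illustration, not as verbatim library documentation:
toms x y                      % tomSym symbols
f = x^2*y + sin(y);           % user-supplied expression
% First- and second-order derivatives generated symbolically ('derivative'
% is assumed here to be the TomSym differentiation call).
dfdx   = derivative(f, x);    % expected result: 2*x*y
d2fdx2 = derivative(dfdx, x); % expected result: 2*y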
Modeling
The PROPT system uses the TomSym symbolic source transformation engine to model optimal control problems. It is possible to define independent variables, dependent functions, scalars and constant parameters:
toms tf                          % free final time (unknown scalar)
toms t                           % independent (time) variable
p = tomPhase('p', t, 0, tf, 30); % phase starting at t = 0, of length tf, with 30 collocation points
x0 = {tf == 20};                 % initial guess for tf
cbox = {10 <= tf <= 40};         % box constraint on tf
toms z1                          % additional unknown scalar
cbox = {cbox; 0 <= z1 <= 500};   % append box constraint for z1
x0 = {x0; z1 == 0};              % append initial guess for z1
ki0 = [1e3; 1e7; 10; 1e-3];      % vector of constant parameters
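In the complete examples later in this article, the phase is made active with setPhase before states and controls are created, so that subsequent state, control and collocation expressions refer to that phase:
setPhase(p);   % make phase p the active phase for the definitions that follow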
States and controls
States and controls differ only in the sense that states need to be continuous between phases.
tomStates x1                       % state variable in the active phase
x0 = {icollocate({x1 == 0})};      % initial guess for x1 at the interpolation points
tomControls u1                     % control variable in the active phase
cbox = {-2 <= collocate(u1) <= 1}; % bounds on u1 at the collocation points
x0 = {x0; collocate(u1 == -0.01)}; % initial guess for u1
Boundary, path, event and integral constraints
A variety of boundary, path, event and integral constraints are shown below:
cbnd = initial(x1 == 1); % Starting point for x1
cbnd = final(x1 == 1); % End point for x1
cbnd = final(x2 == 2); % End point for x2
pathc = collocate(x3 >= 0.5); % Path constraint for x3
intc = {integrate(x2) == 1}; % Integral constraint for x2
cbnd = final(x3 >= 0.5); % Final event constraint for x3
cbnd = initial(x1 <= 2.0); % Initial event constraint x1
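Constraint sets of this kind are collected into a cell array and handed to ezsolve together with an objective and an initial guess, in the same way as in the complete examples below. The following assembly is illustrative only; the quadratic control-energy objective is an assumption, not taken from the source:
% Illustrative assembly of the pieces defined above (hypothetical objective).
objective = integrate(u1.^2);              % assumed control-energy objective
constr    = {cbox, cbnd, pathc, intc};     % gather box, boundary, path and integral constraints
solution  = ezsolve(objective, constr, x0);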
Single-phase optimal control example
Van der Pol oscillator[3]
Minimize:
$J = x_3(t_f)$, with $t_f = 5$
Subject to:
$\dot{x}_1 = (1 - x_2^2)\,x_1 - x_2 + u$
$\dot{x}_2 = x_1$
$\dot{x}_3 = x_1^2 + x_2^2 + u^2$
$x_1(0) = 0,\; x_2(0) = 1,\; x_3(0) = 0$
$-0.3 \le u \le 1$
To solve the problem with PROPT the following code can be used (with 60 collocation points):
toms t
p = tomPhase('p', t, 0, 5, 60);
setPhase(p);
tomStates x1 x2 x3
tomControls u
% Initial guess
x0 = {icollocate({x1 == 0; x2 == 1; x3 == 0})
collocate(u == -0.01)};
% Box constraints
cbox = {-10 <= icollocate(x1) <= 10
-10 <= icollocate(x2) <= 10
-10 <= icollocate(x3) <= 10
-0.3 <= collocate(u) <= 1};
% Boundary constraints
cbnd = initial({x1 == 0; x2 == 1; x3 == 0});
% ODEs and path constraints
ceq = collocate({dot(x1) == (1-x2.^2).*x1-x2+u
dot(x2) == x1; dot(x3) == x1.^2+x2.^2+u.^2});
% Objective
objective = final(x3);
% Solve the problem
options = struct;
options.name = 'Van Der Pol';
solution = ezsolve(objective, {cbox, cbnd, ceq}, x0, options);
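After ezsolve returns, the solution structure can be queried with subs (as in the parameter estimation example below). A sketch of extracting and plotting the collocation-point values follows; the subs/collocate pattern mirrors usage shown elsewhere in this article and should be treated as illustrative rather than verbatim documentation:
% Extract collocation-point values from the returned solution structure
% and plot them (illustrative sketch).
tp  = subs(collocate(t),  solution);
x1p = subs(collocate(x1), solution);
up  = subs(collocate(u),  solution);
figure
plot(tp, x1p, '*-', tp, up, 'o-');
legend('x1', 'u');
title('Van der Pol: state x1 and control u');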
Multi-phase optimal control example
One-dimensional rocket[4] with free end time and free engine cut-off (phase switch) time
Minimize:
$J = t_{cut}$
Subject to:
$\dot{x}_1 = x_2$
$\dot{x}_2 = a - g$ for $0 \le t \le t_{cut}$
$\dot{x}_2 = -g$ for $t_{cut} < t \le t_f$
$x_1(0) = 0,\; x_2(0) = 0,\; x_1(t_f) = 100$
$x_1 \ge 0,\; x_2 \ge 0,\; 1 \le t_{cut} \le t_f \le 100$
with $a = 2$ and $g = 1$.
The problem is solved with PROPT by creating two phases and connecting them:
toms t
toms tCut tp2
p1 = tomPhase('p1', t, 0, tCut, 20);
p2 = tomPhase('p2', t, tCut, tp2, 20);
tf = tCut+tp2;
x1p1 = tomState(p1,'x1p1');
x2p1 = tomState(p1,'x2p1');
x1p2 = tomState(p2,'x1p2');
x2p2 = tomState(p2,'x2p2');
% Initial guess
x0 = {tCut==10
tf==15
icollocate(p1,{x1p1 == 50*tCut/10;x2p1 == 0;})
icollocate(p2,{x1p2 == 50+50*t/100;x2p2 == 0;})};
% Box constraints
cbox = {
1 <= tCut <= tf-0.00001
tf <= 100
0 <= icollocate(p1,x1p1)
0 <= icollocate(p1,x2p1)
0 <= icollocate(p2,x1p2)
0 <= icollocate(p2,x2p2)};
% Boundary constraints
cbnd = {initial(p1,{x1p1 == 0;x2p1 == 0;})
final(p2,x1p2 == 100)};
% ODEs and path constraints
a = 2; g = 1;
ceq = {collocate(p1,{
dot(p1,x1p1) == x2p1
dot(p1,x2p1) == a-g})
collocate(p2,{
dot(p2,x1p2) == x2p2
dot(p2,x2p2) == -g})};
% Objective
objective = tCut;
% Link phase
link = {final(p1,x1p1) == initial(p2,x1p2)
final(p1,x2p1) == initial(p2,x2p2)};
%% Solve the problem
options = struct;
options.name = 'One Dim Rocket';
constr = {cbox, cbnd, ceq, link};
solution = ezsolve(objective, constr, x0, options);
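The optimized free times can then be read from the solution structure with subs, in the same way the parameter estimation example below recovers its optimal values; a brief sketch:
% Recover the two free times from the solution structure (illustrative).
tCutOpt = subs(tCut, solution);   % optimal engine cut-off time (the objective value)
tfOpt   = subs(tf,   solution);   % optimal total time, tCut + tp2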
Parameter estimation example
Parameter estimation problem[5]
Minimize:
$J = \sum_{i=1}^{4} \left( x_1(t_i) - x_1^{meas}(t_i) \right)^2$
Subject to:
$\dot{x}_1 = x_2$
$\dot{x}_2 = 1 - 2 x_2 - x_1$
$x_1(0) = p_1,\; x_2(0) = p_2$
$-1.5 \le p_1 \le 1.5,\; -1.5 \le p_2 \le 1.5$
with measurements $x_1^{meas} = (0.264, 0.594, 0.801, 0.959)$ taken at $t_i = (1, 2, 3, 5)$.
In the code below the problem is first solved on a coarse grid (10 collocation points). This solution is subsequently fine-tuned using 40 collocation points:
toms t p1 p2
x1meas = [0.264;0.594;0.801;0.959];
tmeas = [1;2;3;5];
% Box constraints
cbox = {-1.5 <= p1 <= 1.5
-1.5 <= p2 <= 1.5};
%% Solve the problem, using a successively larger number of collocation points
for n=[10 40]
p = tomPhase('p', t, 0, 6, n);
setPhase(p);
tomStates x1 x2
% Initial guess
if n == 10
x0 = {p1 == 0; p2 == 0};
else
x0 = {p1 == p1opt; p2 == p2opt
icollocate({x1 == x1opt; x2 == x2opt})};
end
% Boundary constraints
cbnd = initial({x1 == p1; x2 == p2});
% ODEs and path constraints
x1err = sum((atPoints(tmeas,x1) - x1meas).^2);
ceq = collocate({dot(x1) == x2; dot(x2) == 1-2*x2-x1});
% Objective
objective = x1err;
%% Solve the problem
options = struct;
options.name = 'Parameter Estimation';
options.solver = 'snopt';
solution = ezsolve(objective, {cbox, cbnd, ceq}, x0, options);
% Optimal x, p for starting point
x1opt = subs(x1, solution);
x2opt = subs(x2, solution);
p1opt = subs(p1, solution);
p2opt = subs(p2, solution);
end
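Once the loop has finished, the fit can be checked against the measurements using the same atPoints/subs constructs that appear in the code above; a short illustrative sketch:
% Evaluate the fitted x1 at the measurement times and compare with the data
% (illustrative; uses only constructs already present in the example above).
x1fit = subs(atPoints(tmeas, x1), solution);
disp([tmeas, x1meas, x1fit]);   % measurement time, measured x1, fitted x1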
Optimal control problems supported
- Aerodynamic trajectory control[6]
- Bang-bang control[7]
- Chemical engineering[8]
- Dynamic systems[9]
- General optimal control
- Large-scale linear control[10]
- Multi-phase system control[11]
- Mechanical engineering design[12]
- Nondifferentiable control[13]
- Parameter estimation for dynamic systems[14]
- Singular control
References
- ^ Rutquist, Per; Edvall, M. M. (June 2008). PROPT - Matlab Optimal Control Software (PDF). Pullman, WA: Tomlab Optimization Inc.
- ^ Banga, J. R.; Balsa-Canto, E.; Moles, C. G.; Alonso, A. A. (2003). "Dynamic optimization of bioprocesses: efficient and robust numerical strategies". Journal of Biotechnology.
- ^ "Van Der Pol Oscillator - Matlab Solution". PROPT Home Page, June 2008.
- ^ "One Dimensional Rocket Launch (2 Free Time)". PROPT Home Page, June 2008.
- ^ "Matlab Dynamic Parameter Estimation with PROPT". PROPT Home Page, June 2008.
- ^ Betts, J. (2007). "SOCS Release 6.5.0". The Boeing Company.
- ^ Liang, J.; Meng, M.; Chen, Y.; Fullmer, R. (2003). "Solving Tough Optimal Control Problems by Network Enabled Optimization Server (NEOS)". School of Engineering, Utah State University, USA; Chinese University of Hong Kong, China.
- ^ Carrasco, E. F.; Banga, J. R. (September 1998). "A Hybrid Method for the Optimal Control of Chemical Processes". UKACC International Conference on Control '98, University of Wales, Swansea, UK.
- ^ Vassiliadis, V. S.; Banga, J. R.; Balsa-Canto, E. (1999). "Second-order sensitivities of general dynamic systems with application to optimal control problems". Chemical Engineering Science. 54 (17): 3851–3860. Bibcode:1999ChEnS..54.3851V. doi:10.1016/S0009-2509(98)00432-1.
- ^ Luus, R. (2002). Iterative Dynamic Programming. Chapman and Hall/CRC.
- ^ Fabien, B. C. (1998). "A Java Application for the Solution of Optimal Control Problems". Mechanical Engineering, University of Washington, Seattle, WA, USA.
- ^ Jennings, L. S.; Fisher, M. E. (2002). "MISER3: Optimal Control Toolbox User Manual, Matlab Beta Version 2.0". Department of Mathematics, The University of Western Australia, Nedlands, WA, Australia.
- ^ Banga, J. R.; Seider, W. D. (1996). "Global Optimization of Chemical Processes using Stochastic Algorithms". In Floudas, C. A.; Pardalos, P. M. (eds.). State of the Art in Global Optimization: Computational Methods and Applications. Dordrecht, The Netherlands: Kluwer Academic Publishers. pp. 563–583. ISBN 0-7923-3838-3.
- ^ Dolan, E. D.; Moré, J. J. (January 2001). "Benchmarking Optimization Software with COPS". Argonne National Laboratory, Argonne, Illinois, USA.