# Hamiltonian (control theory)

The Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle.[1] It was inspired by, but is distinct from, the Hamiltonian of classical mechanics. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to minimize the Hamiltonian. For details see Pontryagin's maximum principle.

## Notation and Problem statement

A control ${\displaystyle u(t)}$  is to be chosen so as to minimize the objective function

${\displaystyle J(u)=\Psi (x(T))+\int _{0}^{T}L(x,u,t)dt}$

where ${\displaystyle x(t)}$  is the system state, which evolves according to the state equations

${\displaystyle {\dot {x}}=f(x,u,t)\qquad x(0)=x_{0}\quad t\in [0,T]}$

and the control must satisfy the constraints

${\displaystyle a\leq u(t)\leq b\quad t\in [0,T]}$
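
For concreteness, the data of such a problem can be written down directly. The sketch below encodes a hypothetical scalar example (linear dynamics, quadratic running cost, zero terminal cost, control bounds of ±1); the dynamics, costs, bounds, and names are illustrative assumptions, not taken from anything above.

```python
# A minimal encoding of the problem data for a hypothetical scalar example:
# minimize  J(u) = Psi(x(T)) + integral_0^T L(x, u, t) dt
# subject to x' = f(x, u, t), x(0) = x0, and a <= u(t) <= b.

def f(x, u, t):        # state equation right-hand side (assumed dynamics)
    return -x + u

def L(x, u, t):        # running cost (assumed)
    return x**2 + u**2

def Psi(x_T):          # terminal cost (assumed to be zero here)
    return 0.0

x0 = 1.0               # initial state
T = 5.0                # horizon
a, b = -1.0, 1.0       # control bounds
```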

## Definition of the Hamiltonian

In the notation of the problem above, the Hamiltonian is

${\displaystyle H(x,\lambda ,u,t)=\lambda ^{T}(t)f(x,u,t)+L(x,u,t)\,}$

where ${\displaystyle \lambda (t)}$  is a vector of costate variables of the same dimension as the state variables ${\displaystyle x(t)}$ .

For information on the properties of the Hamiltonian, see Pontryagin's maximum principle.
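
Pontryagin's condition that the optimal control minimize the Hamiltonian pointwise can be illustrated numerically. The sketch below re-encodes the same hypothetical scalar example and, for a fixed state, costate, and time, evaluates ${\displaystyle H=\lambda ^{T}f+L}$ on a grid of admissible controls and returns the minimizer; a simple grid search stands in for whatever minimization method a real solver would use.

```python
import numpy as np

# "Choose u to minimize H" for the hypothetical scalar example above
# (dynamics, cost, and bounds are assumptions, not taken from the article).

def f(x, u, t):
    return -x + u                          # assumed dynamics

def L(x, u, t):
    return x**2 + u**2                     # assumed running cost

def H(x, lam, u, t):
    return lam * f(x, u, t) + L(x, u, t)   # H = lambda^T f + L

def argmin_H(x, lam, t, a=-1.0, b=1.0, n=1001):
    """Minimize H over the admissible interval [a, b] by grid search."""
    us = np.linspace(a, b, n)
    return us[np.argmin(H(x, lam, us, t))]

print(argmin_H(x=1.0, lam=0.5, t=0.0))     # Hamiltonian-minimizing control
```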

## The Hamiltonian in discrete time

When the problem is formulated in discrete time, the Hamiltonian is defined as:

${\displaystyle H(x,\lambda ,u,t)=\lambda ^{T}(t+1)f(x,u,t)+L(x,u,t)\,}$

and the costate equations are

${\displaystyle \lambda (t+1)=-{\frac {\partial H}{\partial x}}+\lambda (t)}$

(Note that the discrete-time Hamiltonian at time ${\displaystyle t}$ involves the costate variable at time ${\displaystyle t+1.}$[2] This small detail is essential: when we differentiate with respect to ${\displaystyle x}$, the term involving ${\displaystyle \lambda (t+1)}$ appears on the right-hand side of the costate equations, making them backwards difference equations. Using the wrong convention here leads to incorrect results, i.e. a costate equation that is not a backwards difference equation.)
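
The backwards nature of the costate recursion is easy to see in code. The following sketch assumes a hypothetical scalar problem with state update ${\displaystyle x(t+1)=x(t)+f(x,u,t)}$, quadratic running cost, and zero terminal cost; after a forward pass for the state under some fixed control, the costate is recovered by running ${\displaystyle \lambda (t)=\lambda (t+1)+\partial H/\partial x}$ backwards from the terminal time, which is the costate equation above rearranged.

```python
import numpy as np

# Backward costate recursion for a hypothetical scalar problem, assuming
# x(t+1) = x(t) + f(x, u, t) so that lambda(t+1) = lambda(t) - dH/dx,
# i.e. lambda(t) = lambda(t+1) + dH/dx with H = lambda(t+1)*f + L.

def f(x, u, t):            # assumed increment in the state equation
    return -0.1 * x + u

def dfdx(x, u, t):
    return -0.1

def dLdx(x, u, t):         # running cost L = x**2 + u**2 (assumed)
    return 2.0 * x

T = 20
u = 0.05 * np.ones(T)      # some fixed (not necessarily optimal) control
x = np.zeros(T + 1)
x[0] = 1.0
for t in range(T):         # forward pass for the state
    x[t + 1] = x[t] + f(x[t], u[t], t)

lam = np.zeros(T + 1)
lam[T] = 0.0               # terminal condition, assuming zero terminal cost
for t in reversed(range(T)):   # backward pass for the costate
    dHdx = lam[t + 1] * dfdx(x[t], u[t], t) + dLdx(x[t], u[t], t)
    lam[t] = lam[t + 1] + dHdx

print(lam[:5])             # costate values near the initial time
```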

## The Hamiltonian of control compared to the Hamiltonian of mechanics

William Rowan Hamilton defined the Hamiltonian for describing the mechanics of a system. It is a function of three variables:

${\displaystyle {\mathcal {H}}={\mathcal {H}}(p,q,t)=\langle p,{\dot {q}}\rangle -L(q,{\dot {q}},t)}$

where ${\displaystyle L}$ is the Lagrangian, the extremizing of which determines the dynamics (it is not the same as the Lagrangian ${\displaystyle L}$ of the control problem above), ${\displaystyle q}$ is the state variable and ${\displaystyle {\dot {q}}}$ is its time derivative.

${\displaystyle p}$  is the so-called "conjugate momentum", defined by

${\displaystyle p={\frac {\partial L}{\partial {\dot {q}}}}}$

Hamilton then formulated his equations to describe the dynamics of the system as

${\displaystyle {\frac {d}{dt}}p(t)=-{\frac {\partial }{\partial q}}{\mathcal {H}}}$
${\displaystyle {\frac {d}{dt}}q(t)=~~{\frac {\partial }{\partial p}}{\mathcal {H}}}$
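
As a concrete instance of these definitions, the following symbolic sketch uses a unit-mass harmonic oscillator (an assumed example, not drawn from the article) to compute the conjugate momentum, express the mechanical Hamiltonian in terms of ${\displaystyle (p,q)}$, and read off Hamilton's equations.

```python
import sympy as sp

# Assumed example: a unit-mass harmonic oscillator with spring constant k.
q, qdot, p, t = sp.symbols('q qdot p t')
k = sp.symbols('k', positive=True)

Lag = sp.Rational(1, 2) * qdot**2 - sp.Rational(1, 2) * k * q**2    # Lagrangian
p_def = sp.diff(Lag, qdot)                                           # p = dL/d(qdot) = qdot
qdot_of_p = sp.solve(sp.Eq(p, p_def), qdot)[0]                       # invert: qdot = p
H = (p * qdot - Lag).subs(qdot, qdot_of_p)                           # H = p*qdot - L

print(p_def)                     # conjugate momentum: qdot
print(sp.simplify(H))            # H = p**2/2 + k*q**2/2
print(-sp.diff(H, q))            # dp/dt = -dH/dq = -k*q
print(sp.diff(H, p))             # dq/dt =  dH/dp =  p
```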

The Hamiltonian of control theory describes not the dynamics of a system but conditions for extremizing some scalar function of its trajectory (the Lagrangian) with respect to a control variable ${\displaystyle u}$ . As normally defined, it is a function of four variables

${\displaystyle H(q,u,p,t)=\langle p,{\dot {q}}\rangle -L(q,u,t)}$

where ${\displaystyle q}$ is the state variable and ${\displaystyle u}$ is the control variable with respect to which we are extremizing.

The associated conditions for a maximum are

${\displaystyle {\frac {dp}{dt}}=-{\frac {\partial H}{\partial q}}}$
${\displaystyle {\frac {dq}{dt}}=~~{\frac {\partial H}{\partial p}}}$
${\displaystyle {\frac {\partial H}{\partial u}}=0}$

This definition agrees with that given in the article by Sussmann and Willems[3] (see p. 39, equation 14). Sussmann and Willems show how the control Hamiltonian can be used in dynamics, e.g. for the brachistochrone problem, but do not mention the prior work of Carathéodory on this approach.[4]
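
A minimal symbolic check of the three conditions above, for an assumed toy problem with ${\displaystyle {\dot {q}}=u}$ and Lagrangian ${\displaystyle L=u^{2}/2}$ (chosen here purely for illustration):

```python
import sympy as sp

# Assumed toy problem: state equation qdot = f(q, u) = u, Lagrangian L = u**2/2,
# with the control Hamiltonian H(q, u, p, t) = p*f(q, u) - L(q, u) as above.
q, u, p, t = sp.symbols('q u p t')

f = u                              # assumed state equation
L = u**2 / 2                       # assumed Lagrangian
H = p * f - L                      # control Hamiltonian (sign convention of this section)

print(sp.Eq(sp.diff(H, u), 0))     # dH/du = 0   ->  p - u = 0, so u = p
print(-sp.diff(H, q))              # dp/dt = -dH/dq = 0  (p is constant)
print(sp.diff(H, p))               # dq/dt =  dH/dp = u
```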

## Example: Ramsey Model

Take a simplified version of the Ramsey–Cass–Koopmans model. We wish to maximize an agent's discounted lifetime utility achieved through consumption

${\displaystyle \max \int _{0}^{\infty }e^{-\rho t}u(c(t))dt}$

subject to the time evolution of capital per effective worker

${\displaystyle {\dot {k}}={\frac {dk}{dt}}=f(k(t))-(n+\delta )k(t)-c(t)}$

where ${\displaystyle c(t)}$  is period t consumption, ${\displaystyle k(t)}$  is period t capital per worker, ${\displaystyle f(k(t))}$  is period t production, ${\displaystyle n}$  is the population growth rate, ${\displaystyle \delta }$  is the capital depreciation rate, the agent discounts future utility at rate ${\displaystyle \rho }$ , with ${\displaystyle u'>0}$  and ${\displaystyle u''<0}$ .

Here, ${\displaystyle k(t)}$  is the state variable which evolves according to the above equation, and ${\displaystyle c(t)}$  is the control variable. The Hamiltonian becomes

${\displaystyle H(k,c,\mu ,t)=e^{-\rho t}u(c(t))+\mu (t){\dot {k}}=e^{-\rho t}u(c(t))+\mu (t)[f(k(t))-(n+\delta )k(t)-c(t)]}$

The optimality conditions are

${\displaystyle {\frac {\partial H}{\partial c}}=0\Rightarrow e^{-\rho t}u'(c)=\mu (t)}$
${\displaystyle {\frac {\partial H}{\partial k}}=-{\frac {\partial \mu }{\partial t}}=-{\dot {\mu }}\Rightarrow \mu (t)[f'(k)-(n+\delta )]=-{\dot {\mu }}}$
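
Both conditions can be reproduced symbolically. In the sketch below the utility and production functions are left generic, and the symbol names are chosen for illustration only:

```python
import sympy as sp

# Symbols at a fixed time t; U and F stand for the generic utility and
# production functions u(.) and f(.) (renamed here to avoid clashing with
# the symbols c and k).
t, rho, n, delta, c, k, mu = sp.symbols('t rho n delta c k mu', positive=True)
U = sp.Function('U')
F = sp.Function('F')

H = sp.exp(-rho * t) * U(c) + mu * (F(k) - (n + delta) * k - c)

dHdc = sp.diff(H, c)   # exp(-rho*t)*U'(c) - mu; set to zero for the first condition
dHdk = sp.diff(H, k)   # mu*(F'(k) - (n + delta)); equals -mu_dot by the costate equation
print(dHdc)
print(dHdk)
```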

If we let ${\displaystyle u(c)=\ln(c)}$ , then log-differentiating the first optimality condition with respect to ${\displaystyle t}$  yields

${\displaystyle -\rho -{\frac {\dot {c}}{c(t)}}={\frac {\dot {\mu }}{\mu (t)}}}$

Inserting this equation into the second optimality condition yields

${\displaystyle \rho +{\frac {\dot {c}}{c(t)}}=f'(k)-(n+\delta )}$

which is the Keynes–Ramsey rule, or the Euler–Lagrange equation of this problem: a condition on consumption in every period which, if followed, ensures maximum lifetime utility.
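
As an assumed numerical illustration (log utility, a Cobb–Douglas technology ${\displaystyle f(k)=k^{\alpha }}$ , and arbitrary parameter values, none of which are specified above), the sketch below integrates the capital accumulation equation together with the Keynes–Ramsey rule by forward Euler and compares the endpoint with the steady state implied by ${\displaystyle f'(k^{*})=\rho +n+\delta }$ . Because that steady state is a saddle point, an arbitrary initial consumption level eventually diverges, so the simulation is only indicative.

```python
# Forward-Euler integration of
#   k_dot = f(k) - (n + delta)*k - c
#   c_dot = c * (f'(k) - (n + delta) - rho)      (Keynes-Ramsey rule)
# for assumed f(k) = k**alpha and arbitrary illustrative parameters.

alpha, rho, n, delta = 0.3, 0.03, 0.01, 0.05

def f(k):
    return k**alpha

def f_prime(k):
    return alpha * k**(alpha - 1.0)

dt, T = 0.01, 40.0
k, c = 1.0, 0.5                 # initial capital and an initial consumption guess
for _ in range(int(T / dt)):
    if k <= 0.0 or c <= 0.0:    # guard: off the saddle path the system can diverge
        break
    k_dot = f(k) - (n + delta) * k - c
    c_dot = c * (f_prime(k) - (n + delta) - rho)
    k += dt * k_dot
    c += dt * c_dot

# Steady state implied by the rule: f'(k*) = rho + n + delta.
k_star = (alpha / (rho + n + delta)) ** (1.0 / (1.0 - alpha))
print(k, c, k_star)
```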

## References

1. Dixit, Avinash K. (1990). Optimization in Economic Theory. New York: Oxford University Press. pp. 145–161. ISBN 0-19-877210-6.
2. Varaiya, Chapter 6.
3. Sussmann, Héctor J.; Willems, Jan C. (June 1997). "300 Years of Optimal Control" (PDF). IEEE Control Systems.
4. Pesch, H. J.; Bulirsch, R. (1994). "The maximum principle, Bellman's equation, and Carathéodory's work". Journal of Optimization Theory and Applications. 80 (2): 199–225. doi:10.1007/BF02192933.