1 Introduction 1
1.1 Optimal control problem 1
1.2 Some background on finite-dimensional optimization 3
1.2.1 Unconstrained optimization 4
1.2.2 Constrained optimization 11
1.3 Preview of infinite-dimensional optimization 17
1.3.1 Function spaces, norms, and local minima 18
1.3.2 First variation and first-order necessary condition 19
1.3.3 Second variation and second-order conditions 21
1.3.4 Global minima and convex problems 23
1.4 Notes and references for Chapter 1 24
2 Calculus of Variations 26
2.1 Examples of variational problems 26
2.1.1 Dido's isoperimetric problem 26
2.1.2 Light reflection and refraction 27
2.1.3 Catenary 28
2.1.4 Brachistochrone 30
2.2 Basic calculus of variations problem 32
2.2.1 Weak and strong extrema 33
2.3 First-order necessary conditions for weak extrema 34
2.3.1 Euler-Lagrange equation 35
2.3.2 Historical remarks 39
2.3.3 Technical remarks 40
2.3.4 Two special cases 41
2.3.5 Variable-endpoint problems 42
2.4 Hamiltonian formalism and mechanics 44
2.4.1 Hamilton's canonical equations 45
2.4.2 Legendre transformation 46
2.4.3 Principle of least action and conservation laws 48
2.5 Variational problems with constraints 51
2.5.1 Integral constraints 52
2.5.2 Non-integral constraints 55
2.6 Second-order conditions 58
2.6.1 Legendre's necessary condition for a weak minimum 59
2.6.2 Sufficient condition for a weak minimum 62
2.7 Notes and references for Chapter 2 68
3 From Calculus of Variations to Optimal Control 71
3.1 Necessary conditions for strong extrema 71
3.1.1 Weierstrass-Erdmann corner conditions 71
3.1.2 Weierstrass excess function 76
3.2 Calculus of variations versus optimal control 81
3.3 Optimal control problem formulation and assumptions 83
3.3.1 Control system 83
3.3.2 Cost functional 86
3.3.3 Target set 88
3.4 Variational approach to the fixed-time, free-endpoint problem 89
3.4.1 Preliminaries 89
3.4.2 First variation 92
3.4.3 Second variation 95
3.4.4 Some comments 96
3.4.5 Critique of the variational approach and preview of the maximum principle 98
3.5 Notes and references for Chapter 3 100
4 The Maximum Principle 102
4.1 Statement of the maximum principle 102
4.1.1 Basic fixed-endpoint control problem 102
4.1.2 Basic variable-endpoint control problem 104
4.2 Proof of the maximum principle 105
4.2.1 From Lagrange to Mayer form 107
4.2.2 Temporal control perturbation 109
4.2.3 Spatial control perturbation 110
4.2.4 Variational equation 112
4.2.5 Terminal cone 115
4.2.6 Key topological lemma 117
4.2.7 Separating hyperplane 120
4.2.8 Adjoint equation 121
4.2.9 Properties of the Hamiltonian 122
4.2.10 Transversality condition 126
4.3 Discussion of the maximum principle 128
4.3.1 Changes of variables 130
4.4 Time-optimal control problems 134
4.4.1 Example: double integrator 135
4.4.2 Bang-bang principle for linear systems 138
4.4.3 Nonlinear systems, singular controls, and Lie brackets 141
4.4.4 Fuller's problem 146
4.5 Existence of optimal controls 148
4.6 Notes and references for Chapter 4 153
5 The Hamilton-Jacobi-Bellman Equation 156
5.1 Dynamic programming and the HJB equation 156
5.1.1 Motivation: the discrete problem 156
5.1.2 Principle of optimality 158
5.1.3 HJB equation 161
5.1.4 Sufficient condition for optimality 165
5.1.5 Historical remarks 167
5.2 HJB equation versus the maximum principle 168
5.2.1 Example: nondifferentiable value function 170
5.3 Viscosity solutions of the HJB equation 172
5.3.1 One-sided differentials 172
5.3.2 Viscosity solutions of PDEs 174
5.3.3 HJB equation and the value function 176
5.4 Notes and references for Chapter 5 178
6 The Linear Quadratic Regulator 180
6.1 Finite-horizon LQR problem 180
6.1.1 Candidate optimal feedback law 181
6.1.2 Riccati differential equation 183
6.1.3 Value function and optimality 185
6.1.4 Global existence of solution for the RDE 187
6.2 Infinite-horizon LQR problem 189
6.2.1 Existence and properties of the limit 190
6.2.2 Infinite-horizon problem and its solution 193
6.2.3 Closed-loop stability 194
6.2.4 Complete result and discussion 196
6.3 Notes and references for Chapter 6 199
7 Advanced Topics 200
7.1 Maximum principle on manifolds 200
7.1.1 Differentiable manifolds 201
7.1.2 Re-interpreting the maximum principle 203
7.1.3 Symplectic geometry and Hamiltonian flows 206
7.2 HJB equation,canonical equations,and characteristics 207
7.2.1 Method of characteristics 208
7.2.2 Canonical equations as characteristics of the HJB equation 211
7.3 Riccati equations and inequalities in robust control 212
7.3.1 L₂ gain 213
7.3.2 H∞ control problem 216
7.3.3 Riccati inequalities and LMIs 219
7.4 Maximum principle for hybrid control systems 219
7.4.1 Hybrid optimal control problem 219
7.4.2 Hybrid maximum principle 221
7.4.3 Example: light reflection 222
7.5 Notes and references for Chapter 7 223
Bibliography 225
Index 231