Workpackage 4

We are interested in the design of reliable computational methods for the solution of optimal control problems and differential games by means of dynamic programming. In this approach, the optimal value function is characterized globally as the viscosity solution of a first-order, fully nonlinear PDE over the state space of the system dynamics: the Hamilton-Jacobi-Bellman equation. However, the approach is severely limited to low-dimensional dynamics (the so-called "curse of dimensionality"), and an important research topic is therefore the design of dynamic programming-based schemes which can handle high-dimensional problems. Our research focuses on the analysis and application of acceleration techniques that deliver computation times suitable for online feedback synthesis. In particular, we have worked on high-order monotone iterative schemes for Hamilton-Jacobi-Bellman equations arising in optimal control and differential games, on policy iteration algorithms for dynamic programming, and on the use of semismooth Newton methods for an accurate approximation of the control.

Closely related to the previous point, an interesting problem arises when optimal feedback controllers are sought for systems governed by partial differential equations. Only the simplest cases, such as the linear quadratic regulator problem, are well understood from both the theoretical and the computational perspective. We focus on the design and analysis of schemes for the optimal control of PDEs via dynamic programming-based methods. In particular, by applying techniques from modern linear systems theory, such as Riccati equations for control synthesis and model reduction, we recover finite-dimensional controllers and study their convergence and performance. We also work on feasible implementations of the HJB synthesis, including minimum-time and control-constrained problems.
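As a minimal illustration of Riccati-based synthesis, the sketch below solves a small LQR problem with Kleinman's Newton iteration for the continuous-time algebraic Riccati equation, which is policy iteration specialized to the linear quadratic case. The system matrices are illustrative assumptions (a stable two-dimensional system, so the zero gain is stabilizing), not data from the project:

```python
import numpy as np

# Sketch, under assumed data: Kleinman-Newton iteration for the CARE
#   A'P + PA - P B R^{-1} B' P + Q = 0,
# i.e. policy iteration for the LQR problem. Matrices are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # stable: eigenvalues -1, -2
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def lyap(F, W):
    # Solve the Lyapunov equation F'X + XF + W = 0 by vectorization
    # (column-major vec convention; fine for small dense systems).
    n = F.shape[0]
    M = np.kron(np.eye(n), F.T) + np.kron(F.T, np.eye(n))
    x = np.linalg.solve(M, -W.reshape(-1, order="F"))
    return x.reshape(n, n, order="F")

K = np.zeros((1, 2))  # initial stabilizing gain (valid since A is stable)
for _ in range(30):
    # Policy evaluation: cost of the current gain K via a Lyapunov solve.
    Acl = A - B @ K
    P = lyap(Acl, Q + K.T @ R @ K)
    # Policy improvement: the optimal gain for the value function P.
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

print(np.round(K, 4))
```

Each iteration replaces the quadratically nonlinear Riccati equation by a linear Lyapunov equation, mirroring how policy iteration replaces the nonlinear HJB equation by a linear equation for a fixed policy.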
