Thesis pre-defense (prelectura de tesis)
 
Tuesday, February 23, at 10:30
 
 
“Control in moving interfaces and deep learning”
 
Advisor: Enrique Zuazua Iriondo
 
 
Abstract: 

This thesis brings forth several contributions to the controllability theory of free boundary problems, to the turnpike property for nonlinear optimal control problems, and to the modern theory of deep supervised learning. 

We set up a systematic methodology for the exact controllability of free boundary problems governed by diffusive partial differential equations to specific, possibly nontrivial targets, combining a careful study of the linearized problem with fixed-point arguments. We distinguish problems wherein the linearization is controllable either by spectral techniques for deriving the needed observability inequality (e.g., when controlling the one-dimensional porous medium equation to its self-similar Barenblatt trajectory) or by a combination of Carleman inequalities with compactness arguments (in the context of a free boundary problem for the one-dimensional viscous Burgers equation, where steering the free boundary is seen as a finite-dimensional constraint on the control).
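
For orientation, a minimal sketch of the first setting, in a standard normalization (the boundary conditions and control configuration used in the thesis may differ): the one-dimensional porous medium equation is

\[ \partial_t y = \partial_{xx}\!\left(y^m\right), \qquad m > 1, \]

and its self-similar Barenblatt trajectory takes the form

\[ B(x,t) = t^{-k}\left( C - \frac{k(m-1)}{2m}\,\frac{x^2}{t^{2k}} \right)_{\!+}^{\frac{1}{m-1}}, \qquad k = \frac{1}{m+1}, \]

where C > 0 fixes the mass and (·)_+ denotes the positive part. The solution has compact support, and the endpoints of the support of B(·, t) are precisely the moving free boundaries one aims to steer.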

We present a new proof of the turnpike property for nonlinear optimal control problems when the running target is a stationary solution of the free dynamics. By using sub-optimal quasi-turnpike trajectories (via a controllability assumption) and a bootstrap argument, and bypassing any analysis of the optimality system or linearization techniques, we are able to address finite-dimensional, control-affine systems with globally Lipschitz nonlinearities.
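
For context, the exponential turnpike property referred to here can be stated schematically as follows (constants and norms depend on the precise setting): for the optimal pair (y_T, u_T) on [0, T] and the steady optimal pair (ȳ, ū),

\[ \left\| y_T(t) - \bar{y} \right\| + \left\| u_T(t) - \bar{u} \right\| \;\le\; C \left( e^{-\mu t} + e^{-\mu (T-t)} \right) \qquad \text{for all } t \in [0, T], \]

with C, μ > 0 independent of the horizon T; that is, the optimal trajectory remains exponentially close to the steady state except near the two endpoints.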

Following the continuous-time, neural ODE formulation of supervised machine learning, we also propose an augmented supervised learning problem, obtained by adding an artificial regularization term on the state trajectory over the entire time horizon. Applying the turnpike results presented above, we obtain an exponential rate of decay in time for the training error and for the optimal parameters, which yields an improved estimate for the depth required to reach almost perfect training accuracy. We also discuss the appearance of sparsity patterns for L1-regularized learning problems.
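
Schematically (a hedged sketch: the precise dynamics, output map P, nonlinearity σ, and weight λ are illustrative choices, not necessarily those of the thesis), the augmented problem reads

\[ \min_{u = (w, b)} \; \frac{1}{N} \sum_{i=1}^{N} \mathrm{loss}\big(P x_i(T), y_i\big) \;+\; \int_0^T \| u(t) \|^2 \, dt \;+\; \lambda \int_0^T \frac{1}{N} \sum_{i=1}^{N} \big\| P x_i(t) - y_i \big\|^2 \, dt, \]

subject to the neural ODE \( \dot{x}_i(t) = \sigma\big(w(t) x_i(t) + b(t)\big), \; x_i(0) = \vec{x}_i \), where the last integral is the artificial regularization of the state trajectory over the whole horizon [0, T].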
Numerical experiments are shown to confirm these findings.
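
As a companion to such experiments, here is a minimal, hypothetical PyTorch sketch (not the thesis code) of the discretized problem: a forward-Euler neural ODE trained with a terminal loss, an added running cost tracking the targets along the whole trajectory, and an L1 penalty on the parameters to encourage sparsity. The architecture, toy data, and penalty weights lam_track and lam_l1 are illustrative assumptions.

    # Minimal sketch (illustrative, not the thesis code): forward-Euler
    # discretization of a neural ODE, with an artificial running cost on
    # the state trajectory and an L1 penalty on the layer parameters.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    dim, depth, dt = 2, 20, 0.1              # state dim, time steps (layers), step size
    X = torch.randn(100, dim)                # toy inputs (assumed data)
    Y = (X[:, 0] > 0).long()                 # toy binary labels

    # One set of parameters per time step: x_{k+1} = x_k + dt * tanh(W_k x_k + b_k)
    layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
    readout = nn.Linear(dim, 2)              # output map P
    opt = torch.optim.Adam(list(layers.parameters()) + list(readout.parameters()), lr=1e-2)

    lam_track, lam_l1 = 1e-2, 1e-3           # penalty weights (assumed values)
    targets = nn.functional.one_hot(Y, 2).float()

    for step in range(200):
        opt.zero_grad()
        x = X
        track = 0.0
        for layer in layers:                 # explicit Euler steps of the neural ODE
            x = x + dt * torch.tanh(layer(x))
            # running cost: track the targets along the entire trajectory
            track = track + dt * ((readout(x) - targets) ** 2).mean()
        loss = nn.functional.cross_entropy(readout(x), Y)   # terminal training error
        l1 = sum(p.abs().sum() for p in layers.parameters())  # sparsity-promoting penalty
        (loss + lam_track * track + lam_l1 * l1).backward()
        opt.step()

Under the turnpike heuristic described above, one would expect the trained trajectories (and the per-layer parameters) to settle exponentially fast onto a near-stationary regime as the depth grows.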

 
