Machine learning in Madrid
Monday, 22 February 2020, 12–13h
Speaker: Borjan Geshkovski (UAM)
Title: The interplay of control theory and deep learning
Microsoft Teams (https://teams.microsoft.com/l/meetup-join/19%3a2ae25c9f15ac485fbea44e8090110f50%40thread.tacv2/1613658855281?context=%7b%22Tid%22%3a%22fc6602ef-8e88-4f1d-a206-e14a3bc19af2%22%2c%22Oid%22%3a%22deefb9b8-fab2-49ff-b6cf-f18c553b6fe0%22%7d) or contact the organizers for an invite
Abstract: We will mainly focus on the behavior of this problem as the final time horizon is increased, a fact that can be interpreted, in the neural network setting, as increasing the number of layers. We show qualitative and quantitative estimates of the convergence to zero training error, depending on the functional to be minimized.
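The layers-as-time interpretation mentioned above can be illustrated with a minimal sketch (not the speaker's code; the vector field and step size are hypothetical placeholders): a residual update x_{k+1} = x_k + dt·f(x_k) is the forward-Euler discretization of the ODE x'(t) = f(x(t)), so at a fixed step size dt, enlarging the time horizon T adds Euler steps, i.e. layers, with depth L = T/dt.

```python
import math

def f(x):
    # Hypothetical autonomous vector field of one "layer" (tanh activation).
    return math.tanh(x)

def resnet_forward(x0, T, dt):
    """Run L = T/dt residual (forward-Euler) updates x_{k+1} = x_k + dt*f(x_k)."""
    L = int(round(T / dt))
    x = x0
    for _ in range(L):
        x = x + dt * f(x)
    return x, L

# Same step size, longer horizon => proportionally deeper network.
_, depth_short = resnet_forward(0.5, T=1.0, dt=0.1)
_, depth_long = resnet_forward(0.5, T=5.0, dt=0.1)
print(depth_short, depth_long)  # 10 50
```

In this reading, the large-time asymptotics studied in the talk correspond to the deep-layer limit of the discrete network.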
References:
[1] Benning, M., Celledoni, E., Ehrhardt, M. J., Owren, B., and Schönlieb, C.-B. (2019). Deep learning as optimal control problems: Models and numerical methods. Journal of Computational Dynamics, 6(2):171.
[2] Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K. (2018). Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pages 6571–6583.
[3] E, W. (2017). A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11.
[4] Esteve, C., Geshkovski, B., Pighin, D., and Zuazua, E. (2020). Large-time asymptotics in deep learning. arXiv preprint arXiv:2008.02491.
[5] Haber, E. and Ruthotto, L. (2017). Stable architectures for deep neural networks. Inverse Problems, 34(1):014004.