GERAD Seminar

Regularization and Robustness in Reinforcement Learning


October 23, 2023, 11:00 a.m. to 12:00 p.m.

Esther Derman, MILA, Canada


Presentation on YouTube.

Robust Markov decision processes (MDPs) aim to handle changing or partially known system dynamics. Solving them typically requires robust optimization methods, which significantly increase computational complexity and limit scalability in both learning and planning. Regularized MDPs, on the other hand, offer more stable policy learning without worsening time complexity, yet they generally do not account for uncertainty in the model dynamics. In this talk, I will show how robust MDPs can be learned with an appropriate choice of regularization, reducing planning and learning in robust MDPs to their regularized counterparts.
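The following LaTeX sketch illustrates the kind of robustness-to-regularization reduction the abstract refers to. It assumes, purely as an illustration and not necessarily the setting of the talk, reward uncertainty only, restricted to an s-rectangular Euclidean ball of radius \alpha_s around a nominal reward \bar r, with a known transition kernel P.

% Robust policy evaluation under the ball-shaped reward-uncertainty set
% R_s = { \bar r_s + r'_s : \|r'_s\|_2 \le \alpha_s } (illustrative assumption).
\begin{align*}
v^{\pi}_{\mathcal{R}}(s)
  &= \min_{r_s \in \mathcal{R}_s} \sum_{a} \pi(a \mid s)
     \Big[ r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, v^{\pi}_{\mathcal{R}}(s') \Big] \\
  % The inner minimum has a closed form: the adversary subtracts \alpha_s times
  % the dual (here Euclidean) norm of the policy at state s.
  &= \sum_{a} \pi(a \mid s)\, \bar r(s,a)
     \;-\; \alpha_s \,\lVert \pi(\cdot \mid s) \rVert_2
     \;+\; \gamma \sum_{a} \pi(a \mid s) \sum_{s'} P(s' \mid s, a)\, v^{\pi}_{\mathcal{R}}(s').
\end{align*}
% The second line is an ordinary (non-robust) Bellman evaluation with a policy-norm
% regularizer -\alpha_s \|\pi(\cdot \mid s)\|_2, so robust evaluation reduces to
% regularized evaluation.

In this sketch the worst-case adversary acts as a norm penalty on the policy, which is what lets robust planning inherit the stability and time complexity of regularized planning; uncertainty in the transitions would yield an analogous, value-dependent penalty term.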

Erick Delage, organizer
Pierre-Luc Bacon, organizer

Location

Hybrid activity at GERAD
Zoom and Room 4488
Pavillon André-Aisenstadt
Campus de l'Université de Montréal
2920, chemin de la Tour

Montréal, Québec H3T 1J4
Canada
