Informal Systems Seminar (ISS)

Practicable robust Markov decision processes


14 Apr 2017, 11:00 – 12:00

Huan Xu, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, United States

Markov decision processes (MDPs) are a standard modeling tool for sequential decision making in dynamic and stochastic environments. When the model parameters are subject to uncertainty, the "optimal" strategy obtained from an MDP can significantly under-perform the model's prediction. To address this, robust MDPs, based on worst-case analysis, have been developed. However, several restrictions of the robust MDP model prevent its practical success, and I will address them in this talk.

The first restriction of the standard robust MDP is that its modeling of uncertainty is inflexible and can lead to conservative solutions. In particular, it requires the uncertainty set to be "rectangular", i.e., a Cartesian product of per-state uncertainty sets. To lift this assumption, we propose an uncertainty model, which we call "k-rectangular", that generalizes the concept of rectangularity, and we show that the resulting problem can be solved efficiently via state augmentation.

The second restriction is that the model does not account for learning, i.e., how to adapt the model efficiently to reduce the uncertainty. To address this, we devise an algorithm, inspired by reinforcement learning, that, without knowing the true uncertainty model, adapts its level of protection to the uncertainty and, in the long run, performs as well as the minimax policy would if the true uncertainty model were known. Indeed, the algorithm achieves regret bounds similar to those of standard MDPs in which no parameter is adversarial, showing that robust learning can handle uncertainty in MDPs at virtually no extra cost.
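The rectangularity assumption in the abstract can be made concrete with a small numerical sketch. The code below runs a robust Bellman iteration on a toy two-state MDP, where each state-action pair carries its own independent L1 uncertainty ball around the nominal transition distribution; this state-wise product structure is exactly what makes the set rectangular. All numbers, function names, and the choice of an L1 uncertainty model are illustrative assumptions, not the construction from the talk.

```python
import numpy as np

def worst_case_expectation(p_nom, v, eps):
    """Minimize p @ v over {p in simplex : ||p - p_nom||_1 <= eps}.
    Greedy closed form: moving delta mass between two states costs
    2*delta in L1 norm, so up to eps/2 mass is shifted from the
    highest-value next states to the single lowest-value one."""
    p = p_nom.copy()
    budget = eps / 2.0
    worst = np.argmin(v)
    for s in np.argsort(v)[::-1]:          # highest-value states first
        if s == worst or budget <= 0:
            continue
        take = min(p[s], budget)
        p[s] -= take
        p[worst] += take
        budget -= take
    return p @ v

def robust_value_iteration(P, R, gamma, eps, iters=500):
    """Rectangular robust Bellman iteration: the adversary picks the
    worst transition law independently for each (state, action) pair."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.array([[R[s, a] + gamma * worst_case_expectation(P[s, a], V, eps)
                       for a in range(n_actions)] for s in range(n_states)])
        V = Q.max(axis=1)
    return V

# Hypothetical two-state, two-action MDP (numbers chosen for illustration)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # P[s, a] = nominal next-state law
R = np.array([[1.0, 0.0], [0.0, 2.0]])     # R[s, a] = immediate reward

V_nominal = robust_value_iteration(P, R, gamma=0.9, eps=0.0)
V_robust = robust_value_iteration(P, R, gamma=0.9, eps=0.2)
print(V_nominal, V_robust)  # worst-case values never exceed nominal values
```

Setting `eps=0` recovers standard value iteration, so the gap between the two printed vectors measures the price of robustness; the conservatism that k-rectangularity is designed to reduce comes from letting the adversary act independently at every state.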

Peter E. Caines, organizer
Aditya Mahajan, organizer
Ali Pakniyat, organizer


Room MC 437
McConnell Building
McGill University
3480 rue University, Montréal, QC H3A 0E9, Canada
