The analysis of Markov Decision Processes (MDPs) often involves proving that a threshold policy is optimal. The "structured value iteration" technique, described e.g. in Puterman's book, makes this possible. When continuous-time MDPs have unbounded transition rates, however, they cannot be uniformized, and the technique no longer applies. In this talk, we explain how recent results of Guo and Hernández-Lerma, and of Blok and Spieksma, make it possible to handle this situation. As an illustration, we apply their truncation and smoothing technique to the optimal control of a single-server queueing system.
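As background for the talk, the following is a minimal sketch of value iteration on a uniformized, truncated admission-control M/M/1 queue, showing the threshold structure of the optimal policy emerge numerically. All parameters (rates, costs, truncation level) are hypothetical choices for illustration, not taken from the results discussed in the talk.

```python
# Value iteration for a truncated admission-control M/M/1 queue
# (uniformized, discounted). Illustrative parameters only.
import numpy as np

lam, mu = 0.5, 0.7        # arrival / service rates (bounded, so uniformizable)
gamma = 0.95              # discount factor per uniformized stage
c, R = 1.0, 10.0          # holding cost per customer, rejection penalty
N = 50                    # truncation level for the state space {0, ..., N}

Lam = lam + mu            # uniformization constant
p_arr, p_srv = lam / Lam, mu / Lam

V = np.zeros(N + 1)
for _ in range(2000):
    V_new = np.empty_like(V)
    for x in range(N + 1):
        down = max(x - 1, 0)
        if x < N:
            # on an arrival, choose the cheaper of admitting and rejecting
            arr_val = min(V[x + 1], R + V[x])
        else:
            arr_val = R + V[x]  # at the truncation level, arrivals are rejected
        V_new[x] = c * x + gamma * (p_arr * arr_val + p_srv * V[down])
    V = V_new

policy = ["admit" if x < N and V[x + 1] <= R + V[x] else "reject"
          for x in range(N + 1)]
threshold = next(x for x in range(N + 1) if policy[x] == "reject")
print("first rejecting state (threshold level):", threshold)
```

Because the optimal value function of this problem is convex in the queue length, the admit/reject decision switches exactly once, so the computed policy is of threshold type: admit below the printed level, reject at and above it.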
Everyone is welcome!