Stochastic control theory studies optimal multi-stage decision making under uncertainty. It provides analytic and computational tools (e.g., dynamic programming and
\(Q\)-learning) and also identifies models (e.g., Markov decision processes and linear quadratic Gaussian models) that are intuitively appealing and yield tractable results. For these results to hold, the assumption of centralized control (i.e., the existence of a single controller/decision-maker that has perfect recall of its past observations and control actions) is essential. However, this assumption is not satisfied in many modern applications, such as networked control systems, communication and queuing networks, and sensor networks. Such systems require control decisions to be made by different controllers/decision-makers with access to different information. Optimal decision making in such decentralized stochastic control systems requires a new conceptual approach.
In this talk I will first present an overview of decentralized stochastic control and explain why decentralized control problems are more difficult than centralized control problems. Then, I will present a new solution approach that overcomes these conceptual difficulties. This approach is based on common information (or common knowledge, in the sense of Aumann) between the decision makers and allows us to resolve long-standing open conjectures in decentralized control. This is joint work with Ashutosh Nayyar and Demosthenis Teneketzis.