We revisit the notion of feedback as a ubiquitous policy structure in systems and control theory, and argue that a feedback law purely in the state is not necessarily optimal. By studying examples of deterministic and stochastic hybrid systems, we observe that a general control policy depends on both past information and future predictions about the process; hence, a reduction to a feedback structure jointly in the state and a "dual" variable requires the pair to summarize both the past and the future.
Viewing two fundamental results in optimal control theory from a duality perspective, we show that a duality relationship holds in the Minimum Principle (MP) between the finite-dimensional spaces of state variations and of co-state (adjoint) processes, and in Dynamic Programming (DP) between the infinite-dimensional spaces of measures and of continuous functions. We present new versions of the MP and DP for deterministic and stochastic hybrid systems and illustrate their implementation on analytic and practical examples. For numerical solution methodologies, we study three classes of systems: (a) generally nonlinear, (b) linear quadratic, and (c) polynomial, where, for the latter case in particular, we can employ sum-of-squares techniques.