Increasingly complex energy systems are turning attention toward model-free control approaches such as reinforcement learning (RL). This work proposes novel RL-based energy management approaches for scheduling the operation of controllable devices within an electric network. The proposed approaches provide a tool for efficiently solving multi-dimensional, multi-objective, and partially observable power system problems. The novelty of this work is threefold. First, we implement a hierarchical RL-based control strategy to solve a typical energy scheduling problem. Second, multi-agent reinforcement learning (MARL) is put forward to efficiently coordinate different units with no communication burden. Third, a control strategy that merges hierarchical RL and MARL theory is proposed, yielding a robust control framework that can handle complex power system problems. A comparative performance evaluation of the proposed control approaches is also presented. Experimental results for two typical energy dispatch scenarios show the effectiveness of the proposed approaches.
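The paper's methods are not reproduced here, but the MARL idea of coordinating units with no communication burden can be illustrated with a minimal sketch: two independent Q-learning agents, each scheduling one controllable device into a time slot, learn to prefer cheap slots and to spread load across them without exchanging any messages. All prices, the congestion penalty, and the hyperparameters below are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

PRICES = [0.30, 0.10, 0.10, 0.28]   # hypothetical price per time slot
N_SLOTS = len(PRICES)
CONGESTION_PENALTY = 0.15           # extra cost if both devices pick the same slot

def reward(slot_a, slot_b, my_slot):
    """Negative cost: slot price plus a congestion penalty on a shared slot."""
    cost = PRICES[my_slot]
    if slot_a == slot_b:
        cost += CONGESTION_PENALTY
    return -cost

# One Q-table per agent; each learns from its own reward only (no communication).
q = [[0.0] * N_SLOTS for _ in range(2)]
alpha, eps = 0.1, 0.2               # learning rate, exploration rate

def pick(agent):
    """Epsilon-greedy slot choice from the agent's own Q-table."""
    if random.random() < eps:
        return random.randrange(N_SLOTS)
    return q[agent].index(max(q[agent]))

for episode in range(5000):
    a0, a1 = pick(0), pick(1)
    q[0][a0] += alpha * (reward(a0, a1, a0) - q[0][a0])
    q[1][a1] += alpha * (reward(a0, a1, a1) - q[1][a1])

greedy = [qi.index(max(qi)) for qi in q]
print("learned slots:", greedy)
```

Because sharing a cheap slot costs more than splitting across the two cheap slots, the independent learners settle into complementary schedules, a small-scale instance of decentralized coordination emerging purely from individual rewards.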
Published August 2021, 20 pages
G2146.pdf (600 KB)