Synergy Unleashed: The Convergence of Control, Game Theory, and Reinforcement Learning in Cyber-Physical Systems
Date:
With the integration of information technology into plant automation (traditional control systems), cyber-physical system (CPS) technology has been widely deployed across critical infrastructures. Designing resilient and robust controllers that withstand adversarial disturbances and interventions has remained a formidable challenge for years.
This seminar consists of two parts. In the first part, I will focus on the confluence of game theory and control theory in the design of resilient and robust controllers for CPSs. Specifically, within a multi-agent, hierarchical CPS framework, we identify the physical-physical (inter-agent), physical-cyber (cross-layer), and cyber-cyber (inter-agent) coupling relationships. For each relationship, we design a resilient and robust controller from a game-theoretic perspective. Rather than assuming known dynamics, we further integrate reinforcement learning (RL) to solve the resulting optimal control problem in an online manner. The linear quadratic control problem serves as an illustrative example.
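For reference, the standard discrete-time linear quadratic regulator (LQR) formulation underlying this example is given below; the notation $A, B, Q, R, K, P$ follows the textbook convention, and the exact setup considered in the talk may differ.

\[
\min_{\{u_t\}} \; \sum_{t=0}^{\infty} \left( x_t^\top Q x_t + u_t^\top R u_t \right)
\quad \text{s.t.} \quad x_{t+1} = A x_t + B u_t,
\]
whose optimal policy is the linear state feedback $u_t = -K x_t$ with
\[
K = (R + B^\top P B)^{-1} B^\top P A,
\qquad
P = A^\top P A - A^\top P B (R + B^\top P B)^{-1} B^\top P A + Q,
\]
where the second equation is the discrete algebraic Riccati equation. When $A$ and $B$ are unknown, RL schemes learn $K$ (or the associated value/Q-function) online from trajectory data.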
In the second part, I will cover both model-based and model-free RL. For model-based RL, we propose a “grey-box” framework that exploits structural information to improve sample efficiency. We establish this advantage theoretically by deriving a probably approximately correct (PAC) bound on the total number of samples required. For model-free RL, we focus on maximum entropy reinforcement learning (MERL) and develop a scalable extension of soft actor-critic (SAC) tailored to discrete action spaces by introducing the Gumbel-Softmax trick. Competitive results on several OpenAI Gym benchmarks demonstrate the effectiveness of our approach.
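To make the discrete-action construction concrete, the following is a minimal sketch (in PyTorch) of how the Gumbel-Softmax trick yields differentiable samples of discrete actions in a SAC-style actor update. It is not the speaker's implementation; the network sizes, names, and toy dimensions are illustrative assumptions.

# Minimal sketch: Gumbel-Softmax sampling of discrete actions in a SAC-style actor.
# Assumes PyTorch; architecture and dimensions are illustrative, not from the talk.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteActor(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),        # outputs action logits
        )

    def forward(self, obs: torch.Tensor, tau: float = 1.0):
        logits = self.net(obs)
        # Differentiable one-hot sample via the straight-through Gumbel-Softmax trick.
        action_onehot = F.gumbel_softmax(logits, tau=tau, hard=True)
        log_probs = F.log_softmax(logits, dim=-1)
        return action_onehot, log_probs

# Toy usage: entropy-regularized actor objective with a stand-in critic.
obs_dim, n_actions, alpha = 4, 3, 0.2
actor = DiscreteActor(obs_dim, n_actions)
q_net = nn.Linear(obs_dim, n_actions)            # placeholder for Q(s, .)
obs = torch.randn(8, obs_dim)                    # batch of 8 observations

action, log_probs = actor(obs)
q_values = q_net(obs)
# Maximize Q plus policy entropy for the sampled action (soft policy improvement style).
actor_loss = -(action * (q_values - alpha * log_probs)).sum(dim=-1).mean()
actor_loss.backward()                            # gradients flow through the sampled action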