Frank L. Lewis
Talk, May 16: Reinforcement Learning Structures for Real-Time Optimal Control and Differential Games
This talk will discuss new adaptive control structures for learning online the solutions to optimal control problems and multi-player differential games. Techniques from reinforcement learning (RL) are used to design a new family of adaptive controllers, based on actor-critic mechanisms, that converge in real time to optimal control and game-theoretic solutions. Continuous-time (CT) systems are considered. Application of reinforcement learning to CT systems has been hampered because the system Hamiltonian contains the full system dynamics. Using our technique known as Integral Reinforcement Learning (IRL), we develop reinforcement learning methods that do not require knowledge of the system drift dynamics. In the linear quadratic (LQ) case, the new RL adaptive control algorithms learn the solution of the Riccati equation by adaptation along the system motion trajectories. In the case of nonlinear systems with general performance measures, the algorithms learn the (approximate smooth local) solutions of Hamilton-Jacobi (HJ) or Hamilton-Jacobi-Isaacs (HJI) equations. New algorithms will be presented for solving online the nonzero-sum and zero-sum multi-player games. Each player maintains two adaptive learning structures, a critic network and an actor network. The result is an adaptive control system that learns from the interplay of agents in a game to deliver true online gaming behavior. A new Experience Replay technique is given that reuses past data for present learning and significantly speeds up convergence. New methods of off-policy learning allow learning of optimal solutions without knowing any dynamic information. New RL methods for optimal tracking allow solution of the Output Regulator Equations for heterogeneous multi-agent systems.
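As a concrete illustration of the LQ case, the following is a minimal numerical sketch (not the implementation from the talk) of IRL-style policy iteration on a hypothetical two-state plant. The drift matrix A is used only to simulate measured trajectory data, mirroring the model-free character of IRL; the learner identifies the value matrix P from integral reinforcements by least squares, and only the input matrix B enters the policy update. All numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state plant (illustrative). A is used ONLY to simulate
# measured trajectories; the IRL learner itself never uses A.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def rollout(x0, K, T=0.5, dt=1e-4):
    """Simulate dx = (A - BK)x by Euler integration and accumulate the
    integral reinforcement  r = ∫_0^T x'(Q + K'RK)x dt  along the way."""
    x, r, QK = x0.copy(), 0.0, Q + K.T @ R @ K
    for _ in range(int(T / dt)):
        r += float(x @ QK @ x) * dt
        x = x + (A @ x - B @ (K @ x)) * dt
    return x, r

def phi(x):   # quadratic basis: x'Px = phi(x) . [p11, p12, p22]
    return np.array([x[0]**2, 2 * x[0] * x[1], x[1]**2])

K = np.zeros((1, 2))   # K = 0 is admissible here since this A is stable
starts = [np.array(s, float) for s in ([1, 0], [0, 1], [1, 1], [1, -1])]
for _ in range(8):                 # policy iteration
    rows, rhs = [], []
    for x0 in starts:              # several starts give excitation
        xT, r = rollout(x0, K)
        rows.append(phi(x0) - phi(xT))   # V(x0) - V(xT) = integral reward
        rhs.append(r)
    p = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    P = np.array([[p[0], p[1]], [p[1], p[2]]])
    K = np.linalg.solve(R, B.T @ P)      # policy improvement: K = R^{-1} B' P

P_are = solve_continuous_are(A, B, Q, R)
print(np.max(np.abs(P - P_are)))  # small: learned P matches the ARE solution
```

The comparison against `scipy.linalg.solve_continuous_are` shows the data-driven iteration converging to the Riccati solution up to integration accuracy, even though A never appears in the learning equations.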
Talk, May 17:
Cooperative Control for Multi-Agent Systems in AC Microgrid Distributed Generation
With aging power distribution systems and new opportunities for renewable energy generation, the smart grid and the microgrid are becoming increasingly important. A microgrid allows the addition of local loads and local distributed generation (DG), including wind power, solar, hydroelectric, fuel cells, and micro-turbines. The microgrid holds out the hope of scalable growth in power distribution systems through distributed coordination of local loads and local DG, so as not to overload existing power grid generation and transmission capabilities. Sample microgrids are smart buildings, isolated rural systems, and offshore drilling systems. A microgrid takes power from the main power grid when needed and can provide power back to the main power system when there is excess local generation. When connected to the main distribution grid, the microgrid receives a frequency reference from grid synchronous generators. Standard operating procedures call for disconnecting the microgrid from the main power grid when disturbances occur. On disconnection, or in islanded mode, the absence of rotating synchronous generation leads to a loss of frequency references. After islanding, it is necessary to return microgrid DG frequencies to synchronization, provide voltage support, and ensure power quality. In this talk we develop a new method of synchronization for cooperative systems linked by a communication graph topology, based on a novel distributed feedback linearization technique. This cooperative feedback linearization approach allows for different agent dynamics, such as occur in the DGs of a microgrid. It is shown that the new cooperative protocol design method provides frequency synchronization, voltage synchronization, and distributed power balancing in a microgrid after a grid-disconnection islanding event.
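To illustrate the synchronization idea, the sketch below assumes (as a simplification, not the talk's actual design) that after feedback linearization each DG's frequency behaves as a single integrator driven by a distributed protocol. Four hypothetical units on a sparse directed graph recover the nominal 50 Hz reference, which only one "pinned" unit measures directly; all numerical values are illustrative.

```python
import numpy as np

# Sparse directed communication graph: adj[i, j] = 1 if unit i hears unit j.
adj = np.array([[0, 0, 0, 0],
                [1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0]], dtype=float)
pin = np.array([1.0, 0.0, 0.0, 0.0])    # only DG 0 receives the reference
w_ref = 50.0                             # Hz, nominal frequency
w = np.array([49.2, 50.6, 49.8, 50.9])   # post-islanding DG frequencies
c, dt = 5.0, 1e-3                        # coupling gain, integration step

for _ in range(5000):
    # Local neighborhood tracking error: each unit uses neighbor data only.
    e = adj @ w - adj.sum(axis=1) * w + pin * (w_ref - w)
    w = w + c * e * dt                   # integrator dynamics: w_i' = c * e_i

print(w)  # all frequencies settle at the 50 Hz reference
```

Each unit here communicates with at most one neighbor, in contrast to a centralized secondary controller that must reach every DG; the same neighborhood-error structure can be written for voltage states.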
The distributed nature of the cooperative feedback linearization method is shown to lead to sparse communication topologies that are better suited to microgrid control, more reliable, and more economical than standard centralized secondary power control methods.