Alessandro Giuseppi, Danilo Menegatti, Antonio Pietrabissa. Enhancing Federated Reinforcement Learning: A Consensus-Based Approach for Both Homogeneous and Heterogeneous Agents[J]. Machine Intelligence Research.

Enhancing Federated Reinforcement Learning: A Consensus-Based Approach for Both Homogeneous and Heterogeneous Agents

  • Federated Reinforcement Learning (FedRL) is an emerging paradigm in data-driven control in which a group of decision-making agents cooperates to learn optimal control laws through a distributed reinforcement learning procedure, under the constraint of not sharing any process/control data. In the typical FedRL setting, a centralized entity orchestrates the distributed training process. To remove this design limitation, this work proposes a fully decentralized approach that leverages results from consensus theory. The proposed algorithm, named FedRLCon, can deal with: (i) scenarios with homogeneous agents, which can share their actor and, possibly, critic networks; (ii) scenarios with heterogeneous agents, which may share only their critic networks. The proposed algorithms are validated on two scenarios, a resource management problem in a communication network and a smart grid case study. Our tests show that practically no performance is lost due to decentralization.
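The decentralized aggregation described above can be illustrated with a minimal consensus sketch. This is not the paper's FedRLCon algorithm, only a generic example of the underlying mechanism it is assumed to build on: after each local training round, every agent repeatedly averages its network parameters with those of its neighbors using a doubly stochastic mixing matrix, so all agents converge to the network-wide average without a central server. All names and the ring topology below are illustrative assumptions.

```python
import numpy as np

def consensus_round(weights, W):
    """One consensus step: each agent replaces its parameter vector with a
    convex combination of its neighbors' vectors (rows of W sum to 1)."""
    return W @ weights  # weights has shape (n_agents, n_params)

n_agents, n_params = 4, 3
rng = np.random.default_rng(0)
# Each row holds one agent's (flattened) actor/critic parameters.
weights = rng.normal(size=(n_agents, n_params))

# Metropolis-style mixing matrix for a ring topology: each agent weighs
# itself and its two neighbors by 1/3. The matrix is doubly stochastic,
# so the average of the parameters is preserved at every step.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

target = weights.mean(axis=0)  # the value all agents should agree on

for _ in range(100):
    weights = consensus_round(weights, W)

# Every agent's parameters are now (numerically) the global average.
print(np.allclose(weights, target))  # True
```

For a connected communication graph with positive self-weights, this iteration converges geometrically to the average, which is what allows the central aggregation server of standard FedRL to be removed.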
