Using Decentralized Learning to Reduce Communication in Column-Partitioned, Multi-Agent Systems

Mar 1, 2021 · Zachary R. Atkins · 1 min read

Abstract: Multi-agent systems introduce new challenges to distributed computing, such as unreliability and a need for data localization, that require robust decentralized learning methods capable of minimizing communication overhead. In multi-agent systems, each agent typically stores local, time-series data columns that must be communicated to other agents in order to apply traditional, row-partitioned distributed learning algorithms; such data sharing is infeasible in unreliable or communication-delayed environments. State-of-the-art, column-partitioned decentralized learning methods avoid these communication bottlenecks by aggregating approximate local optimization results among neighbors over a sparsely connected network topology. In this talk, we will focus on the recent advances and outstanding challenges of decentralized learning for column-partitioned multi-agent systems.
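As a rough illustration of the neighbor-aggregation idea, here is a minimal sketch of decentralized gradient descent with gossip averaging over a ring topology, where each agent takes a local step on its own data and then mixes iterates only with its two neighbors. The least-squares objective, problem sizes, and all variable names are illustrative assumptions, not the specific method discussed in the talk.

```python
import numpy as np

# Sketch: decentralized gradient descent + gossip averaging on a ring.
# Each agent i privately holds (A_i, b_i) and a local iterate x_i; no
# agent ever ships its raw data, only its current iterate to neighbors.
# (All data and dimensions here are hypothetical.)

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Private local objectives f_i(x) = ||A_i x - b_i||^2 sharing a common x_true.
x_true = np.ones(dim)
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [A_i @ x_true + 0.1 * rng.standard_normal(10) for A_i in A]

def global_obj(x):
    """Sum of all agents' private objectives, evaluated at a single point."""
    return sum(np.sum((A[i] @ x - b[i]) ** 2) for i in range(n_agents))

# Doubly stochastic mixing matrix for a ring: each agent communicates
# with only two neighbors, instead of all-to-all data sharing.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))  # one local iterate per agent
step = 0.01
obj_init = global_obj(x.mean(axis=0))
for _ in range(500):
    # Local step: each agent descends its own objective using only local data.
    grads = np.stack([2 * A[i].T @ (A[i] @ x[i] - b[i])
                      for i in range(n_agents)])
    # Gossip step: mix the updated iterates with ring neighbors only.
    x = W @ (x - step * grads)

obj_final = global_obj(x.mean(axis=0))
spread = np.max(np.abs(x - x.mean(axis=0)))  # disagreement across agents
print(obj_init, obj_final, spread)
```

After a few hundred rounds the average iterate drives the global objective down while the gossip step shrinks disagreement between agents, despite each round exchanging only neighbor-to-neighbor messages on the sparse ring.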

See slides linked above for more info!