IEEE Transactions on Automatic Control, Vol. 63, No. 11, pp. 3809-3824, 2018
A Bregman Splitting Scheme for Distributed Optimization Over Networks
We consider distributed optimization problems in which a group of agents collaboratively seeks the global optimum through a peer-to-peer communication network. The problem arises in various application areas, such as resource allocation, sensor fusion, and distributed learning. We present a general algorithmic framework based on the Bregman method and operator splitting, which allows us to easily recover most existing distributed algorithms. Under this framework, we propose a general and efficient distributed algorithm, distributed forward-backward Bregman splitting (D-FBBS), to simultaneously solve the above primal problem as well as its dual. The proposed algorithm allows agents to communicate asynchronously and thus lends itself to stochastic networks. Over fixed networks, this algorithm is shown to have close connections with some well-known existing algorithms; however, it differs from them in general, notably in its effectiveness in handling stochastic networks. Under proper assumptions, we establish a nonergodic convergence rate of O(1/k) in terms of fixed-point residuals over fixed networks, both for D-FBBS and for its more computationally efficient inexact version (ID-FBBS), as well as an ergodic convergence rate of O(1/k) for D-FBBS over stochastic networks. We also apply the proposed algorithm to sensor fusion problems to demonstrate its superior performance compared to existing methods.
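The forward-backward iteration underlying splitting schemes of this kind can be illustrated, in a centralized setting, by the classical proximal gradient method: a gradient ("forward") step on the smooth term followed by a proximal ("backward") step on the nonsmooth term. The sketch below is a generic toy example on an l1-regularized least-squares problem, not the distributed D-FBBS algorithm itself; all names and the step size are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (the "backward" step for an l1 term).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(grad_f, prox_g, x0, step, iters=500):
    # Generic forward-backward iteration:
    #   x <- prox_g(x - step * grad_f(x), step)
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy problem: min_x 0.5*||x - b||^2 + lam*||x||_1,
# whose closed-form solution is soft_threshold(b, lam).
b = np.array([3.0, -0.2, 1.5])
lam = 1.0
x_star = forward_backward(lambda x: x - b,                          # gradient of the smooth term
                          lambda v, t: soft_threshold(v, t * lam),  # prox of the l1 term
                          np.zeros(3), step=0.5)
# x_star converges to soft_threshold(b, lam) = [2.0, 0.0, 0.5]
```

In the distributed setting studied in the paper, an analogous forward-backward structure is applied over a network, with agents exchanging local iterates with their neighbors rather than computing a centralized gradient.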