TY - JOUR

T1 - Distributed Gradient Methods for Convex Machine Learning Problems in Networks

T2 - Distributed Optimization

AU - Nedic, Angelia

N1 - Funding Information:
This work was partially funded by the National Science Foundation under grant CCF-1717391 and the U.S. Navy under award N000141612245.
Publisher Copyright:
© 1991-2012 IEEE.

PY - 2020/5

Y1 - 2020/5

N2 - This article provides an overview of distributed gradient methods for solving convex machine learning problems of the form min_{x ∈ R^n} (1/m) Σ_{i=1}^{m} f_i(x) in a system consisting of m agents that are embedded in a communication network. Each agent i has a collection of data captured by its privately known objective function f_i(x). The distributed algorithms considered here obey two simple rules: privately known agent functions f_i(x) cannot be disclosed to any other agent in the network, and every agent is aware of the local connectivity structure of the network, i.e., it knows its one-hop neighbors only. While obeying these two rules, the distributed algorithms that agents execute should find a solution to the overall system problem with limited knowledge of the objective function and limited local communications. Given in this article is an overview of such algorithms that typically involve two update steps: a gradient step based on the agent's local objective function and a mixing step that essentially diffuses relevant information from one to all other agents in the network.

AB - This article provides an overview of distributed gradient methods for solving convex machine learning problems of the form min_{x ∈ R^n} (1/m) Σ_{i=1}^{m} f_i(x) in a system consisting of m agents that are embedded in a communication network. Each agent i has a collection of data captured by its privately known objective function f_i(x). The distributed algorithms considered here obey two simple rules: privately known agent functions f_i(x) cannot be disclosed to any other agent in the network, and every agent is aware of the local connectivity structure of the network, i.e., it knows its one-hop neighbors only. While obeying these two rules, the distributed algorithms that agents execute should find a solution to the overall system problem with limited knowledge of the objective function and limited local communications. Given in this article is an overview of such algorithms that typically involve two update steps: a gradient step based on the agent's local objective function and a mixing step that essentially diffuses relevant information from one to all other agents in the network.

UR - http://www.scopus.com/inward/record.url?scp=85084562386&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85084562386&partnerID=8YFLogxK

U2 - 10.1109/MSP.2020.2975210

DO - 10.1109/MSP.2020.2975210

M3 - Article

AN - SCOPUS:85084562386

VL - 37

SP - 92

EP - 101

JO - IEEE Signal Processing Magazine

JF - IEEE Signal Processing Magazine

SN - 1053-5888

IS - 3

M1 - 9084356

ER -