Accumulated Gradient Normalization

Joeri R. Hermans, Gerasimos Spanakis, Rico Möckel

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic


Abstract

This work addresses the instability of asynchronous data-parallel optimization. It does so by introducing a novel distributed optimizer that is able to efficiently optimize a centralized model under communication constraints. The optimizer achieves this by pushing a normalized sequence of first-order gradients to a parameter server. As a result, the magnitude of a worker delta is smaller than that of a raw accumulated gradient, while the delta provides a better direction towards a minimum than a single first-order gradient. This in turn forces possible implicit momentum fluctuations to be more aligned, since we assume that all workers contribute towards a single minimum. Because staleness in asynchrony induces (implicit) momentum, our approach mitigates the parameter staleness problem more effectively and achieves a better convergence rate than other optimizers such as asynchronous EASGD, which we show empirically.
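The worker-side update described in the abstract can be sketched as follows. This is a minimal illustration of one possible reading, not the authors' reference implementation: the callables pull_parameters, compute_gradient, and push_delta stand in for an unspecified parameter-server interface and local mini-batch gradient, and the step count and learning rate are hypothetical defaults.

```python
import numpy as np


def agn_worker_round(pull_parameters, compute_gradient, push_delta,
                     num_local_steps=15, learning_rate=0.01):
    """One communication round of a single worker (illustrative sketch).

    The worker pulls the central parameters, performs a sequence of local
    first-order steps, and pushes the accumulated gradient normalized by
    the number of local steps as its delta to the parameter server.
    """
    # Pull the current central model from the parameter server (assumed API).
    theta = pull_parameters()
    accumulated = np.zeros_like(theta)

    for _ in range(num_local_steps):
        # First-order gradient on a local mini-batch (assumed API).
        g = compute_gradient(theta)
        accumulated += g
        # Local exploration step.
        theta = theta - learning_rate * g

    # Normalize the accumulated gradient by the number of local steps:
    # the resulting delta has a smaller magnitude than the raw accumulated
    # gradient, and its averaged direction is a better step towards a
    # minimum than a single first-order gradient.
    delta = -learning_rate * (accumulated / num_local_steps)

    # Asynchronously commit the worker delta to the central model (assumed API).
    push_delta(delta)
```

In an asynchronous setting, each worker would repeat this round independently, with the parameter server applying incoming deltas as they arrive.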
Original language: English
Title of host publication: Proceedings of the 9th Asian Conference on Machine Learning
Editors: Min-Ling Zhang, Yung-Kyun Noh
Publisher: Proceedings of Machine Learning Research
Pages: 439-454
Number of pages: 16
Volume: 77
Publication status: Published - Nov 2017

Publication series

Series: Proceedings of Machine Learning Research
