NIPS Proceedings

Ji Liu

12 Papers

  • Communication Compression for Decentralized Training (2018)
  • Gradient Sparsification for Communication-Efficient Distributed Optimization (2018)
  • Stochastic Primal-Dual Method for Empirical Risk Minimization with O(1) Per-Iteration Complexity (2018)
  • Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent (2017)
  • Accelerating Stochastic Composition Optimization (2016)
  • A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order (2016)
  • Asynchronous Parallel Greedy Coordinate Descent (2016)
  • Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization (2015)
  • Exclusive Feature Learning on Arbitrary Structures via $\ell_{1,2}$-norm (2014)
  • An Approximate, Efficient LP Solver for LP Rounding (2013)
  • Regularized Off-Policy TD-Learning (2012)
  • Multi-Stage Dantzig Selector (2010)