NIPS Proceedings

Ji Liu

16 Papers

  • Efficient Smooth Non-Convex Stochastic Compositional Optimization via Stochastic Recursive Gradient Descent (2019)
  • Global Sparse Momentum SGD for Pruning Very Deep Neural Networks (2019)
  • LIIR: Learning Individual Intrinsic Reward in Multi-Agent Reinforcement Learning (2019)
  • Model Compression with Adversarial Robustness: A Unified Optimization Framework (2019)
  • Communication Compression for Decentralized Training (2018)
  • Gradient Sparsification for Communication-Efficient Distributed Optimization (2018)
  • Stochastic Primal-Dual Method for Empirical Risk Minimization with O(1) Per-Iteration Complexity (2018)
  • Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent (2017)
  • Accelerating Stochastic Composition Optimization (2016)
  • A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order (2016)
  • Asynchronous Parallel Greedy Coordinate Descent (2016)
  • Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization (2015)
  • Exclusive Feature Learning on Arbitrary Structures via $\ell_{1,2}$-norm (2014)
  • An Approximate, Efficient LP Solver for LP Rounding (2013)
  • Regularized Off-Policy TD-Learning (2012)
  • Multi-Stage Dantzig Selector (2010)