NIPS Proceedings

Sergey Levine

19 Papers

  • Data-Efficient Hierarchical Reinforcement Learning (2018)
  • Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models (2018)
  • Meta-Reinforcement Learning of Structured Exploration Strategies (2018)
  • Probabilistic Model-Agnostic Meta-Learning (2018)
  • Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition (2018)
  • Visual Memory for Robust Path Following (2018)
  • Visual Reinforcement Learning with Imagined Goals (2018)
  • Where Do You Think You're Going?: Inferring Beliefs about Dynamics from Behavior (2018)
  • EX2: Exploration with Exemplar Models for Deep Reinforcement Learning (2017)
  • Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning (2017)
  • Backprop KF: Learning Discriminative Deterministic State Estimators (2016)
  • Guided Policy Search via Approximate Mirror Descent (2016)
  • Learning to Poke by Poking: Experiential Learning of Intuitive Physics (2016)
  • Unsupervised Learning for Physical Interaction through Video Prediction (2016)
  • Value Iteration Networks (2016)
  • Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics (2014)
  • Variational Policy Search via Trajectory Optimization (2013)
  • Nonlinear Inverse Reinforcement Learning with Gaussian Processes (2011)
  • Feature Construction for Inverse Reinforcement Learning (2010)