NIPS Proceedings
Gergely Neu
6 Papers
Explore no more: Improved high-probability regret bounds for non-stochastic bandits (2015)
Efficient learning by implicit exploration in bandit problems with side observations (2014)
Exploiting easy data in online optimization (2014)
Online combinatorial optimization with stochastic decision sets and adversarial losses (2014)
Online learning in episodic Markovian decision processes by relative entropy policy search (2013)
Online Markov Decision Processes under Bandit Feedback (2010)