Offline Actor-Critic for Average Reward MDPs

William Powell, Jeongyeol Kwon, Qiaomin Xie, Hanbaek Lyu

Advances in Neural Information Processing Systems 38 (NeurIPS 2025) Main Conference Track

We study offline policy optimization for infinite-horizon average-reward Markov decision processes (MDPs) with large or infinite state spaces. Specifically, we propose a pessimistic actor-critic algorithm that uses a computationally efficient linear function class for value function estimation. At the core of our method is a critic that computes a pessimistic estimate of the average reward under the current policy, as well as the corresponding policy gradient, by solving a fixed-point Bellman equation, rather than a sequence of regression problems as in finite-horizon settings. This procedure reduces to solving a second-order cone program, which is computationally tractable. Our theoretical analysis rests on a weak partial data coverage assumption, which requires only that the offline data aligns well with the expected feature vector of a comparator policy. Under this condition, we show that our algorithm achieves the optimal sample complexity of $O(\varepsilon^{-2})$ for learning a near-optimal policy, up to model misspecification errors.
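To make the critic step concrete, below is a minimal sketch of how a pessimistic average-reward estimate can be posed as a second-order cone program, assuming linear features so that the average-reward Bellman equation $\rho + \phi(s,a)^\top \theta = r(s,a) + \mathbb{E}_{a' \sim \pi}[\phi(s',a')]^\top \theta$ is linear in $(\theta, \rho)$. The feature matrices, the confidence radius `beta`, and the least-squares confidence set are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np
import cvxpy as cp

# Hypothetical problem sizes and synthetic offline data.
d, n = 8, 500                          # feature dimension / number of transitions
rng = np.random.default_rng(0)
Phi = rng.normal(size=(n, d))          # stand-in for phi(s_i, a_i)
Phi_next = rng.normal(size=(n, d))     # stand-in for E_{a'~pi}[phi(s'_i, a')]
r = rng.uniform(size=n)                # stand-in rewards

# Stack the unknowns x = (theta, rho): the empirical Bellman residual is A x - r.
A = np.hstack([Phi - Phi_next, np.ones((n, 1))])
x_hat, *_ = np.linalg.lstsq(A, r, rcond=None)  # least-squares Bellman fit
beta = 0.5                             # confidence radius (placeholder value)

# Pessimistic critic as an SOCP: minimize rho over a second-order-cone
# confidence set around the least-squares solution.
x = cp.Variable(d + 1)
prob = cp.Problem(cp.Minimize(x[d]),
                  [cp.norm(A @ (x - x_hat), 2) <= beta])
prob.solve()                           # cvxpy dispatches to an SOCP solver
print(f"pessimistic average-reward estimate: {x.value[d]:.4f}")
```

The objective is linear and the single constraint is a second-order cone, so the whole program is an SOCP; the pessimism comes from reporting the smallest average reward consistent with the data-driven confidence set.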