Semi-infinitely Constrained Markov Decision Processes

Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track


Authors

Liangyu Zhang, Yang Peng, Wenhao Yang, Zhihua Zhang

Abstract

We propose a generalization of constrained Markov decision processes (CMDPs) that we call the \emph{semi-infinitely constrained Markov decision process} (SICMDP). In particular, an SICMDP model imposes a continuum of constraints rather than the finite number of constraints in ordinary CMDPs. We also devise a reinforcement learning algorithm for SICMDPs that we call SI-CRL. We first transform the reinforcement learning problem into a linear semi-infinite programming (LSIP) problem and then solve it with the dual exchange method from the LSIP literature. To the best of our knowledge, we are the first to apply tools from semi-infinite programming (SIP) to reinforcement learning problems. We present a theoretical analysis of SI-CRL, establishing its sample complexity and iteration complexity. We also present extensive numerical examples to illustrate the SICMDP model and validate the SI-CRL algorithm.
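To make the LSIP reduction concrete, below is a minimal sketch of a generic exchange-type method for a linear semi-infinite program: solve the LP restricted to a finite set of constraints, add the most violated constraint from the (here, discretized) index set, and repeat. The problem data, grid resolution, tolerances, and names such as `exchange_method_lsip`, `a_fn`, and `b_fn` are illustrative assumptions, not the authors' SI-CRL implementation; in SI-CRL the decision variable would correspond to an occupancy measure and the continuum of constraints to the semi-infinite index set.

```python
# Sketch of an exchange-type method for a linear semi-infinite program (LSIP).
# Illustrative only; not the paper's dual exchange implementation.
import numpy as np
from scipy.optimize import linprog

def exchange_method_lsip(c, a_fn, b_fn, y_grid, bounds, tol=1e-6, max_iter=50):
    """Maximize c^T x subject to a_fn(y)^T x <= b_fn(y) for all y in an index
    set, approximated here by the finite grid `y_grid`."""
    active = [y_grid[0]]  # start from an arbitrary single constraint
    x = None
    for _ in range(max_iter):
        A = np.array([a_fn(y) for y in active])
        b = np.array([b_fn(y) for y in active])
        # linprog minimizes, so negate the objective to maximize c^T x.
        res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
        x = res.x
        # Find the constraint in the discretized index set that x violates most.
        violations = np.array([a_fn(y) @ x - b_fn(y) for y in y_grid])
        worst = int(np.argmax(violations))
        if violations[worst] <= tol:
            break  # x is feasible (up to tol) for all sampled constraints
        active.append(y_grid[worst])  # exchange step: add the violated constraint
    return x, active

# Toy instance: maximize x1 + x2  s.t.  y*x1 + (1-y)*x2 <= 1 for all y in [0, 1].
c = np.array([1.0, 1.0])
a_fn = lambda y: np.array([y, 1.0 - y])
b_fn = lambda y: 1.0
y_grid = np.linspace(0.0, 1.0, 201)
x_opt, active = exchange_method_lsip(c, a_fn, b_fn, y_grid, [(0, 10), (0, 10)])
print(x_opt, len(active))  # expected optimum near (1, 1) with few active constraints
```

In this toy instance only the constraints at y = 0 and y = 1 are binding, so the exchange loop terminates after adding a handful of constraints; the true method in the paper searches the continuous index set for the most violated constraint rather than a fixed grid.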