TD_gamma: Re-evaluating Complex Backups in Temporal Difference Learning
We show that the lambda-return target used in the TD(lambda) family of temporal-difference algorithms is the maximum-likelihood estimator for a specific model of how the variance of an n-step return estimate increases with n. We introduce the gamma-return estimator, an alternative target based on a more accurate variance model, which defines the TD_gamma family of complex-backup temporal-difference learning algorithms. We first derive TD_gamma, the gamma-return equivalent of the original TD(lambda) algorithm; it eliminates the lambda parameter, but it can only perform updates at the end of an episode and requires time and space proportional to the episode length. We then derive a second algorithm, TD_gamma(C), which is incremental and online but requires C times the time and memory of TD(lambda), where C is a capacity parameter. We show that TD_gamma outperforms TD(lambda) for every setting of lambda on 4 of 5 benchmark domains, and that TD_gamma(C) performs as well as or better than TD_gamma for intermediate settings of C.