arxiv:2205.14211

KL-Entropy-Regularized RL with a Generative Model is Minimax Optimal

Published on May 27, 2022

AI-generated summary

Mirror descent value iteration with Kullback-Leibler divergence and entropy regularization is shown to be nearly minimax-optimal in model-free reinforcement learning without variance reduction.

Abstract

In this work, we consider and analyze the sample complexity of model-free reinforcement learning with a generative model. In particular, we analyze mirror descent value iteration (MDVI) by Geist et al. (2019) and Vieillard et al. (2020a), which uses the Kullback-Leibler divergence and entropy regularization in its value and policy updates. Our analysis shows that MDVI is nearly minimax-optimal for finding an ε-optimal policy when ε is sufficiently small. This is the first theoretical result demonstrating that a simple model-free algorithm without variance reduction can be nearly minimax-optimal in this setting.
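
To make the KL-entropy-regularized updates concrete, below is a minimal sketch of a tabular, MDVI-style loop with a generative model. It is an illustrative assumption throughout: the hyperparameter names (tau for the KL term, kappa for the entropy term), the sampling interface sample_next_state, the reward callable, and the sample counts are not from the paper, and the sketch omits the specific choices and analysis behind the minimax-optimality result.

```python
# Illustrative sketch only: a tabular KL-entropy-regularized value-iteration
# (MDVI-style) loop with a generative model. Hyperparameters, interfaces, and
# sample sizes are assumptions for illustration, not the paper's algorithm.
import numpy as np

def mdvi_sketch(sample_next_state, reward, num_states, num_actions,
                gamma=0.99, tau=0.1, kappa=0.1, num_iters=100, num_samples=10):
    """Each iteration solves the regularized greedy step
        argmax_pi <pi, Q> - tau*KL(pi || pi_prev) + kappa*H(pi),
    whose closed form is
        pi_new(a|s) ∝ pi_prev(a|s)**(tau/(tau+kappa)) * exp(Q(s,a)/(tau+kappa)),
    then estimates Q with a sampled Bellman backup from the generative model."""
    Q = np.zeros((num_states, num_actions))
    pi = np.full((num_states, num_actions), 1.0 / num_actions)
    for _ in range(num_iters):
        # Regularized greedy step (closed form of the KL + entropy problem).
        logits = (tau / (tau + kappa)) * np.log(pi + 1e-12) + Q / (tau + kappa)
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        pi_new = np.exp(logits)
        pi_new /= pi_new.sum(axis=1, keepdims=True)
        # Regularized state value: expected Q minus KL and entropy penalties.
        v = np.sum(pi_new * (Q - tau * np.log(pi_new / (pi + 1e-12))
                             - kappa * np.log(pi_new + 1e-12)), axis=1)
        # Model-free evaluation: sampled Bellman backup using next states
        # drawn from the generative model for every state-action pair.
        for s in range(num_states):
            for a in range(num_actions):
                nxt = [sample_next_state(s, a) for _ in range(num_samples)]
                Q[s, a] = reward(s, a) + gamma * np.mean([v[s2] for s2 in nxt])
        pi = pi_new
    return pi, Q
```

The sketch only illustrates where the KL term (anchoring to the previous policy) and the entropy term (softening the greedy step) enter the updates; the paper's contribution is showing that such a scheme, without any variance-reduction machinery, attains near-minimax-optimal sample complexity.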
