arXiv:2103.01312

UCB Momentum Q-learning: Correcting the bias without forgetting

Published on Mar 1, 2021
Authors: Pierre Ménard, Omar Darwiche Domingues, Xuedong Shang, Michal Valko
Abstract

UCBMQ, a Q-learning algorithm with a momentum term and optimistic exploration bonuses, achieves a regret bound for tabular episodic reinforcement learning that matches the known lower bound for large enough T.

AI-generated summary

We propose UCBMQ, Upper Confidence Bound Momentum Q-learning, a new algorithm for reinforcement learning in tabular, possibly stage-dependent, episodic Markov decision processes. UCBMQ is based on Q-learning, to which we add a momentum term, and relies on the principle of optimism in the face of uncertainty to deal with exploration. The new technical ingredient of UCBMQ is the use of momentum to correct the bias that Q-learning suffers from while, at the same time, limiting its impact on the second-order term of the regret. For UCBMQ, we are able to guarantee a regret of at most O(√(H^3 S A T) + H^4 S A), where H is the length of an episode, S the number of states, A the number of actions, T the number of episodes, and terms poly-logarithmic in S, A, H, T are ignored. Notably, UCBMQ is the first algorithm that simultaneously matches the lower bound of Ω(√(H^3 S A T)) for large enough T and has a second-order term (with respect to T) that scales only linearly with the number of states S.
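To make the algorithmic idea concrete, here is a minimal Python sketch of an optimistic tabular Q-learning loop with a momentum-style correction in the spirit of UCBMQ. The toy environment, the bonus constant c_bonus, the momentum coefficient eta_mom, and the exact form of the momentum correction are illustrative assumptions, not the paper's precise update rule or constants.

```python
import numpy as np

# Illustrative sketch of a UCBMQ-style update for a tabular, episodic MDP.
# The bonus scale, momentum coefficient, and momentum form below are
# placeholder choices, not the exact quantities derived in the paper.

H, S, A = 5, 4, 2          # horizon, number of states, number of actions
c_bonus = 1.0              # exploration-bonus scale (placeholder)
eta_mom = 0.1              # momentum coefficient (placeholder)

Q = np.full((H, S, A), H, dtype=float)   # optimistic initialization
V = np.zeros((H + 1, S))                 # V[H] = 0 by convention
M = np.zeros((H, S, A))                  # momentum accumulator (placeholder form)
N = np.zeros((H, S, A))                  # visit counts

def ucbmq_style_update(h, s, a, r, s_next):
    """One optimistic Q-learning step with a momentum correction term."""
    N[h, s, a] += 1
    n = N[h, s, a]
    lr = (H + 1) / (H + n)                       # standard optimistic-QL rate
    bonus = c_bonus * np.sqrt(H**3 * np.log(S * A * H * n) / n)
    target = r + V[h + 1, s_next]
    # Momentum: exponential average of past TD targets (a placeholder form
    # of the bias-correction idea; the paper derives a specific correction).
    M[h, s, a] = (1 - eta_mom) * M[h, s, a] + eta_mom * target
    Q[h, s, a] = (1 - lr) * Q[h, s, a] + lr * (
        target + eta_mom * (target - M[h, s, a]) + bonus
    )
    Q[h, s, a] = min(Q[h, s, a], H - h)          # clip to the max possible return
    V[h, s] = Q[h, s].max()

# Toy rollout on a random MDP, acting greedily w.r.t. the optimistic Q.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))       # transition kernel P[s, a] over S
R = rng.uniform(size=(S, A))                     # mean rewards in [0, 1]
for episode in range(200):
    s = 0
    for h in range(H):
        a = int(Q[h, s].argmax())
        s_next = int(rng.choice(S, p=P[s, a]))
        ucbmq_style_update(h, s, a, R[s, a], s_next)
        s = s_next
```

The H-dependent learning rate (H + 1)/(H + n) and the optimistic initialization follow standard practice for optimistic Q-learning; the momentum term here only gestures at how past targets can be averaged to reduce the bias of the running Q-estimate.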
