Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/172650
Title: Reinforcement Learning for Non-Stationary Markov Decision Processes: The Blessing of (More) Optimism
Authors: CHEUNG WANG CHI 
Simchi-Levi, David
Zhu, Ruihao
Issue Date: 14-Aug-2020
Citation: CHEUNG WANG CHI, Simchi-Levi, David, Zhu, Ruihao (2020-08-14). Reinforcement Learning for Non-Stationary Markov Decision Processes: The Blessing of (More) Optimism. International Conference on Machine Learning. ScholarBank@NUS Repository.
Abstract: We consider un-discounted reinforcement learning (RL) in Markov decision processes (MDPs) under drifting non-stationarity, i.e., both the reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets. We first develop the Sliding Window Upper-Confidence bound for Reinforcement Learning with Confidence Widening (SWUCRL2-CW) algorithm, and establish its dynamic regret bound when the variation budgets are known. In addition, we propose the Bandit-over-Reinforcement Learning (BORL) algorithm to adaptively tune the SWUCRL2-CW algorithm to achieve the same dynamic regret bound, but in a parameter-free manner, i.e., without knowing the variation budgets. Notably, learning non-stationary MDPs via the conventional optimistic exploration technique presents a unique challenge absent in existing (non-stationary) bandit learning settings. We overcome the challenge by a novel confidence widening technique that incorporates additional optimism.
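The two ingredients named in the abstract, sliding-window estimation and confidence widening, can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration, not the authors' implementation: it estimates a next-state distribution for one (state, action) pair from only the last W observations, and enlarges a standard Hoeffding-style confidence radius by an extra term eta, the "additional optimism" the abstract refers to. All names (sliding_window_estimate, widened_radius, eta, window, delta) are illustrative assumptions; the paper's actual choices of window length and widening term are tuned to the variation budgets.

```python
import numpy as np

def sliding_window_estimate(transitions, t, window, n_states):
    """Empirical next-state distribution for one (s, a) pair, using only
    observations from the last `window` rounds (sliding-window estimation).

    transitions -- list of (time_step, next_state) pairs observed for (s, a)
    Returns the estimated distribution and the number of samples used.
    """
    recent = [s_next for (step, s_next) in transitions
              if t - window < step <= t]
    n = len(recent)
    if n == 0:
        # No recent data: fall back to the uniform distribution.
        return np.ones(n_states) / n_states, 0
    counts = np.bincount(recent, minlength=n_states)
    return counts / n, n

def widened_radius(n_visits, n_states, window, delta, eta):
    """L1 confidence radius around the sliding-window transition estimate.

    The first term is a standard Hoeffding-style radius that shrinks with
    the number of in-window visits; `eta` is the extra confidence-widening
    term that injects additional optimism to cope with drifting kernels.
    """
    n = max(n_visits, 1)
    base = np.sqrt(2 * n_states * np.log(2 * n_states * window / delta) / n)
    return base + eta
```

An optimistic planner would then search over all transition kernels within widened_radius of the sliding-window estimate; setting eta = 0 collapses this to a standard sliding-window, UCRL-style confidence set without widening.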
Source Title: International Conference on Machine Learning
URI: https://scholarbank.nus.edu.sg/handle/10635/172650
Appears in Collections: Staff Publications; Elements

Files in This Item:
File: paper.pdf
Description: Accepted version
Size: 596.56 kB
Format: Adobe PDF
Access Settings: OPEN
Version: Post-print
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.