Managing Uncertainty within Value Function Approximation in Reinforcement Learning
Online Access
https://hal-supelec.archives-ouvertes.fr/hal-00554398
Abstract
The dilemma between exploration and exploitation is an important topic in reinforcement learning (RL). Most successful approaches to this problem rely on some uncertainty information about the values estimated during learning. On the other hand, scalability is a known weakness of RL algorithms, and value function approximation has become a major topic of research. Both problems arise in real-world applications, yet few approaches allow approximating the value function while maintaining uncertainty information about the estimates. Even fewer use this information to address the exploration/exploitation dilemma. In this paper, we show how such uncertainty information can be derived from a Kalman-based Temporal Differences (KTD) framework. An active learning scheme for a second-order value-iteration-like algorithm (named KTD-Q) is proposed. We also suggest adaptations of several existing exploration/exploitation schemes. This is a first step towards a global handling of continuous state and action spaces together with the exploration/exploitation dilemma.
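To illustrate the idea described in the abstract, here is a minimal sketch of how parameter uncertainty maintained by a Kalman-style filter over value-function weights could drive exploration. It assumes a linear parameterization Q(s, a) = phi(s, a)^T theta with a Gaussian posterior N(theta_hat, P); the names `phi`, `features`, and `kappa` are illustrative assumptions, not the paper's actual API or algorithm.

```python
import numpy as np

def q_value_and_std(phi_sa, theta_hat, P):
    """Mean and standard deviation of Q(s, a) induced by the Gaussian
    parameter posterior N(theta_hat, P) under a linear parameterization."""
    mean = phi_sa @ theta_hat
    var = phi_sa @ P @ phi_sa  # covariance propagated through the features
    return mean, np.sqrt(max(var, 0.0))

def select_action(state, actions, features, theta_hat, P, kappa=1.0):
    """Bonus-based (optimistic) action selection: argmax of mean + kappa * std,
    so actions with uncertain value estimates are explored more often."""
    scores = []
    for a in actions:
        phi_sa = features(state, a)
        mean, std = q_value_and_std(phi_sa, theta_hat, P)
        scores.append(mean + kappa * std)
    return actions[int(np.argmax(scores))]
```

In such a scheme, the covariance matrix P would come from the filter's update step, and `kappa` trades off exploitation of the current estimate against exploration of uncertain state-action pairs.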
Date
2010-05-16
Type
info:eu-repo/semantics/conferenceObject
Identifier
oai:HAL:hal-00554398v1
hal-00554398
https://hal-supelec.archives-ouvertes.fr/hal-00554398