Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/81083
Title: Reinforcement learning of multiple tasks using a hierarchical CMAC architecture
Authors: Tham, C.K.
Keywords: Context-dependent learning; Modular neural networks
Issue Date: Oct-1995
Citation: Tham, C.K. (1995-10). Reinforcement learning of multiple tasks using a hierarchical CMAC architecture. Robotics and Autonomous Systems 15 (4): 247-274. ScholarBank@NUS Repository.
Abstract: A reinforcement learning approach based on modular function approximation is presented. Cerebellar Model Articulation Controller (CMAC) networks are incorporated in the Hierarchical Mixtures of Experts (HME) architecture, and the resulting architecture is referred to as HME-CMAC. A computationally efficient on-line learning algorithm based on the Expectation-Maximization (EM) algorithm is proposed in order to achieve fast function approximation with the HME-CMAC architecture. The Compositional Q-Learning (CQ-L) framework establishes the relationship between the Q-values of composite tasks and those of the elemental tasks in their decomposition. This framework is extended here to allow rewards in non-terminal states. An implementation of the extended CQ-L framework using the HME-CMAC architecture is used to perform task decomposition in a realistic simulation of a two-link manipulator having non-linear dynamics. The context-dependent reinforcement learning achieved by adopting this approach has advantages over monolithic approaches in terms of speed of learning, storage requirements and the ability to cope with changing goals. © 1995.
Source Title: Robotics and Autonomous Systems
URI: http://scholarbank.nus.edu.sg/handle/10635/81083
ISSN: 0921-8890
Appears in Collections: Staff Publications
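The CMAC networks referred to in the abstract are tile-coding function approximators: the input is quantized by several overlapping, offset tilings, the output is the sum of one weight per tiling, and training uses the LMS (delta) rule. A minimal sketch under standard CMAC conventions follows; the class and parameter names are illustrative and not taken from the paper, which combines such networks inside the HME architecture rather than using one in isolation.

```python
import numpy as np

class CMAC:
    """Minimal CMAC (tile-coding) function approximator.

    Illustrative sketch, not the paper's implementation: n_tilings
    overlapping tilings cover [0, 1]^d, each shifted by a fixed offset;
    the output is the sum of the single active weight in each tiling,
    trained with the LMS (delta) rule.
    """

    def __init__(self, dims, n_tilings=8, tiles_per_dim=10, lr=0.1):
        self.dims = dims
        self.n_tilings = n_tilings
        self.tiles = tiles_per_dim
        self.lr = lr / n_tilings  # spread the step over the active tiles
        # one weight table per tiling; +1 tile absorbs the offset overhang
        self.w = np.zeros((n_tilings,) + (tiles_per_dim + 1,) * dims)
        # fractional offsets shift each tiling relative to the others
        self.offsets = np.linspace(0.0, 1.0, n_tilings, endpoint=False)

    def _active(self, x):
        """Yield the index of the one active tile per tiling for x in [0,1]^d."""
        for t in range(self.n_tilings):
            idx = tuple(int(xi * self.tiles + self.offsets[t]) for xi in x)
            yield (t,) + idx

    def predict(self, x):
        return sum(self.w[i] for i in self._active(x))

    def update(self, x, target):
        err = target - self.predict(x)
        for i in self._active(x):
            self.w[i] += self.lr * err
        return err

# Train on a simple 1-D target to show the approximator at work.
net = CMAC(dims=1)
rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.random(1)
    net.update(x, float(np.sin(2 * np.pi * x[0])))
```

Because each input activates only `n_tilings` weights, both prediction and update cost is independent of the total table size, which is the storage and speed property the abstract contrasts with monolithic approximators.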
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.