|Title:||Hard constrained semi-Markov decision processes|
|Authors:||Yeow, W.-L.; Tham, C.-K.; Wong, W.-C.|
|Issue Date:||2006|
|Citation:||Yeow, W.-L., Tham, C.-K., Wong, W.-C. (2006). Hard constrained semi-Markov decision processes. Proceedings of the National Conference on Artificial Intelligence 1: 549-554. ScholarBank@NUS Repository.|
|Abstract:||In multiple-criteria Markov Decision Processes (MDPs), where several costs are incurred at every decision point, current methods minimise the expected primary cost criterion while constraining the expectations of the other cost criteria to some critical values. However, systems often face hard constraints, where the cost criteria must never exceed the critical values at any time, rather than constraints on the expected costs. For example, a resource-limited sensor network no longer functions once its energy is depleted. Based on the semi-MDP (sMDP) model, we study the hard-constrained (HC) problem in continuous time, state and action spaces, with respect to both finite and infinite horizons and various cost criteria. We show that the HCsMDP problem is NP-hard and that every HCsMDP has an equivalent discrete-time MDP. Hence, classical methods such as reinforcement learning can solve HCsMDPs. Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.|
|Source Title:||Proceedings of the National Conference on Artificial Intelligence|
|URI:||http://scholarbank.nus.edu.sg/handle/10635/70453|
|ISBN:||1577352815|
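The equivalence claimed in the abstract can be illustrated with a standard state-augmentation idea: fold the remaining cost budget into the state, so that any action whose cost would breach the budget is simply infeasible, and the hard constraint is enforced exactly rather than in expectation. The sketch below is a minimal illustration of that idea on a toy discrete MDP solved by value iteration; it is not the paper's construction, and the transition model, rewards, costs, and budget discretisation are all invented for the example.

```python
# Toy illustration (assumed, not from the paper): a hard-constrained MDP
# recast as an ordinary MDP by augmenting the state with the remaining
# secondary-cost budget, then solved with plain value iteration.

STATES, ACTIONS = range(3), range(2)
BUDGET = 4          # hard cap on accumulated secondary cost
GAMMA = 0.95

# Deterministic toy dynamics: transition[s][a] -> next state.
transition = {0: {0: 1, 1: 2}, 1: {0: 0, 1: 2}, 2: {0: 2, 1: 0}}
reward     = {0: {0: 1.0, 1: 2.0}, 1: {0: 0.5, 1: 3.0}, 2: {0: 0.0, 1: 1.0}}
cost       = {0: {0: 1, 1: 2}, 1: {0: 1, 1: 3}, 2: {0: 0, 1: 2}}

def feasible(s, b):
    """Actions whose cost fits in the remaining budget b (hard constraint)."""
    return [a for a in ACTIONS if cost[s][a] <= b]

# Value iteration over the augmented (state, remaining-budget) space.
V = {(s, b): 0.0 for s in STATES for b in range(BUDGET + 1)}
for _ in range(500):
    for s, b in V:
        acts = feasible(s, b)
        if not acts:
            V[(s, b)] = 0.0        # no feasible action: absorbing, value 0
            continue
        V[(s, b)] = max(
            reward[s][a] + GAMMA * V[(transition[s][a], b - cost[s][a])]
            for a in acts
        )

# Greedy policy on the augmented MDP; it can never violate the budget,
# because over-budget actions were never feasible in the first place.
policy = {}
for s, b in V:
    acts = feasible(s, b)
    if acts:
        policy[(s, b)] = max(
            acts,
            key=lambda a: reward[s][a]
            + GAMMA * V[(transition[s][a], b - cost[s][a])],
        )
```

Because the augmented state space is finite here, any classical tabular method (value iteration as above, or Q-learning) applies directly, which mirrors the abstract's point that the equivalent discrete-time MDP is amenable to standard reinforcement learning.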
|Appears in Collections:||Staff Publications|