Please use this identifier to cite or link to this item: https://doi.org/10.1109/ICON.2006.302614
dc.title: Decentralized dynamic workflow scheduling for grid computing using reinforcement learning
dc.contributor.author: Yao, J.
dc.contributor.author: Tham, C.-K.
dc.contributor.author: Ng, K.-Y.
dc.date.accessioned: 2014-06-19T03:04:40Z
dc.date.available: 2014-06-19T03:04:40Z
dc.date.issued: 2006
dc.identifier.citation: Yao, J., Tham, C.-K., Ng, K.-Y. (2006). Decentralized dynamic workflow scheduling for grid computing using reinforcement learning. Proceedings - 2006 IEEE International Conference on Networks, ICON 2006 - Networking-Challenges and Frontiers, 1: 90-95. ScholarBank@NUS Repository. https://doi.org/10.1109/ICON.2006.302614
dc.identifier.isbn: 0780397460
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/69788
dc.description.abstract: The workflow enactment engine is used to execute grid workflows on heterogeneous and distributed resources. However, efficient workflow scheduling algorithms that assign resources to tasks in a dynamic environment have not been carefully investigated in the literature. In this paper, a decentralized dynamic workflow scheduling algorithm using reinforcement learning (DDWS-RL) is proposed. An on-line, model-free RL algorithm is embedded, together with an RL agent, into a decentralized just-in-time scheduling system. During task execution, the decentralized task schedulers query information from the RL agent, assign resources to tasks, and update the RL agent. To evaluate the efficiency of the DDWS-RL algorithm, a real grid network is built: Globus Toolkit 2.4 is installed as the middleware for the testbed, and the workflow enactment engine and the DDWS-RL algorithm are implemented in Java. The experimental results show that the proposed DDWS-RL algorithm converges to the theoretical shortest execution time of the workflow in the homogeneous environment. In the heterogeneous environment, the algorithm reaches a sub-optimal execution time due to the self-interest of the independent learners applied in the task schedulers. © 2006 IEEE.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/ICON.2006.302614
dc.source: Scopus
dc.subject: DDWS-RL
dc.subject: Grid computing
dc.subject: Reinforcement learning
dc.subject: Workflow enactment engine
dc.subject: Workflow scheduling
dc.type: Conference Paper
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi: 10.1109/ICON.2006.302614
dc.description.sourcetitle: Proceedings - 2006 IEEE International Conference on Networks, ICON 2006 - Networking-Challenges and Frontiers
dc.description.volume: 1
dc.description.page: 90-95
dc.identifier.isiut: NOT_IN_WOS
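
Illustrative sketch: The abstract above describes task schedulers that query an RL agent, assign a resource to a task, and then update the agent with the observed outcome. The minimal Java sketch below is not the paper's implementation; it assumes a bandit-style, model-free value update with epsilon-greedy selection, which is one common on-line RL approach, and all class and method names (RlAgent, selectResource, update) and the numeric values are hypothetical.

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch: a scheduler-side RL agent that learns which resource
// executes a given task fastest. Q-values map (task, resource) pairs to the
// estimated negative execution time, so shorter runs score higher.
public class RlSchedulerSketch {
    static class RlAgent {
        private final Map<String, Double> q = new HashMap<>();
        private final double alpha = 0.1;   // learning rate (assumed value)
        private final double epsilon = 0.1; // exploration probability (assumed value)
        private final Random rng = new Random(42);

        private String key(String task, String resource) { return task + "|" + resource; }

        // Epsilon-greedy selection over the available resources.
        String selectResource(String task, String[] resources) {
            if (rng.nextDouble() < epsilon) {
                return resources[rng.nextInt(resources.length)];
            }
            String best = resources[0];
            for (String r : resources) {
                if (q.getOrDefault(key(task, r), 0.0) > q.getOrDefault(key(task, best), 0.0)) {
                    best = r;
                }
            }
            return best;
        }

        // One-step update with reward = negative observed execution time.
        void update(String task, String resource, double executionTimeSec) {
            String k = key(task, resource);
            double old = q.getOrDefault(k, 0.0);
            double reward = -executionTimeSec;
            q.put(k, old + alpha * (reward - old));
        }
    }

    public static void main(String[] args) {
        RlAgent agent = new RlAgent();
        String[] resources = {"nodeA", "nodeB"};
        // Simulated episodes: nodeB is faster for task "t1", so its value rises.
        for (int episode = 0; episode < 100; episode++) {
            String chosen = agent.selectResource("t1", resources);
            double execTime = chosen.equals("nodeB") ? 5.0 : 9.0; // fake measurements
            agent.update("t1", chosen, execTime);
        }
        System.out.println("Preferred resource for t1: " + agent.selectResource("t1", resources));
    }
}

Note that each scheduler here maximizes only its own estimate; the abstract attributes the sub-optimal result in the heterogeneous environment to exactly this self-interest of independent learners.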
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

SCOPUS™ Citations: 8 (checked on Aug 12, 2022)
Page view(s): 114 (checked on Aug 4, 2022)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.