Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/69742
DC Field: Value
dc.title: Convergence analysis of discrete time recurrent neural networks for linear variational inequality problem
dc.contributor.author: Tang, H.J.
dc.contributor.author: Tan, K.C.
dc.contributor.author: Zhang, Y.
dc.date.accessioned: 2014-06-19T03:04:08Z
dc.date.available: 2014-06-19T03:04:08Z
dc.date.issued: 2002
dc.identifier.citation: Tang, H.J., Tan, K.C., Zhang, Y. (2002). Convergence analysis of discrete time recurrent neural networks for linear variational inequality problem. Proceedings of the International Joint Conference on Neural Networks 3: 2470-2475. ScholarBank@NUS Repository.
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/69742
dc.description.abstract: In this paper, we study the convergence of a class of discrete-time recurrent neural networks for solving the Linear Variational Inequality Problem (LVIP), which has important applications in engineering and economics. We prove not only the network's exponential convergence when the underlying matrix is positive definite, but also its global convergence when the matrix is positive semidefinite. Conditions that guarantee the convergence of the network are derived, and comprehensive examples are discussed and simulated to illustrate the results.
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.sourcetitle: Proceedings of the International Joint Conference on Neural Networks
dc.description.volume: 3
dc.description.page: 2470-2475
dc.description.coden: 85OFA
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
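For readers unfamiliar with the problem class described in the abstract, the sketch below illustrates one common discrete-time, projection-type iteration for a box-constrained linear variational inequality, x_{k+1} = P_Omega(x_k - alpha*(M x_k + q)). This is a minimal illustration only: since the full text is not attached to this record, the specific update rule, step size alpha, box bounds, and the example data (M, q) are assumptions for illustration and are not taken from the paper itself.

```python
# Hypothetical sketch of a projection-type discrete-time recurrent network
# for the Linear Variational Inequality Problem (LVIP):
#   find x* in the box Omega = [lower, upper] such that
#   (x - x*)^T (M x* + q) >= 0 for all x in Omega.
# The update rule, step size, bounds, and example data are illustrative
# assumptions, not taken from the paper.
import numpy as np

def project_box(x, lower, upper):
    """Project x onto the box constraint set Omega = [lower, upper]."""
    return np.minimum(np.maximum(x, lower), upper)

def solve_lvip(M, q, lower, upper, alpha=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x_{k+1} = P_Omega(x_k - alpha*(M x_k + q)) until the update is small."""
    x = np.zeros_like(q, dtype=float)
    for _ in range(max_iter):
        x_next = project_box(x - alpha * (M @ x + q), lower, upper)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example with a positive definite M, the case where the abstract reports
# exponential convergence of the network.
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 1.0])
lower, upper = np.full(2, -5.0), np.full(2, 5.0)
print(solve_lvip(M, q, lower, upper))
```

Consistent with the abstract, an iteration of this kind is typically expected to converge quickly when M is positive definite, and to remain globally convergent under suitable step-size conditions when M is only positive semidefinite.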