Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/69742
Title: Convergence analysis of discrete time recurrent neural networks for linear variational inequality problem
Authors: Tang, H.J.
Tan, K.C. 
Zhang, Y.
Issue Date: 2002
Citation: Tang, H.J., Tan, K.C., Zhang, Y. (2002). Convergence analysis of discrete time recurrent neural networks for linear variational inequality problem. Proceedings of the International Joint Conference on Neural Networks 3: 2470-2475. ScholarBank@NUS Repository.
Abstract: In this paper, we study the convergence of a class of discrete-time recurrent neural networks for solving the Linear Variational Inequality Problem (LVIP). The LVIP has important applications in engineering and economics. We prove not only the network's exponential convergence when the matrix is positive definite, but also its global convergence when the matrix is positive semidefinite. Conditions are derived to guarantee the convergence of the network. Comprehensive examples are discussed and simulated to illustrate the results.
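The record does not reproduce the paper's network equations, so the sketch below only illustrates the general setting described in the abstract: a standard projection-type discrete-time iteration for a box-constrained LVIP. The matrix M, vector q, box bounds, and step size alpha are illustrative assumptions, not values or formulations taken from the paper.

```python
import numpy as np

def solve_lvip(M, q, lower, upper, alpha=0.1, tol=1e-8, max_iter=10000):
    """Projection-type discrete-time iteration for the LVIP:
    find x* in the box [lower, upper] such that
    (x - x*)^T (M x* + q) >= 0 for all x in the box.
    A generic sketch, not the paper's exact network.
    """
    x = np.clip(np.zeros_like(q), lower, upper)       # start inside the feasible box
    for _ in range(max_iter):
        # One recurrent update: gradient-like step, then projection onto the box.
        x_next = np.clip(x - alpha * (M @ x + q), lower, upper)
        if np.linalg.norm(x_next - x) < tol:          # approximate fixed point reached
            return x_next
        x = x_next
    return x

# Illustrative example with a positive definite M (all values are assumptions).
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -2.0])
lower = np.array([0.0, 0.0])
upper = np.array([5.0, 5.0])
x_star = solve_lvip(M, q, lower, upper)
print("approximate LVIP solution:", x_star)
```

For a positive definite M and a sufficiently small step size, such an iteration contracts toward the unique solution; the positive semidefinite case generally requires additional conditions of the kind the paper derives.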
Source Title: Proceedings of the International Joint Conference on Neural Networks
URI: http://scholarbank.nus.edu.sg/handle/10635/69742
Appears in Collections: Staff Publications

Files in This Item: There are no files associated with this item.