Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/182253
Title: NEURAL NETS IN IDENTIFICATION AND CONTROL

Authors: FONG KIN FUI

Issue Date: 1996

Citation: FONG KIN FUI (1996). NEURAL NETS IN IDENTIFICATION AND CONTROL. ScholarBank@NUS Repository.

Abstract:

This thesis presents the role of neural networks in identification and control. Neural networks are viewed as general nonlinear models that can be readily incorporated into standard control schemes, serving either as models or as controllers for nonlinear dynamical systems. The core of any neural network strategy lies in its weight update algorithm; consequently, existing parameter estimation procedures and control configurations can be adapted to include neural networks. The research touches on three areas: system identification, optimal control and adaptive control. In system identification, neural networks are used as general models for learning nonlinear dynamic systems, while in both optimal and adaptive control they are used as general controller structures within their respective schemes.

In the first part of the thesis, a nonlinear optimal control problem incorporating a neural network regulator is formulated. A general gradient descent method similar to backpropagation is then derived as the optimisation procedure. This is shown to be a general formulation of the off-line neural network control strategy proposed by Nguyen & Widrow (1990). To demonstrate this, the control of an inverted pendulum using a neural network trained by this method is simulated, and the control of a simulated pH process for which only the output is measurable is also shown to be feasible.

In applications that require on-line estimation and control of the plant, such as adaptive control, the rate of convergence of the weight update algorithm becomes an important issue; with standard backpropagation, adaptation is too slow. To address this, a more rapid weight update algorithm for neural networks based on a modified recursive least squares (RLS) scheme is derived. Despite the nonlinearity of the neural network, the algorithm is shown to have stable and ultimately bounded parameter and approximation errors. The algorithm is, however, computationally demanding because of the large number of weights involved in typical applications. This motivates the formulation of the layered least squares (LLS) algorithm, which is simpler and computationally less intensive, and analysis shows that the LLS has properties similar to those of the RLS. To demonstrate their effectiveness, the modelling of a highly nonlinear plant using both algorithms is simulated. The improved rate of convergence of the LLS and RLS over general gradient descent methods prompted their application in adaptive control, demonstrated through two simulation examples including the pH control problem.

URI: https://scholarbank.nus.edu.sg/handle/10635/182253
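The modified RLS and layered least squares updates are the thesis's own contributions and are not reproduced here. As a rough, assumption-laden sketch of the general idea the abstract describes (replacing gradient descent with a recursive least squares recursion applied to the network linearised about its current weights), the following Python snippet identifies a toy nonlinear plant with a small one-hidden-layer network. The network size, forgetting factor, toy plant and helper names (`forward`, `unpack`) are all illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Illustrative sketch only: an RLS-style weight update for a small
# one-hidden-layer network used as a one-step-ahead model y = f(x; w).
# At each step the network is linearised about its current weights and the
# resulting regressor (the Jacobian dy/dw) is fed to a standard RLS recursion
# with forgetting factor lam. Sizes, lam and the toy plant are assumptions.

rng = np.random.default_rng(0)

n_in, n_hid = 2, 6                       # assumed network size
n_w = n_hid * (n_in + 1) + (n_hid + 1)   # total number of weights

def unpack(w):
    """Split the flat weight vector into layer matrices (hypothetical helper)."""
    W1 = w[: n_hid * (n_in + 1)].reshape(n_hid, n_in + 1)
    W2 = w[n_hid * (n_in + 1):].reshape(1, n_hid + 1)
    return W1, W2

def forward(w, x):
    """Network output and Jacobian dy/dw at input x."""
    W1, W2 = unpack(w)
    xb = np.append(x, 1.0)               # input with bias term
    h = np.tanh(W1 @ xb)
    hb = np.append(h, 1.0)               # hidden activations with bias term
    y = float(W2 @ hb)
    dW2 = hb                              # dy/dW2
    dh = W2[0, :n_hid] * (1.0 - h ** 2)   # back-propagated sensitivity
    dW1 = np.outer(dh, xb)                # dy/dW1
    return y, np.concatenate([dW1.ravel(), dW2])

def plant(x):
    """Toy nonlinear plant to be identified (assumed for the demo)."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2

w = 0.1 * rng.standard_normal(n_w)        # weight estimate
P = 1e3 * np.eye(n_w)                     # RLS covariance matrix
lam = 0.99                                # assumed forgetting factor

for _ in range(2000):
    x = rng.uniform(-1.0, 1.0, n_in)
    d = plant(x)
    y, phi = forward(w, x)                # phi: linearised regressor
    e = d - y                             # prediction error
    gain = P @ phi / (lam + phi @ P @ phi)
    w = w + gain * e                      # RLS weight update
    P = (P - np.outer(gain, phi @ P)) / lam

test = rng.uniform(-1.0, 1.0, (200, n_in))
rms = np.sqrt(np.mean([(plant(x) - forward(w, x)[0]) ** 2 for x in test]))
print("RMS modelling error after training:", rms)
```

The covariance matrix P is what gives an RLS-type update its speed advantage over plain backpropagation, since each weight change is scaled by an approximate second-order term rather than a single fixed learning rate; its per-step cost, which grows with the square of the number of weights, also illustrates why the abstract notes that the full RLS becomes demanding and why a cheaper variant such as the layered least squares is attractive.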
Appears in Collections: Ph.D Theses (Restricted)
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
b20132050.pdf | | 5.08 MB | Adobe PDF | RESTRICTED | None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.