Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/170586
DC Field | Value
---|---
dc.title | PATTERN RECOGNITION USING HIGH ORDER NEURAL NETWORKS
dc.contributor.author | TEO YEE MIAN RAYMUND
dc.date.accessioned | 2020-06-22T05:24:49Z
dc.date.available | 2020-06-22T05:24:49Z
dc.date.issued | 1993
dc.identifier.citation | TEO YEE MIAN RAYMUND (1993). PATTERN RECOGNITION USING HIGH ORDER NEURAL NETWORKS. ScholarBank@NUS Repository.
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/170586
dc.description.abstract | High-order neural networks can perform the nonlinear discrimination required for two-dimensional pattern recognition invariant to scale, translation and rotation, and they offer considerable advantages over multi-layer, first-order networks. First, a high-order network can perform nonlinear separation using only a single layer, so the simple perceptron learning rule can be used, leading to rapid convergence [Rose62]. Furthermore, the problem of combinatorial explosion can be partially overcome by building invariances into the network architecture, using information about the relationships expected between the inputs [GiMa87], [GiGM88]. To achieve invariant pattern recognition, high-order networks extract features from an image that are insensitive to the transformation; any feature that does not change under the transformation can then be compared directly with the corresponding feature in a canonical description of the object. In this thesis, we construct and experiment with a variety of high-order neural networks. Our experiments show that a third-order network (implemented here as a second-order one) is in principle sufficient to recognize patterns that are scaled, shifted, rotated, or transformed by any combination of these. In addition, we propose a novel model, the Two-Stage model, for pattern autoassociation. Its chief advantage is that it reduces the required connections among the neurons; this is critical because high-order models are often considered impractical owing to the quadratic growth in the number of connection weights [MiPa88].
The major strengths of high-order neural networks are the ability to learn the training set, the ability to recognize translated and some rotated versions of the training patterns, the ability to recognize patterns with omissions, a reasonable ability to generalize (they can recognize some handwritten numeral patterns), and fast convergence. Although high-order networks are theoretically capable of handling these invariances, our implementations and experiments reveal particular difficulty in recognizing scaled and most rotated patterns. The high connectivity among the units is another shortcoming of the model. Some improvements to address these problems are suggested.
dc.source | CCK BATCHLOAD 20200626
dc.type | Thesis
dc.contributor.department | INFORMATION SYSTEMS & COMPUTER SCIENCE
dc.contributor.supervisor | HO SENG BENG
dc.description.degree | Master's
dc.description.degreeconferred | MASTER OF SCIENCE
Appears in Collections: | Master's Theses (Restricted) |
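The invariance idea described in the abstract can be illustrated with a small sketch. This is not the thesis's implementation; it is a minimal, assumed reconstruction in the spirit of the weight-sharing scheme of [GiMa87]/[GiGM88]: a third-order unit sums products over triples of active pixels, and weights are shared across all triples whose triangles have the same interior angles, a quantity invariant to translation, rotation and scaling. All function names (`angle_class`, `features`, `perceptron_train`, `predict`) are illustrative, not from the thesis.

```python
# Illustrative sketch only -- not the thesis's implementation.
# Third-order network with invariance built in via weight sharing:
# pixel triples are grouped by the interior angles of the triangle
# they form, which do not change under translation, rotation or scale.
import itertools
import math

def angle_class(p, q, r, bins=8):
    """Equivalence class of a pixel triple: the discretized two smallest
    interior angles of triangle pqr (the third angle is determined)."""
    def angle_at(a, b, c):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    angles = sorted((angle_at(p, q, r), angle_at(q, p, r), angle_at(r, p, q)))
    return tuple(min(bins - 1, int(a / math.pi * bins)) for a in angles[:2])

def features(img, bins=8):
    """Shared-weight feature vector: how many active-pixel triples fall
    into each angle class.  Weight sharing collapses the O(N^3) triple
    products into counts over a handful of classes."""
    pts = [(x, y) for y, row in enumerate(img) for x, v in enumerate(row) if v]
    counts = {}
    for triple in itertools.combinations(pts, 3):
        key = angle_class(*triple, bins=bins)
        counts[key] = counts.get(key, 0) + 1
    return counts

def perceptron_train(samples, epochs=50):
    """Because the network is a single layer, Rosenblatt's simple
    perceptron update rule suffices [Rose62]. Labels are +1 or -1."""
    w, b = {}, 0.0
    for _ in range(epochs):
        errors = 0
        for img, label in samples:
            f = features(img)
            s = b + sum(w.get(k, 0.0) * v for k, v in f.items())
            if (1 if s > 0 else -1) != label:
                errors += 1
                for k, v in f.items():
                    w[k] = w.get(k, 0.0) + label * v
                b += label
        if errors == 0:          # converged on the training set
            break
    return w, b

def predict(img, w, b):
    s = b + sum(w.get(k, 0.0) * v for k, v in features(img).items())
    return 1 if s > 0 else -1
```

Because `angle_class` depends only on the triangle's shape, a translated or rotated copy of a pattern yields identical feature counts, so the trained unit's response is unchanged. Scale invariance of a rasterized pattern additionally requires normalizing the counts (the number of active-pixel triples changes on resampling), which this sketch omits.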
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
B18949861.PDF | | 2.69 MB | Adobe PDF | RESTRICTED | None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.