Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/174700
DC Field                          Value
dc.title                          GESTURE RECOGNITION SYSTEM
dc.contributor.author             KOH ENG SIONG
dc.date.accessioned               2020-09-08T08:51:34Z
dc.date.available                 2020-09-08T08:51:34Z
dc.date.issued                    1998
dc.identifier.citation            KOH ENG SIONG (1998). GESTURE RECOGNITION SYSTEM. ScholarBank@NUS Repository.
dc.identifier.uri                 https://scholarbank.nus.edu.sg/handle/10635/174700
dc.description.abstract           Gesture recognition is defined as the classification of a sequence of consecutive image frames, which show the hand in various orientations, scales, locations and articulations, into a finite set of classes of possible gestures. We use three information sources to perform this classification: the shape, orientation and motion information of the hand. The main aims of this work are to provide a user-independent, real-time, easy-to-use system for recognizing concatenated gestures. Furthermore, the system should recognize each gesture as soon as possible after its completion, to minimize the time lag between gesture completion and recognition; in other words, the system should not wait until the whole sequence of concatenated gestures is completed before recognizing each individual gesture. In this work, a new, robust method of recognizing hand gestures is presented. This method overcomes certain limitations of Hidden Markov Models (HMMs), which have been widely used not only in gesture recognition but also in speech recognition. The advantages of this new method include the use of appropriate information sources depending on which gesture is occurring, the ability to place equal importance on the various information sources despite their varying feature vector lengths, and a more intuitive formulation than HMMs (which rely on unobservable states that may not have an intuitive meaning). The method requires each image frame in a gesture sequence to undergo five processing steps: segmentation, low-level feature extraction, intermediate-level probability assignment, high-level probability combination and weighting, and finally recognition (a pipeline sketch follows the metadata fields below). 600 training images provided the database for shape probability assignments, while 4 training sequences were used for parameter optimization. A gesture recognition accuracy of 95.1% was achieved with gesture sequences that were not used for system training. A total of 7 users provided 149 gestures to test the system.
dc.source                         CCK BATCHLOAD 20200918
dc.type                           Thesis
dc.contributor.department         ELECTRICAL ENGINEERING
dc.contributor.supervisor         S. RANGANATH
dc.contributor.supervisor         TIAN QI
dc.description.degree             Master's
dc.description.degreeconferred    MASTER OF ENGINEERING
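Below is a minimal sketch, in Python, of the five per-frame processing steps named in the abstract (segmentation, low-level feature extraction, intermediate-level probability assignment, high-level probability combination and weighting, and recognition). The thesis itself does not publish code, so every name here (GESTURES, SOURCES, segment_hand, extract_features, assign_probabilities, combine_and_weight, recognise_frame) and the uniform placeholder probabilities are illustrative assumptions, not the author's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

GESTURES = ["wave", "point", "stop"]          # placeholder gesture classes (assumption)
SOURCES = ["shape", "orientation", "motion"]  # the three information sources from the abstract


@dataclass
class FrameEvidence:
    """Per-source gesture-class probabilities computed for a single image frame."""
    per_source: Dict[str, Dict[str, float]] = field(default_factory=dict)


def segment_hand(frame):
    """Step 1: isolate the hand region from the raw frame (stub)."""
    return frame


def extract_features(hand_region) -> Dict[str, List[float]]:
    """Step 2: low-level features; note the vectors differ in length per source."""
    return {"shape": [0.0] * 10, "orientation": [0.0] * 2, "motion": [0.0] * 4}


def assign_probabilities(features: Dict[str, List[float]]) -> FrameEvidence:
    """Step 3: map each source's feature vector to class probabilities (uniform stub)."""
    uniform = {g: 1.0 / len(GESTURES) for g in GESTURES}
    return FrameEvidence(per_source={s: dict(uniform) for s in SOURCES})


def combine_and_weight(evidence: FrameEvidence, weights: Dict[str, float]) -> Dict[str, float]:
    """Step 4: weight and combine sources so each counts regardless of vector length."""
    combined = {g: 0.0 for g in GESTURES}
    for source, probs in evidence.per_source.items():
        w = weights.get(source, 1.0)
        for g, p in probs.items():
            combined[g] += w * p
    total = sum(combined.values()) or 1.0
    return {g: v / total for g, v in combined.items()}


def recognise_frame(frame, weights=None) -> str:
    """Step 5: run all steps for one frame and return the most probable gesture class."""
    weights = weights or {s: 1.0 for s in SOURCES}
    hand = segment_hand(frame)
    features = extract_features(hand)
    evidence = assign_probabilities(features)
    scores = combine_and_weight(evidence, weights)
    return max(scores, key=scores.get)


if __name__ == "__main__":
    # Dummy frame; with the uniform stub probabilities this simply prints the first class.
    print(recognise_frame(frame=None))
```

The per-source weights in combine_and_weight stand in for the abstract's claim of placing equal importance on the shape, orientation and motion sources despite their differing feature vector lengths; how the thesis actually sets those weights from its 4 training sequences is not specified here.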
Appears in Collections: Master's Theses (Restricted)

Files in This Item:
File             Description   Size      Format      Access Settings   Version
b2062590x.pdf                  7.51 MB   Adobe PDF   RESTRICTED        None


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.