Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/15716
Title: Audio and visual perceptions for mobile robot
Authors: GUAN FENG
Keywords: sound localization, visual human detection, unsupervised learning, sensor fusion
Issue Date: 15-Jan-2007
Source: GUAN FENG (2007-01-15). Audio and visual perceptions for mobile robot. ScholarBank@NUS Repository.
Abstract: In this thesis, audio and visual perception for mobile robots is investigated, covering passive sound localization using acoustic sensors and human detection using multiple visual sensors. First, we proposed several strategies for audio perception, including the fusion of multiple sound-localization cues, multiple sampling, and the use of multiple sensors. These strategies enhance the robustness of the sound localization system and naturally lead to tracking of a sound source (which may be a human speaker) via motion estimation. Second, we developed a human detection algorithm using stereo and thermal images; it discriminates humans from human-like objects and detects humans robustly in varying environments. Third, we proposed an unsupervised learning algorithm that discovers the inherent properties hidden in high-dimensional image observations, allowing image features to be extracted with little manual effort. As a result, a practical audio and visual perception system for mobile robots is achieved.
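
For readers unfamiliar with passive sound localization, the sketch below illustrates one common building block of such systems: estimating the time difference of arrival (TDOA) between a pair of microphones by cross-correlation and converting it to a bearing. It is a minimal Python illustration of the general technique only, not the method developed in the thesis; the microphone spacing, sample rate, and all identifiers are assumptions chosen for the example.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, approximate value at room temperature
MIC_SPACING = 0.2        # metres between the two microphones (assumed)
SAMPLE_RATE = 16000      # Hz (assumed)

def estimate_tdoa(mic_a, mic_b, fs):
    # Estimate how much later (in seconds) the signal arrives at mic_a
    # than at mic_b, from the peak of their cross-correlation.
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = int(np.argmax(corr)) - (len(mic_b) - 1)  # positive lag: mic_a is delayed
    return lag / fs

def bearing_from_tdoa(tdoa, spacing):
    # Convert a TDOA into a far-field angle of arrival (degrees), using the
    # plane-wave approximation sin(theta) = c * tdoa / d.
    s = np.clip(SPEED_OF_SOUND * tdoa / spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

if __name__ == "__main__":
    # Synthetic test: the same noise burst reaches the right microphone
    # 5 samples later than the left one. Under this sign convention a
    # positive bearing means the source is on the left-microphone side.
    rng = np.random.default_rng(0)
    burst = rng.standard_normal(1024)
    left = np.concatenate([burst, np.zeros(5)])
    right = np.concatenate([np.zeros(5), burst])
    tdoa = estimate_tdoa(right, left, SAMPLE_RATE)   # delay of right vs. left
    angle = bearing_from_tdoa(tdoa, MIC_SPACING)
    print("TDOA: %.3f ms, bearing: %.1f degrees" % (tdoa * 1e3, angle))

In a full system, a cue of this kind would be fused with other localization cues and with estimates from multiple microphone pairs and repeated samplings, which is the kind of multi-cue, multi-sensor fusion the abstract refers to.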
URI: http://scholarbank.nus.edu.sg/handle/10635/15716
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
File: GF_Sound.pdf (14.85 MB, Adobe PDF, Open Access)

