Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/247289
Title: CROSS-MODALITY COMPLEMENTARITY FOR AUDIO-VISUAL SPEECH RECOGNITION
Authors: WANG JIADONG
ORCID iD: orcid.org/0000-0001-9372-3133
Keywords: Multi-modality, speech recognition, modality corruption, audio-visual fusion, lip generation
Issue Date: 4-Jul-2023
Citation: WANG JIADONG (2023-07-04). CROSS-MODALITY COMPLEMENTARITY FOR AUDIO-VISUAL SPEECH RECOGNITION. ScholarBank@NUS Repository.
Abstract: Speech recognition is an indispensable tool for human-robot interaction. Inspired by human perception, integrating the audio and visual modalities makes transcription more robust. However, either modality, or both, may be corrupted, degrading recognition performance. Given the distinct properties of the two modalities, exploiting audio-visual complementarity to mitigate such corruption is therefore essential. To this end, this thesis addresses three types of corruption through audio-visual complementarity. Mimicking human speech perception, the first part employs the visual modality to complement speech corrupted by acoustic noise. The second part tackles speakers missing from the camera's field of view through a novel sound source localization method. Finally, the third part reconstructs occluded lips with the assistance of the audio modality.
URI: https://scholarbank.nus.edu.sg/handle/10635/247289
Appears in Collections: Ph.D Theses (Open)
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
Jiadong_Thesis.pdf | | 10.92 MB | Adobe PDF | OPEN | None