Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/236780
Title: ACOUSTIC EVENT RECOGNITION: FROM SUPERVISED LEARNING TO UNSUPERVISED LEARNING
Authors: WEI WEI
Keywords: event detection, speech recognition, machine learning
Issue Date: 10-Aug-2022
Citation: WEI WEI (2022-08-10). ACOUSTIC EVENT RECOGNITION: FROM SUPERVISED LEARNING TO UNSUPERVISED LEARNING. ScholarBank@NUS Repository.
Abstract: Audio is one of the most common sources of multimedia information in daily life and carries a great deal of useful information for analysis. In an audio signal, an acoustic event is defined as a segment containing a particular sound event, such as a phone ringing, a singer singing, or people talking. Acoustic event recognition is an effective way to extract such information from audio signals. The task is to predict a label, a start time, and an end time for each detected audio event in a given input audio signal. This thesis analyzes three major types of audio signals: singing voice, environmental audio, and speech. Moving from singing voice to environmental audio and then speech, the proposed models progress from a fully supervised approach to unsupervised ones.
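As a minimal illustration of the task output described in the abstract (not taken from the thesis itself), each detected event can be represented as a labelled time interval; the class and field names below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class AcousticEvent:
        """One detected acoustic event: a label plus its time span in seconds."""
        label: str         # e.g. "phone_ringing", "singing", "speech"
        start_time: float  # onset of the event within the audio signal
        end_time: float    # offset of the event within the audio signal

    # Hypothetical output of an acoustic event recognition system for one recording.
    predictions = [
        AcousticEvent(label="phone_ringing", start_time=1.2, end_time=3.5),
        AcousticEvent(label="speech", start_time=4.0, end_time=9.8),
    ]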
URI: https://scholarbank.nus.edu.sg/handle/10635/236780
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File: WeiW.pdf
Size: 6.93 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.