Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/185985
Title: DIRECTED AUDIO TEXTURE SYNTHESIS WITH DEEP LEARNING
Authors: MUHAMMAD HUZAIFAH BIN MD SHAHRIN
ORCID iD: orcid.org/0000-0002-7188-3600
Keywords: audio texture, deep learning, generative models, audio synthesis, sound modelling, neural networks
Issue Date: 21-Dec-2020
Citation: MUHAMMAD HUZAIFAH BIN MD SHAHRIN (2020-12-21). DIRECTED AUDIO TEXTURE SYNTHESIS WITH DEEP LEARNING. ScholarBank@NUS Repository.
Abstract: Audio textures are a group of sounds that have stable characteristics within an adequately large window of time but may be largely unstructured locally. In this thesis we develop models and techniques that allow us to synthesise a selection of audio textures while enabling the exploration and shaping of the output sound space via parameters. The use of data-driven deep learning techniques improves model expressivity and flexibility over existing physical and sample-based modelling approaches, expanding both the range of possible sounds and the parameters by which to direct them, without requiring radical changes to the model itself.
URI: https://scholarbank.nus.edu.sg/handle/10635/185985
Appears in Collections: | Ph.D Theses (Open) |
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
HuzaifahBMDS.pdf | | 7.53 MB | Adobe PDF | OPEN | None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.