Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/163183
Title: REPRESENTATION LEARNING IN MULTIMODAL SPATIOTEMPORAL IMAGE-GUIDED MEDICAL PROCEDURES
Authors: MOBARAKOL ISLAM
ORCID iD: orcid.org/0000-0002-7162-2822
Keywords: Deep Learning, Multi-task Learning, MTL Optimization, Pruning, Image-guided Medical Intervention, Multimodal-Spatiotemporal Data
Issue Date: 2-Aug-2019
Citation: MOBARAKOL ISLAM (2019-08-02). REPRESENTATION LEARNING IN MULTIMODAL SPATIOTEMPORAL IMAGE-GUIDED MEDICAL PROCEDURES. ScholarBank@NUS Repository.
Abstract: Medical image computing and computer-assisted analytics play a vital role in healthcare by enabling early and accurate diagnosis as well as guidance during intervention. Advances in imaging technology and robotics increase the demand for image-guided computational models that analyze data and assist in clinical decision making. In this thesis, we propose several deep convolutional neural networks (DCNNs) to automate and enhance image-guided medical procedures by addressing the challenges of (1) insufficient and imbalanced datasets, (2) synthesizing missing modalities, (3) processing spatiotemporal, multimodal, and high-resolution data for online application, and (4) designing and optimizing multi-task learning (MTL) models that perform multiple tasks concurrently. The thesis focuses on medical applications such as detection and outcome prediction for brain tumor, ischemic stroke, and intracerebral hemorrhage, as well as tracking and scanpath prediction in image-guided intervention, using MRI, CT, and endoscopic imaging.
URI: https://scholarbank.nus.edu.sg/handle/10635/163183
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File: IslamMK.pdf
Size: 22.78 MB
Format: Adobe PDF
Access Settings: OPEN

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.