Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/240997
Title: | TOWARDS EFFICIENT OBJECT DETECTION WITH DEEP LEARNING
Authors: | WANG, TAO
Keywords: | object detection, deep learning, efficient model
Issue Date: | 1-Jul-2022
Citation: | WANG, TAO (2022-07-01). TOWARDS EFFICIENT OBJECT DETECTION WITH DEEP LEARNING. ScholarBank@NUS Repository.
Abstract: | Object detection aims to localize and recognize object instances of certain classes in an input image. It is a fundamental computer vision task that enables instance-level visual perception and is therefore important for many vision applications. Detection algorithms, however, are mostly developed in laboratory settings with well-curated benchmarks and modern high-performance computation devices, and they face significant challenges in real-world scenarios, such as limited computation and environmental variation. The focus of this thesis is to develop solutions that address these challenges and achieve efficient object detection. Concretely, we take two perspectives: model efficiency and data efficiency. We propose detection knowledge distillation and feature compression techniques to accelerate model inference, and we establish few-shot adaptation detection and long-tail detection frameworks to achieve efficient data usage. The proposed methods are validated extensively across various experimental settings and benchmarks, and we conduct thorough ablations and analyses to better understand their effectiveness. We believe the findings and methods developed in this thesis provide insights for future research on object detection.
URI: | https://scholarbank.nus.edu.sg/handle/10635/240997
Appears in Collections: | Ph.D Theses (Open) |
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
WangT.pdf | | 22.44 MB | Adobe PDF | OPEN | None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.