Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/149512
Title: POINT CLOUD RECOGNITION WITH DEEP LEARNING
Authors: LI JIAXIN
Keywords: Computer Vision, Point Cloud, Machine Learning, Perception, Deep Learning, 3D Recognition
Issue Date: 18-Jun-2018
Citation: LI JIAXIN (2018-06-18). POINT CLOUD RECOGNITION WITH DEEP LEARNING. ScholarBank@NUS Repository.
Abstract: This thesis studies deep learning-based approaches for performing recognition tasks with 2D/3D point clouds. Perception is the foundation of many unmanned systems such as UAVs and self-driving cars: it answers the questions of "where am I", "what is around me", and "what will happen next". Sensors such as LiDAR and cameras are widely used to accomplish these tasks. Among the many representations of measured information, the point cloud is mathematically simple and effective for representing 2D/3D structures. Although point clouds are widely applied in localization and mapping, their potential in other recognition tasks is underestimated. Besides the lack of texture information, the difficulty of applying deep learning techniques to point clouds is another major reason for this. To fill the gap, the thesis starts with a method for scan matching and loop closure detection on 2D LiDAR scans, learning scan alignment in both supervised and unsupervised fashions. A general framework is then proposed for processing point clouds with deep networks. The framework generalizes to various applications, including object classification, shape retrieval, and semantic segmentation. The approaches are extensively evaluated in simulations, on public datasets, and in real-world applications, and demonstrate state-of-the-art performance.
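Note: The abstract does not specify the network architecture of the proposed framework. Purely as an illustration of how a deep network can consume an unordered point cloud for object classification, the following is a minimal PointNet-style sketch in PyTorch; the class name, layer sizes, and dataset size are assumptions, not the thesis's method.

# Minimal, hypothetical sketch: a permutation-invariant point cloud classifier
# (shared per-point MLP followed by symmetric max pooling). Illustrative only.
import torch
import torch.nn as nn

class SimplePointCloudClassifier(nn.Module):
    def __init__(self, num_classes: int = 40):
        super().__init__()
        # Shared MLP applied identically to every point (last tensor dimension).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 1024), nn.ReLU(),
        )
        # Classifier head on the aggregated global feature.
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) -- an unordered set of xyz coordinates.
        per_point = self.point_mlp(points)           # (batch, num_points, 1024)
        global_feat = per_point.max(dim=1).values    # order-invariant pooling over points
        return self.head(global_feat)                # (batch, num_classes) logits

# Usage: classify a batch of 8 clouds with 1024 points each.
model = SimplePointCloudClassifier(num_classes=40)
logits = model(torch.randn(8, 1024, 3))
print(logits.shape)  # torch.Size([8, 40])

The max pooling step is what makes the output independent of point ordering, which is the basic requirement for any deep network operating directly on point sets.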
URI: http://scholarbank.nus.edu.sg/handle/10635/149512
Appears in Collections:Ph.D Theses (Open)

Files in This Item: Li Jiaxin(Li).pdf (34.86 MB, Adobe PDF, Open Access)


