Please use this identifier to cite or link to this item:
https://doi.org/10.1109/TMM.2022.3206664
Title: Self-supervised Point Cloud Representation Learning via Separating Mixed Shapes
Authors: Sun, C; Zheng, Z; Wang, X; Xu, M; Yang, Y
Issue Date: 1-Jan-2022
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: Sun, C; Zheng, Z; Wang, X; Xu, M; Yang, Y (2022-01-01). Self-supervised Point Cloud Representation Learning via Separating Mixed Shapes. IEEE Transactions on Multimedia: 1-11. ScholarBank@NUS Repository. https://doi.org/10.1109/TMM.2022.3206664
Abstract: Manual annotation of large-scale point clouds is time-consuming and often unavailable in harsh real-world scenarios. Inspired by the great success of the pre-training and fine-tuning paradigm in both vision and language tasks, we argue that pre-training is also a potential route to a scalable model for 3D point cloud downstream tasks. In this paper, we therefore explore a new self-supervised learning method, called Mixing and Disentangling (…
Source Title: IEEE Transactions on Multimedia
URI: https://scholarbank.nus.edu.sg/handle/10635/245919
ISSN: 1520-9210; 1941-0077
DOI: 10.1109/TMM.2022.3206664
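The abstract (truncated in this record) introduces a self-supervised method built around mixing point-cloud shapes and learning to separate them again. As a rough illustration only, and not the paper's actual pipeline (which the truncated abstract does not describe), the hypothetical sketch below mixes two point clouds by sampling half the points from each and keeps per-point source labels that a disentangling network could later be trained to predict. All names here are invented for illustration.

```python
import numpy as np

def mix_point_clouds(pc_a, pc_b, seed=None):
    """Mix two point clouds into a single chimeric shape.

    Hypothetical sketch: sample half the points from each input,
    concatenate, and shuffle. The returned per-point labels record
    which source each point came from, so a network could be
    supervised to disentangle (separate) the mixture.
    """
    rng = np.random.default_rng(seed)
    n = min(len(pc_a), len(pc_b)) // 2
    idx_a = rng.choice(len(pc_a), n, replace=False)
    idx_b = rng.choice(len(pc_b), n, replace=False)
    mixed = np.concatenate([pc_a[idx_a], pc_b[idx_b]])
    labels = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
    perm = rng.permutation(len(mixed))  # shuffle so source order carries no signal
    return mixed[perm], labels[perm]

# Two toy clouds of 1024 points in R^3
a = np.random.default_rng(0).normal(size=(1024, 3))
b = np.random.default_rng(1).normal(size=(1024, 3))
mixed, labels = mix_point_clouds(a, b, seed=42)
print(mixed.shape, labels.shape)  # (1024, 3) (1024,)
```

The shuffled mixture with hidden source labels gives a free (annotation-less) supervisory signal, which is the general appeal of such self-supervised pre-training.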
Appears in Collections: | Staff Publications Elements |
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
TMM_3D_Pre_Training.pdf | Accepted version | 9.29 MB | Adobe PDF | OPEN | Published
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.