Please use this identifier to cite or link to this item:
https://doi.org/10.1109/TMM.2022.3206664
DC Field | Value
---|---
dc.title | Self-supervised Point Cloud Representation Learning via Separating Mixed Shapes
dc.contributor.author | Sun, C
dc.contributor.author | Zheng, Z
dc.contributor.author | Wang, X
dc.contributor.author | Xu, M
dc.contributor.author | Yang, Y
dc.date.accessioned | 2023-11-14T03:29:23Z
dc.date.available | 2023-11-14T03:29:23Z
dc.date.issued | 2022-01-01
dc.identifier.citation | Sun, C, Zheng, Z, Wang, X, Xu, M, Yang, Y (2022-01-01). Self-supervised Point Cloud Representation Learning via Separating Mixed Shapes. IEEE Transactions on Multimedia : 1-11. ScholarBank@NUS Repository. https://doi.org/10.1109/TMM.2022.3206664
dc.identifier.issn | 1520-9210
dc.identifier.issn | 1941-0077
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/245919
dc.description.abstract | The manual annotation of large-scale point clouds costs a lot of time and is often unavailable in harsh real-world scenarios. Inspired by the great success of the pre-training and fine-tuning paradigm in both vision and language tasks, we argue that pre-training is also a potential solution for obtaining a scalable model for 3D point cloud downstream tasks. In this paper, we therefore explore a new self-supervised learning method, called Mixing and Disentangling (**MD**), for 3D point cloud representation learning. As the name implies, we mix two input shapes and require the model to learn to separate the inputs from the mixed shape. We leverage this reconstruction task as the pretext optimization objective for self-supervised learning. There are two primary advantages: 1) compared to prevailing image datasets, e.g., ImageNet, point cloud datasets are *de facto* small, and the mixing process provides a much larger online training sample pool; 2) the disentangling process motivates the model to mine geometric prior knowledge, e.g., key points. To verify the effectiveness of the proposed pretext task, we build a baseline network composed of one encoder and one decoder. During pre-training, we mix two original shapes and obtain a geometry-aware embedding from the encoder; an instance-adaptive decoder is then applied to recover the original shapes from the embedding. Albeit simple, the pre-trained encoder can capture the key points of an unseen point cloud and surpasses an encoder trained from scratch on downstream tasks. The proposed method improves empirical performance on both the ModelNet-40 and ShapeNet-Part datasets in terms of point cloud classification and segmentation. We further conduct ablation studies to explore the effect of each component and verify the generalization of the proposed strategy by harnessing different backbones.
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE)
dc.source | Elements
dc.type | Article
dc.date.updated | 2023-11-11T03:35:56Z
dc.contributor.department | DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi | 10.1109/TMM.2022.3206664
dc.description.sourcetitle | IEEE Transactions on Multimedia
dc.description.page | 1-11
dc.published.state | Published
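The abstract describes a mix-and-separate pretext task: two point clouds are mixed, and the network is trained to recover each original from the mixture. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the half-and-half sampling in `mix_shapes` and the Chamfer-distance scoring are illustrative assumptions standing in for the paper's encoder-decoder pipeline.

```python
import numpy as np

def mix_shapes(a, b, rng):
    """Mix two point clouds by sampling half the points from each
    (one simple mixing scheme; the paper's exact scheme may differ)."""
    n = a.shape[0]
    idx_a = rng.choice(n, n // 2, replace=False)
    idx_b = rng.choice(n, n // 2, replace=False)
    return np.concatenate([a[idx_a], b[idx_b]], axis=0)

def chamfer(x, y):
    """Symmetric Chamfer distance between two point sets."""
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
shape_a = rng.normal(size=(128, 3))        # stand-ins for two input shapes
shape_b = rng.normal(size=(128, 3)) + 5.0  # offset so the shapes are distinct
mixed = mix_shapes(shape_a, shape_b, rng)

# In the pretext task, a decoder would emit two reconstructions and the
# loss would compare each against its original; here we only show the
# scoring: a correct separation scores lower than a swapped one.
loss_correct = chamfer(shape_a, shape_a) + chamfer(shape_b, shape_b)
loss_swapped = chamfer(shape_a, shape_b) + chamfer(shape_b, shape_a)
print(mixed.shape)                  # (128, 3)
print(loss_correct < loss_swapped)  # True
```

The mixed cloud keeps the original point count, so the online training pool grows combinatorially with the dataset size, which is the first advantage the abstract highlights.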
Appears in Collections: Staff Publications Elements
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
TMM_3D_Pre_Training.pdf | Accepted version | 9.29 MB | Adobe PDF | OPEN | Published
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.