Please use this identifier to cite or link to this item: https://doi.org/10.1109/TMM.2022.3206664
dc.title: Self-supervised Point Cloud Representation Learning via Separating Mixed Shapes
dc.contributor.author: Sun, C
dc.contributor.author: Zheng, Z
dc.contributor.author: Wang, X
dc.contributor.author: Xu, M
dc.contributor.author: Yang, Y
dc.date.accessioned: 2023-11-14T03:29:23Z
dc.date.available: 2023-11-14T03:29:23Z
dc.date.issued: 2022-01-01
dc.identifier.citation: Sun, C, Zheng, Z, Wang, X, Xu, M, Yang, Y (2022-01-01). Self-supervised Point Cloud Representation Learning via Separating Mixed Shapes. IEEE Transactions on Multimedia: 1-11. ScholarBank@NUS Repository. https://doi.org/10.1109/TMM.2022.3206664
dc.identifier.issn: 1520-9210
dc.identifier.issn: 1941-0077
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/245919
dc.description.abstract: Manual annotation of large-scale point clouds is time-consuming and often unavailable in harsh real-world scenarios. Inspired by the success of the pre-training and fine-tuning paradigm in both vision and language tasks, we argue that pre-training is one potential route to scalable models for 3D point cloud downstream tasks as well. In this paper, we therefore explore a new self-supervised learning method, called Mixing and Disentangling (MD), for 3D point cloud representation learning. As the name implies, we mix two input shapes and require the model to learn to separate the original inputs from the mixed shape. We leverage this reconstruction task as the pretext optimization objective for self-supervised learning. There are two primary advantages: 1) compared to prevailing image datasets, e.g., ImageNet, point cloud datasets are de facto small, and the mixing process provides a much larger online pool of training samples; 2) the disentangling process motivates the model to mine geometric prior knowledge, e.g., key points. To verify the effectiveness of the proposed pretext task, we build a baseline network composed of one encoder and one decoder. During pre-training, we mix two original shapes and obtain a geometry-aware embedding from the encoder; an instance-adaptive decoder is then applied to recover the original shapes from this embedding. Albeit simple, the pre-trained encoder can capture the key points of an unseen point cloud and surpasses an encoder trained from scratch on downstream tasks. The proposed method improves empirical performance on the ModelNet-40 and ShapeNet-Part datasets for point cloud classification and segmentation. We further conduct ablation studies to explore the effect of each component and verify the generalization of the proposed strategy with different backbones. (See the illustrative code sketch following this record.)
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.source: Elements
dc.type: Article
dc.date.updated: 2023-11-11T03:35:56Z
dc.contributor.department: DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi: 10.1109/TMM.2022.3206664
dc.description.sourcetitle: IEEE Transactions on Multimedia
dc.description.page: 1-11
dc.published.state: Published
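
The abstract above describes the Mixing-and-Disentangling pretext task only in prose. The following minimal PyTorch sketch illustrates one possible form of a single pre-training step; it is not the authors' released implementation. The half-and-half random-subsampling mixing rule, the Chamfer-distance reconstruction loss, and the encoder/decoder interfaces are all assumptions introduced here for demonstration.

import torch
import torch.nn as nn

def mix_shapes(pc_a: torch.Tensor, pc_b: torch.Tensor) -> torch.Tensor:
    # Mix two (N, 3) point clouds by randomly subsampling half the points
    # from each and concatenating them. The exact mixing rule is an
    # assumption made for this sketch.
    n = pc_a.shape[0]
    idx_a = torch.randperm(pc_a.shape[0])[: n // 2]
    idx_b = torch.randperm(pc_b.shape[0])[: n - n // 2]
    return torch.cat([pc_a[idx_a], pc_b[idx_b]], dim=0)  # (N, 3) mixed shape

def chamfer_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Symmetric Chamfer distance between point sets x (N, 3) and y (M, 3);
    # an assumed choice of reconstruction loss.
    d = torch.cdist(x, y)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def pretext_step(encoder: nn.Module, decoder: nn.Module,
                 pc_a: torch.Tensor, pc_b: torch.Tensor) -> torch.Tensor:
    # One self-supervised step: embed the mixed shape, then ask the decoder
    # to disentangle (reconstruct) the two original shapes from the embedding.
    # `encoder` and `decoder` are hypothetical modules: the encoder maps a
    # (1, N, 3) cloud to an embedding, and the decoder returns two
    # (1, N, 3) reconstructions.
    mixed = mix_shapes(pc_a, pc_b)
    emb = encoder(mixed.unsqueeze(0))   # geometry-aware embedding
    rec_a, rec_b = decoder(emb)         # two reconstructed shapes
    return (chamfer_distance(rec_a.squeeze(0), pc_a)
            + chamfer_distance(rec_b.squeeze(0), pc_b))

In a training loop this loss would be backpropagated through both modules; after pre-training, only the encoder is kept and fine-tuned on downstream classification or segmentation, matching the paradigm the abstract describes.
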
Appears in Collections: Staff Publications; Elements

Files in This Item:
File: TMM_3D_Pre_Training.pdf
Description: Accepted version
Size: 9.29 MB
Format: Adobe PDF
Access Settings: OPEN
Version: Published

