Please use this identifier to cite or link to this item:
https://arxiv.org/abs/1906.00562
DC Field | Value
---|---
dc.title | Learning to Self-Train for Semi-Supervised Few-Shot Classification
dc.contributor.author | Xinzhe Li
dc.contributor.author | Qianru Sun
dc.contributor.author | Yaoyao Liu
dc.contributor.author | Qin Zhou
dc.contributor.author | Shibao Zheng
dc.contributor.author | Tat-Seng Chua
dc.contributor.author | Bernt Schiele
dc.date.accessioned | 2020-05-22T06:17:15Z
dc.date.available | 2020-05-22T06:17:15Z
dc.date.issued | 2019-12-08
dc.identifier.citation | Xinzhe Li, Qianru Sun, Yaoyao Liu, Qin Zhou, Shibao Zheng, Tat-Seng Chua, Bernt Schiele (2019-12-08). Learning to Self-Train for Semi-Supervised Few-Shot Classification. NeurIPS 2019. ScholarBank@NUS Repository. https://arxiv.org/abs/1906.00562
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/168420
dc.description.abstract | Few-shot classification (FSC) is challenging due to the scarcity of labeled training data (e.g., only one labeled data point per class). Meta-learning has been shown to achieve promising results by learning to initialize a classification model for FSC. In this paper we propose a novel semi-supervised meta-learning method called learning to self-train (LST) that leverages unlabeled data and specifically meta-learns how to cherry-pick and label such unsupervised data to further improve performance. To this end, we train the LST model through a large number of semi-supervised few-shot tasks. On each task, we train a few-shot model to predict pseudo labels for unlabeled data, and then iterate the self-training steps on labeled and pseudo-labeled data, with each step followed by fine-tuning. We additionally learn a soft weighting network (SWN) to optimize the self-training weights of pseudo labels so that better ones can contribute more to gradient descent optimization. We evaluate our LST method on two ImageNet benchmarks for semi-supervised few-shot classification and achieve large improvements over the state-of-the-art method. Code is at github.com/xinzheli1217/learning-to-self-train.
dc.subject | Few-shot classification
dc.subject | deep neural network
dc.subject | learning to self-train
dc.type | Conference Paper
dc.contributor.department | DEPARTMENT OF COMPUTER SCIENCE
dc.contributor.department | MATERIALS SCIENCE AND ENGINEERING
dc.description.doi | arXiv:1906.00562
dc.description.sourcetitle | NeurIPS 2019
dc.grant.id | R-252-300-002-490
dc.grant.fundingagency | Infocomm Media Development Authority
dc.grant.fundingagency | National Research Foundation
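The self-training loop described in the abstract (predict pseudo labels for unlabeled data, weight them, then re-train on labeled plus weighted pseudo-labeled data) can be sketched in miniature. The snippet below is an illustrative toy, not the paper's implementation: it substitutes a nearest-prototype classifier for the meta-learned few-shot model, and replaces the learned soft weighting network (SWN) with a simple softmax-confidence weight; all names here are hypothetical.

```python
import numpy as np

def fit_prototypes(X, y, n_classes):
    # Class prototypes = mean embedding per class (toy stand-in for the
    # meta-learned few-shot classifier in the paper).
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(prototypes, X):
    # Nearest-prototype classification; negative squared distance as score.
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1), -d

def soft_weight(scores):
    # Stand-in for the paper's learned SWN: here the softmax confidence of
    # the predicted class serves as the per-sample self-training weight.
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    return p.max(axis=1)

def self_train(X_lab, y_lab, X_unlab, n_classes, steps=3):
    # Iterate: pseudo-label the unlabeled pool, weight the pseudo labels,
    # then re-fit on labeled + weighted pseudo-labeled data.
    protos = fit_prototypes(X_lab, y_lab, n_classes)
    for _ in range(steps):
        y_pseudo, scores = predict(protos, X_unlab)
        w = soft_weight(scores)
        X_all = np.concatenate([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, y_pseudo])
        w_all = np.concatenate([np.ones(len(y_lab)), w])
        protos = np.stack([
            np.average(X_all[y_all == c], axis=0, weights=w_all[y_all == c])
            for c in range(n_classes)
        ])
    return protos
```

In the paper, each self-training step is additionally followed by fine-tuning, and the pseudo-label selection ("cherry-picking") and the SWN weights are themselves meta-learned over many semi-supervised few-shot tasks; this sketch only shows the weighted self-training skeleton.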
Appears in Collections: Elements Staff Publications
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
Learning to Self-Train for Semi-Supervised Few-Shot Classification.pdf | | 3.7 MB | Adobe PDF | OPEN | None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.