Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/213436
DC Field: Value
dc.title: Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data
dc.contributor.author: Kuluhan Binici
dc.contributor.author: Pham Nam Trung
dc.contributor.author: Tulika Mitra
dc.contributor.author: Karianto Leman
dc.date.accessioned: 2022-01-10T01:32:06Z
dc.date.available: 2022-01-10T01:32:06Z
dc.date.issued: 2022-01-04
dc.identifier.citation: Kuluhan Binici, Pham Nam Trung, Tulika Mitra, Karianto Leman (2022-01-04). Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2022): 663-671. ScholarBank@NUS Repository.
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/213436
dc.description.abstract: With the increasing popularity of deep learning on edge devices, compressing large neural networks to meet the hardware requirements of resource-constrained devices has become a significant research direction. Numerous compression methodologies are currently used to reduce the memory footprint and energy consumption of neural networks. Knowledge distillation (KD) is one such methodology; it uses data samples to transfer the knowledge captured by a large model (teacher) to a smaller one (student). However, for various reasons, the original training data might not be accessible at the compression stage. Therefore, data-free model compression is an ongoing research problem that has been addressed by various works. In this paper, we point out that catastrophic forgetting is a problem that can potentially be observed in existing data-free distillation methods. Moreover, the sample generation strategies in some of these methods could result in a mismatch between the synthetic and real data distributions. To prevent such problems, we propose a data-free KD framework that maintains a dynamic collection of generated samples over time. Additionally, we add the constraint of matching the real data distribution to sample generation strategies that target maximum information gain. Our experiments demonstrate that we can improve the accuracy of the student models obtained via KD, compared with state-of-the-art approaches, on the SVHN, Fashion MNIST, and CIFAR100 datasets.
dc.description.uri: https://openaccess.thecvf.com/content/WACV2022/html/Binici_Preventing_Catastrophic_Forgetting_and_Distribution_Mismatch_in_Knowledge_Distillation_via_WACV_2022_paper.html
dc.language.iso: en
dc.publisher: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
dc.rights: Attribution-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nd/4.0/
dc.type: Conference Paper
dc.contributor.department: COMPUTATIONAL SCIENCE
dc.description.volume: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
dc.description.issue: 2022
dc.description.page: 663-671
dc.published.state: Published
dc.grant.id: NRF-CRP23-2019-0003
dc.grant.fundingagency: National Research Foundation, Singapore
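
The abstract describes a data-free KD framework that keeps a dynamic collection of previously generated samples so the student does not forget earlier regions of the synthetic data distribution. Below is a minimal, illustrative sketch of that replay-buffer idea in PyTorch-style Python. It is not the authors' implementation: the names (MemoryBank, distill_step) and hyperparameters are hypothetical, and the distribution-matching constraint on the generator described in the abstract is omitted.

import random

import torch
import torch.nn.functional as F


class MemoryBank:
    """Fixed-capacity pool of past synthetic samples (reservoir-style replacement)."""

    def __init__(self, capacity=4096):
        self.capacity = capacity
        self.samples = []

    def add(self, batch):
        # Store generated images detached from the graph, on CPU.
        for x in batch.detach().cpu():
            if len(self.samples) < self.capacity:
                self.samples.append(x)
            else:
                self.samples[random.randrange(self.capacity)] = x

    def sample(self, n):
        picked = random.sample(self.samples, min(n, len(self.samples)))
        return torch.stack(picked)


def distill_step(teacher, student, generator, bank, optimizer,
                 batch_size=64, latent_dim=100, temperature=4.0):
    # 1) Generate a fresh synthetic batch and add it to the memory bank.
    z = torch.randn(batch_size, latent_dim)
    x_new = generator(z).detach()
    bank.add(x_new)

    # 2) Mix fresh and replayed samples so earlier synthetic modes are revisited;
    #    this replay is the mechanism intended to counter catastrophic forgetting.
    x = torch.cat([x_new, bank.sample(batch_size)], dim=0)

    # 3) Standard KD objective: match softened teacher and student predictions.
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In the paper's full training loop the generator is also updated (with the distribution-matching constraint mentioned in the abstract); the sketch above only illustrates how a stored sample pool could be mixed into each student update.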
Appears in Collections:
Staff Publications
Elements
Students Publications

Files in This Item:
File: Binici_Preventing_Catastrophic_Forgetting_and_Distribution_Mismatch_in_Knowledge_Distillation_via_WACV_2022_paper.pdf
Size: 4.76 MB
Format: Adobe PDF
Access Settings: OPEN
Version: Published

This item is licensed under a Creative Commons License (Attribution-NoDerivatives 4.0 International).