Please use this identifier to cite or link to this item: https://doi.org/10.23919/DATE56975.2023.10137046
DC Field / Value
dc.title: Chameleon: Dual Memory Replay for Online Continual Learning on Edge Devices
dc.contributor.author: Shivam Aggarwal
dc.contributor.author: Kuluhan Binici
dc.contributor.author: Tulika Mitra
dc.date.accessioned: 2023-06-08T06:12:20Z
dc.date.available: 2023-06-08T06:12:20Z
dc.date.issued: 2023-06-02
dc.identifier.citation: Shivam Aggarwal, Kuluhan Binici, Tulika Mitra (2023-06-02). Chameleon: Dual Memory Replay for Online Continual Learning on Edge Devices. 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE). ScholarBank@NUS Repository. https://doi.org/10.23919/DATE56975.2023.10137046
dc.identifier.isbn: 979-8-3503-9624-9
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/241726
dc.description.abstract: Once deployed on edge devices, a deep neural network model should dynamically adapt to newly discovered environments and personalize its utility for each user. The system must be capable of continual learning, i.e., learning new information from a temporal stream of data in situ without forgetting previously acquired knowledge. However, the prohibitive intricacies of such a personalized continual learning framework stand at odds with the limited compute and storage on edge devices. Existing continual learning methods rely on massive memory storage to preserve the past data while learning from the incoming data stream. We propose Chameleon, a hardware-friendly continual learning framework for user-centric training with dual replay buffers. The proposed strategy leverages the hierarchical memory structure available on most edge devices, introducing a short-term replay store in the on-chip memory and a long-term replay store in the off-chip memory to acquire new information while retaining past knowledge. Extensive experiments on two large-scale continual learning benchmarks demonstrate the efficacy of our proposed method, achieving accuracy better than or comparable to existing state-of-the-art techniques while reducing the memory footprint by roughly 16×. Our method achieves up to 7× speedup and energy efficiency on edge devices such as the ZCU102 FPGA, NVIDIA Jetson Nano, and Google's EdgeTPU. Our code is available at https://github.com/ecolab-nus/Chameleon.
dc.language.iso: en
dc.publisher: IEEE
dc.rights: Attribution-NonCommercial 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.type: Conference Paper
dc.contributor.department: COMPUTATIONAL SCIENCE
dc.description.doi: 10.23919/DATE56975.2023.10137046
dc.description.sourcetitle: 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE)
dc.published.state: Published
dc.grant.id: NRF-CRP23-2019-0003
dc.grant.id: 251RES1905
dc.grant.fundingagency: National Research Foundation, Singapore
dc.grant.fundingagency: Singapore Ministry of Education Academic Research Fund T1
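
Note: The abstract above describes a dual replay scheme, with a small short-term replay store kept in fast on-chip memory and a larger long-term replay store kept in off-chip memory, from which the learner draws replayed samples alongside the incoming stream. Below is a minimal, hypothetical Python sketch of that idea; the class names, buffer capacities, and the FIFO / reservoir-sampling policies are illustrative assumptions, not the paper's exact algorithm (see the linked GitHub repository for the authors' implementation).

# Hypothetical sketch of a dual replay buffer. Names, capacities, and the
# FIFO / reservoir policies are illustrative assumptions, not the paper's method.
import random
from collections import deque


class ShortTermStore:
    """Small FIFO buffer standing in for a fast on-chip replay store."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, sample):
        self.buffer.append(sample)

    def sample(self, k):
        k = min(k, len(self.buffer))
        return random.sample(list(self.buffer), k)


class LongTermStore:
    """Larger reservoir-sampled buffer standing in for off-chip replay storage."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total samples observed so far

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Reservoir sampling: each observed sample survives with
            # probability capacity / seen, keeping an unbiased history.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample(self, k):
        k = min(k, len(self.buffer))
        return random.sample(self.buffer, k)


class DualReplayBuffer:
    """Mixes recent context from the short-term store with older knowledge
    from the long-term store when building a replay batch."""

    def __init__(self, short_capacity=64, long_capacity=1024):
        self.short_term = ShortTermStore(short_capacity)
        self.long_term = LongTermStore(long_capacity)

    def add(self, sample):
        self.short_term.add(sample)
        self.long_term.add(sample)

    def replay_batch(self, k_short, k_long):
        return self.short_term.sample(k_short) + self.long_term.sample(k_long)


if __name__ == "__main__":
    buf = DualReplayBuffer(short_capacity=8, long_capacity=32)
    # Simulate a stream of (input, label) pairs arriving one at a time.
    for step in range(200):
        buf.add((f"x_{step}", step % 10))
    mixed = buf.replay_batch(k_short=4, k_long=8)
    print(f"replay batch of {len(mixed)} samples:", mixed[:3], "...")
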
Appears in Collections: Students Publications
Staff Publications
Elements

Files in This Item:
File: DATE_Chameleon_CameraReady.pdf
Size: 1.31 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None

This item is licensed under a Creative Commons License (Attribution-NonCommercial 4.0 International).