Please use this identifier to cite or link to this item:
https://doi.org/10.23919/DATE56975.2023.10137046
DC Field | Value
---|---
dc.title | Chameleon: Dual Memory Replay for Online Continual Learning on Edge Devices
dc.contributor.author | Shivam Aggarwal
dc.contributor.author | Kuluhan Binici
dc.contributor.author | Tulika Mitra
dc.date.accessioned | 2023-06-08T06:12:20Z
dc.date.available | 2023-06-08T06:12:20Z
dc.date.issued | 2023-06-02
dc.identifier.citation | Shivam Aggarwal, Kuluhan Binici, Tulika Mitra (2023-06-02). Chameleon: Dual Memory Replay for Online Continual Learning on Edge Devices. 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE). ScholarBank@NUS Repository. https://doi.org/10.23919/DATE56975.2023.10137046
dc.identifier.isbn | 979-8-3503-9624-9
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/241726
dc.description.abstract | Once deployed on edge devices, a deep neural network model should dynamically adapt to newly discovered environments and personalize its utility for each user. The system must be capable of continual learning, i.e., learning new information from a temporal stream of data in situ without forgetting previously acquired knowledge. However, the prohibitive intricacies of such a personalized continual learning framework stand at odds with the limited compute and storage on edge devices. Existing continual learning methods rely on massive memory storage to preserve past data while learning from the incoming data stream. We propose Chameleon, a hardware-friendly continual learning framework for user-centric training with dual replay buffers. The proposed strategy leverages the hierarchical memory structure available on most edge devices, introducing a short-term replay store in the on-chip memory and a long-term replay store in the off-chip memory to acquire new information while retaining past knowledge. Extensive experiments on two large-scale continual learning benchmarks demonstrate the efficacy of our proposed method, which achieves accuracy better than or comparable to existing state-of-the-art techniques while reducing the memory footprint by roughly 16×. Our method achieves up to 7× speedup and energy efficiency gains on edge devices such as the ZCU102 FPGA, NVIDIA Jetson Nano, and Google's EdgeTPU. Our code is available at https://github.com/ecolab-nus/Chameleon
dc.language.iso | en
dc.publisher | IEEE
dc.rights | Attribution-NonCommercial 4.0 International
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/
dc.type | Conference Paper
dc.contributor.department | COMPUTATIONAL SCIENCE
dc.description.doi | 10.23919/DATE56975.2023.10137046
dc.description.sourcetitle | 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE)
dc.published.state | Published
dc.grant.id | NRF-CRP23-2019-0003
dc.grant.id | 251RES1905
dc.grant.fundingagency | National Research Foundation, Singapore
dc.grant.fundingagency | Singapore Ministry of Education Academic Research Fund T1
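The dual replay store described in the abstract (a small short-term buffer in fast on-chip memory, a larger long-term buffer in off-chip memory) can be illustrated with a minimal Python sketch. This is not the paper's implementation: the class name, parameter names, FIFO eviction for the short-term store, and reservoir sampling for the long-term store are all assumptions made for illustration; Chameleon's actual buffer-management policy may differ.

```python
import random

class DualReplayBuffer:
    """Illustrative dual-buffer replay (hypothetical, not the paper's code).

    A small FIFO buffer models the short-term on-chip store; samples
    evicted from it flow into a larger long-term store that keeps a
    uniform subset of the stream via reservoir sampling.
    """

    def __init__(self, short_capacity, long_capacity, seed=0):
        self.short = []                     # short-term FIFO store (on-chip)
        self.long = []                      # long-term reservoir store (off-chip)
        self.short_capacity = short_capacity
        self.long_capacity = long_capacity
        self.long_seen = 0                  # samples offered to the reservoir
        self.rng = random.Random(seed)

    def add(self, sample):
        # Every incoming stream sample first enters the short-term store;
        # the oldest sample spills into the long-term store when full.
        self.short.append(sample)
        if len(self.short) > self.short_capacity:
            self._reservoir_insert(self.short.pop(0))

    def _reservoir_insert(self, sample):
        # Standard reservoir sampling: keep each offered sample with
        # probability long_capacity / long_seen.
        self.long_seen += 1
        if len(self.long) < self.long_capacity:
            self.long.append(sample)
        else:
            j = self.rng.randrange(self.long_seen)
            if j < self.long_capacity:
                self.long[j] = sample

    def replay_batch(self, k_short, k_long):
        # Mix recent (short-term) and historical (long-term) samples
        # for one rehearsal step alongside the new data.
        batch = self.rng.sample(self.short, min(k_short, len(self.short)))
        batch += self.rng.sample(self.long, min(k_long, len(self.long)))
        return batch
```

Under these assumptions, the short-term store always holds the most recent samples, while the long-term store approximates a uniform sample of everything seen before, which is one common way rehearsal methods retain old-task knowledge within a fixed memory budget.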
Appears in Collections: Students Publications, Staff Publications, Elements
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
DATE_Chameleon_CameraReady.pdf | | 1.31 MB | Adobe PDF | OPEN | None
This item is licensed under a Creative Commons License