Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/189401
DC Field | Value
dc.title | TempDiff: Temporal Difference-Based Feature Map-Level Sparsity Induction in CNNs with <4% Memory Overhead
dc.contributor.author | WATHUTHANTHRIGE UDARI CHARITHA DE ALWIS
dc.contributor.author | ALIOTO, MASSIMO BRUNO
dc.date.accessioned | 2021-04-16T00:34:16Z
dc.date.available | 2021-04-16T00:34:16Z
dc.date.issued | 2021-04-15
dc.identifier.citation | WATHUTHANTHRIGE UDARI CHARITHA DE ALWIS, ALIOTO, MASSIMO BRUNO (2021-04-15). TempDiff: Temporal Difference-Based Feature Map-Level Sparsity Induction in CNNs with <4% Memory Overhead. IEEE. ScholarBank@NUS Repository.
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/189401
dc.description.abstract | The diffusion of vision sensor nodes across a wide range of applications has raised the computational demand at the edge of the Internet of Things (IoT). Indeed, in-node video sense-making has become essential in the form of high-level tasks such as object detection for visual monitoring, mitigating the data deluge from the wireless network down to the cloud storage level. In such applications, deep neural networks are well known to be a prime choice in view of their performance and flexibility. However, these properties come at the cost of high computational requirements at inference time, which directly hamper the power efficiency, lifetime and cost of self-powered edge devices. In this paper, a computationally efficient inference technique is introduced to perform the ubiquitously required task of bounding box-based object detection. The proposed method leverages the correlation among frames in the temporal dimension, requires only a minor memory overhead for intermediate feature map storage and minor architectural changes, and does not require any retraining for immediate deployment in existing vision frameworks. The proposed method achieves an 18.3% (35.8%) computation reduction at a 3.3% (3.2%) memory overhead and a 3.8% (6.8%) accuracy drop in the YOLOv1 (VGG16) and SSD (VGG16) neural networks on the CAMEL dataset.
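The core mechanism described in the abstract, recomputing only the feature maps that changed appreciably between consecutive frames and reusing cached outputs for the rest, can be sketched as follows. This is a minimal illustration under assumed details: the per-map mean-absolute-difference criterion, the fixed threshold, and the helper names `tempdiff_gate` and `tempdiff_layer` are all hypothetical, not the paper's actual implementation.

```python
import numpy as np

def tempdiff_gate(fmap_t, fmap_prev, threshold):
    """Boolean mask over channels: True means the feature map changed
    enough between frames t-1 and t and must be recomputed.
    fmap_* shape: (channels, H, W)."""
    # Mean absolute temporal difference, aggregated per feature map
    diff = np.abs(fmap_t - fmap_prev).mean(axis=(1, 2))
    return diff > threshold

def tempdiff_layer(fmap_t, fmap_prev, out_prev, compute_fn, threshold=0.05):
    """Apply compute_fn only to the changed feature maps; reuse the cached
    outputs (out_prev) for the unchanged ones.
    compute_fn maps one (H, W) map to one (H, W) map."""
    recompute = tempdiff_gate(fmap_t, fmap_prev, threshold)
    out = out_prev.copy()
    for c in np.flatnonzero(recompute):
        out[c] = compute_fn(fmap_t[c])
    return out, recompute
```

Only the previous frame's feature maps and layer outputs need to be cached, which is where the small memory overhead reported in the abstract would come from; computation savings then scale with the fraction of maps whose temporal difference falls below the threshold.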
dc.publisher | IEEE
dc.rights | CC0 1.0 Universal
dc.rights.uri | http://creativecommons.org/publicdomain/zero/1.0/
dc.subject | Object detection, deep neural networks, computational efficiency, Internet of Things, inference
dc.type | Article
dc.contributor.department | ELECTRICAL AND COMPUTER ENGINEERING
dc.description.sourcetitle | IEEE
dc.published.state | Published
Appears in Collections: Elements, Staff Publications

Files in This Item:
File | Description | Size | Format | Access Settings | Version
TEMPDI~1.PDF | | 607.91 kB | Adobe PDF | OPEN | Post-print
