Please use this identifier to cite or link to this item: https://doi.org/10.1007/s11241-008-9062-5
dc.title: Cache-aware timing analysis of streaming applications
dc.contributor.author: Chakraborty, S.
dc.contributor.author: Mitra, T.
dc.contributor.author: Roychoudhury, A.
dc.contributor.author: Thiele, L.
dc.date.accessioned: 2013-07-04T07:42:11Z
dc.date.available: 2013-07-04T07:42:11Z
dc.date.issued: 2009
dc.identifier.citation: Chakraborty, S., Mitra, T., Roychoudhury, A., Thiele, L. (2009). Cache-aware timing analysis of streaming applications. Real-Time Systems 41 (1): 52-85. ScholarBank@NUS Repository. https://doi.org/10.1007/s11241-008-9062-5
dc.identifier.issn: 09226443
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/39464
dc.description.abstract: Of late, there has been considerable interest in models, algorithms and methodologies specifically targeted at designing hardware and software for streaming applications. Such applications process potentially infinite streams of audio/video data or network packets and are found in a wide range of devices, from mobile phones to set-top boxes. Given a streaming application and an architecture, the timing analysis problem is to determine the timing properties of the processed data stream, given the timing properties of the input stream. This problem arises when determining many common performance metrics related to streaming applications and the mapping of such applications onto hardware architectures. Such metrics include the maximum delay experienced by any data item of the stream and the maximum backlog, i.e., the buffer space required to store the incoming stream. Most previous work on estimating or optimizing these metrics takes a high-level view of the architecture and neglects micro-architectural features such as caches. In this paper, we show that an accurate estimation of these metrics heavily relies on an appropriate modeling of the processor micro-architecture. To this end, we present a novel framework for cache-aware timing analysis of stream processing applications. Our framework accurately models the evolution of the instruction cache of the underlying processor as a stream is processed, and the fact that the execution time involved in processing any data item depends on all the previous data items in the stream. The main contribution of our method lies in its ability to seamlessly integrate program analysis techniques for micro-architectural modeling with known analytical methods for analyzing streaming applications, which treat the arrival/service of event streams as mathematical functions. This combination is powerful because it models both the code/cache behavior of the streaming application and the manner in which it is triggered by event arrivals. We apply our analysis method to an MPEG-2 encoder application, and our experiments indicate that detailed modeling of the cache behavior is efficient, scalable and leads to more accurate timing/buffer size estimates. © 2008 Springer Science+Business Media, LLC.
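The delay and backlog metrics in the abstract come from analytical stream models that treat arrivals and service as mathematical functions (arrival/service curves in the style of Real-Time Calculus). As an illustrative sketch only, not the paper's implementation, and assuming discrete time with simple staircase curves: the worst-case backlog is the largest vertical gap between the upper arrival curve and the lower service curve, and the worst-case delay is the largest horizontal gap.

```python
# Hedged sketch of delay/backlog bounds from arrival/service curves
# (Real-Time Calculus style). All curve shapes below are made-up
# examples, not taken from the paper.
# alpha(d): max number of events that can arrive in any window of length d.
# beta(d):  min number of events the processor is guaranteed to serve
#           in any window of length d.

def backlog_bound(alpha, beta, horizon):
    """Max backlog = largest vertical gap between the curves."""
    return max(alpha(d) - beta(d) for d in range(horizon + 1))

def delay_bound(alpha, beta, horizon):
    """Max delay = largest horizontal gap: for each window length d,
    the smallest shift t such that alpha(d) <= beta(d + t)."""
    worst = 0
    for d in range(horizon + 1):
        t = 0
        while beta(d + t) < alpha(d):  # terminates: beta grows unboundedly
            t += 1
        worst = max(worst, t)
    return worst

# Example curves: bursts of 2 events, then one event per 5 time units;
# service of one event per 2 time units after an initial latency of 3.
alpha = lambda d: 0 if d == 0 else 2 + d // 5
beta = lambda d: max(0, (d - 3) // 2)

print(backlog_bound(alpha, beta, 50), delay_bound(alpha, beta, 50))
# prints: 2 6
```

The paper's contribution is to make the service side of such an analysis cache-aware, so that the effective service depends on the instruction-cache state left behind by previously processed data items rather than on a single context-independent execution time.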
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/s11241-008-9062-5
dc.source: Scopus
dc.subject: Instruction cache
dc.subject: Streaming applications
dc.subject: Timing analysis
dc.type: Article
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1007/s11241-008-9062-5
dc.description.sourcetitle: Real-Time Systems
dc.description.volume: 41
dc.description.issue: 1
dc.description.page: 52-85
dc.description.coden: RESYE
dc.identifier.isiut: 000261953900003
Appears in Collections: Staff Publications

