Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.jpdc.2008.05.007
DC Field | Value
dc.title | Analysis and reduction of data spikes in thin client computing
dc.contributor.author | Sun, Y.
dc.contributor.author | Tay, T.T.
dc.date.accessioned | 2014-06-17T02:38:45Z
dc.date.available | 2014-06-17T02:38:45Z
dc.date.issued | 2008-11
dc.identifier.citation | Sun, Y., Tay, T.T. (2008-11). Analysis and reduction of data spikes in thin client computing. Journal of Parallel and Distributed Computing 68 (11) : 1463-1472. ScholarBank@NUS Repository. https://doi.org/10.1016/j.jpdc.2008.05.007
dc.identifier.issn | 07437315
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/55063
dc.description.abstract | While various optimization techniques have been used in existing thin client systems to reduce network traffic, the screen updates triggered by many user operations still result in long interactive latencies in many contemporary network environments. Long interactive latencies have an unfavorable effect on users' perception of graphical interfaces and visual content. The long latencies arise when data spikes must be transferred over a network while the available bandwidth is limited. These data spikes consist of a large amount of screen update data produced in a very short time. In this paper, we propose a model to analyze the packet-level redundancy in screen update streams caused by the repainting of graphical objects. Using this model, we analyzed the data spikes in screen update streams. Based on the analysis results, we designed a hybrid cache-compression scheme. This scheme caches the screen updates in data spikes on both the server and client sides, and uses the cached data as history to better compress recurrent screen updates in potential data spikes. We empirically studied the effectiveness of our cache scheme on screen updates generated by one of the most bandwidth-efficient thin client systems, Microsoft Terminal Service. The experimental results showed that, with a 2 MB cache, this scheme reduces the data spike count by 26.7%-42.2% and network traffic by 9.9%-21.2% for the tested data, and reduces noticeable long latencies by 25.8%-38.5% for different types of applications. The scheme incurs only a small amount of additional computation time, and the cache size can be negotiated between the client and server. © 2008 Elsevier Inc. All rights reserved.
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1016/j.jpdc.2008.05.007
dc.source | Scopus
dc.subject | Cache scheme
dc.subject | Data redundancy analysis
dc.subject | Data spike
dc.subject | Thin client computing
dc.type | Article
dc.contributor.department | ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi | 10.1016/j.jpdc.2008.05.007
dc.description.sourcetitle | Journal of Parallel and Distributed Computing
dc.description.volume | 68
dc.description.issue | 11
dc.description.page | 1463-1472
dc.description.coden | JPDCE
dc.identifier.isiut | 000260095000006
Appears in Collections: Staff Publications
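
The abstract above describes the hybrid cache-compression idea only at a high level: screen updates belonging to data spikes are cached on both the server and the client, and a recurring spike is compressed against that cached history. The Python sketch below illustrates one way such history-based compression could be wired up, using zlib preset dictionaries; the class name SpikeCache, the SHA-1 keying, the FIFO eviction, and the 2 MB default are illustrative assumptions only and do not reproduce the authors' actual protocol or cache-consistency mechanism.

import hashlib
import zlib

ZDICT_LIMIT = 32 * 1024  # zlib preset dictionaries use at most 32 KB of history


class SpikeCache:
    # Hypothetical cache of recently seen spike payloads, keyed by a content
    # hash. The same structure is assumed to exist on both server and client,
    # kept aligned by inserting every payload that is sent or received.
    def __init__(self, max_bytes=2 * 1024 * 1024):  # 2 MB, as in the paper's tests
        self.max_bytes = max_bytes
        self.entries = {}    # key -> raw screen-update payload
        self.order = []      # insertion order, for simple FIFO eviction
        self.size = 0

    def put(self, payload):
        key = hashlib.sha1(payload).hexdigest()
        if key in self.entries:
            return key
        # Evict oldest entries until the new payload fits.
        while self.order and self.size + len(payload) > self.max_bytes:
            oldest = self.order.pop(0)
            self.size -= len(self.entries.pop(oldest))
        self.entries[key] = payload
        self.order.append(key)
        self.size += len(payload)
        return key

    def get(self, key):
        return self.entries.get(key)


def compress_update(update, history=None):
    # Compress a screen update; if cached history is available, use its tail
    # as a zlib preset dictionary so recurring content compresses well.
    c = zlib.compressobj(zdict=history[-ZDICT_LIMIT:]) if history else zlib.compressobj()
    return c.compress(update) + c.flush()


def decompress_update(blob, history=None):
    # The client must supply the same history (dictionary) the server used.
    d = zlib.decompressobj(zdict=history[-ZDICT_LIMIT:]) if history else zlib.decompressobj()
    return d.decompress(blob) + d.flush()

In a scheme of this kind, server and client would each call put() on every spike payload they send or receive, so both ends hold the same history and the server only needs to transmit a short dictionary key alongside the compressed update.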


