Please use this identifier to cite or link to this item:
Title: An adaptive peer-to-peer network for distributed caching of OLAP results

Citation: Kalnis, P., Ng, W.S., Ooi, B.C., Papadias, D., Tan, K.-L. (2002). An adaptive peer-to-peer network for distributed caching of OLAP results. Proceedings of the ACM SIGMOD International Conference on Management of Data: 25-36. ScholarBank@NUS Repository.

Abstract: Peer-to-Peer (P2P) systems are becoming increasingly popular as they enable users to exchange digital information by participating in complex networks. Such systems are inexpensive, easy to use, highly scalable and do not require central administration. Despite their advantages, however, limited work has been done on employing database systems on top of P2P networks. Here we propose the PeerOLAP architecture for supporting On-Line Analytical Processing queries. A large number of low-end clients, each containing a cache with the most useful results, are connected through an arbitrary P2P network. If a query cannot be answered locally (i.e., by using the cache contents of the computer where it is issued), it is propagated through the network until a peer that has cached the answer is found. An answer may also be constructed from partial results held by many peers. Thus PeerOLAP acts as a large distributed cache, which amplifies the benefits of traditional client-side caching. The system is fully distributed and can reconfigure itself on the fly in order to decrease the query cost for the observed workload. This paper describes the core components of PeerOLAP and presents our results both from simulation and from a prototype installation running on geographically remote peers.

Source Title: Proceedings of the ACM SIGMOD International Conference on Management of Data

Appears in Collections: Staff Publications
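The abstract's lookup scheme (answer from the local cache if possible, otherwise propagate the query through the P2P network, possibly assembling the answer from partial results at several peers) can be sketched as follows. This is a minimal illustration assuming a simple TTL-bounded flood; the `Peer` class, method names, and eviction-free cache are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of PeerOLAP-style query resolution.
# Each "fragment" stands in for a cacheable piece of an OLAP result.
class Peer:
    def __init__(self, name):
        self.name = name
        self.cache = {}      # fragment id -> cached result (illustrative)
        self.neighbors = []  # arbitrary P2P connections

    def lookup(self, fragments, ttl=3, seen=None):
        """Answer each fragment from the local cache if possible; forward
        the remaining misses to neighbors until the TTL expires."""
        seen = seen if seen is not None else set()
        seen.add(self.name)
        # Local hits first (client-side caching).
        answer = {f: self.cache[f] for f in fragments if f in self.cache}
        missing = [f for f in fragments if f not in answer]
        if missing and ttl > 0:
            for peer in self.neighbors:
                if peer.name in seen:
                    continue  # avoid revisiting peers on cyclic topologies
                # Partial results from different peers are merged.
                answer.update(peer.lookup(missing, ttl - 1, seen))
                missing = [f for f in fragments if f not in answer]
                if not missing:
                    break
        return answer
```

For example, if peer A has an empty cache and a neighbor B caching fragment `"sales_2001"`, then `A.lookup(["sales_2001"])` is answered by B through one hop of propagation. A real deployment would also need cache replacement and the adaptive reconfiguration of neighbor links that the paper describes.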
Files in This Item:
There are no files associated with this item.