Please use this identifier to cite or link to this item:
https://doi.org/10.1145/3448016.3457546
DC Field | Value
---|---
dc.title | Efficient Deep Learning Pipelines for Accurate Cost Estimations over Large Scale Query Workload
dc.contributor.author | Zhi Kang, JK
dc.contributor.author | Gaurav
dc.contributor.author | Tan, SY
dc.contributor.author | Cheng, F
dc.contributor.author | Sun, S
dc.contributor.author | He, B
dc.date.accessioned | 2022-02-15T03:59:14Z
dc.date.available | 2022-02-15T03:59:14Z
dc.date.issued | 2021-01-01
dc.identifier.citation | Zhi Kang, JK, Gaurav, Tan, SY, Cheng, F, Sun, S, He, B (2021-01-01). Efficient Deep Learning Pipelines for Accurate Cost Estimations over Large Scale Query Workload. Proceedings of the ACM SIGMOD International Conference on Management of Data abs/2103.12465 : 1014-1022. ScholarBank@NUS Repository. https://doi.org/10.1145/3448016.3457546
dc.identifier.issn | 0730-8078
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/215369
dc.description.abstract | The use of deep learning models for forecasting the resource consumption patterns of SQL queries has recently become a popular area of study. While these models have demonstrated promising accuracy, training them over large-scale industry workloads is expensive. Space inefficiencies of encoding techniques over large numbers of queries, and the excessive padding used to enforce shape consistency across diverse query plans, imply 1) longer model training times and 2) the need for expensive, scaled-up infrastructure to support batched training. We therefore developed Prestroid, a tree-convolution-based data science pipeline that accurately predicts the resource consumption patterns of query traces, but at a much lower cost. We evaluated our pipeline over 19K Presto OLAP queries from Grab, on a data lake of more than 20PB of data. Experimental results show that our pipeline outperforms benchmarks on predictive accuracy, contributing to more precise resource prediction for large-scale workloads, while also reducing per-batch memory footprint by 13.5x and per-epoch training time by 3.45x. We demonstrate direct cost savings of up to 13.2x for large batched model training over Microsoft Azure VMs.
dc.publisher | ACM
dc.source | Elements
dc.subject | cs.LG
dc.subject | cs.DB
dc.type | Article
dc.date.updated | 2022-02-14T23:40:14Z
dc.contributor.department | DEAN'S OFFICE (SCHOOL OF COMPUTING)
dc.contributor.department | DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi | 10.1145/3448016.3457546
dc.description.sourcetitle | Proceedings of the ACM SIGMOD International Conference on Management of Data
dc.description.volume | abs/2103.12465
dc.description.page | 1014-1022
dc.published.state | Published
Appears in Collections: Staff Publications Elements
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
2103.12465v1.pdf | | 1.56 MB | Adobe PDF | OPEN | Post-print
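The abstract attributes much of Prestroid's memory savings to avoiding the excessive padding used to enforce shape consistency across diverse query plans. As a minimal illustration of that effect (using hypothetical plan sizes and feature dimensions, not figures from the paper), the following sketch compares the memory of a batch where every plan encoding is padded to the largest plan against storing each plan at its true size:

```python
def padded_batch_cells(plan_sizes, feat_dim):
    # Pad every plan to the largest plan in the batch:
    # memory scales with max(plan_sizes), not actual plan sizes.
    return len(plan_sizes) * max(plan_sizes) * feat_dim

def compact_batch_cells(plan_sizes, feat_dim):
    # Store each plan's node encodings at its true length.
    return sum(plan_sizes) * feat_dim

# Hypothetical batch: most query plans are small, one is large.
sizes = [8, 10, 12, 9, 11, 200]
feat_dim = 64

padded = padded_batch_cells(sizes, feat_dim)
compact = compact_batch_cells(sizes, feat_dim)
print(f"padding overhead: {padded / compact:.1f}x")  # → 4.8x
```

A single outlier plan inflates every row in the padded batch, which is why diverse query-plan shapes make padded batched training disproportionately expensive.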