Please use this identifier to cite or link to this item: https://doi.org/10.1145/3448016.3457546
DC Field: Value
dc.title: Efficient Deep Learning Pipelines for Accurate Cost Estimations over Large Scale Query Workload
dc.contributor.author: Zhi Kang, JK
dc.contributor.author: Gaurav
dc.contributor.author: Tan, SY
dc.contributor.author: Cheng, F
dc.contributor.author: Sun, S
dc.contributor.author: He, B
dc.date.accessioned: 2022-02-15T03:59:14Z
dc.date.available: 2022-02-15T03:59:14Z
dc.date.issued: 2021-01-01
dc.identifier.citation: Zhi Kang, JK, Gaurav, Tan, SY, Cheng, F, Sun, S, He, B (2021-01-01). Efficient Deep Learning Pipelines for Accurate Cost Estimations over Large Scale Query Workload. Proceedings of the ACM SIGMOD International Conference on Management of Data abs/2103.12465: 1014-1022. ScholarBank@NUS Repository. https://doi.org/10.1145/3448016.3457546
dc.identifier.issn: 0730-8078
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/215369
dc.description.abstract: The use of deep learning models for forecasting the resource consumption patterns of SQL queries has recently been a popular area of study. While these models have demonstrated promising accuracy, training them over large-scale industry workloads is expensive. The space inefficiency of encoding techniques over large numbers of queries, and the excessive padding used to enforce shape consistency across diverse query plans, imply 1) longer model training time and 2) the need for expensive, scaled-up infrastructure to support batched training. In response, we developed Prestroid, a tree-convolution-based data science pipeline that accurately predicts the resource consumption patterns of query traces, but at a much lower cost. We evaluated our pipeline over 19K Presto OLAP queries, on a data lake of more than 20PB of data from Grab. Experimental results show that our pipeline outperforms benchmarks on predictive accuracy, contributing to more precise resource prediction for large-scale workloads, while also reducing per-batch memory footprint by 13.5x and per-epoch training time by 3.45x. We demonstrate direct cost savings of up to 13.2x for large batched model training over Microsoft Azure VMs.
dc.publisher: ACM
dc.source: Elements
dc.subject: cs.LG
dc.subject: cs.DB
dc.type: Article
dc.date.updated: 2022-02-14T23:40:14Z
dc.contributor.department: DEAN'S OFFICE (SCHOOL OF COMPUTING)
dc.contributor.department: DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi: 10.1145/3448016.3457546
dc.description.sourcetitle: Proceedings of the ACM SIGMOD International Conference on Management of Data
dc.description.volume: abs/2103.12465
dc.description.page: 1014-1022
dc.published.state: Published
Appears in Collections: Staff Publications; Elements
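The abstract above describes a tree-convolution model over query-plan trees. As a purely illustrative sketch of that general technique, not the authors' Prestroid implementation, a minimal binary tree convolution over a toy plan might look like the following; the node features, weight matrices, and cost head are all invented for demonstration:

```python
# Illustrative sketch only: one tree-convolution filter applied over a toy
# query-plan tree, followed by max-pooling and a scalar cost head. All
# shapes, weights, and the plan itself are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
FEAT, HIDDEN = 4, 8

# One weight matrix each for the node itself, its left child, and its
# right child (a common layout for binary tree convolution).
W_self = rng.normal(size=(FEAT, HIDDEN))
W_left = rng.normal(size=(FEAT, HIDDEN))
W_right = rng.normal(size=(FEAT, HIDDEN))


class PlanNode:
    def __init__(self, feat, left=None, right=None):
        self.feat = np.asarray(feat, dtype=float)
        self.left, self.right = left, right


def tree_conv(node):
    """Apply the filter at every node, then max-pool over the subtree."""
    zero = np.zeros(FEAT)
    l = node.left.feat if node.left else zero
    r = node.right.feat if node.right else zero
    h = np.tanh(node.feat @ W_self + l @ W_left + r @ W_right)
    pooled = [h]
    for child in (node.left, node.right):
        if child is not None:
            pooled.append(tree_conv(child))
    return np.max(pooled, axis=0)


# Toy plan: a join node over two scan nodes, each a 4-dim feature vector.
plan = PlanNode([1, 0, 0, 0],
                PlanNode([0, 1, 0, 0]),
                PlanNode([0, 0, 1, 0]))

w_out = rng.normal(size=HIDDEN)
cost_estimate = float(tree_conv(plan) @ w_out)  # scalar resource estimate
print(cost_estimate)
```

Because the filter consumes each node together with its children directly, variable-shaped plan trees need no padding to a common shape, which is the kind of saving the abstract attributes to the tree-convolution approach.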

Files in This Item:
File: 2103.12465v1.pdf
Description: Post-print
Size: 1.56 MB
Format: Adobe PDF
Access Settings: Open

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.