Please use this identifier to cite or link to this item: https://doi.org/10.3390/buildings9090204
DC Field / Value
dc.title: Building energy consumption raw data forecasting using data cleaning and deep recurrent neural networks
dc.contributor.author: Yang, J.
dc.contributor.author: Tan, K.K.
dc.contributor.author: Santamouris, M.
dc.contributor.author: Lee, S.E.
dc.date.accessioned: 2022-01-04T06:24:37Z
dc.date.available: 2022-01-04T06:24:37Z
dc.date.issued: 2019
dc.identifier.citation: Yang, J., Tan, K.K., Santamouris, M., Lee, S.E. (2019). Building energy consumption raw data forecasting using data cleaning and deep recurrent neural networks. Buildings 9 (9) : 204. ScholarBank@NUS Repository. https://doi.org/10.3390/buildings9090204
dc.identifier.issn: 2075-5309
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/212937
dc.description.abstract: With the rising focus on building energy big data analysis, a framework for preprocessing raw data is lacking, in particular one that answers the question of how to handle missing data in the raw data set. This study presents a methodology and framework for forecasting building energy consumption from raw data. A case building is used to forecast energy consumption using deep recurrent neural networks. Four different methodologies for imputing missing data in the raw data set are implemented and compared, and the sensitivity of imputation accuracy to gap size and the percentage of available data is tested. The cleaned data are then used for building energy forecasting. Existing studies have explored only small recurrent networks of two layers or fewer, so whether a deeper network would perform better for building energy consumption forecasting remains to be explored. In addition, overfitting has been cited as a significant problem when using deep networks. In this study, deep recurrent neural networks are therefore used to explore deeper architectures and their regularization in the context of an energy load forecasting task. The results show that a mean absolute error of 2.1 can be achieved with the 2*32 gated neural network model. In applying regularization methods to overcome model overfitting, the study found that weight regularization did indeed delay the onset of overfitting. © 2019 by the authors.
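The abstract mentions comparing four methodologies for imputing missing values in the raw data set, though it does not list them. As an illustrative sketch only (the paper's actual four methods are not specified in this record), two common gap-filling approaches for a univariate energy series, linear interpolation and mean fill, might look like this:

```python
import numpy as np

def impute_linear(series):
    """Fill NaN gaps by linear interpolation between observed neighbours."""
    s = series.copy()
    idx = np.arange(len(s))
    mask = np.isnan(s)
    s[mask] = np.interp(idx[mask], idx[~mask], s[~mask])
    return s

def impute_mean(series):
    """Fill NaN gaps with the mean of all observed values."""
    s = series.copy()
    s[np.isnan(s)] = np.nanmean(series)
    return s

# Hypothetical hourly energy readings (kWh) with a two-point gap.
raw = np.array([10.0, 12.0, np.nan, np.nan, 18.0, 20.0])
print(impute_linear(raw))  # gap filled as 14.0, 16.0
print(impute_mean(raw))    # gap filled with the observed mean, 15.0
```

The paper also tests how gap size and the percentage of available data affect imputation accuracy; with helpers like these, that test amounts to masking known values at varying gap lengths and measuring the error of each method against the held-out truth.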
dc.publisher: MDPI AG
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Scopus OA2019
dc.subject: Data imputation
dc.subject: Deep recurrent neural networks
dc.subject: Energy forecasting
dc.type: Article
dc.contributor.department: DEPT OF BUILDING
dc.description.doi: 10.3390/buildings9090204
dc.description.sourcetitle: Buildings
dc.description.volume: 9
dc.description.issue: 9
dc.description.page: 204
Appears in Collections: Staff Publications, Elements

Files in This Item:
File: 10_3390_buildings9090204.pdf
Size: 3.61 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None

SCOPUS™ Citations: 16 (checked on Dec 2, 2022)
Page view(s): 77 (checked on Dec 1, 2022)
Download(s): 1 (checked on Dec 1, 2022)


This item is licensed under a Creative Commons Attribution 4.0 International License.