Please use this identifier to cite or link to this item: https://doi.org/10.1145/3437963.3441800
Title: Denoising Implicit Feedback for Recommendation
Authors: Wenjie Wang
Fuli Feng 
Xiangnan He
Liqiang Nie
Tat-Seng Chua 
Keywords: adaptive denoising training
false-positive feedback
recommender system
Issue Date: 8-Mar-2021
Publisher: Association for Computing Machinery, Inc
Citation: Wenjie Wang, Fuli Feng, Xiangnan He, Liqiang Nie, Tat-Seng Chua (2021-03-08). Denoising Implicit Feedback for Recommendation. WSDM 2021 - Proceedings of the 14th ACM International Conference on Web Search and Data Mining : 373-381. ScholarBank@NUS Repository. https://doi.org/10.1145/3437963.3441800
Abstract: The ubiquity of implicit feedback makes it the default choice for building online recommender systems. While the large volume of implicit feedback alleviates the data sparsity issue, the downside is that it does not cleanly reflect the actual satisfaction of users. For example, in E-commerce, a large portion of clicks do not translate to purchases, and many purchases end up with negative reviews. As such, it is of critical importance to account for the inevitable noise in implicit feedback during recommender training. However, little work on recommendation has taken the noisy nature of implicit feedback into consideration. In this work, we explore the central theme of denoising implicit feedback for recommender training. We find serious negative impacts of noisy implicit feedback, i.e., fitting the noisy data hinders the recommender from learning the actual user preference. Our target is to identify and prune the noisy interactions, so as to improve the efficacy of recommender training. By observing the process of normal recommender training, we find that noisy feedback typically has large loss values in the early stages. Inspired by this observation, we propose a new training strategy named Adaptive Denoising Training (ADT), which adaptively prunes noisy interactions during training. Specifically, we devise two paradigms for adaptive loss formulation: Truncated Loss, which discards the large-loss samples with a dynamic threshold in each iteration; and Reweighted Loss, which adaptively lowers the weights of large-loss samples. We instantiate the two paradigms on the widely used binary cross-entropy loss and test the proposed ADT strategies on three representative recommenders. Extensive experiments on three benchmarks demonstrate that ADT significantly improves the quality of recommendation over normal training. © 2021 ACM.
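The two loss paradigms from the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the drop-rate value, the weighting exponent `beta`, and the choice of NumPy are assumptions for demonstration; the paper's dynamic drop-rate schedule and model-specific details are omitted.

```python
import numpy as np

def bce_loss(pred, label, eps=1e-8):
    # Element-wise binary cross-entropy over predicted scores in (0, 1).
    return -(label * np.log(pred + eps) + (1 - label) * np.log(1 - pred + eps))

def truncated_bce(pred, label, drop_rate):
    # Truncated Loss: discard the `drop_rate` fraction of samples with the
    # largest losses in this batch, i.e. apply a dynamic loss threshold.
    loss = bce_loss(pred, label)
    n_keep = max(1, int(len(loss) * (1.0 - drop_rate)))
    kept = np.sort(loss)[:n_keep]  # keep only the smallest-loss samples
    return kept.mean()

def reweighted_bce(pred, label, beta=0.25):
    # Reweighted Loss: softly down-weight large-loss samples instead of
    # dropping them. For a positive label, a small predicted score means a
    # large loss, so weighting positives by pred**beta (<= 1) suppresses
    # hard, likely-noisy positive interactions. `beta` is illustrative.
    loss = bce_loss(pred, label)
    weight = np.where(label == 1, pred ** beta, 1.0)
    return (weight * loss).mean()
```

With `drop_rate=0` the truncated variant reduces to the plain mean BCE; increasing the drop rate removes the highest-loss (suspected false-positive) samples from the batch average.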
Source Title: WSDM 2021 - Proceedings of the 14th ACM International Conference on Web Search and Data Mining
URI: https://scholarbank.nus.edu.sg/handle/10635/190981
ISBN: 9781450382977
DOI: 10.1145/3437963.3441800
Appears in Collections:Staff Publications

Files in This Item:
File: Denoising Implicit Feedback for Recommendation.pdf (6.51 MB, Adobe PDF); Access: Closed


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.