Title: Optimizing F-measures: A tale of two approaches
Authors: Ye, N.; Chai, K.M.A.; Lee, W.S.; Chieu, H.L.
Source: Ye, N., Chai, K.M.A., Lee, W.S., Chieu, H.L. (2012). Optimizing F-measures: A tale of two approaches. Proceedings of the 29th International Conference on Machine Learning, ICML 2012, 1: 289-296. ScholarBank@NUS Repository.
Abstract: F-measures are popular performance metrics, particularly for tasks with imbalanced data sets. Algorithms for learning to maximize F-measures follow two approaches: the empirical utility maximization (EUM) approach learns a classifier having optimal performance on training data, while the decision-theoretic approach learns a probabilistic model and then predicts labels with maximum expected F-measure. In this paper, we investigate the theoretical justifications and connections for these two approaches, and we study the conditions under which one approach is preferable to the other using synthetic and real datasets. Our results suggest that, given accurate models, the two approaches are asymptotically equivalent as training and test set sizes grow. Empirically, however, the EUM approach appears to be more robust against model misspecification, while, given a good model, the decision-theoretic approach appears to be better at handling rare classes and a common domain adaptation scenario.
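The abstract contrasts two prediction strategies. A minimal sketch of the decision-theoretic side, not the paper's own algorithm: given per-instance positive-class probabilities from a fitted model (assumed independent Bernoulli labels), choose the prediction that maximizes the exact expected F1, computed here by brute-force enumeration over all label configurations (feasible only for small test sets; the function and variable names are illustrative).

```python
import itertools

def f1(pred, labels):
    """F1 of a binary prediction vs. true labels; defined as 1.0 when both are empty."""
    tp = sum(p and y for p, y in zip(pred, labels))
    denom = sum(pred) + sum(labels)
    return 1.0 if denom == 0 else 2.0 * tp / denom

def expected_f1(pred, probs):
    """Exact E[F1] under independent Bernoulli(probs) labels, by summing over 2^n outcomes."""
    total = 0.0
    for labels in itertools.product([0, 1], repeat=len(probs)):
        weight = 1.0
        for p, y in zip(probs, labels):
            weight *= p if y else 1.0 - p
        total += weight * f1(pred, labels)
    return total

def decision_theoretic_predict(probs):
    """Search over top-k predictions (k = 0..n, ranked by probability) for maximum E[F1]."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    best_pred, best_ef1 = [0] * len(probs), expected_f1([0] * len(probs), probs)
    for k in range(1, len(probs) + 1):
        pred = [0] * len(probs)
        for i in order[:k]:
            pred[i] = 1
        ef1 = expected_f1(pred, probs)
        if ef1 > best_ef1:
            best_pred, best_ef1 = pred, ef1
    return best_pred, best_ef1

pred, ef1 = decision_theoretic_predict([0.9, 0.6, 0.3, 0.1])
print(pred, round(ef1, 4))
```

By contrast, an EUM-style method would tune a single decision threshold (or classifier) to maximize empirical F1 on the training data directly, with no probabilistic model of the labels at test time.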
Source Title: Proceedings of the 29th International Conference on Machine Learning, ICML 2012
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.