Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/86005
Title: Learning multiple models via regularized weighting
Authors: Vainsencher, D.
Mannor, S.
Xu, H. 
Issue Date: 2013
Source: Vainsencher, D., Mannor, S., Xu, H. (2013). Learning multiple models via regularized weighting. Advances in Neural Information Processing Systems. ScholarBank@NUS Repository.
Abstract: We consider the general problem of Multiple Model Learning (MML) from data, from the statistical and algorithmic perspectives; this problem includes clustering, multiple regression and subspace clustering as special cases. A common approach to solving new MML problems is to generalize Lloyd's algorithm for clustering (or Expectation-Maximization for soft clustering). However, this approach is sensitive to outliers and large noise: a single exceptional point may take over one of the models. We propose a different general formulation: for each model we seek a distribution over data points, with the weights regularized to be sufficiently spread out. This enhances robustness by making assumptions on class balance. We further provide generalization bounds and explain how the new iterations may be computed efficiently. We demonstrate the robustness benefits of our approach with experimental results, and prove for the important case of clustering that our approach has a non-trivial breakdown point, i.e., it is guaranteed to be robust to a fixed percentage of adversarial unbounded outliers.
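The core idea in the abstract — each model holds a distribution over data points whose weights are forced to spread out, so no single point can capture a model — can be illustrated with a small sketch. This is not the paper's exact formulation or regularizer: as an assumption for illustration, the spreading constraint is approximated here by a hard per-point cap on each model's weights (so every center must average over at least 1/cap points), alternated with weighted mean updates in a Lloyd-style loop. The names `capped_weights` and `robust_kmeans` are invented for this sketch.

```python
import numpy as np

def capped_weights(dists, cap):
    # Minimize <w, dists> over the probability simplex subject to w_i <= cap:
    # greedily place mass `cap` on the closest points until total mass is 1.
    # The cap forces each model's weight to spread over at least 1/cap points.
    n = len(dists)
    assert cap * n >= 1.0, "cap too small to hold total mass 1"
    w = np.zeros(n)
    remaining = 1.0
    for i in np.argsort(dists):
        m = min(cap, remaining)
        w[i] = m
        remaining -= m
        if remaining <= 0:
            break
    return w

def robust_kmeans(X, k, cap, iters=50, seed=0):
    # Lloyd-style alternation: weights given centers, then weighted-mean centers.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # One capped weight distribution per model (rows sum to 1).
        W = np.stack([capped_weights(((X - c) ** 2).sum(axis=1), cap)
                      for c in centers])
        centers = W @ X  # weighted means under each model's distribution
    return centers
```

Because a single point can receive at most `cap` of a model's mass, an unbounded outlier cannot take over a center on its own, which is the robustness behavior the abstract describes for the regularized formulation.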
Source Title: Advances in Neural Information Processing Systems
URI: http://scholarbank.nus.edu.sg/handle/10635/86005
ISSN: 10495258
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

