Title: Learning multiple models via regularized weighting

Citation: Vainsencher, D., Mannor, S., & Xu, H. (2013). Learning multiple models via regularized weighting. Advances in Neural Information Processing Systems. ScholarBank@NUS Repository.

Abstract: We consider the general problem of Multiple Model Learning (MML) from data, from the statistical and algorithmic perspectives; this problem includes clustering, multiple regression, and subspace clustering as special cases. A common approach to new MML problems is to generalize Lloyd's algorithm for clustering (or Expectation-Maximization for soft clustering). However, this approach is sensitive to outliers and large noise: a single exceptional point may take over one of the models. We propose a different general formulation that seeks, for each model, a distribution over data points; the weights are regularized to be sufficiently spread out. This enhances robustness by making assumptions on class balance. We further provide generalization bounds and explain how the new iterations may be computed efficiently. We demonstrate the robustness benefits of our approach with experimental results, and prove, for the important case of clustering, that our approach has a non-trivial breakdown point, i.e., it is guaranteed to be robust to a fixed percentage of adversarial, unbounded outliers.

Source Title: Advances in Neural Information Processing Systems

Appears in Collections: Staff Publications
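The abstract's idea for the clustering case can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes squared-distance losses, an ℓ2 penalty pulling each model's weight vector toward the uniform distribution (a hypothetical `alpha` parameter controls how spread out the weights must stay), and it omits the paper's exact coupling of weights across models and its generalization analysis. All names (`regularized_weighting_kmeans`, `project_simplex`) are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def regularized_weighting_kmeans(X, k, alpha=10.0, iters=50, seed=0):
    """Alternating minimization: per-model weight distributions over points,
    regularized toward uniform, then weighted center updates."""
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = X[rng.choice(n, size=k, replace=False)]
    uniform = np.full(n, 1.0 / n)
    for _ in range(iters):
        # losses[j, i]: squared distance of point i to center j
        losses = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).T
        # Per model j, solve min_w <w, loss_j> + alpha * ||w - uniform||^2
        # over the simplex; the unconstrained minimizer is
        # uniform - loss/(2*alpha), then project onto the simplex.
        W = np.stack([project_simplex(uniform - l / (2.0 * alpha))
                      for l in losses])
        # Weighted mean update; because weights are forced to stay spread
        # out, a single far-away outlier cannot take over one model.
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
    return centers, W
```

Larger `alpha` forces each model's weights closer to uniform (more robust, less adaptive); as `alpha` shrinks, the weights can concentrate on few points and the update degenerates toward an outlier-sensitive Lloyd-style step.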