Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.neunet.2019.09.014
Title: Salience-Aware Adaptive Resonance Theory for Large-Scale Sparse Data Clustering
Authors: Lei Meng 
Ah-Hwee Tan
Chunyan Miao
Keywords: Adaptive resonance theory
Clustering
Feature weighting
Parameter adaptation
Sparse data
Subspace learning
Issue Date: 1-Sep-2019
Citation: Lei Meng, Ah-Hwee Tan, Chunyan Miao (2019-09-01). Salience-Aware Adaptive Resonance Theory for Large-Scale Sparse Data Clustering. Neural Networks: 143-157. ScholarBank@NUS Repository. https://doi.org/10.1016/j.neunet.2019.09.014
Abstract: Sparse data is known to pose challenges to cluster analysis, as the similarity between data points tends to be ill-posed in the high-dimensional Hilbert space. Solutions in the literature typically extend either k-means or spectral clustering with additional steps for representation learning and/or feature weighting. However, adding these usually introduces new parameters and increases computational cost, thus inevitably lowering the robustness of these algorithms when handling massive, ill-represented data. To alleviate these issues, this paper presents a class of self-organizing neural networks, called the salience-aware adaptive resonance theory (SA-ART) model. SA-ART extends Fuzzy ART with measures for cluster-wise salient feature modeling. Specifically, two strategies, i.e., cluster-space matching and salience feature weighting, are incorporated to alleviate the side effects of noisy features incurred by high dimensionality. Additionally, cluster weights are bounded by the statistical means and minima of the samples therein, making the learning rate also self-adaptable. Notably, SA-ART allows clusters to have their own sets of self-adaptable parameters. It has the same time complexity as Fuzzy ART and does not introduce additional hyperparameters that profile cluster properties. Comparative experiments have been conducted on the ImageNet and BlogCatalog datasets, which are large-scale and include sparsely-represented data. The results show that SA-ART achieves 51.8% and 18.2% improvements over Fuzzy ART, respectively. While both have a similar time cost, SA-ART converges faster and can reach a better local minimum. In addition, SA-ART consistently outperforms six other state-of-the-art algorithms in terms of precision and F1 score. More importantly, it is much faster and exhibits stronger robustness to large and complex data. © 2019 Elsevier Ltd
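
The abstract describes SA-ART only at a high level. As a rough illustration of the mechanism it names, below is a minimal Python sketch of the standard Fuzzy ART loop (complement coding, choice function, vigilance test, learning rule) that SA-ART extends, with a hypothetical per-cluster salience vector folded into the match test to make the "salience feature weighting" idea concrete. The class name, the salience update rule, and its 0.9/0.1 constants are illustrative assumptions, not the equations of the paper; the exact SA-ART formulation (including the self-adaptive learning rate bounded by per-cluster means and minima) is given in the article at the DOI above.

```python
# Minimal sketch: Fuzzy ART with a hypothetical per-cluster salience weight.
# This is NOT the authors' SA-ART implementation; it only illustrates where a
# salience vector could enter the match (vigilance) computation.
import numpy as np

class FuzzyARTSketch:
    def __init__(self, dim, rho=0.5, alpha=0.001, beta=0.6):
        self.dim = dim          # input dimensionality (before complement coding)
        self.rho = rho          # vigilance threshold
        self.alpha = alpha      # choice parameter
        self.beta = beta        # fixed learning rate (SA-ART makes this self-adaptive)
        self.weights = []       # one weight vector per cluster
        self.salience = []      # hypothetical per-cluster feature salience

    @staticmethod
    def _complement_code(x):
        # Standard Fuzzy ART complement coding; assumes x is scaled to [0, 1].
        return np.concatenate([x, 1.0 - x])

    def train(self, x):
        x = np.asarray(x, dtype=float)
        assert x.shape == (self.dim,)
        x = self._complement_code(x)
        if not self.weights:                      # first sample founds a cluster
            self.weights.append(x.copy())
            self.salience.append(np.ones_like(x))
            return 0
        # Choice function T_j = |x ^ w_j| / (alpha + |w_j|), ^ = element-wise min.
        T = [np.minimum(x, w).sum() / (self.alpha + w.sum()) for w in self.weights]
        for j in np.argsort(T)[::-1]:             # search clusters by descending choice
            w, s = self.weights[j], self.salience[j]
            # Salience-weighted match (illustrative): emphasize features the
            # cluster has found stable; plain Fuzzy ART uses s = 1 everywhere.
            m = (s * np.minimum(x, w)).sum() / max((s * x).sum(), 1e-12)
            if m >= self.rho:                     # resonance: update the winner
                self.weights[j] = self.beta * np.minimum(x, w) + (1 - self.beta) * w
                # Toy salience update: features where x agrees with w gain weight.
                agree = 1.0 - np.abs(x - w)
                self.salience[j] = 0.9 * s + 0.1 * agree
                return j
        # No cluster passed the vigilance test: create a new one.
        self.weights.append(x.copy())
        self.salience.append(np.ones_like(x))
        return len(self.weights) - 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    net = FuzzyARTSketch(dim=8, rho=0.6)
    labels = [net.train(rng.random(8)) for _ in range(200)]
    print(f"{len(net.weights)} clusters formed")
```

One design point this sketch preserves from the abstract: the per-sample cost stays linear in the number of clusters and features, matching the claim that SA-ART keeps Fuzzy ART's time complexity while the salience vectors are learned as a side effect of the same single pass.
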
Source Title: Neural Networks
URI: https://scholarbank.nus.edu.sg/handle/10635/167768
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2019.09.014
Appears in Collections: Staff Publications, Elements

Files in This Item:
File: 1-s2.0-S0893608019302758-main.pdf
Size: 1.91 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.