Title: Study of Adaptation Methods Towards Advanced Brain-computer Interfaces
Keywords: Brain-computer Interfacing, Electroencephalography (EEG), Ensemble Classification, Joint Approximate Diagonalization (JAD), Kernel Adaptation.
Issue Date: 23-Jan-2013
Citation: SIDATH RAVINDRA LIYANAGE (2013-01-23). Study of Adaptation Methods Towards Advanced Brain-computer Interfaces. ScholarBank@NUS Repository.
Abstract: Classification in brain-computer interfaces (BCIs) is made challenging by the inherent non-stationarity of EEG data. This thesis therefore applies adaptive methods to overcome the problems that non-stationarity causes in EEG. First, a new multi-class Common Spatial Patterns (CSP) method, based on Joint Approximate Diagonalization (JAD), is proposed for feature extraction from multi-class motor imagery. The current standard, the one-versus-rest (OVR) implementation of simultaneous diagonalization, limits the Information Transfer Rate (ITR) in the multi-class classification setting. A CSP based on the Fast Frobenius Diagonalization (FFDIAG) method jointly diagonalizes multiple covariance matrices, overcoming the bottleneck created by the OVR implementation. Second, a classifier ensemble with a novel adaptive weighting method is developed to improve classification accuracy under non-stationary conditions. The ensemble is built by clustering the training data and training a classifier on each cluster; when a test sample is presented, its similarity to the existing clusters is evaluated, and this similarity measure is used to adaptively weight the classifier decisions for that sample. Third, adaptation of the feature-extraction models using feedback training data is proposed, since it is difficult to address non-stationarity by adapting classifiers alone: significant changes in brain signals from the calibration session to the feedback training sessions can render the feature space derived from calibration data ineffective. Finally, error-entropy-based kernel adaptation for adaptive classifier training is proposed. The error-entropy criterion takes into account the amount of information in the error distribution, so minimizing error entropy considers the whole error distribution rather than individual error values.
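The cluster-based ensemble idea above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the toy data, the cluster count, the logistic-regression base classifiers, and the inverse-distance similarity measure are all assumptions made for the example.

```python
# Illustrative sketch of a cluster-based classifier ensemble with
# similarity-weighted combination (toy data; all choices hypothetical).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))               # stand-in for extracted EEG features
y = rng.integers(0, 2, size=200)            # stand-in for class labels

# Cluster the training data and train one classifier per cluster.
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
clfs = [LogisticRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
        for c in range(k)]

def predict_adaptive(x):
    # Similarity of the test sample to each cluster: here, inverse
    # distance to the cluster centre (one possible choice).
    d = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    w = 1.0 / (d + 1e-9)
    w /= w.sum()
    # Weight each classifier's class probabilities by that similarity.
    p = sum(wi * clf.predict_proba(x[None, :])[0] for wi, clf in zip(w, clfs))
    return int(np.argmax(p))

pred = predict_adaptive(X[0])
```

Because the weights are recomputed per test sample, the ensemble adapts its combination rule to wherever that sample falls in feature space, which is the mechanism the abstract describes for handling non-stationary test data.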
The error entropy is used to adapt the width of the Gaussian kernel of the SVM classifier. A subset of data from a subsequent session serves as adaptation data to estimate an error-entropy-based cost function, which is minimized by adapting the kernel width. In conclusion, future research directions for the proposed methods are discussed.
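A hedged sketch of this kernel-width adaptation follows. It is not the thesis code: the synthetic "calibration" and "feedback" sessions, the grid of candidate widths, and the use of Renyi quadratic entropy with a Parzen (Gaussian) estimator as the error-entropy measure are illustrative assumptions.

```python
# Sketch: pick the SVM Gaussian-kernel width that minimises an estimate
# of the error entropy on a small adaptation batch from a later session.
import numpy as np
from sklearn.svm import SVC

def renyi_quadratic_entropy(e, sigma=0.5):
    # H2(e) = -log( (1/N^2) * sum_ij G(e_i - e_j) ), with G a Gaussian
    # of variance 2*sigma^2 (Parzen estimate of the information potential).
    diff = e[:, None] - e[None, :]
    var = 2.0 * sigma ** 2
    info_potential = np.mean(np.exp(-diff ** 2 / (2.0 * var)) /
                             np.sqrt(2.0 * np.pi * var))
    return -np.log(info_potential)

rng = np.random.default_rng(1)
# Calibration-session data and a slightly shifted "feedback" session.
X_cal = rng.normal(size=(150, 4))
y_cal = (X_cal[:, 0] > 0).astype(int)
X_adp = rng.normal(size=(40, 4)) + 0.3
y_adp = (X_adp[:, 0] > 0.3).astype(int)

best_gamma, best_h = None, np.inf
for gamma in (0.01, 0.1, 1.0, 10.0):        # candidate kernel widths
    svm = SVC(gamma=gamma).fit(X_cal, y_cal)
    errors = y_adp - svm.decision_function(X_adp)   # continuous errors
    h = renyi_quadratic_entropy(errors)
    if h < best_h:
        best_gamma, best_h = gamma, h
```

The key point matching the abstract is that the cost function depends on the whole distribution of errors on the adaptation batch (via the entropy estimate), not on individual error magnitudes.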
Appears in Collections: Ph.D. Theses (Open)

Files in This Item:
File: LiyanageSR.pdf | Size: 1.56 MB | Format: Adobe PDF



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.