Please use this identifier to cite or link to this item: https://doi.org/10.3389/fnins.2014.00373
dc.title: Hybrid fNIRS-EEG based classification of auditory and visual perception processes
dc.contributor.author: Putze, F
dc.contributor.author: Hesslinger, S
dc.contributor.author: Tse, C.-Y
dc.contributor.author: Huang, Y
dc.contributor.author: Herff, C
dc.contributor.author: Guan, C
dc.contributor.author: Schultz, T
dc.date.accessioned: 2020-09-14T08:22:20Z
dc.date.available: 2020-09-14T08:22:20Z
dc.date.issued: 2014
dc.identifier.citation: Putze, F, Hesslinger, S, Tse, C.-Y, Huang, Y, Herff, C, Guan, C, Schultz, T (2014). Hybrid fNIRS-EEG based classification of auditory and visual perception processes. Frontiers in Neuroscience 8 (OCT): Article 373. ScholarBank@NUS Repository. https://doi.org/10.3389/fnins.2014.00373
dc.identifier.issn: 1662-4548
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/176173
dc.description.abstract: For multimodal Human-Computer Interaction (HCI), it is very useful to identify the modalities on which the user is currently processing information. This would enable a system to select complementary output modalities to reduce the user's workload. In this paper, we develop a hybrid Brain-Computer Interface (BCI) which uses Electroencephalography (EEG) and functional Near Infrared Spectroscopy (fNIRS) to discriminate and detect visual and auditory stimulus processing. We describe the experimental setup we used for the collection of our data corpus with 12 subjects. On these data, we performed a cross-validation evaluation and report accuracies for different classification conditions. The results show that the subject-dependent systems achieved a classification accuracy of 97.8% for discriminating visual and auditory perception processes from each other, and a classification accuracy of up to 94.8% for detecting modality-specific processes independently of other cognitive activity. The same classification conditions could also be discriminated in a subject-independent fashion, with accuracies of up to 94.6% and 86.7%, respectively. We also examine the contributions of the two signal types and show that the fusion of classifiers using different features significantly increases accuracy (an illustrative sketch of this fusion idea follows the metadata listing below). © 2014 Putze, Hesslinger, Tse, Huang, Herff, Guan and Schultz.
dc.source: Unpaywall 20200831
dc.subject: adult
dc.subject: Article
dc.subject: auditory stimulation
dc.subject: brain computer interface
dc.subject: cerebral oximeter
dc.subject: classification
dc.subject: cognition
dc.subject: electroencephalography
dc.subject: female
dc.subject: functional neuroimaging
dc.subject: hearing
dc.subject: human
dc.subject: human experiment
dc.subject: male
dc.subject: near infrared spectroscopy
dc.subject: normal human
dc.subject: validation study
dc.subject: vision
dc.subject: visual stimulation
dc.type: Article
dc.contributor.department: ELECTRICAL AND COMPUTER ENGINEERING
dc.contributor.department: TEMASEK LABORATORIES
dc.description.doi: 10.3389/fnins.2014.00373
dc.description.sourcetitle: Frontiers in Neuroscience
dc.description.volume: 8
dc.description.issue: OCT
dc.description.page: Article 373
dc.published.state: Published
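
The abstract above reports that fusing classifiers trained on different EEG and fNIRS features significantly increases accuracy. The sketch below is a hypothetical illustration of one such late-fusion scheme in Python with scikit-learn, not the authors' actual pipeline: the synthetic features, the feature dimensions, the logistic-regression classifiers, and the probability-averaging fusion rule are all assumptions made for this example.

    # Hypothetical illustration only: late fusion of separate EEG and fNIRS
    # classifiers, scored with stratified cross-validation on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(0)
    n_trials = 120
    y = rng.integers(0, 2, size=n_trials)  # 0 = auditory trial, 1 = visual trial

    # Synthetic stand-ins for per-trial feature vectors (assumed dimensions);
    # the class-dependent shift makes the two classes separable.
    X_eeg = rng.normal(size=(n_trials, 32)) + 0.5 * y[:, None]
    X_fnirs = rng.normal(size=(n_trials, 16)) + 0.3 * y[:, None]

    accs = []
    for train, test in StratifiedKFold(n_splits=5, shuffle=True,
                                       random_state=0).split(X_eeg, y):
        clf_eeg = LogisticRegression(max_iter=1000).fit(X_eeg[train], y[train])
        clf_fnirs = LogisticRegression(max_iter=1000).fit(X_fnirs[train], y[train])
        # Late fusion: average the two classifiers' posterior probabilities
        # and predict the class with the higher fused probability.
        proba = (clf_eeg.predict_proba(X_eeg[test])
                 + clf_fnirs.predict_proba(X_fnirs[test])) / 2.0
        accs.append(np.mean(proba.argmax(axis=1) == y[test]))

    print(f"Fused cross-validated accuracy: {np.mean(accs):.3f}")

Probability averaging is only one possible fusion rule; weighted averaging or a meta-classifier over the two probability streams are common alternatives. The accuracy printed here reflects the synthetic data, not the results reported in the paper.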
Appears in Collections: Elements, Staff Publications

Files in This Item:
File: 10_3389_fnins_2014_00373.pdf
Size: 3.2 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.