Please use this identifier to cite or link to this item: https://doi.org/10.3389/fnins.2020.622759
Title: An Investigation of Deep Learning Models for EEG-Based Emotion Recognition
Authors: Zhang, Y.
Chen, J.
Tan, J.H. 
Chen, Y.
Chen, Y.
Li, D.
Yang, L.
Su, J.
Huang, X.
Che, W.
Keywords: CNN (convolutional neural network)
CNN-LSTM
DNN (deep neural network)
EEG
emotion recognition
Issue Date: 2020
Publisher: Frontiers Media S.A.
Citation: Zhang, Y., Chen, J., Tan, J.H., Chen, Y., Chen, Y., Li, D., Yang, L., Su, J., Huang, X., Che, W. (2020). An Investigation of Deep Learning Models for EEG-Based Emotion Recognition. Frontiers in Neuroscience 14 : 622759. ScholarBank@NUS Repository. https://doi.org/10.3389/fnins.2020.622759
Rights: Attribution 4.0 International
Abstract: Emotion is the human brain's response to objective stimuli. Because human emotions in daily life are complex and changeable, research on emotion recognition has great practical significance. Recently, many machine learning and deep learning methods have been applied to emotion recognition based on EEG signals. Traditional machine learning methods, however, have a major disadvantage: the feature extraction process is usually cumbersome and relies heavily on human experts. End-to-end deep learning methods emerged as an effective way to address this disadvantage by working directly with raw signal features and time-frequency spectrums. Here, we investigated the application of several deep learning models to EEG-based emotion recognition, including deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid CNN-LSTM model. The experiments were carried out on the well-known DEAP dataset. Experimental results show that the CNN and CNN-LSTM models achieved high classification performance in EEG-based emotion recognition, with accuracies on raw data of 90.12% and 94.17%, respectively. The DNN model was not as accurate as the other models, but its training was fast. The LSTM model was not as stable as the CNN and CNN-LSTM models; moreover, with the same number of parameters, the LSTM trained much more slowly and had difficulty converging. Additional comparison experiments on training parameters, including the number of epochs, learning rate, and dropout probability, were also conducted. The comparison results show that the DNN model converged to its optimum with fewer epochs and a higher learning rate, whereas the CNN model needed more epochs to learn. As for the dropout probability, reducing the number of parameters by ~50% at each step was appropriate. © Copyright © 2020 Zhang, Chen, Tan, Chen, Chen, Li, Yang, Su, Huang and Che.
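Note: The record itself contains no code. As an illustration of the hybrid CNN-LSTM approach described in the abstract, the following is a minimal sketch assuming TensorFlow/Keras, 32-channel EEG windows of 128 samples, binary emotion labels, and illustrative layer sizes; none of these choices are taken from the paper.

    # Minimal sketch of a hybrid CNN-LSTM classifier for EEG emotion recognition.
    # Assumptions (not from the paper): TensorFlow/Keras, input windows of
    # 128 time steps x 32 EEG channels, binary labels, illustrative layer sizes.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn_lstm(window_len=128, n_channels=32, n_classes=2):
        model = models.Sequential([
            # 1-D convolutions extract local temporal features from the raw EEG
            layers.Conv1D(64, kernel_size=5, activation="relu",
                          input_shape=(window_len, n_channels)),
            layers.MaxPooling1D(pool_size=2),
            layers.Conv1D(128, kernel_size=5, activation="relu"),
            layers.MaxPooling1D(pool_size=2),
            # LSTM models longer-range temporal dependencies across the window
            layers.LSTM(64),
            # Dropout of ~0.5, in line with the abstract's observation that
            # reducing parameters by roughly half per dropout step worked well
            layers.Dropout(0.5),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Example with random stand-in data (replace with DEAP windows and labels)
    X = np.random.randn(256, 128, 32).astype("float32")
    y = np.random.randint(0, 2, size=256)
    model = build_cnn_lstm()
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)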
Source Title: Frontiers in Neuroscience
URI: https://scholarbank.nus.edu.sg/handle/10635/196263
ISSN: 1662-4548
DOI: 10.3389/fnins.2020.622759
Appears in Collections: Elements
Staff Publications

Files in This Item:
10_3389_fnins_2020_622759.pdf | 2.12 MB | Adobe PDF | Open Access

This item is licensed under a Creative Commons Attribution 4.0 International License.