Authors: Sen, Dogancan; Sert, Mustafa
Date accessioned: 2023-08-15
Date available: 2023-08-15
Date issued: 2018
ISBN: 978-1-5386-1501-0
ISSN: 2165-0608
URI: http://hdl.handle.net/11727/10274
Abstract: Automatic analysis of human emotions by computer systems is an important task for human-machine interaction. Recent studies show that the temporal characteristics of emotions play an important role in the success of automatic recognition. The use of different signals (facial expressions, bio-signals, etc.) is also important for understanding emotions. In this study, we propose a multi-modal method based on feature-level fusion of human facial expressions and electroencephalogram (EEG) data to predict human emotions in the continuous valence dimension. For this purpose, a recurrent neural network with long short-term memory units (LSTM-RNN) is designed. The proposed method is evaluated on the MAHNOB-HCI data set.
Language: tur
Rights: info:eu-repo/semantics/closedAccess
Keywords: facial expression; lstm; continuous valence prediction; emotion recognition
Title: Continuous Valence Prediction Using Recurrent Neural Networks with Facial Expressions and EEG Signals
Type: conferenceObject
Identifiers: 1400051144850038; 2-s2.0-85050816683
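The abstract describes feature-level fusion of facial-expression and EEG features fed to an LSTM that regresses continuous valence. The following is a minimal NumPy sketch of that pipeline shape only, with untrained random weights; all dimensions (20 facial features, 32 EEG features, hidden size 16) and the tanh readout are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell step; the four gates are computed jointly then split.
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    g = np.tanh(z[2*H:3*H])    # cell candidate
    o = sigmoid(z[3*H:4*H])    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Hypothetical sizes: T time steps, per-step facial and EEG feature vectors.
T, D_FACE, D_EEG, H = 10, 20, 32, 16
face = rng.standard_normal((T, D_FACE))  # stand-in facial-expression features
eeg = rng.standard_normal((T, D_EEG))    # stand-in EEG features

# Feature-level fusion: concatenate the two modalities at each time step.
fused = np.concatenate([face, eeg], axis=1)  # shape (T, D_FACE + D_EEG)

D = D_FACE + D_EEG
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
w_out = rng.standard_normal(H) * 0.1  # linear readout to a valence score

h = np.zeros(H)
c = np.zeros(H)
valence = []
for t in range(T):
    h, c = lstm_step(fused[t], h, c, W, U, b)
    # tanh keeps the predicted valence in a continuous [-1, 1] range
    valence.append(np.tanh(w_out @ h))

valence = np.array(valence)
print(valence.shape)  # one valence prediction per time step
```

In a real system the weights would be trained against the per-frame valence annotations (e.g. from MAHNOB-HCI) rather than sampled randomly; the sketch only shows where the fusion and the recurrence sit in the data flow.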