G. V. Shilpa
A person’s feelings can be read from their face, which is widely regarded as the most significant feature of the human body. Detecting and recognizing a person’s face is more accurate and less expensive than other forms of biometrics. Emotion, as a modality, makes it possible to infer a person’s intention and state. Within computer vision research, expression analysis and recognition have emerged as among the more exciting topics, and recent HCI research considers the user’s emotional state to deliver a smooth interface. This study proposes a hybrid deep learning technique for emotion analysis based on face images. The proposed system combines VGG16 and Bidirectional LSTM to classify facial emotions, using binary cross-entropy as the loss function to optimize the model; the model was trained and tested on the KDEF dataset. A second hybrid model, comprising Conv2D, MaxPooling2D, and Bidirectional LSTM layers, was tested on the CK+48 dataset. Both models showed efficient accuracy in training and testing on face emotion classification.
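As a rough illustration of the kind of hybrid architecture the abstract describes, the sketch below chains a VGG16 backbone into a Bidirectional LSTM in Keras. The layer sizes, the 7-class output (KDEF covers seven expressions), and the reshape that turns feature-map rows into LSTM timesteps are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a VGG16 + Bi-LSTM emotion classifier.
# Hyperparameters (LSTM units, input size, 7 classes) are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_vgg16_bilstm(input_shape=(224, 224, 3), num_classes=7):
    # VGG16 backbone; weights=None so the sketch runs without a download
    base = tf.keras.applications.VGG16(
        include_top=False, weights=None, input_shape=input_shape
    )
    x = base.output  # (7, 7, 512) feature map for 224x224 input
    # Treat each row of the feature map as one timestep for the Bi-LSTM
    x = layers.Reshape((7, 7 * 512))(x)
    x = layers.Bidirectional(layers.LSTM(128))(x)
    out = layers.Dense(num_classes, activation="sigmoid")(x)
    model = models.Model(base.input, out)
    # Binary cross-entropy, as the abstract states, with per-class sigmoids
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The reshape-then-recurrence step is one common way to feed CNN feature maps to an LSTM; the paper may wire the two components differently.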
Keywords: Image Processing, Convolutional Neural Network, VGG-16, Bi-LSTM, Maxpooling
Cite this paper
G. V. Shilpa. (2022) Emotional Analysis Using Hybrid Deep Learning Models. International Journal of Signal Processing, 7, 62-73.