Ensemble Learning by High-Dimensional Acoustic Features for Emotion Recognition from Speech Audio Signal


In the recent past, handling the high dimensionality of the acoustic features extracted from speech signals has been a primary focus of machine learning (ML)-based emotion recognition. High-dimensional feature vectors in the training corpus degrade the learning phase of ML models, causing contemporary emotion-prediction approaches to raise a significant number of false alarms. Although the majority of contemporary models acknowledge this curse of dimensionality, they place greater emphasis on merging many classifiers, which improves emotion recognition accuracy only to a limited degree when the training corpus contains high-dimensional data points. “Ensemble Learning by High-Dimensional Acoustic Features (EL-HDAF)” is an innovative ensemble model that assesses the diversity of feature values spanning the different emotion classes in order to recommend the most discriminative features. Furthermore, the proposed technique employs a dedicated clustering process to limit the impact of high-dimensional feature values. The experimental inquiry evaluates emotion forecasting from spoken audio data and compares it with current ML-based emotion recognition methods, using fourfold cross-validation on a standard data corpus for the performance analysis.
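The abstract does not specify the exact diversity measure used by EL-HDAF to rank features. As a minimal sketch of the general idea, the snippet below scores each acoustic feature by a Fisher-style ratio (variance of the per-class means over the mean within-class variance), so features whose values diverge across emotion classes rank highest; all names and the synthetic data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def diversity_scores(X, y):
    """Score each feature by how strongly its per-class means diverge.

    This Fisher-style ratio is an assumed stand-in for the paper's
    'diversity assessment of feature values spanned over diversified
    classes'; higher scores mean better class separation.
    """
    classes = np.unique(y)
    class_means = np.stack([X[y == c].mean(axis=0) for c in classes])
    within_var = np.stack([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    between_var = class_means.var(axis=0)
    return between_var / (within_var + 1e-12)

def select_top_features(X, y, k):
    """Return indices of the k highest-diversity features."""
    order = np.argsort(diversity_scores(X, y))[::-1]
    return order[:k]

# Synthetic stand-in for an acoustic feature matrix: 120 utterances,
# 50 features, 3 emotion classes; feature 7 is made class-dependent.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=120)
X = rng.normal(size=(120, 50))
X[:, 7] += 3.0 * y
print(select_top_features(X, y, k=5)[0])  # feature 7 ranks first
```

In a full pipeline, the retained features would then feed the clustering step and the ensemble of classifiers, with accuracy estimated by fourfold cross-validation as described above.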

Security and Communication Networks
Shitharth Selvarajan
Lecturer in Cyber Security