[82760] Article:

Multi-Classifier Speech Emotion Recognition System

Journal: Proceedings of the 26th International Telecommunications Forum (TELFOR 2018)   Vol.: 26, Pages: 1-4
ISBN: 978-1-5386-7171-9
Publisher: IEEE, 345 E 47TH ST, NEW YORK, NY 10017 USA
Published: November 2018
 
Authors / Editors / Creators

Name | Faculty / Department | Statement no. 3 | Affiliation group | Scientific discipline | Share [%] | Points for staff evaluation | Points per evaluation criteria
Pavol Partila | – | No | outside the "N" unit | – | 30 | 0.00 | 0.00
Jaromir Tovarek | – | No | outside the "N" unit | – | 20 | 0.00 | 0.00
Miroslav Voznak | – | No | outside the "N" unit | – | 20 | 0.00 | 0.00
Jan Rozhon | – | No | outside the "N" unit | – | 10 | 0.00 | 0.00
Lukas Sevcik | – | No | outside the "N" unit | – | 10 | 0.00 | 0.00
Remigiusz Baran (ORCID) | WEAiI / Department of Computer Science, Electronics and Electrical Engineering * | No | outside the "N" unit | Information and communication technology | 10 | 15.00 | 0.00

MNiSW group: International conference proceedings (indexed in Web of Science)
MNiSW points: 15
Web of Science classification: Proceedings Paper


Full text | DOI | Web of Science
Keywords:

Feature extraction, Databases, Speech recognition, Emotion recognition, Support vector machines, Telecommunications, System analysis and design



Abstract:

This article describes the design and application of a speech emotion recognition system. The system is trained on Czech emotionally coloured speech, and its output is one of four emotional states. Emotions are classified from the extracted features by a multi-classifier whose structure consists of three sub-classifiers fused by a Bayesian belief rule. The proposed system was deployed in the Secure Mobile Communication Infrastructure developed for the security components of the Czech Republic.
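The fusion step in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the paper's implementation: it assumes three independent sub-classifiers whose posterior probabilities over the four emotional states are combined multiplicatively (a naive-Bayes-style belief combination under a uniform prior). The state labels and probability values are hypothetical.

```python
import numpy as np

# Assumed label set; the paper only says "four emotional states".
STATES = ["neutral", "joy", "anger", "sadness"]

def fuse_beliefs(posteriors):
    """Fuse per-classifier posteriors P(class | features_k).

    Under an independence assumption and a uniform class prior, the
    fused belief is proportional to the element-wise product of the
    individual posteriors; the result is renormalised to sum to 1.
    """
    p = np.asarray(posteriors, dtype=float)  # shape (n_classifiers, n_classes)
    fused = np.prod(p, axis=0)               # product over the sub-classifiers
    return fused / fused.sum()               # renormalise to a distribution

# Illustrative outputs of three sub-classifiers for one utterance:
sub_outputs = [
    [0.50, 0.20, 0.20, 0.10],
    [0.40, 0.30, 0.20, 0.10],
    [0.60, 0.10, 0.20, 0.10],
]

belief = fuse_beliefs(sub_outputs)
decision = STATES[int(np.argmax(belief))]
print(decision)  # the state all three classifiers lean towards wins
```

Because the product sharpens agreement and penalises disagreement, a class favoured by all three sub-classifiers dominates the fused belief even when no single classifier is very confident.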



B   I   B   L   I   O   G   R   A   P   H   Y
1. R.W. Picard, Affective Computing. MIT Press, 1997, ISBN 02-621-6170-2.
2. M. Stanek and M. Sigmund, “Psychological stress detection in speech using return-to-opening phase ratios in glottis,” Elektronika ir Elektrotechnika, vol. 21, no. 5, pp. 59–63, 2015.
3. S. Ramakrishnan and I.M.M. El Emary, “Speech emotion recognition approaches in human computer interaction,” Telecommunication Systems, vol. 52, no. 3, pp. 1467–1478, 2013.
4. R. Banse and K.R. Scherer, “Acoustic profiles in vocal emotion expression,” Journal of Personality and Social Psychology, vol. 70, no. 3, 1996.
5. C. Brester et al., “Evolutionary feature selection for emotion recognition in multilingual speech analysis,” in Evolutionary Computation (CEC), 2015 IEEE Congress on, IEEE, 2015.
6. D. Uhrin et al., “Design and implementation of Czech database of speech emotions,” in Telecommunications Forum Telfor (TELFOR), 2014 22nd, IEEE, 2014.
7. D. Uhrin et al., “One approach to design of speech emotion database,” in Machine Intelligence and Bio-inspired Computation: Theory and Applications X, vol. 9850, International Society for Optics and Photonics, 2016.
8. P. Partila et al., “Speech emotions recognition using 2-d neural classifier,” in Nostradamus 2013: Prediction, Modeling and Analysis of Complex Systems, Springer, Heidelberg, 2013, pp. 221–231.
9. F. Eyben, M. Woellmer, and B. Schuller, “The Munich open speech and music interpretation by large space extraction toolkit,” IEEE Netw., vol. 24, no. 2, pp. 36–41, 2010.
10. W. Li, J. Hou, and L. Yin, “A classifier fusion method based on classifier accuracy,” in Mechatronics and Control (ICMC), 2014 International Conference on, IEEE, 2014.
11. M. Woźniak, M. Graña, and E. Corchado, “A survey of multiple classifier systems as hybrid systems,” Information Fusion, vol. 16, pp. 3–17, 2014.
12. A.J. Ma, Y.C. Pong, and J.-H. Lai, “Linear dependency modeling for classifier fusion and feature combination,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 5, pp. 1135–1148, 2013.
13. P. Partila et al., “Pattern recognition methods and features selection for speech emotion recognition system,” The Scientific World Journal, vol. 2015, 2015.
14. H. Zouari et al., “Building diverse classifier outputs to evaluate the behavior of combination methods: the case of two classifiers,” in International Workshop on Multiple Classifier Systems, Springer, Berlin, Heidelberg, 2004.
15. Y. Ding, A. Rattani, and A. Ross, “Bayesian belief models for integrating match scores with liveness and quality measures in a fingerprint verification system,” in Biometrics (ICB), 2016 International Conference on, IEEE, 2016.
16. P. Partila, M. Voznak, M. Mikulec, and J. Zdralek, “Fundamental frequency extraction method using central clipping and its importance for the classification of emotional state,” Advances in Electrical and Electronic Engineering, vol. 10, no. 4, pp. 270–275, 2012.
17. J. Tovarek, G.H. Ilk, P. Partila, and M. Voznak, “Human abnormal behavior impact on speaker verification systems,” IEEE Access, vol. 6, pp. 40120–40127, 2018.