TY - JOUR
T1 - Age and gender recognition using a convolutional neural network with a specially designed multi-attention module through speech spectrograms
AU - Tursunov, Anvarjon
AU - Mustaqeem,
AU - Choeh, Joon Yeon
AU - Kwon, Soonil
N1 - Funding Information:
Funding: This research was supported by the Artificial Intelligence Learning Data Collection research project through the National Information Society Agency (NIA), funded by the Ministry of Science and ICT (No. 2020-Data-We81-1: The construction of Speech command AI data).
Funding Information:
The Korean speech recognition database was developed from 24,000 sentences designed for use in AI-based systems, recorded by 7601 Korean speakers recruited from residents of seven regions of Korea. This study was financially supported by the AI database project of the National Information Society Agency (NIA) in 2020. The recordings were made in a quiet room using personal smart devices. Participants were asked to record themselves reading the prepared sentences. After each sentence was recorded, they listened to the recording and checked its quality; if a sentence was not recorded correctly, it was re-recorded until the quality was sufficient. The recorded data were evaluated by professional researchers. The database contains recordings of three situations (AI secretary, AI robot, and kiosk) from three speaker groups: children, adults, and the elderly. The total recording time was 10,000 h. We selected random samples from the AI secretary situation for our experiments and evaluated our model on them. A detailed description of the dataset is provided in Table 3.
Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2021/9/1
Y1 - 2021/9/1
N2 - Speech signals serve as a primary input source in human–computer interaction (HCI) for applications such as automatic speech recognition (ASR), speech emotion recognition (SER), and gender and age recognition. Classifying speakers by age and gender is a challenging task in speech processing owing to the inability of current feature-extraction methods and classification models to capture salient high-level speech features. To address these problems, we introduce a novel end-to-end convolutional neural network (CNN) with a specially designed multi-attention module (MAM) for age and gender recognition from speech signals. Our proposed model uses the MAM to effectively extract spatially and temporally salient features from the input data. The MAM uses rectangular filters as kernels in its convolution layers and comprises two separate attention mechanisms, one over time and one over frequency. The time attention branch learns to detect temporal cues, whereas the frequency attention branch extracts the features most relevant to the target by focusing on spatial frequency features. The two sets of extracted spatial and temporal features complement one another and yield high performance in age and gender classification. The proposed age and gender classification system was tested on the Common Voice dataset and a locally developed Korean speech recognition dataset. Our model achieved accuracy scores of 96%, 73%, and 76% for gender, age, and age-gender classification, respectively, on the Common Voice dataset, and 97%, 97%, and 90% for gender, age, and age-gender recognition, respectively, on the Korean speech recognition dataset. The prediction performance obtained in these experiments demonstrates the superiority and robustness of our model on the age, gender, and age-gender recognition tasks from speech signals.
AB - Speech signals serve as a primary input source in human–computer interaction (HCI) for applications such as automatic speech recognition (ASR), speech emotion recognition (SER), and gender and age recognition. Classifying speakers by age and gender is a challenging task in speech processing owing to the inability of current feature-extraction methods and classification models to capture salient high-level speech features. To address these problems, we introduce a novel end-to-end convolutional neural network (CNN) with a specially designed multi-attention module (MAM) for age and gender recognition from speech signals. Our proposed model uses the MAM to effectively extract spatially and temporally salient features from the input data. The MAM uses rectangular filters as kernels in its convolution layers and comprises two separate attention mechanisms, one over time and one over frequency. The time attention branch learns to detect temporal cues, whereas the frequency attention branch extracts the features most relevant to the target by focusing on spatial frequency features. The two sets of extracted spatial and temporal features complement one another and yield high performance in age and gender classification. The proposed age and gender classification system was tested on the Common Voice dataset and a locally developed Korean speech recognition dataset. Our model achieved accuracy scores of 96%, 73%, and 76% for gender, age, and age-gender classification, respectively, on the Common Voice dataset, and 97%, 97%, and 90% for gender, age, and age-gender recognition, respectively, on the Korean speech recognition dataset. The prediction performance obtained in these experiments demonstrates the superiority and robustness of our model on the age, gender, and age-gender recognition tasks from speech signals.
KW - Age and gender recognition
KW - Convolutional neural network
KW - Human-computer interaction
KW - Multi-attention module
KW - Speech signals
UR - http://www.scopus.com/inward/record.url?scp=85114105047&partnerID=8YFLogxK
U2 - 10.3390/s21175892
DO - 10.3390/s21175892
M3 - Article
C2 - 34502785
AN - SCOPUS:85114105047
VL - 21
JO - Sensors (Switzerland)
JF - Sensors (Switzerland)
SN - 1424-8220
IS - 17
M1 - 5892
ER -
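
Note: the abstract describes the MAM as two parallel attention branches using rectangular convolution kernels over the time and frequency axes of a spectrogram. The following is a minimal illustrative sketch of that idea, assuming PyTorch; the kernel size, the sigmoid gating, and the additive fusion of the two branches are assumptions for illustration, not the authors' published configuration.

    # Hypothetical sketch of a multi-attention module (MAM) with parallel
    # time and frequency attention branches over a spectrogram tensor of
    # shape (batch, channels, freq, time). Kernel size and fusion are
    # assumed, not taken from the paper.
    import torch
    import torch.nn as nn

    class MultiAttentionModule(nn.Module):
        def __init__(self, channels: int, kernel: int = 7):
            super().__init__()
            # Rectangular kernels: (1, k) spans only the time axis,
            # (k, 1) spans only the frequency axis.
            self.time_att = nn.Conv2d(channels, channels,
                                      kernel_size=(1, kernel),
                                      padding=(0, kernel // 2))
            self.freq_att = nn.Conv2d(channels, channels,
                                      kernel_size=(kernel, 1),
                                      padding=(kernel // 2, 0))
            self.sigmoid = nn.Sigmoid()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Each branch produces a gating mask that emphasizes salient
            # temporal or spectral regions; the gated features are summed
            # so the two views complement each other.
            t = x * self.sigmoid(self.time_att(x))
            f = x * self.sigmoid(self.freq_att(x))
            return t + f

    # Usage on a batch of four 128-bin, 300-frame spectrograms:
    # mam = MultiAttentionModule(channels=1)
    # out = mam(torch.randn(4, 1, 128, 300))  # -> shape (4, 1, 128, 300)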