Utilizing CNN architectures for non-invasive diagnosis of speech disorders – further experiments and insights

  • Filip Ratajczak*
  • Mikolaj Najda
  • Kamil Szyc

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

This research investigated the application of deep neural networks to diagnosing diseases that affect the voice and speech mechanisms through non-invasive analysis of vowel sound recordings. Using the Saarbruecken Voice Database, voice recordings of the vowels /a/, /u/, and /i/ were converted to spectrograms to train the models. The study applied Explainable Artificial Intelligence (XAI) methodologies to identify which features within these spectrograms drive pathology identification, with the aim of giving medical professionals clearer insight into how diseases manifest in sound production. In the F1 Score evaluation, the DenseNet model achieved 0.70 ± 0.03, with a best run of 0.74. The findings indicated that neither vowel selection nor data augmentation strategies significantly improved model performance. Additionally, the research showed that signal splitting was ineffective in enhancing the models' ability to extract features. This study builds on our previous research [1], offering a more comprehensive understanding of the topic.
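The preprocessing step described in the abstract — converting a vowel recording into a spectrogram before CNN training — can be sketched as below. This is a minimal illustration using a framed FFT in plain NumPy; the function name, window/hop parameters, and the synthetic stand-in signal are assumptions for demonstration, not the authors' actual pipeline.

```python
import numpy as np

def log_spectrogram(signal, n_fft=512, hop=128):
    """Log-magnitude spectrogram via a simple framed, windowed FFT (STFT)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, n_fft//2 + 1)
    return np.log1p(mag).T                     # shape: (freq_bins, time_frames)

# Synthetic stand-in for a sustained vowel: a 220 Hz fundamental with harmonics.
sr = 16000
t = np.arange(sr) / sr  # one second of audio at 16 kHz
audio = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 2, 3))

spec = log_spectrogram(audio)
print(spec.shape)  # (257, 122): 257 frequency bins, 122 time frames
```

The resulting 2-D array can be saved or rendered as an image and fed to an image-classification CNN such as DenseNet, which is how spectrogram-based voice-pathology classifiers are typically set up.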
Original language: English
Journal: International Journal of Electronics and Telecommunications
Volume: 71
Issue number: 3
DOIs
Publication status: Published - 2025

Keywords

  • Convolutional Neural Networks (CNNs)
  • Explainable Artificial Intelligence (XAI)
  • Voice Disorder Diagnosis
  • Vowel Sound Analysis

