Search results

Results: 4

Abstract

This paper presents a comprehensive study on machine listening for localisation of snore sound excitation. We investigate the effects of varied frame sizes and overlaps of the analysed audio chunks when extracting low-level descriptors. In addition, we explore the performance of each kind of feature when it is fed into varied classifier models, including support vector machines, k-nearest neighbours, linear discriminant analysis, random forests, extreme learning machines, kernel-based extreme learning machines, multilayer perceptrons, and deep neural networks. Experimental results demonstrate that wavelet packet transform energy can outperform most other features. A deep neural network trained with subband energy ratios reaches the highest performance, achieving an unweighted average recall of 72.8% over the four types of snoring.
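As a rough illustration of the feature extraction step described above, the sketch below computes wavelet packet subband energy ratios for a single audio frame with PyWavelets. The wavelet family, decomposition depth, and frame length are illustrative assumptions, not the settings reported in the paper.

```python
import numpy as np
import pywt

def subband_energy_ratios(frame, wavelet="db4", level=4):
    """Wavelet packet decomposition of one audio frame; returns the energy
    of each terminal subband normalised by the total frame energy."""
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")            # subbands ordered low to high frequency
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    total = energies.sum()
    return energies / total if total > 0 else energies   # subband energy ratios

# Example: one 300 ms frame of 16 kHz audio (placeholder signal)
frame = np.random.randn(4800)
print(subband_energy_ratios(frame))                      # 2**4 = 16 ratios
```

Feature vectors of this kind could then be fed to any of the classifiers listed in the abstract.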

Go to article

Authors and Affiliations

Kun Qian
Christoph Janott
Zixing Zhang
Jun Deng
Alice Baird
Clemens Heiser
Winfried Hohenhorst
Michael Herzog
Werner Hemmert
Björn Schuller

Abstract

Sleep apnea syndrome is a common sleep disorder. Detection of apnea and differentiation of its type, obstructive (OSA), central (CSA) or mixed, is important in the context of treatment methods; however, it typically requires a great deal of technical and human resources. The aim of this research was to propose a quasi-optimal procedure for processing single-channel electroencephalograms (EEG) from overnight recordings, maximizing the accuracy of automatic apnea or hypopnea detection, as well as distinguishing between the OSA and CSA types. The proposed methodology consisted of processing the EEG signals divided into epochs, with the best methods selected at the stages of preprocessing, feature extraction and selection, and classification. Normal breathing was unmistakably distinguished from apnea by the k-nearest neighbors (kNN) classifier and an artificial neural network (ANN), and with 99.98% accuracy by the support vector machine (SVM). The average accuracy of multinomial classification was 82.29%, 83.26%, and 82.25% for the kNN, SVM and ANN, respectively. The sensitivity and precision of OSA and CSA detection ranged from 55% to 66%, and the misclassifications concerned only the apnea type.
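The epoch-wise classification stage can be pictured with the scikit-learn sketch below, which compares kNN, SVM, and ANN classifiers after a simple feature-selection step. The placeholder features, selection method, and hyperparameters are assumptions and do not reproduce the authors' quasi-optimal configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one feature vector per EEG epoch, labels 0 = normal, 1 = OSA, 2 = CSA
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 40))
y = rng.integers(0, 3, size=600)

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
}
for name, clf in classifiers.items():
    # Standardise features, keep the 20 most informative ones, then classify
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```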
Go to article

Authors and Affiliations

Monika A. Prucnal 1
Adam G. Polak 1

  1. Department of Electronic and Photonic Metrology, Faculty of Electronics, Photonics and Microsystems, Wroclaw University of Science and Technology, Wroclaw, Poland

Abstract

Obstructive sleep apnea is a common form of sleep apnea. It is currently diagnosed by means of polysomnography, a process that is time-consuming, expensive, and requires a human observer throughout the study, which makes it inconvenient; new detection techniques are therefore being developed to overcome these difficulties. Heart rate variability has proven to be related to sleep apnea episodes, so features derived from the ECG signal can be used to detect sleep apnea. The proposed detection technique uses a support vector machine tuned with a grid search algorithm, and the classifier is trained on heart rate variability features derived from the ECG signal. The developed system was tested on the dataset, and the results show that this classification system can recognize the disorder with an accuracy of 89%. Further, the use of the grid search algorithm makes the system a reliable and accurate means of classifying sleep apnea and can serve as a basis for the future development of its screening.
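A minimal sketch of the described approach, an SVM tuned by grid search over heart rate variability features, is shown below. The feature layout, parameter grid, and data split are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: per-segment HRV features (e.g. mean RR, SDNN, RMSSD, LF/HF ratio)
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 8))
y = rng.integers(0, 2, size=400)          # 1 = apnea segment, 0 = normal breathing

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
param_grid = {"svm__C": [0.1, 1, 10, 100], "svm__gamma": ["scale", 0.01, 0.1, 1]}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```

The grid search simply cross-validates every combination in the parameter grid and keeps the best-scoring SVM, which is then evaluated on the held-out segments.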
Go to article

Authors and Affiliations

K.K. Valavan 1
S. Manoj 1
S. Abishek 1
T.G. Gokull Vijay 1
A.P. Vojaswwin 1
J. Rolant Gini 1
K.I. Ramachandran 2

  1. Department of Electronics and Communication Engineering, Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India
  2. Centre for Computational Engineering & Networking (CEN), Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India

Abstract

Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a common and high-risk sleep-related breathing disorder, and snoring detection is a simple and non-invasive way to screen for it. In many studies, feature maps are obtained by applying a short-time Fourier transform (STFT) and feeding the model with single-channel input tensors. However, this approach may limit the potential of convolutional networks to learn diverse representations of snore signals. This paper proposes a snoring sound detection algorithm using a multi-channel spectrogram and a convolutional neural network (CNN). Sleep recordings from 30 subjects were collected at the hospital, and four different feature maps were extracted from them as model input: the spectrogram, the Mel-spectrogram, the continuous wavelet transform (CWT), and a multi-channel spectrogram composed of the three single-channel maps. Three methods of data set partitioning were used to evaluate the performance of the feature maps, which were compared with a CNN model on training and test sets of independent subjects. The results show that the accuracy of the multi-channel spectrogram reaches 94.18%, surpassing the Mel-spectrogram, which performs best among the single-channel spectrograms. This study optimizes the feature extraction stage to match the feature learning capability of the deep learning model, providing a more effective feature map for snoring detection.
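The multi-channel input described above can be sketched as follows: a spectrogram, a Mel-spectrogram, and a CWT scalogram of the same audio snippet are resized and stacked into a three-channel tensor for a CNN. The frame parameters, wavelet, scale range, and map size below are assumptions, not the paper's settings.

```python
import numpy as np
import librosa
import pywt
from skimage.transform import resize

def multichannel_map(snippet, sr=16000, size=(64, 64)):
    """Stack spectrogram, Mel-spectrogram and CWT scalogram of one snippet
    into a (3, H, W) tensor suitable as CNN input."""
    power = np.abs(librosa.stft(snippet, n_fft=512, hop_length=256)) ** 2
    spec = librosa.power_to_db(power)
    mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=snippet, sr=sr, n_fft=512,
                                       hop_length=256, n_mels=64))
    coeffs, _ = pywt.cwt(snippet, scales=np.arange(1, 65), wavelet="morl")
    cwt = np.log1p(np.abs(coeffs))
    # Resize each map to a common shape and standardise it per channel
    channels = [resize(m, size, anti_aliasing=True) for m in (spec, mel, cwt)]
    channels = [(c - c.mean()) / (c.std() + 1e-8) for c in channels]
    return np.stack(channels)             # shape (3, 64, 64)

snippet = np.random.randn(16000)           # 1 s placeholder snore snippet
print(multichannel_map(snippet).shape)
```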
Go to article

Authors and Affiliations

Ziqiang Ye 1
Jianxin Peng 2
Xiaowen Zhang 3
Lijuan Song 3

  1. School of Physics and Optoelectronics, South China University of Technology, Guangzhou, China
  2. School of Physics and Optoelectronics, South China University of Technology, Guangzhou, China
  3. State Key Laboratory of Respiratory Disease, Department of Otolaryngology-Head and Neck Surgery
