CN116343828A - Sleep apnea event recognition system based on time sequence algorithm - Google Patents


Info

Publication number
CN116343828A
CN116343828A (application CN202310438705.0A)
Authority
CN
China
Prior art keywords
module
sleep
audio
mfcc
time sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310438705.0A
Other languages
Chinese (zh)
Inventor
邱禧荷
李斌
谭晓宇
方志军
沈骏
黄晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN202310438705.0A priority Critical patent/CN116343828A/en
Publication of CN116343828A publication Critical patent/CN116343828A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/48: Other medical applications
    • A61B5/4806: Sleep evaluation
    • A61B5/4818: Sleep apnoea
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03: G10L25/00 techniques characterised by the type of extracted parameters
    • G10L25/18: the extracted parameters being spectral information of each sub-band
    • G10L25/21: the extracted parameters being power information
    • G10L25/24: the extracted parameters being the cepstrum
    • G10L25/27: G10L25/00 techniques characterised by the analysis technique
    • G10L25/30: the analysis technique using neural networks
    • G10L25/48: G10L25/00 techniques specially adapted for particular use
    • G10L25/51: specially adapted for comparison or discrimination
    • G10L25/66: for extracting parameters related to health condition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Epidemiology (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a sleep apnea event recognition system based on a time series algorithm, comprising: an audio acquisition module for acquiring the user's night sleep audio; an audio signal feature extraction module for extracting MFCC features from the acquired night sleep audio; a feature labeling module for labeling the MFCC features; a training set construction module for constructing a training set for the time series classification algorithm model from the labeled MFCC features; and a statistical classification module for training a linear classifier on that training set, classifying to obtain the number of the user's nighttime sleep apnea events, and completing sleep apnea event recognition through the AHI index formula. Compared with the prior art, the system has the advantages of strong practicability, low model computational complexity and high computation speed.

Description

Sleep apnea event recognition system based on time sequence algorithm
Technical Field
The invention relates to the technical field of audio signal processing, in particular to a sleep apnea event identification system based on a time sequence algorithm.
Background
Obstructive sleep apnea (OSA) is a sleep-related breathing disorder and one of the most common sleep disorders. OSA is often accompanied by reduced sleep quality, somnolence, fatigue and inattention, and increases the risk of various chronic diseases such as hypertension, cardiovascular disease and diabetes, which can be seriously life-threatening. According to recent studies, the prevalence of OSA in China is quite high, with more than 20% of the population suffering from OSA and 8.8% with severe disease; timely diagnosis and treatment of OSA is therefore critical for preventing and alleviating the risk of related diseases and for reducing the social and economic burden.
Laboratory polysomnography (PSG) is the gold standard for diagnosing OSA: the severity of OSA is judged from the apnea and hypopnea events during sleep, from which the apnea-hypopnea index (AHI) is calculated. However, PSG examination requires an overnight stay in a sleep laboratory with a large number of lead wires attached, which is inconvenient for the patient's life and work; many patients with sleep disorders are therefore not diagnosed in time, affecting subsequent treatment. Snoring is the earliest and a typical related symptom of OSA, and it can be captured directly with a microphone, greatly reducing the cost of data acquisition. Recent studies have shown that OSA-assisted detection can be performed by extracting and analyzing acoustic features of snoring.
Time series classification algorithms play an important role in biomedical signal classification. Such signals include the electrocardiogram (ECG), electroencephalogram (EEG), electromyogram (EMG), respiratory signals, blood pressure signals, and so on. Sound data is likewise a signal with a temporal character and can be represented as time series data. Common time series classification algorithms include feature-extraction-based methods and deep-learning-based methods. Feature-extraction-based methods typically extract features from the sound signal and then classify those features with a classifier. Deep-learning-based methods can take the raw sound signal directly as input and classify it with a deep model; this approach typically requires larger datasets and more computational resources, but can identify the sound signal more accurately and extract more information.
In summary, existing sleep apnea event recognition methods suffer from high cost and cumbersome detection. In view of this, Chinese application CN111613210A proposes a classification detection system for various apnea syndromes in the field of snore detection and disease discrimination. It comprises an audio acquisition module for acquiring audio of the examined patient over a whole night of sleep; a snore extraction module for extracting all snore-segment audio from the complete audio; a feature extraction module for extracting features of the acquired snore segments; a snore recognition module for automatically recognizing and detecting the various snore types on all snore segments using a model based on the EfficientNet neural network; and a statistics judging module for counting the various snore conditions and completing the classification detection of the apnea syndromes according to the AHI index. Although this application can effectively identify apnea syndromes, the model it uses is relatively complex, resulting in insufficient computation speed.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a sleep apnea event identification system based on a time sequence algorithm.
The aim of the invention can be achieved by the following technical scheme:
a sleep apnea event recognition system based on a time sequence algorithm comprises an audio acquisition module, an audio signal feature extraction module, a feature labeling module, a training set construction module and a statistical classification module;
the audio acquisition module is used for acquiring night sleep audio of the user;
the audio signal feature extraction module uses MFCC to extract features from the acquired night sleep audio of the user, obtaining MFCC features;
the characteristic labeling module is used for labeling the MFCC characteristics;
the training set construction module constructs a training set of the time sequence classification algorithm model based on the marked MFCC characteristics;
the statistical classification module trains a linear classifier based on the training set of the time sequence classification algorithm model, classifies the number of the sleep apnea events of the user at night, and calculates and completes the recognition of the sleep apnea events through an AHI index formula.
Further, in the audio signal feature extraction module, MFCC is used to convert the night sleep audio of the user from time domain to frequency domain.
Further, features extracted using MFCCs include pitch, volume, and formants.
Further, the feature extraction of the extracted night sleep audio of the user by using the MFCC specifically includes the following steps:
dividing an audio signal into a plurality of time windows, and converting the audio signal into a frequency domain representation by applying short-time Fourier transform in each window to obtain a frequency domain signal;
converting the frequency domain signal into a Mel frequency representation by using a Mel filter bank, and taking logarithms of energy in each frequency band to obtain a logarithmic energy sequence;
applying discrete cosine transform to the logarithmic energy sequences within each window to obtain a set of MFCC coefficients;
the first 13-dimensional MFCC coefficients are kept as characteristic representations.
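The extraction steps above can be sketched in NumPy as follows. This is an illustrative sketch only: the sampling rate, frame length, mel-filter count and all function names are assumptions, not part of the disclosure.

```python
import numpy as np

def mfcc_features(signal, sr=16000, frame_len_ms=25, n_mels=26, n_coeffs=13):
    """Sketch of the MFCC pipeline described above (hypothetical parameters).

    Steps: windowing -> |STFT|^2 -> mel filter bank -> log -> DCT,
    keeping the first `n_coeffs` coefficients per frame."""
    frame_len = int(sr * frame_len_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    frames = frames * np.hamming(frame_len)            # window each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power spectrum per frame

    # Triangular mel filter bank (centres evenly spaced on the mel scale)
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((frame_len // 2) * mel_to_hz(mel_pts) / (sr / 2)).astype(int)
    fbank = np.zeros((n_mels, power.shape[1]))
    for m in range(1, n_mels + 1):
        left, centre, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, centre):
            fbank[m - 1, k] = (k - left) / max(centre - left, 1)
        for k in range(centre, right):
            fbank[m - 1, k] = (right - k) / max(right - centre, 1)

    log_energy = np.log(power @ fbank.T + 1e-10)       # log mel-band energies

    # DCT-II over the mel bands; keep only the first n_coeffs coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_mels))
    return log_energy @ dct.T                          # (n_frames, n_coeffs)
```

In practice a tested library MFCC routine would normally replace this hand-rolled filter bank; the sketch only mirrors the four steps listed above.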
Further, the first 13-dimensional MFCC coefficients include 1-dimensional signal frame energy and 12-dimensional DCT coefficients.
Further, the feature labeling module labels the MFCC features through a programmed automatic labeling procedure; the labels are apnea/hypopnea event and normal sleep.
Further, the training set construction module constructs a training set of the time sequence classification algorithm model based on the annotated MFCC features, specifically including the following steps:
extracting signal-frame features from the user's night sleep audio using 25 ms frames, and aggregating the features every 1 second;
averaging each of the 13 MFCC dimensions and outputting one feature per second in each dimension, representing the audio characteristics within that 1 second;
labeling each time series with one of two labels, apnea/hypopnea event or normal sleep, through the automatic labeling program, and storing it as a dataset sample;
repeating the steps until the preset standard is reached, and completing the training set construction of the time sequence classification algorithm model.
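The per-second aggregation in the steps above can be sketched as follows (the function and parameter names are illustrative; 40 non-overlapping 25 ms frames per second is an assumption about the framing):

```python
import numpy as np

def aggregate_per_second(frame_mfcc, frames_per_second=40):
    """Average 25 ms-frame MFCC vectors over each 1-second block.

    frame_mfcc: (n_frames, 13) array; 40 non-overlapping 25 ms frames = 1 s.
    Returns (n_seconds, 13): one averaged feature vector per second."""
    n_seconds = frame_mfcc.shape[0] // frames_per_second
    trimmed = frame_mfcc[:n_seconds * frames_per_second]
    return trimmed.reshape(n_seconds, frames_per_second, -1).mean(axis=1)
```

A 30-60 s audio segment then becomes a 13-dimensional time series with 30-60 steps, which is what the time series classifier consumes.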
Further, the statistical classification module classifies the downstream task using the MiniRocket time series classification algorithm, comprising the following steps:
performing feature transformation by applying convolution kernels to the time series data, each convolution kernel producing a corresponding feature map;
aggregating each feature map into two feature values: the global maximum of the feature map and its proportion of positive values (PPV);
training a ridge classifier on the transformed time series features, converting the label y into {-1, 1} and performing a regression;
applying each convolution kernel with a dilation d (dilated convolution) so that different receptive fields capture time series features at different scales;
for the night sleep audio clip feature time series X_i of a given user and a convolution kernel ω_j with dilation d, the feature transformation through the dilated convolution kernel gives:

$$\hat{X}_i(t) = (X_i *_d \omega_j)(t) = \sum_{k=0}^{l_{kernel}-1} \omega_j(k)\, X_i(t + k \cdot d)$$

wherein the dilation d takes the values d = {[2^0], ..., [2^{max}]}, [·] denoting rounding, with

$$max = \log_2 \frac{l_{input} - 1}{l_{kernel} - 1}$$

wherein l_{kernel} is the convolution kernel length, l_{input} is the length of the input sequence, and the value of max ensures that the receptive field of the dilated convolution covers the whole time series.
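A minimal sketch of the dilated-convolution transform and the two-value aggregation described above (kernel values, the zero bias, and all names are illustrative assumptions; the actual MiniRocket implementation is far more optimized):

```python
import numpy as np

def dilated_features(x, kernel, dilation):
    """Apply one dilated convolution kernel to a 1-D series and aggregate
    the resulting feature map into the two values named above:
    the global maximum and the proportion of positive values (PPV)."""
    k = len(kernel)
    span = (k - 1) * dilation
    # feature map over all valid positions of the dilated kernel
    fmap = np.array([
        sum(kernel[j] * x[t + j * dilation] for j in range(k))
        for t in range(len(x) - span)
    ])
    return fmap.max(), (fmap > 0).mean()

def dilations(l_input, l_kernel=9, n=4):
    """d = {[2^0], ..., [2^max]} with max = log2((l_input-1)/(l_kernel-1)),
    so the largest receptive field covers the whole series."""
    max_exp = np.log2((l_input - 1) / (l_kernel - 1))
    exps = np.linspace(0.0, max_exp, n)
    return sorted({int(2 ** e) for e in exps})
```

For example, a 60-step per-second feature sequence with a length-9 kernel yields dilations from 1 up to about (60-1)/(9-1), matching the coverage condition above.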
Further, the statistical classification module counts the numbers of apnea and hypopnea events classified in the user's night sleep audio, and the recognition of sleep apnea events is completed by calculating the AHI index formula, whose expression is:

$$\mathrm{AHI} = \frac{N_{apnea} + N_{hypopnea}}{T_{sleep}}$$

wherein N_{apnea} and N_{hypopnea} are the numbers of apnea and hypopnea events during the night and T_{sleep} is the total sleep time in hours.
Further, the statistical classification module outputs the patient's OSA symptom grade based on the calculated AHI index; the OSA severity classification criteria are: simple snoring: AHI ∈ [0, 5); mild: AHI ∈ [5, 15); moderate: AHI ∈ [15, 30); severe: AHI ∈ [30, +∞).
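The AHI computation and severity grading just described can be sketched as follows (function and variable names are illustrative, not from the disclosure):

```python
def ahi_severity(n_events, sleep_hours):
    """AHI = (apnea + hypopnea events per night) / hours of sleep,
    mapped onto the severity bands used by the system."""
    ahi = n_events / sleep_hours
    if ahi < 5:
        grade = "simple snoring"
    elif ahi < 15:
        grade = "mild"
    elif ahi < 30:
        grade = "moderate"
    else:
        grade = "severe"
    return ahi, grade
```

For example, 56 detected apnea/hypopnea events over 7 hours of sleep gives AHI = 8, i.e. a mild grade.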
Compared with the prior art, the invention has the following beneficial effects:
1. Based on the time series classification algorithm MiniRocket, the invention provides an intelligent auxiliary decision system that classifies the OSA severity of a suspected OSA patient by computing the AHI index from sleep snoring audio features. It analyzes the user's sleep apnea events in a more economical way, reducing the user's financial burden, with good practicability and economic benefit.
2. The automatic labeling system avoids manual labeling, saving cost, time and labor.
3. The invention is an end-to-end classification model: given only the raw sleep audio as input, it outputs the severity grade of the user's sleep apnea, avoiding manual feature engineering.
4. The invention converts the audio information into time series features and captures the relations between signal frames and adjacent frames using convolution kernels with different dilations, enhancing the feature representation.
5. The MiniRocket time series algorithm adopted by the invention is very fast; compared with a deep network it reduces computational complexity and model computation time, and outputs classification results quickly and accurately.
Drawings
FIG. 1 is a schematic diagram of the recognition process of the system according to the present invention;
FIG. 2 is a flow chart of the MFCC feature extraction steps in an embodiment of the present invention;
FIG. 3 is a model diagram of the time series algorithm in an embodiment of the present invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. The present embodiment is implemented on the premise of the technical scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following examples.
Based on the time series classification algorithm MiniRocket, the invention provides an intelligent auxiliary decision system that classifies the OSA severity of a suspected OSA patient by analyzing the audio-data features of the patient's sleep snoring and computing the AHI index, greatly reducing the patient's financial burden.
Specifically, as shown in fig. 1, the invention provides a sleep apnea event recognition system based on a time sequence algorithm, which comprises an audio acquisition module, an audio signal feature extraction module, a feature labeling module, a training set construction module and a statistical classification module;
the audio acquisition module is used for acquiring night sleep audio of the user; the audio signal feature extraction module adopts the MFCC to extract the features of the extracted night sleep audio of the user to obtain the MFCC features; the feature labeling module is used for labeling the MFCC features; the training set construction module constructs a training set of the time sequence classification algorithm model based on the marked MFCC characteristics; the statistical classification module trains a linear classifier based on a training set of a time sequence classification algorithm model, classifies the number of the sleep apnea events of the user at night, and calculates and completes the recognition of the sleep apnea events through an AHI index formula.
1. Automatic feature labeling based on MFCC (Mel-frequency cepstral coefficient) audio features
Firstly, the primary features of the patient's overnight sleep audio are extracted from the snore signal using the mel cepstral feature method (MFCC). The primary function of the MFCC is to transform the audio signal from the time domain to the frequency domain and extract features such as pitch, volume and formants for subsequent analysis. The main extraction process is shown in fig. 2: the audio signal is divided into several time windows, and a short-time Fourier transform (STFT) is applied within each window to convert it to a frequency-domain representation. Next, the frequency-domain signal is converted to a mel-frequency representation using a mel filter bank, and the logarithm of the energy in each frequency band is taken. A discrete cosine transform (DCT) is then applied to the log-energy sequence within each window, yielding a set of MFCC coefficients. The first 13 MFCC coefficients are retained as the feature representation, comprising the 1-dimensional signal-frame energy, i.e. the total energy of the audio signal in each frame, and the 12-dimensional DCT coefficients, to improve discrimination accuracy.
Secondly, the extracted MFCC features are labeled with expert labels, namely apnea/hypopnea event and normal sleep. To avoid the cost of manual labeling, each audio segment of a patient is automatically cut and labeled by a programmed automatic labeling procedure.
2. Time series processing of audio features
In the preprocessing part, signal-frame features are extracted from the intercepted audio segments using 25 ms frames; the features are aggregated every 1 second by averaging each of the 13 MFCC dimensions, outputting one feature per second in each dimension that represents the audio characteristics within that second. An audio segment spans 30-60 seconds, i.e. 30-60 time steps; each time series is labeled with one of two labels, apnea/hypopnea event or normal sleep, through the automatic labeling program and stored as a dataset sample. These steps are repeated to complete the construction of the training set for the time series classification algorithm model.
3. Time series classification algorithm model based on MiniRocket
MiniRocket is a time series classification algorithm improved from Rocket. The idea originates from convolutional neural networks (CNN), which use convolution kernels for feature extraction. MiniRocket borrows these feature-extracting convolution kernels: it transforms the time series data with a large number of small, fixed convolution kernels, trains a linear classifier on the transformed data, and classifies the downstream task.
Specifically, using MiniRocket comprises the following steps:
performing feature transformation by applying convolution kernels to the time series data, each convolution kernel producing a corresponding feature map; aggregating each feature map into two feature values: the global maximum of the feature map and its proportion of positive values (PPV); and training a ridge classifier, also known as a least-squares support vector machine with a linear kernel, on the transformed time series features, converting the label y into {-1, 1} and performing a regression;
each convolution kernel is applied with a dilation d (dilated convolution) so that different receptive fields capture time series features at different scales. Specifically, given a patient audio clip feature time series X_i and a convolution kernel ω_j with dilation d, the feature transformation through the dilated convolution kernel gives:

$$\hat{X}_i(t) = (X_i *_d \omega_j)(t) = \sum_{k=0}^{l_{kernel}-1} \omega_j(k)\, X_i(t + k \cdot d)$$

wherein the dilation d takes values in d = {[2^0], ..., [2^{max}]}, [·] denoting rounding, with

$$max = \log_2 \frac{l_{input} - 1}{l_{kernel} - 1}$$

wherein l_{kernel} is the convolution kernel length, l_{input} is the length of the input sequence, and the value of max ensures that the receptive field of the dilated convolution covers the whole time series. The purpose of the dilated convolution is to widen the range of the original sequence over which each kernel performs its feature transformation; the receptive fields of the kernels are randomly distributed, ensuring the richness of the algorithm's audio feature transformation.
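The ridge-classifier training step described in this section can be sketched in closed form as follows. This is a self-contained illustration; the reference MiniRocket implementation instead uses scikit-learn's RidgeClassifierCV, and all names here are assumptions.

```python
import numpy as np

def fit_ridge(features, y, alpha=1.0):
    """Least-squares ridge fit on transformed features, labels y in {-1, +1}.

    Closed form: w = (F^T F + alpha I)^{-1} F^T y, with a bias folded in
    as a constant column; prediction is sign(F w)."""
    F = np.hstack([features, np.ones((features.shape[0], 1))])  # add bias column
    w = np.linalg.solve(F.T @ F + alpha * np.eye(F.shape[1]), F.T @ y)
    return w

def predict(features, w):
    """Sign of the ridge regression output gives the class in {-1, +1}."""
    F = np.hstack([features, np.ones((features.shape[0], 1))])
    return np.sign(F @ w)
```

Treating the {-1, +1} labels as regression targets and thresholding at zero is exactly the least-squares view of the linear-kernel SVM mentioned above; the ridge penalty alpha keeps the solve well-conditioned even with thousands of PPV/max features.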
In OSA auxiliary diagnosis, this embodiment defines the normal-sleep label as 0 and the apnea/hypopnea event label as 1, and performs a binary classification training task, as shown in the flowchart of fig. 3. Audio feature information is extracted through MFCC feature calculation, averaged every 1 second and used as time series data, and the MiniRocket algorithm classifies the apnea events. Finally, the numbers of classified apnea and hypopnea events in the patient's overnight recorder audio are counted, the patient's AHI index is calculated by the AHI index formula, and the patient's OSA symptom grade is output, completing the intelligent auxiliary diagnosis:

$$\mathrm{AHI} = \frac{N_{apnea} + N_{hypopnea}}{T_{sleep}}$$

wherein N_{apnea} and N_{hypopnea} are the numbers of apnea and hypopnea events during the night and T_{sleep} is the total sleep time in hours.
The OSA symptom classification criteria used in this embodiment are: simple snoring: AHI ∈ [0, 5); mild: AHI ∈ [5, 15); moderate: AHI ∈ [15, 30); severe: AHI ∈ [30, +∞).
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention by one of ordinary skill in the art without undue burden. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by the person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.

Claims (10)

1. A sleep apnea event recognition system based on a time series algorithm, characterized by comprising an audio acquisition module, an audio signal feature extraction module, a feature labeling module, a training set construction module and a statistical classification module;
the audio acquisition module is used for acquiring night sleep audio of the user;
the audio signal feature extraction module uses MFCC to extract features from the acquired night sleep audio of the user, obtaining MFCC features;
the characteristic labeling module is used for labeling the MFCC characteristics;
the training set construction module constructs a training set of the time sequence classification algorithm model based on the marked MFCC characteristics;
the statistical classification module trains a linear classifier based on the training set of the time sequence classification algorithm model, classifies the number of the sleep apnea events of the user at night, and calculates and completes the recognition of the sleep apnea events through an AHI index formula.
2. The sleep apnea event recognition system of claim 1, wherein the audio signal feature extraction module uses MFCC to transform the user night sleep audio from time domain to frequency domain.
3. The sleep apnea event identification system of claim 1, wherein the features extracted using MFCC include pitch, volume and formants.
4. The sleep apnea event recognition system based on time series algorithm of claim 1, wherein the feature extraction of the extracted night sleep audio of the user using MFCC comprises the following steps:
dividing an audio signal into a plurality of time windows, and converting the audio signal into a frequency domain representation by applying short-time Fourier transform in each window to obtain a frequency domain signal;
converting the frequency domain signal into a Mel frequency representation by using a Mel filter bank, and taking logarithms of energy in each frequency band to obtain a logarithmic energy sequence;
applying discrete cosine transform to the logarithmic energy sequences within each window to obtain a set of MFCC coefficients;
the first 13-dimensional MFCC coefficients are kept as characteristic representations.
5. The sleep apnea event identification system based on a time series algorithm of claim 4, wherein the first 13-dimensional MFCC coefficients include 1-dimensional signal frame energy and 12-dimensional DCT coefficients.
6. The sleep apnea event recognition system based on time series algorithm of claim 1, wherein the feature labeling module labels the MFCC features through a programmed automatic labeling procedure; the labels are apnea/hypopnea event and normal sleep.
7. The sleep apnea event recognition system based on time series algorithm of claim 1, wherein the training set construction module constructs a training set of a time series classification algorithm model based on the annotated MFCC features, specifically comprising the following steps:
extracting signal-frame features from the user's night sleep audio using 25 ms frames, and aggregating the features every 1 second;
averaging each of the 13 MFCC dimensions and outputting one feature per second in each dimension, representing the audio characteristics within that 1 second;
labeling each time series with one of two labels, apnea/hypopnea event or normal sleep, through the automatic labeling program, and storing it as a dataset sample;
repeating the steps until the preset standard is reached, and completing the training set construction of the time sequence classification algorithm model.
8. The sleep apnea event recognition system based on time series algorithm of claim 1, wherein the statistical classification module classifies the downstream task using the MiniRocket time series classification algorithm, comprising the steps of:
performing feature transformation by applying convolution kernels to the time series data, each convolution kernel producing a corresponding feature map;
aggregating each feature map into two feature values: the global maximum of the feature map and its proportion of positive values (PPV);
training a ridge classifier on the transformed time series features, converting the label y into {-1, 1} and performing a regression;
applying each convolution kernel with a dilation d (dilated convolution) so that different receptive fields capture time series features at different scales;
for a given user's overnight sleep audio clip feature time series X_i and a convolution kernel ω_j with dilation d, the feature transformation through the dilated convolution kernel yields:

(X_i * ω_{j,d})_t = Σ_{k=0}^{l_kernel−1} x_{i, t+k·d} · ω_{j,k}

wherein the dilation d takes the values d = {⌊2^0⌋, …, ⌊2^max⌋}, with ⌊·⌋ denoting rounding, and

max = log2((l_input − 1) / (l_kernel − 1))

wherein l_kernel is the convolution kernel length, l_input is the length of the input sequence, and this value of max ensures that the receptive field of the dilated convolution covers the whole time series.
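The dilated-convolution transform, the two aggregate features, and the dilation schedule described in claim 8 can be sketched as below. This is a ROCKET/MiniRocket-style illustration under stated assumptions (kernel length, ridge penalty, and function names are not from the patent).

```python
import numpy as np

def dilations(l_input, l_kernel=9):
    """d = {2^0, ..., 2^max} with max = log2((l_input-1)/(l_kernel-1)),
    so the largest receptive field still covers the whole input."""
    max_exp = np.log2((l_input - 1) / (l_kernel - 1))
    return [int(2 ** e) for e in range(int(max_exp) + 1)]

def dilated_conv(x, w, d):
    """y_t = sum_k x[t + k*d] * w[k]: dilated (atrous) convolution."""
    span = (len(w) - 1) * d          # receptive field minus one
    l_out = len(x) - span
    return np.array([np.dot(x[t:t + span + 1:d], w) for t in range(l_out)])

def features(y):
    """Two aggregate values per feature map: global max and PPV."""
    return np.max(y), np.mean(y > 0)

def ridge_fit(Phi, y, lam=1.0):
    """Closed-form ridge regression on labels mapped to {-1, +1};
    the sign of Phi @ w gives the predicted class."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)
```

Each kernel/dilation pair produces one feature map, each map contributes two scalar features, and the stacked features for all clips form the design matrix `Phi` for the ridge classifier.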
9. The sleep apnea event identification system based on a time series algorithm of claim 1, wherein the statistical classification module counts the numbers of apnea and hypopnea events in the user's overnight sleep audio, and the identification of sleep apnea events is completed by calculating the AHI index, whose formula is:

AHI = (N_apnea + N_hypopnea) / T_sleep

wherein N_apnea and N_hypopnea are the numbers of apnea and hypopnea events over the night, and T_sleep is the total sleep time in hours.
10. The sleep apnea event identification system based on a time series algorithm of claim 1, wherein the statistical classification module outputs the patient's OSA severity grade based on the calculated AHI index, with the OSA grading criteria: simple snoring: AHI ∈ [0, 5]; mild: AHI ∈ [5, 15]; moderate: AHI ∈ [15, 30]; severe: AHI ∈ [30, +∞).
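The AHI calculation of claim 9 and the grading of claim 10 can be sketched as below; handling of the interval endpoints is an assumption, since the claimed bands share their boundary values.

```python
def ahi(n_apnea, n_hypopnea, total_sleep_hours):
    """AHI = (apnea + hypopnea events) / total sleep time in hours."""
    return (n_apnea + n_hypopnea) / total_sleep_hours

def osa_grade(ahi_value):
    """Map an AHI value to the severity bands of claim 10."""
    if ahi_value < 5:
        return "simple snoring"
    if ahi_value < 15:
        return "mild"
    if ahi_value < 30:
        return "moderate"
    return "severe"
```

For example, 30 apnea/hypopnea events over a 6-hour recording give AHI = 5, falling in the mild band.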
CN202310438705.0A 2023-04-21 2023-04-21 Sleep apnea event recognition system based on time sequence algorithm Pending CN116343828A (en)


Publications (1)

Publication Number Publication Date
CN116343828A true CN116343828A (en) 2023-06-27

Family

ID=86876126



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination