CN111012306B - Sleep respiratory sound detection method and system based on double neural networks - Google Patents


Info

Publication number
CN111012306B
CN111012306B (application CN201911134574.7A)
Authority
CN
China
Prior art keywords
frame
neural network
artificial neural
training data
data samples
Prior art date
Legal status
Active
Application number
CN201911134574.7A
Other languages
Chinese (zh)
Other versions
CN111012306A
Inventor
许志勇
董文秀
赵兆
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201911134574.7A
Publication of CN111012306A
Application granted
Publication of CN111012306B
Legal status: Active
Anticipated expiration


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes
    • A61B5/4806 Sleep evaluation
    • A61B5/4815 Sleep quality
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device


Abstract

The invention discloses a sleep respiratory sound detection method and system based on a dual neural network. The method comprises the following steps: collecting measured sleep sound data and labelling breath sounds and non-breath sounds; dividing the measured sleep sound data into training data samples and test data samples; obtaining an energy threshold that distinguishes breath sounds of different intensities, and dividing the training data samples into two classes according to this threshold; training a corresponding artificial neural network with each class of training data samples; and performing breath-sound detection on sleep sound data to be examined with the trained artificial neural networks. The system implements the method. Based on measured sleep data and dual-artificial-neural-network recognition, the invention achieves fast and effective detection of sleep breath sounds, distinguishes low-intensity breath sounds from non-breath sounds, uses a detection principle that is simple and easy to implement, attains high detection accuracy, and is of significance for vital-sign detection, sleep-quality monitoring and similar applications.

Description

Sleep respiratory sound detection method and system based on double neural networks
Technical Field
The invention belongs to the technical field of non-speech recognition, and particularly relates to a sleep breathing sound detection method and system based on a double neural network.
Background
Breath-sound detection is important for vital-sign detection, sleep-quality monitoring, reducing misdiagnosis of sleep-related breathing disorders, and assisting physicians with pathological diagnosis. However, detecting low-intensity sleep breath sounds that are inaudible to the human ear from sleep recordings remains a difficult research problem. Takahiro Emoto et al., in the paper "Detection of sleep breathing sound based on a single artificial neural network", disclose a breath-sound detection method based on a single artificial neural network, but the method places relatively high demands on the signal-to-noise ratio of the acoustic environment: its average classification accuracy is only 71.5% in sleep environments with a low signal-to-noise ratio, and its classification accuracy degrades further in the presence of interference such as air-conditioning noise, footsteps and coughing that occur in real sleep environments.
Disclosure of Invention
The invention aims to provide a sleep respiratory sound detection method and system with a simple detection principle and high detection accuracy.
The technical solution for realizing the purpose of the invention is as follows: a sleep respiratory sound detection method based on a dual neural network comprises the following steps:
step 1, collecting actually measured sleep sound data, marking the data, and marking the breathing sound as 1 and the non-breathing sound as 0;
step 2, dividing the actually measured sleep sound data into training data samples and testing data samples;
step 3, acquiring an energy threshold T_h for distinguishing breath sounds of different intensities from the training data samples, and dividing the training data samples into two classes of training data samples according to this threshold;
step 4, training the corresponding artificial neural networks by utilizing the two types of training data samples respectively;
and 5, carrying out respiratory sound detection on the sleep sound data to be detected by using the trained artificial neural network.
A dual neural network-based sleep respiratory sound detection system, comprising:
the data acquisition module is used for acquiring actually measured sleep sound data, marking the data, and marking the breathing sound as 1 and the non-breathing sound as 0;
the first sample dividing module is used for dividing the measured sleep sound data collected by the data acquisition module into training data samples and test data samples;
a second sample division module for obtaining an energy threshold T_h for distinguishing breath sounds of different intensities from the training data samples divided by the first sample division module, and dividing the training data samples into two classes of training data samples according to this threshold;
the training module is used for utilizing the two types of training data samples divided by the second sample dividing module to respectively train the corresponding artificial neural networks;
and the detection module is used for detecting the breathing sound of the sleep sound data to be detected by utilizing the artificial neural network trained by the training module.
Compared with the prior art, the invention has the following notable advantages: 1) it automatically detects sleep breath sounds recorded by a microphone or similar device, saving a large amount of manual data-selection time, with accurate detection results and excellent performance; 2) based on a dual-artificial-neural-network technique, it can detect low-intensity sleep breath sounds during actual sleep and distinguish low-intensity breath sounds from non-breath sounds (such as snoring), with a detection principle that is simple and easy to implement.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is a flow chart of a sleep respiratory sound detection method based on a dual neural network according to the present invention.
FIG. 2 is a flow diagram of artificial neural network training in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
With reference to fig. 1, the present invention provides a sleep respiratory sound detection method based on a dual neural network, including the following steps:
step 1, collecting actually measured sleep sound data, marking the data, and marking the breathing sound as 1 and the non-breathing sound as 0;
step 2, dividing the actually measured sleep sound data into training data samples and testing data samples;
step 3, acquiring an energy threshold T_h for distinguishing breath sounds of different intensities from the training data samples, and dividing the training data samples into two classes of training data samples according to this threshold;
step 4, training the corresponding artificial neural networks by using the two types of training data samples respectively;
and 5, carrying out breathing sound detection on the sleep sound data to be detected by using the trained artificial neural network.
Further, in one embodiment, obtaining in step 3 the energy threshold T_h for distinguishing breath sounds of different intensities from the training data samples specifically comprises:
step 3-1, pre-emphasis and frame pre-processing are carried out on the training data samples;
step 3-2, the frame energy of each frame of the training data sample is obtained, and the formula is as follows:
E_j = Σ_{n=1}^{M} b_j^2(n)
where E_j is the frame energy of the j-th frame of the training data samples, M is the frame length, and b_j(n) is the n-th sample of the j-th frame;
step 3-3, constructing a statistical histogram of the frame energies, locating its minimum point, and taking this minimum as the energy threshold T_h for distinguishing breath sounds of different intensities;
step 3-4, dividing the training data samples into two classes according to the energy threshold T_h, specifically: training data samples whose frame energy is greater than T_h are divided into the first class of training data samples, and training data samples whose frame energy is less than or equal to T_h are divided into the second class of training data samples.
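The thresholding of steps 3-1 to 3-4 can be sketched as follows. This is a minimal illustration, not the patented implementation: the pre-emphasis coefficient (0.97), non-overlapping framing, the histogram bin count, and the local-minimum search are all assumed details that the text does not fix.

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """Pre-emphasis filter y[n] = x[n] - alpha * x[n-1] (alpha is an assumed value)."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def frame_energies(x, frame_len):
    """Split x into non-overlapping frames of length M and return E_j = sum_n b_j(n)^2."""
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames ** 2).sum(axis=1)

def energy_threshold(energies, bins=50):
    """Take the first interior local minimum of the frame-energy histogram as T_h."""
    counts, edges = np.histogram(energies, bins=bins)
    for i in range(1, len(counts) - 1):
        if counts[i] < counts[i - 1] and counts[i] <= counts[i + 1]:
            return 0.5 * (edges[i] + edges[i + 1])  # bin centre as the threshold
    return np.median(energies)  # fallback if no interior minimum exists

def split_by_energy(frame_energy, t_h):
    """Step 3-4: boolean masks for the first (E_j > T_h) and second class."""
    high = frame_energy > t_h
    return high, ~high
```

A bimodal frame-energy distribution (quiet background frames versus louder breath frames) is what makes the histogram-minimum threshold meaningful; with a unimodal distribution the fallback branch is taken.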
Further, in one embodiment, with reference to fig. 2, in step 4, two types of training data samples are used to train the artificial neural networks corresponding to the two types of training data samples, which specifically includes:
step 4-1, initializing the artificial neural network structure, the learning rate, the activation function, the iteration-count threshold p_1, the minimum-gradient threshold p_2, and the connection weights and thresholds;
step 4-2, respectively carrying out normalization processing on the two types of training data samples;
4-3, respectively inputting the two types of training data samples after normalization processing into corresponding artificial neural networks for training;
4-4, updating the connection weight and the threshold value in the artificial neural network by utilizing a back propagation algorithm in combination with the output of the artificial neural network;
step 4-5, judging whether the back-propagation error has increased or whether the current iteration count n is greater than or equal to the iteration-count threshold p_1; if so, the training of the artificial neural network is complete, otherwise steps 4-3 to 4-5 are repeated.
Illustratively and preferably, in one embodiment, the back propagation algorithm in step 4-4 specifically adopts the Levenberg-Marquardt error back-propagation algorithm.
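A minimal sketch of the training loop of steps 4-1 to 4-5. For simplicity it uses a single-hidden-layer network trained with plain batch gradient descent instead of the Levenberg-Marquardt update; the network size, learning rate, and the values of p_1 and p_2 are assumed, and `train`/`predict` are hypothetical helper names.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def normalize(x):
    """Step 4-2: scale each feature to zero mean and unit variance."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)

def train(x, y, hidden=8, lr=0.1, p1=500, p2=1e-6):
    """Steps 4-3 to 4-5: forward pass, backprop update, and stopping when
    the error increases, the iteration cap p1 is hit, or the gradient
    magnitude falls below p2."""
    x = normalize(x)
    w1 = rng.normal(0, 0.5, (x.shape[1], hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    prev_err = np.inf
    for n in range(p1):                               # iteration-count threshold p1
        h = sigmoid(x @ w1 + b1)
        out = sigmoid(h @ w2 + b2)
        err = np.mean((out - y) ** 2)
        if err > prev_err:                            # step 4-5: error increased
            break
        prev_err = err
        d_out = (out - y) * out * (1 - out)           # step 4-4: backprop deltas
        d_h = (d_out @ w2.T) * h * (1 - h)
        g2 = h.T @ d_out / len(x)
        g1 = x.T @ d_h / len(x)
        if max(np.abs(g1).max(), np.abs(g2).max()) < p2:  # minimum gradient p2
            break
        w2 -= lr * g2; b2 -= lr * d_out.mean(axis=0)
        w1 -= lr * g1; b1 -= lr * d_h.mean(axis=0)
    return (w1, b1, w2, b2), prev_err

def predict(params, x):
    """Score frames in [0, 1]; re-normalizes x (a simplification: in
    practice the training-set statistics would be reused)."""
    w1, b1, w2, b2 = params
    h = sigmoid(normalize(x) @ w1 + b1)
    return sigmoid(h @ w2 + b2)
```

Each of the two networks would be trained this way on its own class of samples, with labels 1 (breath sound) and 0 (non-breath sound) as in step 1.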
Further, in one embodiment, the step 5 of performing respiratory sound detection on the sleep sound data to be detected by using the trained artificial neural network specifically includes:
step 5-1, respectively inputting the two types of training data samples into the corresponding artificial neural networks; the first type of training data sample and the second type of training data sample respectively correspond to the first type of artificial neural network and the second type of artificial neural network;
step 5-2, drawing a receiver operating characteristic ROC curve corresponding to the artificial neural network according to data output by the first type of artificial neural network and the second type of artificial neural network;
step 5-3, on the ROC curves corresponding to the first and second artificial neural networks, selecting the point closest to the point (0,1) as the optimal detection threshold for each network, denoted Th1 and Th2 respectively;
step 5-4, pre-emphasis and frame pre-processing are carried out on the test data sample, and the frame energy of each frame of the test data sample is obtained;
step 5-5, comparing the frame energy of each frame of the test data samples obtained in step 5-4 with the energy threshold T_h; if the frame energy is greater than T_h, the frame data is input into the first artificial neural network, otherwise into the second artificial neural network;
step 5-6, framing the output data of the first type artificial neural network or the second type artificial neural network in the step 5-5, and solving the frame energy of each frame of the output data;
and 5-7, comparing the frame energy of each frame of the output data obtained in step 5-6 with Th1 or Th2; if the frame energy is greater than Th1 or Th2, the frame data is judged to be breath sound, otherwise non-breath sound.
Further, in one embodiment, step 5-7 is preceded by: performing median smoothing on the frame energy of each frame of the output data of step 5-6.
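The detection stage of steps 5-1 to 5-7 might be sketched as below, with `net1_score`/`net2_score` standing in for the outputs of the two trained networks; the exhaustive threshold search over the ROC curve and the median-smoothing window size are assumed details.

```python
import numpy as np

def roc_optimal_threshold(scores, labels):
    """Step 5-3: pick the threshold whose (FPR, TPR) point lies closest to (0, 1)."""
    best_t, best_d = None, np.inf
    pos, neg = labels == 1, labels == 0
    for t in np.unique(scores):
        pred = scores >= t
        tpr = (pred & pos).sum() / max(pos.sum(), 1)
        fpr = (pred & neg).sum() / max(neg.sum(), 1)
        d = np.hypot(fpr, 1.0 - tpr)       # distance to the ideal point (0, 1)
        if d < best_d:
            best_d, best_t = d, t
    return best_t

def median_smooth(x, k=5):
    """Optional smoothing before step 5-7 (window size k is assumed)."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def detect(frame_energy, net1_score, net2_score, t_h, th1, th2):
    """Steps 5-5 and 5-7: route each frame by energy, then threshold the
    routed network score by Th1 or Th2. Returns 1 for breath sound."""
    use_net1 = frame_energy > t_h
    score = np.where(use_net1, net1_score, net2_score)
    limit = np.where(use_net1, th1, th2)
    return (score > limit).astype(int)
```

The routing step is what lets the second network specialize in the low-energy frames where low-intensity breath sounds live, rather than forcing one network to cover both intensity regimes.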
The invention provides a sleep respiratory sound detection system based on a dual neural network, which comprises:
the data acquisition module is used for acquiring actually measured sleep sound data, marking the data, and marking the breathing sound as 1 and the non-breathing sound as 0;
the first sample dividing module is used for dividing the actually measured sleep sound data acquired by the data acquisition module into training data samples and testing data samples;
a second sample dividing module for acquiring an energy threshold T_h for distinguishing breath sounds of different intensities from the training data samples divided by the first sample dividing module, and dividing the training data samples into two classes of training data samples according to this threshold;
the training module is used for utilizing the two types of training data samples divided by the second sample dividing module to respectively train the corresponding artificial neural networks;
and the detection module is used for detecting the breathing sound of the sleep sound data to be detected by utilizing the artificial neural network trained by the training module.
Further, in one embodiment, the second sample dividing module includes:
the first preprocessing unit is used for performing pre-emphasis and framing preprocessing on the training data samples;
the frame energy calculating unit is used for calculating the frame energy of each frame of the training data samples, and the formula is as follows:
E_j = Σ_{n=1}^{M} b_j^2(n)
where E_j is the frame energy of the j-th frame of the training data samples, M is the frame length, and b_j(n) is the n-th sample of the j-th frame;
an energy threshold calculation unit for constructing a statistical histogram of the frame energies, locating its minimum point, and taking this minimum as the energy threshold T_h for distinguishing breath sounds of different intensities;
a sample division unit for dividing the training data samples into two classes according to the energy threshold T_h, specifically: training data samples whose frame energy is greater than T_h are divided into the first class of training data samples, and training data samples whose frame energy is less than or equal to T_h are divided into the second class of training data samples.
Further, in one embodiment, the training module includes:
an initialization unit for initializing the artificial neural network structure, the learning rate, the activation function, the iteration-count threshold p_1, the minimum-gradient threshold p_2, and the connection weights and thresholds;
the second preprocessing unit is used for respectively carrying out normalization processing on the two types of training data samples;
the training unit is used for respectively inputting the two types of training data samples subjected to normalization processing by the second preprocessing unit into the corresponding artificial neural networks for training;
the parameter updating unit is used for updating the connection weight and the threshold value in the artificial neural network by utilizing a back propagation algorithm in combination with the output of the artificial neural network;
a first judging unit for judging whether the back-propagation error of the artificial neural network has increased or whether the current iteration count n is greater than or equal to the iteration-count threshold p_1; if so, the training of the artificial neural network is complete, otherwise the training unit and the parameter updating unit are run again.
Further, in one embodiment, the detection module includes:
the data input unit is used for respectively inputting the two types of training data samples into the corresponding artificial neural networks; the first type of training data sample and the second type of training data sample respectively correspond to the first type of artificial neural network and the second type of artificial neural network;
the ROC curve generating unit is used for drawing a receiver operating characteristic ROC curve corresponding to the artificial neural network according to data output by the first type of artificial neural network and the second type of artificial neural network;
a detection threshold determining unit for selecting, on the ROC curves corresponding to the first and second artificial neural networks, the point closest to the point (0,1) as the optimal detection threshold for each network, denoted Th1 and Th2 respectively;
the third preprocessing unit is used for performing pre-emphasis and frame-division preprocessing on the test data sample and solving the frame energy of each frame of the test data sample;
a second judging unit for comparing the frame energy of each frame of the test data samples obtained by the third preprocessing unit with the energy threshold T_h; if the frame energy is greater than T_h, the frame data is input into the first artificial neural network, otherwise into the second artificial neural network;
the fourth preprocessing unit is used for framing the output data of the first type artificial neural network or the second type artificial neural network determined by the second judging unit and solving the frame energy of each frame of the output data;
and the third judging unit is used for comparing the frame energy of each frame of the output data obtained by the fourth preprocessing unit with Th1 or Th2; if the frame energy is greater than Th1 or Th2, the frame data is judged to be breath sound, otherwise non-breath sound.
Further, in one embodiment, the detection module further includes:
and the fifth preprocessing unit is used for performing median smoothing on the frame energy of each frame of the output data obtained by the fourth preprocessing unit.
Illustratively, the method of the invention was applied to 9 segments of measured sleep sound containing 417 breath-sound segments, of which 391 were correctly detected; the specific results are shown in Tables 1 and 2 below. The data show that the detection rate of the method is 93.7% and its precision is 87.8%; compared with the results of the single-artificial-neural-network sleep breath-sound detection method proposed by Takahiro Emoto et al. (detection rate 89.7%, precision 85.8%), the method of the invention is more accurate and performs better.
TABLE 1 Detection performance index results
[table reproduced as an image in the original; contents not recoverable from this text]
TABLE 2 Detection results
[table reproduced as an image in the original; contents not recoverable from this text]
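As a sanity check on the reported figures, the detection rate follows directly from the segment counts above; the false-positive count used below (54) is not stated in the text and is only inferred from the reported 87.8% precision, so it is an assumption for illustration.

```python
def detection_metrics(tp, fn, fp):
    """Detection rate (recall) and precision from segment counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return recall, precision

# 391 of 417 breath-sound segments detected -> 26 missed.
# fp = 54 is a hypothetical value chosen so that precision comes out
# near the reported 87.8%; the actual count is in the unreproduced table.
recall, precision = detection_metrics(tp=391, fn=417 - 391, fp=54)
```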
In conclusion, the invention is based on the acquired sleep actual measurement data, adopts the double artificial neural network for identification, can realize the rapid and effective detection of the sleep respiratory sound, can distinguish the low-intensity respiratory sound from the non-respiratory sound, has simple and easily realized detection principle and high detection precision, and has important significance for vital sign detection, sleep quality monitoring and the like.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (4)

1. A sleep respiratory sound detection method based on a dual neural network is characterized by comprising the following steps:
step 1, collecting actually measured sleep sound data, marking the data, and marking the breathing sound as 1 and the non-breathing sound as 0;
step 2, dividing the actually measured sleep sound data into training data samples and testing data samples;
step 3, acquiring an energy threshold T_h for distinguishing breath sounds of different intensities from the training data samples, and dividing the training data samples into two classes of training data samples according to this threshold; acquiring the energy threshold T_h from the training data samples specifically comprises:
step 3-1, pre-emphasis and framing pre-processing are carried out on the training data samples;
step 3-2, the frame energy of each frame of the training data sample is obtained, and the formula is as follows:
E_j = Σ_{n=1}^{M} b_j^2(n)
where E_j is the frame energy of the j-th frame of the training data samples, M is the frame length, and b_j(n) is the n-th sample of the j-th frame;
step 3-3, constructing a statistical histogram of the frame energies, locating its minimum point, and taking this minimum as the energy threshold T_h for distinguishing breath sounds of different intensities;
step 3-4, dividing the training data samples into two classes according to the energy threshold T_h, specifically: training data samples whose frame energy is greater than T_h are divided into the first class of training data samples, and training data samples whose frame energy is less than or equal to T_h are divided into the second class of training data samples;
and 4, training the corresponding artificial neural networks by using the two types of training data samples respectively, wherein the training data samples specifically comprise:
step 4-1, initializing the artificial neural network structure, the learning rate, the activation function, the iteration-count threshold p_1, the minimum-gradient threshold p_2, and the connection weights and thresholds;
step 4-2, respectively carrying out normalization processing on the two types of training data samples;
4-3, respectively inputting the two types of training data samples after the normalization processing into corresponding artificial neural networks for training;
4-4, updating the connection weight and the threshold value in the artificial neural network by utilizing a back propagation algorithm in combination with the output of the artificial neural network;
step 4-5, judging whether the back-propagation error has increased or whether the current iteration count n is greater than or equal to the iteration-count threshold p_1; if so, the training of the artificial neural network is complete, otherwise steps 4-3 to 4-5 are repeated;
and 5, performing breath sound detection on the sleep sound data to be detected by using the trained artificial neural network, and specifically comprising the following steps of:
step 5-1, inputting the two types of training data samples into corresponding artificial neural networks respectively; the first type of training data sample and the second type of training data sample respectively correspond to a first type of artificial neural network and a second type of artificial neural network;
step 5-2, drawing a receiver operating characteristic ROC curve corresponding to the artificial neural network according to data output by the first type of artificial neural network and the second type of artificial neural network;
step 5-3, on the ROC curves corresponding to the first and second artificial neural networks, selecting the point closest to the point (0,1) as the optimal detection threshold for each network, denoted Th1 and Th2 respectively;
step 5-4, performing pre-emphasis and frame pre-processing on the test data sample, and solving the frame energy of each frame of the test data sample;
step 5-5, comparing the frame energy of each frame of the test data samples obtained in step 5-4 with the energy threshold T_h; if the frame energy is greater than T_h, the frame data is input into the first artificial neural network, otherwise into the second artificial neural network;
step 5-6, framing the output data of the first type artificial neural network or the second type artificial neural network in the step 5-5, and solving the frame energy of each frame of the output data;
and 5-7, comparing the frame energy of each frame of the output data obtained in step 5-6 with Th1 or Th2; if the frame energy is greater than Th1 or Th2, the frame data is judged to be breath sound, otherwise non-breath sound.
2. The dual neural network-based sleep respiratory sound detection method according to claim 1, wherein step 5-7 is preceded by: performing median smoothing on the frame energy of each frame of the output data of step 5-6.
3. A sleep respiratory sound detection system based on a dual neural network, comprising:
the data acquisition module is used for acquiring actually measured sleep sound data, marking the data, and marking the breathing sound as 1 and the non-breathing sound as 0;
the first sample dividing module is used for dividing the measured sleep sound data collected by the data acquisition module into training data samples and test data samples;
a second sample dividing module for acquiring an energy threshold T_h for distinguishing breath sounds of different intensities from the training data samples divided by the first sample dividing module, and dividing the training data samples into two classes of training data samples according to this threshold; the second sample dividing module comprises:
the first preprocessing unit is used for performing pre-emphasis and framing preprocessing on the training data samples;
the frame energy calculating unit is used for calculating the frame energy of each frame of the training data sample, and the formula is as follows:
E_j = Σ_{n=1}^{M} b_j^2(n)
where E_j is the frame energy of the j-th frame of the training data samples, M is the frame length, and b_j(n) is the n-th sample of the j-th frame;
an energy threshold calculation unit for constructing a statistical histogram of the frame energies, locating its minimum point, and taking this minimum as the energy threshold T_h for distinguishing breath sounds of different intensities;
a sample division unit for dividing the training data samples into two classes according to the energy threshold T_h, specifically: training data samples whose frame energy is greater than T_h are divided into the first class of training data samples, and training data samples whose frame energy is less than or equal to T_h are divided into the second class of training data samples;
the training module is used for training the corresponding artificial neural networks with the two classes of training data samples divided by the second sample dividing module; the module comprises:
an initialization unit for initializing the artificial neural network structure, learning rate, activation function, iteration threshold p_1, minimum gradient threshold p_2, connection weights, and thresholds;
the second preprocessing unit is used for respectively carrying out normalization processing on the two types of training data samples;
the training unit is used for inputting the two types of training data samples subjected to normalization processing by the second preprocessing unit into corresponding artificial neural networks respectively for training;
the parameter updating unit is used for updating the connection weight and the threshold value in the artificial neural network by utilizing a back propagation algorithm in combination with the output of the artificial neural network;
a first judging unit for judging whether the back-propagation error of the artificial neural network has increased, or whether the current iteration count n is greater than or equal to the iteration threshold p_1; if so, the training of the artificial neural network is complete, otherwise the training unit and the parameter updating unit are run again;
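As a rough, non-authoritative sketch of this training loop with its two stopping criteria (error increase, or iteration count reaching p_1), using a tiny one-hidden-layer network trained by back-propagation; the architecture, learning rate, and mean-squared-error loss are illustrative assumptions, not the patented configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    # one-hidden-layer network; W* are the "connection weights" and b* the
    # "thresholds" (biases) that the parameter updating unit adjusts
    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        self.y = sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def backward(self, X, t):
        # back-propagation of the mean-squared error
        n = len(X)
        d_out = (self.y - t) * self.y * (1.0 - self.y)
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * self.h.T @ d_out / n
        self.b2 -= self.lr * d_out.mean(axis=0)
        self.W1 -= self.lr * X.T @ d_hid / n
        self.b1 -= self.lr * d_hid.mean(axis=0)

def train(net, X, t, p1=5000, tol=1e-12):
    # stop when the error starts increasing, or at the iteration threshold p_1
    prev_err = np.inf
    for _ in range(p1):
        err = float(np.mean((net.forward(X) - t) ** 2))
        if err > prev_err + tol:
            break
        prev_err = err
        net.backward(X, t)
    return prev_err
```

One such network would be trained per sample class; the claim's minimum gradient threshold p_2 could be added as a third stopping criterion in the same loop.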
the detection module is used for performing breath sound detection on the sleep sound data to be detected by means of the artificial neural networks trained by the training module; the module comprises:
the data input unit is used for respectively inputting the two types of training data samples into the corresponding artificial neural networks; the first type of training data sample and the second type of training data sample respectively correspond to a first type of artificial neural network and a second type of artificial neural network;
the ROC curve generating unit is used for drawing a receiver operating characteristic ROC curve corresponding to the artificial neural network according to data output by the first type of artificial neural network and the second type of artificial neural network;
a detection threshold determining unit for selecting, from the ROC curves corresponding to the first and second artificial neural networks, the point closest to the point (0, 1) as the optimal detection threshold, denoted Th1 and Th2 respectively;
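The closest-to-(0, 1) rule for picking an operating point on a ROC curve can be sketched as follows (for illustration only; the candidate threshold grid and score convention are assumptions):

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    # one (FPR, TPR) point per candidate threshold
    pts = []
    for th in thresholds:
        pred = scores >= th
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        pts.append((fp / (fp + tn), tp / (tp + fn)))
    return np.array(pts)

def best_threshold(scores, labels, thresholds):
    # optimal operating point: the one closest to the ideal corner (0, 1)
    pts = roc_points(scores, labels, thresholds)
    dist = np.hypot(pts[:, 0], pts[:, 1] - 1.0)
    return thresholds[int(np.argmin(dist))]
```

Applying this separately to the outputs of the two networks would yield Th1 and Th2.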
the third preprocessing unit is used for performing pre-emphasis and frame-division preprocessing on the test data sample and solving the frame energy of each frame of the test data sample;
a second judging unit for comparing the frame energy of each frame of the test data samples obtained by the third preprocessing unit with the energy threshold T_h; if the frame energy is greater than T_h, the frame data is input into the first artificial neural network, otherwise it is input into the second artificial neural network;
the fourth preprocessing unit is used for framing the output data of the first type artificial neural network or the second type artificial neural network determined by the second judging unit and solving the frame energy of each frame of the output data;
and the third judging unit is used for comparing the frame energy of each frame of the output data obtained by the fourth preprocessing unit with Th1 or Th2; if the frame energy is greater than Th1 or Th2, the frame data is judged to be a breath sound, otherwise a non-breath sound.
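A simplified sketch of the detection routing described above, for illustration only: each frame is routed by its energy to one of the two networks, whose output is then compared against that network's threshold. Here the network's framed output is collapsed to a single score per frame for brevity (the claim re-frames the network output and computes its frame energy), and the two callables stand in for the trained networks:

```python
import numpy as np

def detect(frames, th_energy, net_high, net_low, th1, th2):
    """Route each frame by its energy to one of two networks, then apply
    that network's detection threshold: 1 = breath sound, 0 = non-breath."""
    decisions = []
    for frame in frames:
        energy = float(np.sum(frame ** 2))
        if energy > th_energy:
            score, th = net_high(frame), th1  # high-intensity branch
        else:
            score, th = net_low(frame), th2   # low-intensity branch
        decisions.append(1 if score > th else 0)
    return decisions
```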
4. The dual neural network-based sleep respiratory sound detection system of claim 3, wherein the detection module further comprises:
and the fifth preprocessing unit is used for performing median smoothing on the frame energy of each frame of the output data obtained by the fourth preprocessing unit.
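A sliding median such as the median smoothing of claim 4 might look like the following sketch; the window length k is an assumption (the claim does not specify it):

```python
import numpy as np

def median_smooth(x, k=5):
    # sliding median with odd window length k; windows shrink at the edges
    # so the output has the same length as the input
    h = k // 2
    return np.array([np.median(x[max(0, i - h): i + h + 1])
                     for i in range(len(x))])
```

Median smoothing of this kind suppresses isolated single-frame spikes in the frame-energy sequence before thresholding.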
CN201911134574.7A 2019-11-19 2019-11-19 Sleep respiratory sound detection method and system based on double neural networks Active CN111012306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911134574.7A CN111012306B (en) 2019-11-19 2019-11-19 Sleep respiratory sound detection method and system based on double neural networks


Publications (2)

Publication Number Publication Date
CN111012306A CN111012306A (en) 2020-04-17
CN111012306B true CN111012306B (en) 2022-08-16

Family

ID=70200630


Country Status (1)

Country Link
CN (1) CN111012306B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668556B (en) * 2021-01-21 2024-06-07 广东白云学院 Breathing sound identification method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
EP1410759A1 * 2002-10-17 2004-04-21 Sibel, S.A. Procedure for analysis of snoring and apnea and apparatus to carry out this analysis
CN102088911A * 2008-06-17 2011-06-08 Koninklijke Philips Electronics N.V. Acoustical patient monitoring using a sound classifier and a microphone
CN107292286A * 2017-07-14 2017-10-24 Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences Breath sound discrimination method and system based on machine learning
CN108720837A * 2017-04-18 2018-11-02 Intel Corporation Methods, systems and devices for detecting respiration phase

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20190076098A1 (en) * 2017-09-08 2019-03-14 Arizona Board Of Regents On Behalf Of The Universty Of Arizona Artificial Neural Network Based Sleep Disordered Breathing Screening Tool


Non-Patent Citations (1)

Title
Takahiro Emoto et al., "Detection of sleep breathing sound based on artificial neural network analysis," Biomedical Signal Processing and Control, no. 41, 23 Nov 2017, pp. 81-89. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant