CN107811649B - Heart sound multi-classification method based on deep convolutional neural network - Google Patents
Heart sound multi-classification method based on deep convolutional neural network
- Publication number
- CN107811649B CN107811649B CN201711332126.9A CN201711332126A CN107811649B CN 107811649 B CN107811649 B CN 107811649B CN 201711332126 A CN201711332126 A CN 201711332126A CN 107811649 B CN107811649 B CN 107811649B
- Authority
- CN
- China
- Prior art keywords
- heart sound
- classification
- neural network
- convolutional neural
- classification results
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B7/00—Instruments for auscultation
- A61B7/02—Stethoscopes
- A61B7/04—Electric stethoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7253—Details of waveform analysis characterised by using transforms
- A61B5/7257—Details of waveform analysis characterised by using transforms using Fourier transforms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Abstract
The invention discloses a heart sound multi-classification method based on a deep convolutional neural network, and relates to the field of heart sound classification based on deep learning. The method includes: 1) processing the obtained original heart sound data to obtain N segments of heart sound signals; 2) feeding the N segments into a heart sound classification model built on a two-dimensional convolutional neural network and a one-dimensional convolutional neural network, and classifying by frequency-domain and time-domain features to obtain 2N classification results; 3) training on the 2N classification results with a Lasso framework to obtain corresponding weights, and multiplying the weights by the 2N results to complete the regression and produce the final classification result. The method addresses the low accuracy of existing heart sound classifiers, which rely on a two-dimensional convolutional network alone (limiting their discriminative power) before applying a multi-classifier, and improves the accuracy of heart sound multi-classification.
Description
Technical Field
The invention relates to the field of heart sound classification based on deep learning, in particular to a heart sound multi-classification method based on a deep convolutional neural network.
Background
In 2014, The Lancet, a leading medical journal, published the Global Burden of Disease Study 2013, which assessed mortality in 188 countries between 1990 and 2013. According to the report, the three deadliest diseases in China are stroke, coronary heart disease, and chronic obstructive pulmonary disease, together accounting for 46% of all deaths in 2013; two of these three leading killers are cardiovascular diseases. Using heart sounds to assess cardiac health is a widely used, inexpensive, non-invasive approach that makes it convenient to detect heart problems early and treat them in time. Changes in the physical structure of the heart alter the heart sounds, so the heart sound signal carries a large amount of easily collected information about cardiac activity and can faithfully reflect the state of the heart; it has therefore attracted the attention of many researchers at home and abroad. Meanwhile, machine learning (especially deep learning) has achieved disruptive success in image and speech recognition in recent years, overtaking traditional algorithms by a wide margin, and many deep learning researchers are now turning to medical applications. Heart sound classification is one such direction.
Existing heart sound classification techniques fall roughly into two categories: methods based on traditional machine learning and methods based on deep learning. The former mainly use support vector machines, K-nearest neighbors, fuzzy network recognition, and similar techniques; although they have achieved some results, they have problems: (1) the extensive manual feature extraction required up front cannot truly and comprehensively capture the essential features of the data; (2) the computation is complex, which hinders use in big-data environments, and the precision needs improvement. The latter mainly use neural network models such as BP neural networks and convolutional neural networks for classification. In particular, some researchers convert the original audio into a spectrogram with the short-time Fourier transform and feed it into a multi-layer two-dimensional convolutional neural network, obtaining fairly good classification results, but shortcomings remain: (1) the information extraction is incomplete; (2) the training data are small (one study used only about one and a half hours of heart sound recordings), which prevents the neural network from learning features comprehensively; (3) a shallow network model is used and the analysis covers the frequency domain only, so both the classification accuracy and the training precision are low.
Disclosure of Invention
The invention aims to provide a heart sound multi-classification method based on a deep convolutional neural network that solves the low classification accuracy of existing heart sound classification methods, which adopt only a two-dimensional convolutional network and then apply a multi-classifier.
The technical scheme adopted by the invention is as follows:
a heart sound multi-classification method based on a deep convolutional neural network comprises the following steps:
step 1: processing the obtained original heart sound data to obtain N sections of heart sound signals;
step 2: inputting N segments of heart sound signals into a heart sound classification model based on a two-dimensional convolutional neural network and a one-dimensional convolutional neural network, and classifying according to frequency domain and time domain characteristics to obtain 2N classification results;
step 3: training the 2N classification results with a Lasso framework to obtain corresponding weights, and multiplying the weights by the 2N classification results to complete the regression and obtain the final classification result.
Preferably, the step 1 comprises the steps of:
step 1.1: acquiring heart sound data by adopting an electronic stethoscope with a microphone, extracting partial data from the standard data set, and integrating the heart sound data and the partial data to obtain original heart sound data;
step 1.2: denoising original heart sound data through a band-pass filter to obtain a cleaned heart sound signal;
step 1.3: selecting a plurality of cycles from a plurality of heartbeat cycles in the cleaned heart sound signals to complete the segmentation of the heart sound signals;
step 1.4: and moving the starting points of the segments left and right randomly to serve as the final starting points of the heart sound signal segments to complete data amplification to obtain N segments of heart sound signals.
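As a minimal sketch of steps 1.2-1.4 — the patent does not specify the sampling rate, pass band, segment length, or jitter range, so the values below (2 kHz sampling, a 25-400 Hz band, 2.5 s segments, ±0.25 s jitter) are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw, fs=2000, band=(25, 400), seg_seconds=2.5,
               n_segments=8, max_shift=0.25, seed=0):
    """Band-pass filter a raw heart sound recording (step 1.2), cut it into
    fixed-length segments (step 1.3), and randomly jitter each segment's start
    point as data augmentation (step 1.4). All parameter values are
    illustrative assumptions, not taken from the patent."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    cleaned = filtfilt(b, a, raw)                  # denoised signal
    seg_len = int(seg_seconds * fs)
    shift_max = int(max_shift * fs)
    rng = np.random.default_rng(seed)
    segments = []
    for i in range(n_segments):
        start = i * seg_len + int(rng.integers(-shift_max, shift_max + 1))
        start = max(0, min(start, len(cleaned) - seg_len))  # keep in bounds
        segments.append(cleaned[start:start + seg_len])
    return np.stack(segments)                      # shape: (N, seg_len)

fs = 2000
raw = np.random.default_rng(1).standard_normal(fs * 30)  # 30 s of dummy audio
segs = preprocess(raw, fs=fs)
print(segs.shape)  # (8, 5000)
```

The random start-point jitter is what turns one recording into several slightly different training examples, which is the data amplification the step describes.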
Preferably, the step 2 comprises the steps of:
step 2.1: carrying out short-time Fourier transform on the N sections of heart sound signals according to time sequence to obtain a spectrogram, and sending the spectrogram into a heart sound classification model based on a two-dimensional convolutional neural network to obtain N classification results;
step 2.2: carrying out frequency band decomposition on N sections of heart sound signals according to time sequence to obtain power spectrums of four basic sounds, calculating median powers of N frequency bands corresponding to the four basic sounds in each period, calculating the mean value of the median powers of the N frequency bands in all periods, and sending the mean value as a frequency domain characteristic into a heart sound classification model based on a one-dimensional convolutional neural network to obtain N classification results;
step 2.3: based on the steps 2.1 and 2.2, inputting N segments of heart sound signals into the heart sound model for classification to obtain 2N classification results.
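Step 2.1's spectrogram front end might look like the following sketch; the window length and overlap are illustrative assumptions, since the patent does not state them:

```python
import numpy as np
from scipy.signal import stft

def to_spectrogram(segment, fs=2000, nperseg=256, noverlap=128):
    """Short-time Fourier transform of one heart sound segment; the
    log-magnitude spectrogram is the 2-D CNN input of step 2.1.
    nperseg and noverlap are illustrative values, not from the patent."""
    f, t, Z = stft(segment, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log1p(np.abs(Z))          # shape: (freq bins, time frames)

seg = np.sin(2 * np.pi * 50 * np.arange(5000) / 2000)   # dummy 50 Hz tone
spec = to_spectrogram(seg)
print(spec.shape[0])  # 129 frequency bins (nperseg // 2 + 1)
```

Each of the N segments would be converted this way and fed to the two-dimensional convolutional network, yielding one classification result per segment.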
Preferably, the step 3 comprises the steps of:
step 3.1: inputting the 2N classification results into a Lasso framework, and training the classification results with the Lasso algorithm to obtain corresponding correlation coefficients; the Lasso objective is:

$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{2N} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1 \right\}$

where $\beta$ is the correlation coefficient, $\frac{1}{2N}\lVert y - X\beta\rVert_2^2$ is the least-squares term, $X$ represents the input result of each classifier, $y$ represents the desired result, and $\lambda$ represents the regularization coefficient;
step 3.2: and multiplying the correlation coefficient by the corresponding classification result to obtain a final classification result.
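Step 3 can be sketched with scikit-learn's `Lasso` on synthetic classifier outputs; the data, the number of classifier outputs (2N = 16), and the regularization value `alpha` are illustrative assumptions, not values from the patent:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic stand-in for step 3: each column of X is one of the 2N per-segment
# classifier outputs (a class score), y is the ground-truth label.
rng = np.random.default_rng(0)
n_recordings, two_n = 200, 16
y = rng.integers(0, 2, n_recordings).astype(float)
X = 0.7 * y[:, None] + 0.3 * rng.random((n_recordings, two_n))  # noisy weak learners

lasso = Lasso(alpha=0.01).fit(X, y)   # learn one weight per classifier output
weights = lasso.coef_                 # sparse: unreliable outputs get ~0 weight
final_score = X @ weights + lasso.intercept_  # weighted combination = regression
accuracy = ((final_score > 0.5) == (y > 0.5)).mean()
print(weights.round(3), accuracy)
```

The L1 penalty drives the weights of uninformative classifier outputs toward zero, which is how the framework selects the better weak learners before combining them.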
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. the invention uses a two-dimensional convolutional network to classify spectrograms and a one-dimensional convolutional network to classify frequency bands, raising the discriminative power in both the time domain and the frequency domain, and trains on the outputs of the two classifiers with the Lasso algorithm to obtain corresponding weights, improving classification accuracy; this solves the low accuracy of existing heart sound classification methods, which rely on a two-dimensional convolutional network alone before applying a multi-classifier, and improves the accuracy of heart sound multi-classification;
2. the network model analyzes the data with the two-dimensional and one-dimensional convolutional networks simultaneously, classifying from the frequency domain and the time domain at once; this captures the features of the heart sound signal more completely, supplies the most comprehensive information to the final classification, and increases discriminative power and classification accuracy;
3. the classification results of the segmented heart sound signals are trained with the Lasso algorithm to obtain corresponding weights, and the outputs of several weak learners are integrated into a final result; this corrects the accidental errors of any single learner, strengthens the robustness of the method, and further improves the classification effect;
4. the heart sounds are collected in a natural environment and analyzed with the practical application environment in mind, further improving the accuracy of heart sound classification;
5. the segmentation and splicing of the heart sounds impose only mild requirements on the data length, improving the convenience of heart sound classification;
6. compared with traditional methods, the early preprocessing is simpler: no large set of hand-crafted denoising rules is needed to clean the data, reducing the complexity of preprocessing.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic flow diagram of the present invention;
FIG. 3 is a schematic diagram of a two-dimensional convolutional neural network-based heart sound classification model of the present invention;
FIG. 4 is a schematic diagram of a heart sound classification model based on a one-dimensional convolutional neural network of the present invention;
FIG. 5 is a schematic diagram of the Lasso algorithm framework of the present invention;
fig. 6 is an effect data table of the present invention.
Detailed Description
All of the features disclosed in this specification, or all of the steps in any method or process so disclosed, may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
The present invention is described in detail below with reference to fig. 1-6.
Example 1
A heart sound multi-classification method based on a deep convolutional neural network comprises the following steps:
step 1: processing the obtained original heart sound data to obtain N sections of heart sound signals;
step 2: inputting N segments of heart sound signals into a heart sound classification model based on a two-dimensional convolutional neural network and a one-dimensional convolutional neural network, and classifying according to frequency domain and time domain characteristics to obtain 2N classification results;
step 3: training the 2N classification results with a Lasso framework to obtain corresponding weights, and multiplying the weights by the 2N classification results to complete the regression and obtain the final classification result.
Example 2
Step 1.1: acquiring heart sound data by adopting an electronic stethoscope with a microphone, extracting partial data from the standard data set, and integrating the heart sound data and the partial data to obtain original heart sound data;
step 1.2: denoising original heart sound data through a band-pass filter to obtain a cleaned heart sound signal;
step 1.3: selecting a plurality of cycles from a plurality of heartbeat cycles in the cleaned heart sound signals to complete the segmentation of the heart sound signals;
step 1.4: randomly moving the initial points of the segments left and right to serve as the final initial points of the heart sound signal segments to complete data amplification to obtain N segments of heart sound signals;
step 2.1: carrying out short-time Fourier transform on the N segments of heart sound signals in time order to obtain spectrograms, and feeding them into the heart sound classification model based on the two-dimensional convolutional neural network to obtain N classification results; the discrete Fourier transform used is:

$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N}, \quad k = 0, 1, \ldots, N-1$

where $x(n)$, $n = 0, 1, \ldots, N-1$, is a finite-length discrete signal and $X(k)$ is the DFT of $x(n)$;
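The discrete Fourier transform of step 2.1 can be checked numerically against NumPy's FFT:

```python
import numpy as np

def dft(x):
    """Direct evaluation of X(k) = sum_n x(n) * exp(-j*2*pi*k*n/N),
    the discrete Fourier transform used in step 2.1."""
    N = len(x)
    n = np.arange(N)
    k = n[:, None]                     # one row per output frequency bin
    return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

x = np.random.default_rng(2).standard_normal(64)
assert np.allclose(dft(x), np.fft.fft(x))   # matches the library FFT
```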
step 2.2: carrying out frequency band decomposition on the N segments of heart sound signals in time order to obtain the power spectra of the four basic heart sounds, calculating the median power of the N frequency bands corresponding to the four basic sounds in each cycle, and averaging the band median powers over all cycles; the mean is fed as the frequency-domain feature into the heart sound classification model based on the one-dimensional convolutional neural network, yielding a second set of N classification results. Concretely, the power spectra of the four basic heart sounds (S1, S2, S3, and S4) are obtained with a sliding Hanning window combined with the short-time Fourier transform; the median powers of N fixed-interval frequency bands of S1, S2, S3, and S4 are computed for each heart cycle, and the mean of the band median powers over all cycles is fed into the one-dimensional-convolution-based neural network model as the frequency-domain feature;
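A sketch of the band-median-power feature of step 2.2, under stated assumptions (the patent does not give the number of bands, the band edges, or the cycle length; 10 bands up to 500 Hz and a dummy cycle length are used here):

```python
import numpy as np

def band_median_powers(cycles, fs=2000, n_bands=10, fmax=500):
    """For each heart cycle, estimate the power spectrum with a Hanning window,
    take the median power in each of n_bands equal-width bands up to fmax, then
    average the band medians over all cycles (step 2.2). n_bands and fmax are
    illustrative values, not taken from the patent."""
    feats = []
    for cyc in cycles:
        win = np.hanning(len(cyc))
        spec = np.abs(np.fft.rfft(cyc * win)) ** 2        # power spectrum
        freqs = np.fft.rfftfreq(len(cyc), 1 / fs)
        edges = np.linspace(0, fmax, n_bands + 1)
        meds = [np.median(spec[(freqs >= lo) & (freqs < hi)])
                for lo, hi in zip(edges[:-1], edges[1:])]  # per-band medians
        feats.append(meds)
    return np.mean(feats, axis=0)                          # averaged over cycles

cycles = np.random.default_rng(3).standard_normal((5, 1600))  # 5 dummy cycles
fv = band_median_powers(cycles)
print(fv.shape)  # (10,)
```

The resulting fixed-length vector is what the one-dimensional convolutional network consumes, regardless of how long the original recording was.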
step 2.3: based on the steps 2.1 and 2.2, inputting N segments of heart sound signals into a heart sound model for classification to obtain 2N classification results;
step 3.1: inputting the 2N classification results into a Lasso framework, and training the classification results with the Lasso algorithm to obtain corresponding correlation coefficients; the Lasso objective is:

$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{2N} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1 \right\}$

where $\beta$ is the correlation coefficient, $\frac{1}{2N}\lVert y - X\beta\rVert_2^2$ is the least-squares term, $X$ represents the input result of each classifier, $y$ represents the desired result, and $\lambda$ represents the regularization coefficient;
step 3.2: and multiplying the correlation coefficient by the corresponding classification result to obtain a final classification result.
Because segments taken at different moments of a recording differ in reliability, the Lasso algorithm assigns them different weights during training and effectively selects the best weak classifiers, improving the data classification capability. According to FIG. 6, the classification accuracy of the invention is 0.53-0.65, while that of the other methods is about 0.39. These data show that the accuracy of the multi-classification method is markedly higher than that of other classification methods; the invention thus solves the low accuracy of existing heart sound classification methods, which rely on a two-dimensional convolutional network alone before applying a multi-classifier, and improves the accuracy of heart sound multi-classification.
Claims (1)
1. A heart sound multi-classification method based on a deep convolutional neural network, characterized by comprising the following steps:
step 1: processing the obtained original heart sound data to obtain N sections of heart sound signals;
step 2: inputting N segments of heart sound signals into a heart sound classification model based on a two-dimensional convolutional neural network and a one-dimensional convolutional neural network, and classifying according to frequency domain and time domain characteristics to obtain 2N classification results;
step 3: training the 2N classification results with a Lasso framework to obtain corresponding weights, and multiplying the weights by the 2N classification results to complete the regression and obtain the final classification result,
the step 1 comprises the following steps:
step 1.1: acquiring heart sound data by adopting an electronic stethoscope with a microphone, extracting partial data from the standard data set, and integrating the heart sound data and the partial data to obtain original heart sound data;
step 1.2: denoising original heart sound data through a band-pass filter to obtain a cleaned heart sound signal;
step 1.3: selecting a plurality of cycles from a plurality of heartbeat cycles in the cleaned heart sound signals to complete the segmentation of the heart sound signals;
step 1.4: randomly moving the initial points of the segments left and right to serve as the final initial points of the heart sound signal segments to complete data amplification to obtain N segments of heart sound signals;
the step 2 comprises the following steps:
step 2.1: carrying out short-time Fourier transform on the N sections of heart sound signals according to time sequence to obtain a spectrogram, and sending the spectrogram into a heart sound classification model based on a two-dimensional convolutional neural network to obtain N classification results;
step 2.2: carrying out frequency band decomposition on N sections of heart sound signals according to time sequence to obtain power spectrums of four basic sounds, calculating median powers of N frequency bands corresponding to the four basic sounds in each period, calculating the mean value of the median powers of the N frequency bands in all periods, and sending the mean value as a frequency domain characteristic into a heart sound classification model based on a one-dimensional convolutional neural network to obtain N classification results;
step 2.3: based on the steps 2.1 and 2.2, inputting N segments of heart sound signals into a heart sound model for classification to obtain 2N classification results;
the step 3 comprises the following steps:
step 3.1: inputting the 2N classification results into a Lasso framework, and training the classification results with the Lasso algorithm to obtain corresponding correlation coefficients; the formula of the Lasso algorithm is:

$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{2N} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1 \right\}$

where $\mathbb{R}$ is the set of all real numbers, $\mathbb{R}^p$ is the space of p-dimensional vectors with real components, $\beta$ is the correlation coefficient, $\frac{1}{2N}\lVert y - X\beta\rVert_2^2$ is the least-squares term, $X$ represents the input result of each classifier, $y$ represents the expected result, and $\lambda$ represents the regularization coefficient;
step 3.2: and multiplying the correlation coefficient by the corresponding classification result to obtain a final classification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711332126.9A CN107811649B (en) | 2017-12-13 | 2017-12-13 | Heart sound multi-classification method based on deep convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107811649A CN107811649A (en) | 2018-03-20 |
CN107811649B (en) | 2020-12-22
Family
ID=61606724
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711332126.9A Active CN107811649B (en) | 2017-12-13 | 2017-12-13 | Heart sound multi-classification method based on deep convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107811649B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190365342A1 (en) * | 2018-06-04 | 2019-12-05 | Robert Bosch Gmbh | Method and system for detecting abnormal heart sounds |
CN109919210A (en) * | 2019-02-26 | 2019-06-21 | 华南理工大学 | A kind of heart sound semisupervised classification method based on depth convolutional network |
CN112131907A (en) * | 2019-06-24 | 2020-12-25 | 华为技术有限公司 | Method and device for training classification model |
CN110731778B (en) * | 2019-07-22 | 2022-04-29 | 华南师范大学 | Method and system for recognizing breathing sound signal based on visualization |
CN110558944A (en) * | 2019-09-09 | 2019-12-13 | 成都智能迭迦科技合伙企业(有限合伙) | Heart sound processing method and device, electronic equipment and computer readable storage medium |
CN110795996B (en) * | 2019-09-18 | 2024-03-12 | 平安科技(深圳)有限公司 | Method, device, equipment and storage medium for classifying heart sound signals |
CN112086103B (en) * | 2020-08-17 | 2022-10-04 | 广东工业大学 | Heart sound classification method |
CN114305484A (en) * | 2021-12-15 | 2022-04-12 | 浙江大学医学院附属儿童医院 | Heart disease heart sound intelligent classification method, device and medium based on deep learning |
CN114831595A (en) * | 2022-03-16 | 2022-08-02 | 复旦大学附属儿科医院 | Big data-based neonatal congenital heart disease intelligent screening algorithm and automatic upgrading system |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140194702A1 (en) * | 2006-05-12 | 2014-07-10 | Bao Tran | Health monitoring appliance |
US20130226019A1 (en) * | 2010-08-25 | 2013-08-29 | Diacoustic Medical Devices (Pty) Ltd | System and method for classifying a heart sound |
US20150164466A1 (en) * | 2010-08-25 | 2015-06-18 | Diacoustic Medical Devices (Pty) Ltd | System and method for classifying a heart sound |
US20130237773A1 (en) * | 2012-03-07 | 2013-09-12 | Cardiac Pacemakers, Inc. | Heart sound detection systems and methods using updated heart sound expectation window functions |
CN106251880A (en) * | 2015-06-03 | 2016-12-21 | 创心医电股份有限公司 | Identify method and the system of physiological sound |
CN106214123A (en) * | 2016-07-20 | 2016-12-14 | 杨平 | A kind of electrocardiogram compressive classification method based on degree of depth learning algorithm |
CN106344005A (en) * | 2016-10-28 | 2017-01-25 | 张珈绮 | Mobile ECG (electrocardiogram) monitoring system and monitoring method |
CN107137072A (en) * | 2017-04-28 | 2017-09-08 | 北京科技大学 | A kind of ventricular ectopic beating detection method based on 1D convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
Rubin J, Abreu R, Ganguli A, et al. "Classifying heart sound recordings using deep convolutional neural networks and mel-frequency cepstral coefficients." 2016 Computing in Cardiology Conference (CinC), vol. 43, pp. 813-816, 2016. *
Also Published As
Publication number | Publication date |
---|---|
CN107811649A (en) | 2018-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107811649B (en) | Heart sound multi-classification method based on deep convolutional neural network | |
CN111046824B (en) | Efficient denoising and high-precision reconstruction modeling method and system for time series signals | |
CN109726751B (en) | Method for recognizing electroencephalogram based on deep convolutional neural network | |
CN108827605B (en) | Mechanical fault feature automatic extraction method based on improved sparse filtering | |
CN107736894A (en) | A kind of electrocardiosignal Emotion identification method based on deep learning | |
CN109961017A (en) | A kind of cardiechema signals classification method based on convolution loop neural network | |
CN109493874A (en) | A kind of live pig cough sound recognition methods based on convolutional neural networks | |
CN109243470A (en) | Broiler chicken cough monitoring method based on Audiotechnica | |
CN111368627B (en) | Method and system for classifying heart sounds by combining CNN (computer numerical network) with improved frequency wavelet slice transformation | |
CN114469124B (en) | Method for identifying abnormal electrocardiosignals in movement process | |
CN105448291A (en) | Parkinsonism detection method and detection system based on voice | |
CN108567418A (en) | A kind of pulse signal inferior health detection method and detecting system based on PCANet | |
CN114190944B (en) | Robust emotion recognition method based on electroencephalogram signals | |
Zakaria et al. | Three resnet deep learning architectures applied in pulmonary pathologies classification | |
CN112863667B (en) | Lung sound diagnostic device based on deep learning | |
CN113116361A (en) | Sleep staging method based on single-lead electroencephalogram | |
CN110543831A (en) | brain print identification method based on convolutional neural network | |
Zakaria et al. | VGG16, ResNet-50, and GoogLeNet deep learning architecture for breathing sound classification: a comparative study | |
Hadi et al. | Classification of heart sound based on s-transform and neural network | |
CN113343869A (en) | Electroencephalogram signal automatic classification and identification method based on NTFT and CNN | |
CN112183354A (en) | Single-period pulse wave signal quality evaluation method based on support vector machine | |
CN115762578A (en) | Interpretable heart sound abnormity identification method and system based on fractional domain Fourier transform | |
CN110443276A (en) | Time series classification method based on depth convolutional network Yu the map analysis of gray scale recurrence | |
Huang et al. | Classification of cough sounds using spectrogram methods and a parallel-stream one-dimensional deep convolutional neural network | |
CN113229842B (en) | Heart and lung sound automatic separation method based on complex deep neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||