CN110731778A - respiratory sound signal identification method and system based on visualization - Google Patents

respiratory sound signal identification method and system based on visualization

Info

Publication number: CN110731778A (application number CN201910658420.1A)
Authority: CN (China)
Prior art keywords: sound, breath, signal
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN110731778B (granted publication)
Inventors: 张金区, 欧建荣, 宋立国, 罗虎, 鲁玉佳, 钱朗
Original and current assignees: Guangzhou Ai Bei Technology Co Ltd; South China Normal University
Application filed by Guangzhou Ai Bei Technology Co Ltd and South China Normal University
Priority: CN201910658420.1A
Current legal status: Active

Classifications

    • A61B5/08 — Detecting, measuring or recording devices for evaluating the respiratory organs (under A61B5/00, Measuring for diagnostic purposes; Identification of persons)
    • A61B5/7257 — Details of waveform analysis characterised by using transforms using Fourier transforms (under A61B5/72, Signal processing specially adapted for physiological signals or for diagnostic purposes)
    • A61B5/7267 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • A61B7/04 — Electric stethoscopes (under A61B7/00, Instruments for auscultation)

Abstract

The invention relates to the field of audio signal identification and discloses a visualization-based breath sound signal identification method and system. A short-time Fourier transform performs time-frequency analysis on the cut breath sound period signals, converting the one-dimensional audio signals into two-dimensional visual signals; a data set is formed by processing and analyzing the images, image classification is carried out with a convolutional neural network, and normal breath sounds are distinguished from three kinds of pathological breath sounds.

Description

respiratory sound signal identification method and system based on visualization
Technical Field
The present invention relates to the field of audio signal identification, and more particularly to a breath sound signal identification method and system based on visualization.
Background
Breath sound signals are physiological signals generated as the human respiratory system exchanges air with the outside during ventilation. Breath sounds carry a large amount of physiological and pathological information and reflect the health of the human respiratory system well, so they have very important research significance in respiratory acoustics, clinical medicine, and related fields.
Cardiopulmonary auscultation has attracted renewed attention because it is rapid, convenient, and non-invasive, but diagnosis is difficult when unskilled medical staff monitor respiratory diseases directly with a stethoscope; the development of automatic breath sound diagnosis technology therefore brings important help to respiratory disease diagnosis. Advances in signal acquisition hardware such as the electronic stethoscope, together with software for automatic identification and disease early warning, further promote research and progress in the analysis and identification of modern breath sound signals.
Common breath sound feature extraction algorithms include the auto-regressive (AR) coefficient method, power spectral density (PSD) based methods, the cepstrum-based Mel-frequency cepstral coefficient (MFCC) method, and discrete wavelet decomposition and wavelet packet decomposition methods based on the wavelet transform (WT).
The invention uses the short-time Fourier transform to perform time-frequency analysis on the cut breath sound period signals, converts the one-dimensional audio signals into two-dimensional visual signals, forms a data set by processing and analyzing the images, and performs image classification with a convolutional neural network to distinguish normal breath sounds from three kinds of pathological breath sounds.
Disclosure of Invention
The invention aims to provide a visualization-based breath sound signal identification method and system. A time-frequency analysis method is applied: the short-time Fourier transform performs time-frequency analysis on the cut breath sound period signal, converting the one-dimensional audio signal into a two-dimensional visual signal; a data set is formed by processing and analyzing the images, and a convolutional neural network classifies the visual images, distinguishing normal breath sounds from three kinds of pathological breath sounds.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
A visualization-based breath sound signal identification method includes: S1, collecting an original breath sound signal and performing filtering and separation processing on the signal to obtain a preprocessed breath sound signal; S2, performing period division on the preprocessed breath sound signal to obtain breath sound signals of a set period and determining the division points of the breath sound signal; S3, performing Fourier transform on the breath sound signals of the set period to obtain the frequency information of the breath sound signal; S4, processing the two-dimensional spectrogram of the breath sound signal according to the division points of the breath sound period signal to obtain single-period two-dimensional spectrograms; S5, establishing a spectrogram data set from the two-dimensional spectrograms; S6, establishing a convolutional neural network model from the spectrogram data set; and S7, performing predictive analysis on new breath sounds of various types through the convolutional neural network model.
Further, the specific process of the filtering and separation processing in step S1 includes: S100, performing high-pass filtering on the original breath sound signal to effectively remove the ambient noise, current noise, and other noise, obtaining a mixed heart sound and breath sound signal, and copying this mixed signal; S101, performing a wavelet transform on the mixed heart sound and breath sound signal to obtain the heart sound interference signal within the breath sound and separating it out; and S102, subtracting the heart sound interference signal from the mixed heart sound and breath sound signal to obtain the preprocessed (relatively pure) breath sound signal.
Further, in step S2 the preprocessed breath sound signal is divided into periods using a moving rectangular window, and the division points of the breath sound period signals are determined. The period signals of normal breath sounds, wheezing sounds, crackle sounds, and pleural lesion sounds are each cut. The moving rectangular window method finds the minimum value before the arrival of the next period (one exhalation and one inhalation constitute one period). The processing parameter is a rectangular window size of 0.8–2 s, i.e., 0.8·fs–2·fs sample points. The vertical lines in the figure are the cut points; it can be seen that the cut breath sound signal is divided into distinct exhale/inhale breath sound periods, where the small peaks of small amplitude represent expiration and the large peaks of large amplitude represent inspiration.
Further, the types of the original breath sounds include normal breath sounds, wheezing sounds, crackle sounds, and pleural lesion sounds.
Further, the specific steps of step S3 include: S300, performing windowed truncation on the breath sound signal of a set period using an analysis window that moves with time, decomposing the signal into a series of approximately stationary short-time signals; and S301, obtaining the two-dimensional spectrogram of each short-time stationary signal through Fourier transform. An analysis window sliding with time truncates the non-stationary signal and decomposes it into a series of approximately stationary short-time signals (a long-duration non-stationary signal is truncated by the sliding window into short time intervals, within which the signal can be considered approximately stationary); finally, the spectrum of each short-time stationary signal is analyzed through Fourier transform.
The short-time Fourier transform multiplies the function by a window function, performs a one-dimensional Fourier transform, and slides the window function to obtain a series of transformed results; arranging these results side by side yields a two-dimensional representation. The formula of the short-time Fourier transform is as follows:
STFT_Z(t, f) = ∫_{−∞}^{+∞} Z(u) g(u − t) e^{−j2πfu} du
where Z(t) is the original signal, g(t) is the window function, and u is the integration variable.
Further, step S5 includes: S500, cutting the two-dimensional spectrogram according to the division points of the breath sound signal to form a set of single-period two-dimensional spectrograms; and S501, performing specification processing on the single-period two-dimensional spectrogram set to form the spectrogram data set.
Further, the specification processing in step S501 includes: S5010, unifying the size of the two-dimensional time-frequency atlas; S5011, performing RGB component analysis on the size-unified two-dimensional time-frequency atlas; and S5012, performing picture compression to obtain the spectrogram data set.
Further, because the four types of time-frequency graphs are cut separately, their coordinate sizes differ and the picture widths are not uniform, so the pictures need further processing to turn the cut breath sound picture database into a data set suitable for convolutional training. The processing flow is as follows: S700, new breath sound signals of various types are obtained and processed with high-pass filtering and the wavelet transform; S701, the denoised breath sound signals are divided into periods with a moving rectangular window, and the division points of the breath sound period signals are determined; S702, the short-time Fourier transform is performed on the divided breath sound period signals; S703, the resulting spectrogram is divided by period according to the division points of the divided breath sound signals to obtain single-period two-dimensional spectrograms; S704, the R component of RGB is extracted from each period spectrogram and compressed into spectrogram data of a set size; and S705, the compressed data are put into the pre-trained neural network model for prediction, and the predicted category is the final breath sound category.
The present scheme also provides a system for the visualization-based breath sound signal identification method, comprising: a signal acquisition and processing unit for collecting the original breath sound signal and performing filtering and separation processing on the signal to obtain the preprocessed breath sound signal; a period division module that performs period division with a moving rectangular window and determines the division points of the breath sound period signals; a Fourier transform module for performing Fourier transform on the preprocessed breath sound signal to obtain the frequency information of the breath sound signal; a cutting module for processing the two-dimensional spectrogram of the breath sound signal according to the division points of the breath sound period signal to obtain single-period two-dimensional spectrograms; a spectrogram data set establishment module for establishing the spectrogram data set from the two-dimensional spectrograms; and a convolutional neural network module for establishing a convolutional neural network model from the spectrogram data set and performing predictive analysis on new breath sounds of various types through the model.
Further, the signal acquisition and processing unit comprises: a high-pass filtering module for removing the ambient noise, current noise, and other noise in the original breath sound signal; a wavelet transform module for partitioning off the heart sound component in the filtered breath sound signal, reconstructing it, and separating out the heart sound interference signal; a copying module for copying the mixed heart sound and breath sound signal obtained after high-pass filtering; a separation module for separating out the heart sound interference signal obtained after the wavelet transform; and a subtraction module for subtracting the heart sound interference signal from the mixed heart sound and breath sound signal to obtain the preprocessed breath sound signal.
Compared with the prior art, the invention has the following advantages: a visualization-based breath sound signal identification method and system are provided; the invention applies a time-frequency analysis method, uses the short-time Fourier transform to perform time-frequency analysis on the cut breath sound period signals, converts the one-dimensional audio signals into two-dimensional visual signals, forms a data set by processing and analyzing the images, classifies the visual images with a convolutional neural network, and distinguishes normal breath sounds from three kinds of pathological breath sounds.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of the method of the present invention;
FIG. 2 is a block diagram of a method for filtering and separating original breath sound signals according to the present invention;
FIG. 3 is a block diagram of a method of creating a spectrogram data set from a two-dimensional spectrogram in accordance with the present invention;
FIG. 4 is a block diagram of a method of specification processing in the present invention;
FIG. 5 is a block diagram of the method for predictive analysis of new breath sounds by a convolutional neural network model according to the present invention;
FIG. 6 is a block diagram of a system in the present invention;
FIG. 7 is a waveform of a normal breathing sound signal according to the present invention;
FIG. 8 is a waveform of the invention after the normal breathing sound signal is divided;
fig. 9 is a system block diagram of a signal acquisition processing unit in the present invention.
Detailed Description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the scope of protection of the invention can be more clearly defined.
Referring to fig. 1, the visualization-based breath sound signal identification method includes: S1, collecting an original breath sound signal and performing filtering and separation processing on the signal to obtain a preprocessed breath sound signal; S2, performing period division on the preprocessed breath sound signal to obtain breath sound signals of a set period and determining the division points of the breath sound signal; S3, performing Fourier transform on the breath sound signals of the set period to obtain the frequency information of the breath sound signal; S4, processing the two-dimensional spectrogram of the breath sound signal according to the division points of the breath sound period signal to obtain single-period two-dimensional spectrograms; S5, establishing a spectrogram data set from the two-dimensional spectrograms; S6, establishing a convolutional neural network model from the spectrogram data set; and S7, performing predictive analysis on new breath sounds of various types through the convolutional neural network model.
The convolutional neural network adopts two convolutional layers, two pooling layers, and two fully connected layers. The pooling kernel size is 2 × 2. A LeNet model is adopted for training.
From input to output, the LeNet computation comprises eight layers in total:
The first layer, the data input layer: the data are first normalized; the input range is that of a 0–255 gray-scale image.
A second layer: convolutional layer c1
The convolutional layer is the core of the convolutional neural network; the features of the picture are obtained through different convolution kernels, where each kernel acts as a filter and different filters extract different features.
And a third layer: pooling pond layer
poling layers are arranged behind each convolution layer basically, the purpose is to reduce dimension, the size of an output matrix of the original convolution layer is changed to half of the original output matrix, operation after the operation is simple, in addition, the poling layers increase the robustness of the system, and the original accurate description is changed to the approximate description , so that overfitting is prevented to a certain extent.
A fourth layer: convolutional layer
Similar to before, the features are further extracted , and the features are expressed in a deeper level of the original sample.
And a fifth layer: layer of pooling
A sixth layer: convolutional layer (full connection)
There are 100 convolution kernels, here fully connected, and the matrix is convolved into numbers, which facilitates the decision of the following network.
A seventh layer: full connection layer
And hidden layer in MLP, to obtain the expression of high-dimensional spatial data.
An eighth layer: output layer
Here , an RBF network is used, the center of each RBF is a mark of each category, the output minimum value is the result of the discrimination category finally predicted by the network, and for the experiment, the result is the finally predicted breath sound category.
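The layer stack above can be traced numerically. The kernel sizes are not given in the text, so the 5 × 5 kernels and the 64 × 64 input below are LeNet-style assumptions used only to show how the 2 × 2 pooling halves each spatial dimension:

```python
# Sketch: trace the spatial size of a single-channel spectrogram through the
# stated stack (conv -> 2x2 pool -> conv -> 2x2 pool). Kernel size 5 and
# input size 64 are assumptions; the patent does not specify them.
def feature_map_size(size, layers):
    for kind, k in layers:
        if kind == "conv":        # valid convolution, stride 1
            size = size - k + 1
        elif kind == "pool":      # non-overlapping 2x2 pooling halves each side
            size = size // k
    return size

stack = [("conv", 5), ("pool", 2),   # c1 + first pooling layer
         ("conv", 5), ("pool", 2)]   # second conv + second pooling layer
out = feature_map_size(64, stack)    # spatial size entering the dense part
```

With these assumptions a 64 × 64 input shrinks to 13 × 13 before the fully connected stage, which is why pooling is described above as simplifying the later operations.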
In this embodiment, referring to fig. 2, the specific process of the filtering and separation processing in step S1 includes: S100, performing high-pass filtering on the original breath sound signal to effectively remove the ambient noise, current noise, and other noise, obtaining a mixed heart sound and breath sound signal, and copying this mixed signal; S101, performing a wavelet transform on the mixed heart sound and breath sound signal to obtain the heart sound interference signal within the breath sound and separating it out; and S102, subtracting the heart sound interference signal from the mixed heart sound and breath sound signal to obtain the preprocessed (relatively pure) breath sound signal.
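The S100–S102 chain can be sketched as follows. The patent does not specify the high-pass filter or the wavelet family, so the moving-average high-pass and the single-level Haar decomposition below are stand-ins for illustration only:

```python
# Illustrative sketch of the S100-S102 preprocessing chain (assumed filters).
import numpy as np

def high_pass(signal, window=101):
    """Crude high-pass (stand-in for S100): subtract a moving-average baseline."""
    kernel = np.ones(window) / window
    baseline = np.convolve(signal, kernel, mode="same")
    return signal - baseline

def haar_heart_estimate(signal):
    """Very rough heart-sound estimate (stand-in for S101): keep only the
    low-frequency Haar approximation of the mixed signal."""
    n = len(signal) - len(signal) % 2
    pairs = signal[:n].reshape(-1, 2)
    approx = pairs.mean(axis=1)           # low-pass half of a Haar DWT level
    est = np.repeat(approx, 2)            # reconstruct to original length
    return np.concatenate([est, signal[n:]])

def preprocess(raw):
    mixed = high_pass(raw)                # S100: remove drift and noise
    heart = haar_heart_estimate(mixed)    # S101: heart sound interference
    return mixed - heart                  # S102: relatively pure breath sound
```

A production system would presumably use a proper IIR high-pass and a multi-level wavelet reconstruction as described in the text; the sketch only shows the subtract-the-interference structure of the three steps.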
In this embodiment, referring to fig. 7 and 8, in step S2 the preprocessed breath sound signal is divided into periods using a moving rectangular window, and the division points of the breath sound period signals are determined. The period signals of normal breath sounds, wheezing sounds, crackle sounds, and pleural lesion sounds are each cut. The moving rectangular window method finds the minimum value before the arrival of the next period (one exhalation and one inhalation constitute one period). The processing parameter is a rectangular window size of 0.8 to 2 s, that is, 0.8·fs to 2·fs sample points. The vertical lines in fig. 7 are the cut points; the cut breath sound signal is divided into breath sound periods, where the small peaks of small amplitude represent expiration and the large peaks of large amplitude represent inspiration.
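The moving-rectangular-window cutting can be sketched as below. The 0.8·fs–2·fs window length follows the text; the synthetic breath signal and the details of the minimum search are illustrative assumptions:

```python
# Sketch of the step S2 period cutting: slide a rectangular window and mark
# the minimum inside each window position as a period boundary.
import numpy as np

def cut_points(signal, fs, window_s=1.0):
    win = int(window_s * fs)                  # window of 0.8*fs .. 2*fs samples
    cuts = []
    for start in range(0, len(signal) - win, win):
        seg = signal[start:start + win]
        idx = start + int(np.argmin(seg))     # minimum before the next period
        if 0 < idx < len(signal) - 1:         # ignore the very first sample
            cuts.append(idx)
    return cuts

fs = 100
t = np.arange(0, 8, 1 / fs)
breath = np.abs(np.sin(2 * np.pi * 0.5 * t))  # ~1 s exhale/inhale lobes
boundaries = cut_points(breath, fs, window_s=1.0)
```

On this synthetic signal the detected boundaries fall at the amplitude minima between consecutive breath periods, matching the vertical cut lines described for fig. 7.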
In this embodiment, the types of the original breath sounds include normal breath sounds, wheezing sounds, crackle sounds, and pleural lesion sounds.
In this embodiment, the specific steps of step S3 include: S300, performing windowed truncation on the breath sound signal of a set period using an analysis window that moves with time, decomposing the signal into a series of approximately stationary short-time signals; and S301, obtaining the two-dimensional spectrogram of each short-time stationary signal through Fourier transform. An analysis window sliding with time truncates the non-stationary signal and decomposes it into a series of approximately stationary short-time signals (a long-duration non-stationary signal is truncated by the sliding window into short time intervals, within which the signal can be considered approximately stationary); finally, the spectrum of each short-time stationary signal is analyzed through Fourier transform.
The short-time Fourier transform multiplies the function by a window function, performs a one-dimensional Fourier transform, and slides the window function to obtain a series of transformed results; arranging these results side by side yields a two-dimensional representation. The formula of the short-time Fourier transform is as follows:
STFT_Z(t, f) = ∫_{−∞}^{+∞} Z(u) g(u − t) e^{−j2πfu} du
where Z(t) is the original signal, g(t) is the window function, and u is the integration variable.
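A direct NumPy transcription of this transform multiplies the signal z(u) by a sliding window g(u − t) and Fourier-transforms each windowed segment; the Hann window and hop size below are assumptions, since the text does not fix them:

```python
# Sketch of the STFT of step S3: window, transform, slide, and stack the
# magnitude spectra into a frequency-by-time "spectrogram".
import numpy as np

def stft(z, win_len=256, hop=128):
    g = np.hanning(win_len)                        # window function g(t)
    frames = []
    for start in range(0, len(z) - win_len + 1, hop):
        seg = z[start:start + win_len] * g         # z(u) * g(u - t)
        frames.append(np.abs(np.fft.rfft(seg)))    # one-dimensional Fourier transform
    return np.array(frames).T                      # rows: frequency, cols: time

fs = 1000
t = np.arange(0, 1, 1 / fs)
tone = np.sin(2 * np.pi * 125 * t)                 # 125 Hz test tone
spec = stft(tone)
peak_bin = int(spec.mean(axis=1).argmax())
# bin spacing = fs / win_len = 1000 / 256 Hz, so 125 Hz falls on bin 32
```

Stacking the per-window spectra side by side is exactly the two-dimensional representation described above, from which the single-period spectrogram images are then cut.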
In this embodiment, referring to fig. 3, the step S5 includes: s500, cutting the two-dimensional frequency spectrum graph according to the dividing point of the breathing sound signal to form a single-period two-dimensional frequency spectrum graph set; s501, performing specification processing on the two-dimensional spectrogram set of the single period to form a spectrogram data set.
In this embodiment, referring to fig. 4, the specific steps of the specification processing in step S501 include: S5010, unifying the size of the two-dimensional time-frequency atlas; S5011, performing RGB component analysis on the size-unified two-dimensional time-frequency atlas; and S5012, performing picture compression to obtain the spectrogram data set.
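The S5010–S5012 steps can be sketched as follows. The 64 × 64 target size and the nearest-neighbour resampling are illustrative assumptions; the patent only states that sizes are unified, the R component is taken, and the pictures are compressed:

```python
# Sketch of the specification processing: unify image size, keep the R
# component, and compress to a fixed, small resolution.
import numpy as np

def to_dataset_entry(rgb_image, size=64):
    """rgb_image: (H, W, 3) uint8 array. Returns a (size, size) R-channel image."""
    r = rgb_image[:, :, 0]                 # S5011: R component only
    h, w = r.shape
    rows = np.arange(size) * h // size     # S5010/S5012: unify and compress
    cols = np.arange(size) * w // size     # via nearest-neighbour sampling
    return r[np.ix_(rows, cols)]

img = np.random.randint(0, 256, (120, 200, 3), dtype=np.uint8)
entry = to_dataset_entry(img)
```

Every spectrogram image, whatever its original width, thus ends up as a fixed-size single-channel array suitable for convolutional training.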
In this embodiment, because the four types of time-frequency graphs are cut separately, their coordinate sizes differ and the picture widths are not uniform, so the pictures need further processing to turn the cut breath sound picture database into a data set suitable for convolutional training. Referring to fig. 5, the processing flow is as follows: S700, new breath sound signals of various types are obtained and processed with high-pass filtering and the wavelet transform; S701, the denoised breath sound signals are divided into periods with a moving rectangular window, and the division points of the breath sound period signals are determined; S702, the short-time Fourier transform is performed on the divided breath sound period signals; S703, the resulting spectrogram is divided by period according to the division points of the divided breath sound signals to obtain single-period two-dimensional spectrograms; S704, the R component of RGB is extracted from each period spectrogram and compressed into spectrogram data of a set size; and S705, the compressed data are put into the pre-trained neural network model for prediction, and the predicted category is the final breath sound category.
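The final S705 step can be sketched as below. The trained network itself is not reproducible from the text, so `dummy_model` is a hypothetical stand-in scorer; only the class list and the minimum-output decision rule (matching the RBF output layer described above) follow the document:

```python
# Sketch of step S705: feed a compressed single-period spectrogram to a
# model and map the minimum output (RBF convention) to a class label.
import numpy as np

CLASSES = ["normal", "wheeze", "crackle", "pleural lesion"]

def dummy_model(x):
    """Hypothetical stand-in for the pre-trained CNN: four category scores."""
    rng = np.random.default_rng(int(x.sum()) % 2**32)
    return rng.random(4)

def predict(compressed_spectrogram, model=dummy_model):
    scores = model(compressed_spectrogram.astype(np.float32) / 255.0)
    return CLASSES[int(np.argmin(scores))]  # RBF-style: minimum output wins

label = predict(np.zeros((64, 64), dtype=np.uint8))
```

In deployment, `dummy_model` would be replaced by the LeNet-style network trained on the spectrogram data set; the surrounding normalization and decision logic stay the same.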
The present scheme also provides a system based on the visualization-based breath sound signal identification method. As shown in fig. 6, the system includes: a signal acquisition and processing unit 802 for collecting the original breath sound signal and performing filtering and separation processing on the signal to obtain the preprocessed breath sound signal; a period division module 804 for performing period division with a moving rectangular window and determining the division points of the breath sound period signals; a Fourier transform module 806 for performing Fourier transform on the preprocessed breath sound signal to obtain the frequency information of the breath sound signal; a cutting module 808 for processing the two-dimensional spectrogram of the breath sound signal according to the division points of the breath sound period signal to obtain single-period two-dimensional spectrograms; a spectrogram data set establishment module 810 for establishing the spectrogram data set from the two-dimensional spectrograms; and a convolutional neural network module 812 for establishing a convolutional neural network model from the spectrogram data set and performing predictive analysis on new types of breath sounds through the model.
In this embodiment, referring to fig. 9, the signal acquiring and processing unit 802 includes: the high-pass filtering module 8021 is configured to remove environmental noise, current noise, and other noise in the original breathing sound signal; the wavelet transform module 8023 is configured to partition off heart sound components in the filtered respiratory sound signals, reconstruct the heart sound components in the respiratory sound signals, and separate out heart sound interference signals; the copying module 8022 is configured to copy the heart sound and breath sound mixed signal obtained after the high-pass filtering processing; a separation module 8024, configured to separately separate the heart sound interference signals obtained after the wavelet transform; the subtraction module 8025 is configured to subtract the heart sound interference signal from the heart sound and respiratory sound mixed signal to obtain a preprocessed respiratory sound signal.
This is the working principle of the method and system for identifying respiratory sound signals based on visualization, and the content not described in detail in this specification is the prior art known to those skilled in the art.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, various changes or modifications may still be made within the scope of the appended claims; as long as they do not exceed the scope of the invention described in the claims, they shall fall within the protection of the present invention.

Claims (10)

1. A visualization-based breath sound signal identification method, comprising:
S1, collecting an original breath sound signal, and performing filtering and separation processing on the signal to obtain a preprocessed breath sound signal;
S2, performing period division on the preprocessed breath sound signal to obtain a breath sound signal of a set period, and determining the division points of the breath sound signal;
S3, performing Fourier transform on the breath sound signal of the set period to obtain frequency information of the breath sound signal;
S4, processing the two-dimensional spectrogram of the breath sound signal according to the division points of the breath sound period signal to obtain a single-period two-dimensional spectrogram;
S5, establishing a spectrogram data set according to the two-dimensional spectrogram;
S6, establishing a convolutional neural network model according to the spectrogram data set;
and S7, performing predictive analysis on the new breath sounds through the convolutional neural network model.
2. The visualization-based breath sound signal identification method of claim 1, wherein the specific process of the filtering separation process in step S1 comprises:
S100, performing high-pass filtering on the original breath sound signal to obtain a mixed heart sound and breath sound signal, and copying the mixed heart sound and breath sound signal;
S101, performing a wavelet transform on the mixed heart sound and breath sound signal to obtain the heart sound interference signal within the breath sound, and separating it out;
and S102, subtracting the heart sound interference signal from the mixed heart sound and breath sound signal to obtain the preprocessed breath sound signal.
3. The visualization-based breath sound signal identification method of claim 1, wherein the pre-processed breath sound signal in step S2 is periodically divided by a moving rectangular window, and the division point of the breath sound periodic signal is determined.
4. The visualization-based breath sound signal identification method of claim 1, wherein the types of the original breath sound comprise normal breath sound, wheezing sound, crackle sound, and pleural lesion sound.
5. The visualization-based breath sound signal identification method of claim 1, wherein the specific step of step S3 comprises:
S300, performing windowed truncation on the breath sound signal of the set period using an analysis window moving with time, and decomposing the breath sound signal into a series of approximately stationary short-time signals;
and S301, obtaining a two-dimensional spectrogram of each short-time stationary signal through Fourier transform.
6. The visualization-based breath sound signal identification method of claim 1, wherein the step S5 comprises:
S500, cutting the two-dimensional spectrogram according to the division points of the breath sound signal to form a set of single-period two-dimensional spectrograms;
and S501, performing specification processing on the single-period two-dimensional spectrogram set to form the spectrogram data set.
7. The visualization-based breath sound signal identification method of claim 6, wherein the normalization processing in step S501 comprises the following specific steps:
S5010, computing a uniform size for the two-dimensional time-frequency image set;
S5011, performing RGB component analysis on the two-dimensional time-frequency images scaled to the uniform size;
and S5012, compressing the pictures to obtain the spectrogram data set.
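The normalization of claim 7 can be sketched as reducing an RGB spectrogram image to a fixed-size single-channel array. Extracting the R component follows step S704 of claim 8; block averaging here stands in for the unspecified picture-compression step, and the output size is an assumption.

```python
import numpy as np

def normalize_spectrogram_image(img_rgb, out_size=(64, 64)):
    """S5010-S5012 (sketch): uniform size + R-component + compression."""
    r = img_rgb[..., 0].astype(float)          # S5011/S704: R component
    h, w = out_size
    # partition the image into an h-by-w grid of blocks
    ys = np.linspace(0, r.shape[0], h + 1).astype(int)
    xs = np.linspace(0, r.shape[1], w + 1).astype(int)
    out = np.empty((h, w))
    for i in range(h):                         # S5012: compress each block
        for j in range(w):                     # to its mean intensity
            out[i, j] = r[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out
```

Any RGB image larger than the target grid compresses to the uniform size, with values staying in the original intensity range.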
8. The visualization-based breath sound signal identification method of claim 1, wherein the step S7 specifically comprises:
S700, acquiring new breath sound signals of various types and applying high-pass filtering and wavelet transform processing to them;
S701, dividing the denoised breath sound signal into periods with a moving rectangular window, and determining the division points of the periodic breath sound signal;
S702, performing a short-time Fourier transform on the divided periodic breath sound signal;
S703, cutting the short-time-Fourier-transformed spectrogram at the division points of the breath sound signal to obtain single-period two-dimensional spectrograms;
S704, extracting the R component of the RGB channels of each single-period spectrogram and compressing it into spectrogram data of a set size;
and S705, feeding the compressed data into the pre-trained convolutional neural network model for prediction, the predicted category being the final breath sound category.
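Steps S700-S705 chain the earlier stages and end in classification. A minimal sketch of the final prediction step, with the patent's pre-trained convolutional neural network replaced by a nearest-centroid stand-in so the flow stays runnable (`class_centroids` is a hypothetical structure, not from the patent):

```python
import numpy as np

def predict_breath_sound(features, class_centroids):
    """S705 (sketch): `features` is the normalized single-period
    spectrogram from the preceding steps, flattened; the trained CNN of
    the claim is replaced by a nearest-centroid classifier."""
    x = np.asarray(features, dtype=float).ravel()
    # distance from the sample to each stored class prototype
    dists = {label: np.linalg.norm(x - np.asarray(c, dtype=float).ravel())
             for label, c in class_centroids.items()}
    # the predicted category is taken as the final breath sound category
    return min(dists, key=dists.get)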
9. A system for visualization-based breath sound signal identification, comprising:
the signal acquisition and processing unit is used for collecting original breath sound signals and filtering and separating the signals to obtain preprocessed breath sound signals;
the periodic division module is used for dividing the signal into periods with a moving rectangular window and determining the division points of the periodic breath sound signal;
the Fourier transform module is used for performing a Fourier transform on the preprocessed breath sound signal to obtain its frequency information;
the cutting module is used for cutting the two-dimensional spectrogram of the breath sound signal at the division points of the periodic breath sound signal to obtain single-period two-dimensional spectrograms;
the spectrogram data set establishing module is used for establishing a spectrogram data set from the two-dimensional spectrograms;
and the convolutional neural network module is used for establishing a convolutional neural network model from the spectrogram data set and performing predictive analysis on various new breath sounds through the model.
10. The system of claim 9, wherein the signal acquisition and processing unit comprises:
the high-pass filtering module is used for removing environmental noise, current noise and other noise in the original breathing sound signal;
the wavelet transform module is used for decomposing the filtered breath sound signal, reconstructing the heart sound components contained in it, and isolating the heart sound interference signal;
the copying module is used for copying the heart sound and breath sound mixed signal obtained after the high-pass filtering processing;
the separation module is used for separating the heart sound interference signals obtained after the wavelet transformation;
and the subtraction module is used for subtracting the heart sound interference signal from the heart sound and respiratory sound mixed signal to obtain a preprocessed respiratory sound signal.
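The module decomposition of claims 9 and 10 might be organized as a single pipeline class. This is a sketch only: parameter values are assumptions, and (as above) a low-pass estimate stands in for the wavelet transform module.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, spectrogram

class BreathSoundPipeline:
    """Sketch of the module layout described in claims 9 and 10."""

    def __init__(self, fs=4000):
        self.fs = fs
        self._hp = butter(4, 20, btype="highpass", fs=fs, output="sos")
        self._lp = butter(4, 150, btype="lowpass", fs=fs, output="sos")

    def acquire_and_filter(self, raw):
        # high-pass filtering module + copying module, then the
        # subtraction module removes the heart-sound interference
        # estimated by a low-pass stand-in for the wavelet module
        mixed = sosfiltfilt(self._hp, raw)
        return mixed - sosfiltfilt(self._lp, mixed)

    def fourier_transform(self, sig, win_s=0.05):
        # Fourier transform module: two-dimensional time-frequency image
        n = int(win_s * self.fs)
        return spectrogram(sig, fs=self.fs, nperseg=n, noverlap=n // 2)
```

The class processes a raw signal end to end through the filtering and transform stages; the cutting, data-set, and CNN modules would consume its output.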
CN201910658420.1A 2019-07-22 2019-07-22 Method and system for recognizing breathing sound signal based on visualization Active CN110731778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910658420.1A CN110731778B (en) 2019-07-22 2019-07-22 Method and system for recognizing breathing sound signal based on visualization


Publications (2)

Publication Number Publication Date
CN110731778A true CN110731778A (en) 2020-01-31
CN110731778B CN110731778B (en) 2022-04-29

Family

ID=69267389

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910658420.1A Active CN110731778B (en) 2019-07-22 2019-07-22 Method and system for recognizing breathing sound signal based on visualization

Country Status (1)

Country Link
CN (1) CN110731778B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640439A (en) * 2020-05-15 2020-09-08 南开大学 Deep learning-based breath sound classification method
CN111863021A (en) * 2020-07-21 2020-10-30 上海宜硕网络科技有限公司 Method, system and equipment for recognizing breath sound data
CN111938691A (en) * 2020-08-18 2020-11-17 中国科学院声学研究所 Basic heart sound identification method and equipment
CN114176566A (en) * 2021-12-23 2022-03-15 北京航空航天大学 Multi-sensor integrated wireless sputum sedimentation alarm system and method
CN115995282A (en) * 2023-03-23 2023-04-21 山东纬横数据科技有限公司 Expiratory flow data processing system based on knowledge graph

Citations (13)

Publication number Priority date Publication date Assignee Title
US20090326402A1 (en) * 2008-06-30 2009-12-31 Nellcor Puritan Bennett Ireland Systems and methods for determining effort
CN102697520A (en) * 2012-05-08 2012-10-03 天津沃康科技有限公司 Electronic stethoscope based on intelligent distinguishing function
WO2012131290A1 (en) * 2011-03-30 2012-10-04 Nellcor Puritan Bennett Ireland Systems and methods for autonomic nervous system monitoring
US20120289850A1 (en) * 2011-05-09 2012-11-15 Xerox Corporation Monitoring respiration with a thermal imaging system
CN103462642A (en) * 2013-08-20 2013-12-25 广东工业大学 Instant heart rate detection method and device for Doppler fetal heart sound based on time-frequency analysis
EP2967377A1 (en) * 2013-03-14 2016-01-20 Koninklijke Philips N.V. Device and method for obtaining vital sign information of a subject
CN106580301A (en) * 2016-12-21 2017-04-26 广州心与潮信息科技有限公司 Physiological parameter monitoring method, device and hand-held device
CN107798350A (en) * 2017-11-08 2018-03-13 华南师范大学 A kind of heart and lung sounds signal recognition methods and system
CN107811649A (en) * 2017-12-13 2018-03-20 四川大学 A kind of more sorting techniques of heart sound based on depth convolutional neural networks
US20180214088A1 (en) * 2016-09-24 2018-08-02 Sanmina Corporation System and method for obtaining health data using a neural network
CN109273085A (en) * 2018-11-23 2019-01-25 南京清科信息科技有限公司 The method for building up in pathology breath sound library, the detection system of respiratory disorder and the method for handling breath sound
WO2019048960A1 (en) * 2017-09-05 2019-03-14 Bat Call D. Adler Ltd. Electronic stethoscope with enhanced features
CN109965858A (en) * 2019-03-28 2019-07-05 北京邮电大学 Based on ULTRA-WIDEBAND RADAR human body vital sign detection method and device

Non-Patent Citations (2)

Title
NIU, J.L., et al.: "A Novel Method for Automatic Identification of Breathing State", Scientific Reports *
ZHANG Kexin: "Time-frequency spectrogram analysis of pathological adventitious lung sounds based on mathematical morphology", Chinese Archives of Traditional Chinese Medicine *


Also Published As

Publication number Publication date
CN110731778B (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN110731778B (en) Method and system for recognizing breathing sound signal based on visualization
CN108388912B (en) Sleep staging method based on multi-sensor feature optimization algorithm
CN110123367B (en) Computer device, heart sound recognition method, model training device, and storage medium
Alsmadi et al. Design of a DSP-based instrument for real-time classification of pulmonary sounds
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
Jaber et al. A telemedicine tool framework for lung sounds classification using ensemble classifier algorithms
Cinyol et al. Incorporating support vector machine to the classification of respiratory sounds by Convolutional Neural Network
Akbal et al. FusedTSNet: An automated nocturnal sleep sound classification method based on a fused textural and statistical feature generation network
CN111938650A (en) Method and device for monitoring sleep apnea
Yan et al. Nonlinear analysis of auscultation signals in TCM using the combination of wavelet packet transform and sample entropy
CN113925459A (en) Sleep staging method based on electroencephalogram feature fusion
Al-Dhief et al. Dysphonia detection based on voice signals using naive bayes classifier
Wang et al. A multi-channel UNet framework based on SNMF-DCNN for robust heart-lung-sound separation
CN113870903A (en) Pathological voice recognition method, device, equipment and storage medium
Kala et al. An objective measure of signal quality for pediatric lung auscultations
Faustino Crackle and wheeze detection in lung sound signals using convolutional neural networks
WO2023200955A1 (en) Detecting and de-noising abnormal lung sounds and extracting a respiratory cycle from an auditory signal
CN113509169A (en) Multi-parameter-based non-contact sleep apnea detection system and method
Escobar-Pajoy et al. Computerized analysis of pulmonary sounds using uniform manifold projection
US20230329666A1 (en) Detecting and de-noising abnormal lung sounds
US20230329643A1 (en) Extracting a respiratory cycle from an auditory signal
Brahma et al. Integrated swarm intelligence and IoT for early and accurate remote voice-based pathology detection and water sound quality estimation
Dhavala et al. An MFCC features-driven subject-independent convolution neural network for detection of chronic and non-chronic pulmonary diseases
Pal A novel method for automatic separation of pulmonary crackles from normal breath sounds
Woo et al. Sleep stage classification using electroencephalography via Mel frequency cepstral coefficients

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant