CN111466947A - Electronic auscultation lung sound signal processing method - Google Patents


Info

Publication number
CN111466947A
CN111466947A (application number CN202010297231.9A)
Authority
CN
China
Prior art keywords
layer
lung sound
sound signal
convolution
processing
Prior art date
Legal status
Pending
Application number
CN202010297231.9A
Other languages
Chinese (zh)
Inventor
路程
李鑫慧
刘国栋
侯代玉
许梓艺
刘炳国
林春红
包智慧
王晓辉
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202010297231.9A priority Critical patent/CN111466947A/en
Publication of CN111466947A publication Critical patent/CN111466947A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00Instruments for auscultation
    • A61B7/02Stethoscopes
    • A61B7/04Electric stethoscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00Instruments for auscultation
    • A61B7/003Detecting lung or respiration noise
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching


Abstract

A lung sound signal processing method for electronic auscultation, belonging to the signal classification field of machine learning. The invention aims to solve the problem that the existing processing of lung sound auscultation recordings is cumbersome, so that the accuracy of the processing result is poor and the final judgment of the lung sound type is affected. The method comprises the following steps: sequentially performing band-pass filtering, down-sampling and normalization on the acquired original lung sound signal to obtain a lung sound signal to be trained; processing the lung sound signal to be trained with a plurality of convolution units to obtain lung sound signal feature vectors, the convolution units being connected both sequentially and by jump connections; and processing the lung sound signal feature vectors output by the last convolution unit with a fully connected layer to obtain a classification result. The method is used for classifying lung sound signals.

Description

Electronic auscultation lung sound signal processing method
Technical Field
The invention relates to a lung sound signal processing method for electronic auscultation, and belongs to the field of signal classification of machine learning.
Background
The electronic stethoscope is composed of three parts. The first part is a sound pickup device whose structure is similar to the chest piece of an ordinary stethoscope; it receives the respiratory sound signals of the lungs and is connected to the second part through an adapter at the latter's end. The second part is a recording pen, responsible for recording and storing the sound received by the pickup device. The third part is a listening device, i.e., a headset. While a doctor samples a patient's lung sound signals, both listening and recording must be guaranteed so that the lung sound type can be judged and recorded; the judged lung sound type is later used to label the data for neural network training.
The electronic stethoscope is used as follows: the doctor presses the record button, places the auscultation head at the pre-designated test position on the body, and after listening to the lung sound at that position, stops and saves the recording. The doctor labels each recording to assist the subsequent neural network classification.
Because the hospital consulting room environment is rather noisy, and the pickup and recording equipment is extremely sensitive to sound signals, the recorded auscultation audio contains considerable noise. To prevent the neural network from mistaking noise for useful signal during training, the audio must be pre-processed, e.g. denoised, to increase the accuracy of neural network training.
At present, lung sound signals are acquired by recording a single location for a long time. This approach is impractical, so lung sound classification has remained confined to the laboratory. Moreover, the current processing of lung sound signals extracts features with traditional methods such as empirical mode decomposition and wavelet transforms for training and testing. This makes the early-stage signal processing cumbersome, and the kinds of extracted features are fixed, differing only in the extraction parameters. Not all useful information can be captured, which may cap the final training and test results and yield poor accuracy, thereby affecting the final judgment of the lung sound type.
Disclosure of Invention
The invention provides a lung sound signal processing method for electronic auscultation, aiming at the problems that the existing processing of lung sound auscultation recordings is cumbersome, the accuracy of the processing result is poor, and the final judgment of the lung sound type is affected.
The invention discloses a lung sound signal processing method for electronic auscultation, which comprises the following steps:
sequentially performing band-pass filtering, down-sampling and normalization processing on the acquired original lung sound signals to obtain lung sound signals to be trained;
processing the lung sound signal to be trained by adopting a plurality of convolution units to obtain a lung sound signal feature vector; the connection mode of the convolution units comprises sequential connection and jump connection;
and processing the lung sound signal feature vectors finally output by the convolution unit by adopting a full connection layer to obtain a classification result.
According to the lung sound signal processing method of electronic auscultation of the present invention,
the performing band-pass filtering includes:
performing 20 Hz to 1800 Hz band-pass filtering on the original lung sound signal with a Chebyshev band-pass filter to obtain the filtered lung sound signal H_a(jΩ):

|H_a(jΩ)|² = 1 / (1 + ε² C_N²((Ω² − Ω₀²) / (Ω(Ω_pu − Ω_pl))))

In the formula, ε is the passband ripple coefficient; C_N(·) denotes the Chebyshev polynomial; Ω is the original lung sound signal frequency; Ω_pu is the upper passband cutoff frequency; Ω_pl is the lower passband cutoff frequency; and Ω₀ is an intermediate variable,

Ω₀ = √(Ω_pu · Ω_pl)
according to the lung sound signal processing method of electronic auscultation of the present invention,
the down-sampling frequency is 4000Hz, and the original lung sound signal frequency is 44100 Hz.
According to the lung sound signal processing method of electronic auscultation of the present invention,
the normalization processing of the lung sound signal obtained after the down sampling comprises the following steps:
and increasing the number of the lung sound signals obtained after down-sampling by using a data set enhancing means to obtain a plurality of sections of samples, and then performing 0-1 normalization processing on each section of sample to obtain the lung sound signals to be trained.
According to the lung sound signal processing method of electronic auscultation of the present invention,
the convolution unit sequentially comprises a convolution layer, a pooling layer, a batch normalization layer and an activation layer;
for the first convolution unit, the input signal of the first convolution unit is the lung sound signal to be trained, and the input signals of other convolution units comprise the output signal of any preceding stage convolution unit;
the convolution layer uses convolution kernel to carry out convolution operation on the input signal section by section to obtain corresponding lung sound signal characteristics;
the pooling layer performs down-sampling on the lung sound signal characteristics to obtain down-sampled characteristics;
the batch normalization layer performs batch normalization on the down-sampled features to obtain lung sound signal data with a mean value of 0 and a variance of 1;
and the activation layer activates the lung sound signal data by adopting an activation function to obtain a lung sound signal characteristic vector as an output signal of the current convolution unit.
According to the lung sound signal processing method of electronic auscultation of the present invention,
sequential connection means that adjacent convolution units pass the lung sound signal feature vectors on for processing in turn;
jump connection means that the activation layer of every later-stage convolution unit except the first can receive the lung sound signal feature vector output by any earlier-stage convolution unit; the activation layer adds the received feature vector to the lung sound signal data produced by its batch normalization layer and then applies the activation function.
According to the lung sound signal processing method of electronic auscultation of the present invention,
the full connection layer processes the lung sound signal feature vectors finally output by the convolution units, and the full connection layer spreads the lung sound signal feature vectors into one-dimensional feature vectors; the full connection layer comprises an input layer, a hidden layer and an output layer, and the one-dimensional characteristic vector is input to the input layer and then passes through the hidden layer to the output layer; and the output layer and the input layer are fully connected, and finally, a classification result of the lung sound signal feature vector is obtained.
According to the lung sound signal processing method of electronic auscultation of the present invention,
the forward propagation formula of the full connection layer is as follows:
z^{l+1}(j) = Σ_{i=1}^{n} w_{ij}^{l} a^{l}(i) + b_j^{l}

where z^{l+1}(j) is the logits value of the jth neuron in layer l+1, layer l+1 being the layer after a given layer l; n is the total number of neurons in layer l; j is the index of the neuron in layer l+1; w_{ij}^{l} is the weight between the ith neuron of layer l and the jth neuron of layer l+1; a^{l}(i) is the activation value of the ith neuron of layer l; and b_j^{l} is the bias of the neurons of layer l toward the jth neuron of layer l+1;
the activation function is ReLU when layer l+1 is the hidden layer, and Softmax when layer l+1 is the output layer.
According to the lung sound signal processing method of electronic auscultation of the present invention,
the activation function adopted by the activation layer in the convolution unit is the ReLU function.
The invention has the following beneficial effects: after filtering and other processing, the signals are fed directly into the neural network; the network captures the signal features, and classification is achieved through training and testing.
In the invention, the jump connections among the convolution units bypass input information directly to the output, protecting the integrity of the information, so the network only needs to learn the difference between input and output. Adding jump connections simplifies the learning target, reduces the learning difficulty, and alleviates problems such as vanishing or exploding gradients that arise as the network deepens, thereby ensuring the accuracy of the final lung sound type judgment.
Drawings
Fig. 1 is a signal processing flow chart of a lung sound signal processing method of electronic auscultation according to the present invention;
FIG. 2 is a schematic diagram of a jump connection of the convolution unit;
FIG. 3 is a schematic diagram of a network architecture of a fully connected layer;
FIG. 4 is a time domain diagram of an acquired original lung sound signal;
FIG. 5 is a time domain diagram of a standard lung sound signal;
fig. 6 is a waveform diagram after band-pass filtering of the acquired original lung sound signal;
fig. 7 is a schematic diagram of a simple module in a neural network system.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In a first embodiment, with reference to fig. 1 to 3, the present invention provides a method for processing a lung sound signal in electronic auscultation, including:
sequentially performing band-pass filtering, down-sampling and normalization processing on the acquired original lung sound signals to obtain lung sound signals to be trained;
processing the lung sound signal to be trained by adopting a plurality of convolution units to obtain a lung sound signal feature vector; the connection mode of the convolution units comprises sequential connection and jump connection;
and processing the lung sound signal feature vectors finally output by the convolution unit by adopting a full connection layer to obtain a classification result.
After the band-pass filtering of the original lung sound signal, fine noise can be filtered out, for example by direct truncation.
Further, the performing band-pass filtering includes:
performing 20 Hz to 1800 Hz band-pass filtering on the original lung sound signal with a Chebyshev band-pass filter to obtain the filtered lung sound signal H_a(jΩ):

|H_a(jΩ)|² = 1 / (1 + ε² C_N²((Ω² − Ω₀²) / (Ω(Ω_pu − Ω_pl))))

where the passband ripple coefficient ε is a positive number smaller than 1; C_N(·) denotes the Chebyshev polynomial; Ω is the original lung sound signal frequency; Ω_pu is the upper passband cutoff frequency; Ω_pl is the lower passband cutoff frequency; and Ω₀ is an intermediate variable,

Ω₀ = √(Ω_pu · Ω_pl)
the frequency omega of the original lung sound signal is basically distributed between 20Hz and 1800Hz
Still further, the sampling frequency after down-sampling is 4000 Hz, and the original lung sound signal sampling frequency is 44100 Hz.
Still further, the normalization processing of the lung sound signal obtained after the down-sampling includes:
and increasing the number of the lung sound signals obtained after down-sampling by using a data set enhancing means to obtain a plurality of sections of samples, and then performing 0-1 normalization processing on each section of sample to obtain the lung sound signals to be trained.
Since the number of original lung sound signals is limited, a data set augmentation method is used to increase the number of training samples. For example, if a down-sampled lung sound recording contains 100000 points, the training sample length is 8192, and the offset is 2, then at most 45905 sample segments can be cut from that recording, meeting the needs of neural network training.
The number of samples y is calculated as:

y = (x − a) / b + 1

where x is the number of points in the lung sound signal, a is the sample length, and b is the offset (the division is rounded down when it is not exact).
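The sample-count formula and the sliding-window augmentation it describes can be checked with a short sketch; the function and variable names here are illustrative, and a small signal is used for the actual extraction while the 100000-point example from the text is verified arithmetically.

```python
# Sliding-window augmentation: cut overlapping windows of length `length`
# every `offset` points, then 0-1 normalize each window.
import numpy as np

def segment(sig, length, offset):
    n = (len(sig) - length) // offset + 1      # y = (x - a) / b + 1
    out = np.empty((n, length))
    for k in range(n):
        w = sig[k * offset : k * offset + length]
        # 0-1 normalization per segment (epsilon guards a flat window)
        out[k] = (w - w.min()) / (w.max() - w.min() + 1e-12)
    return out

sig = np.random.randn(1000)                    # small toy signal
samples = segment(sig, length=100, offset=10)
print(samples.shape)                           # (91, 100)

# The worked example from the text: x = 100000, a = 8192, b = 2
n_big = (100000 - 8192) // 2 + 1
print(n_big)                                   # 45905, matching the text
```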
Still further, as shown in fig. 1, the convolution unit sequentially includes a convolution layer, a pooling layer, a batch normalization layer, and an activation layer;
for the first convolution unit, the input signal of the first convolution unit is the lung sound signal to be trained, and the input signals of other convolution units comprise the output signal of any preceding stage convolution unit;
the convolution layer uses convolution kernel to carry out convolution operation on the input signal section by section to obtain corresponding lung sound signal characteristics;
the pooling layer performs down-sampling on the lung sound signal characteristics to obtain down-sampled characteristics;
the batch normalization layer performs batch normalization on the down-sampled features to obtain lung sound signal data with a mean value of 0 and a variance of 1;
and the activation layer activates the lung sound signal data by adopting an activation function to obtain a lung sound signal characteristic vector as an output signal of the current convolution unit.
The plurality of convolution units and the fully connected layer described in this embodiment form a deep neural network. The convolution layer performs the convolution operation on the input signal region by region with its convolution kernels; the output y^{l}(i,j) is computed as:

y^{l}(i,j) = Σ_{j'=1}^{W} w^{l}_{i}(j') x^{l}_{j}(j')

where w^{l}_{i}(j') is the j'th weight of the ith convolution kernel of layer l, x^{l}_{j} is the jth convolved signal segment in layer l, and W is the width of the convolution kernel; each convolved signal segment corresponds to a local region of the input signal.
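The segment-wise convolution above amounts to sliding a kernel over the input and taking a dot product with each local region. A minimal numpy sketch (toy input and kernel, names illustrative):

```python
# 'Valid' 1-D convolution: each output point is the dot product of the
# kernel with one W-point segment of the input.
import numpy as np

def conv1d_valid(x, w):
    """y(j) = sum_{j'} w(j') * x(j + j')."""
    W = len(w)
    n_out = len(x) - W + 1
    return np.array([np.dot(w, x[j:j + W]) for j in range(n_out)])

x = np.arange(10.0)              # toy input signal (a ramp)
w = np.array([1.0, 0.0, -1.0])   # toy kernel: a simple difference filter
y = conv1d_valid(x, w)
print(y)  # each entry is x[j] - x[j+2] = -2 for this ramp input
```

The result agrees with `np.correlate(x, w, mode="valid")`, which computes the same sliding dot product.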
The batch normalization layer (BN layer) improves the gradient flow through the network, allows a larger learning rate, which greatly speeds up training, and reduces the strong dependence on initialization.
Still further, as shown in fig. 1 and fig. 2, the sequentially connecting includes sequentially transmitting the lung sound signal feature vectors to adjacent convolution units for processing;
the jump connection comprises that the activation layers of all the post-stage convolution units except the first-stage convolution unit can receive the lung sound signal characteristic vector output by any pre-stage convolution unit; and the activation layer adds the received lung sound signal characteristic vector output by the preceding stage convolution unit and the lung sound signal data obtained by the batch normalization layer and then adopts an activation function for activation.
For example, the convolution unit includes a first-level convolution unit, a second-level convolution unit and a third-level convolution unit, the lung sound signal feature vector output by the first-level convolution unit can be directly input to the convolution layer of the second-level convolution unit, and can also be input to the activation layer of the third-level convolution unit, and the activation layer of the third-level convolution unit adds the lung sound signal feature vector output by the first-level convolution unit and the lung sound signal data output by the batch normalization layer in the third-level convolution unit, and then activates the lung sound signal by using an activation function.
The jump connection directly bypasses input information to output to protect the integrity of the information, so that the network only needs to learn the part of input and output difference. After the jump connection is added, the network learning target can be simplified, the learning difficulty is reduced, and the problems of gradient disappearance or gradient explosion and the like brought in the network deepening process are solved.
The jump connection is an identity connection: when the outputs of the two connected layers have equal dimensions, they can be added directly and fed into the activation layer; when the dimensions differ, the shortcut can pass through a convolution layer with a 1×1 convolution kernel. Taking fig. 2 as an example, the output of the nth layer can be added to the output of the mth layer after such a convolution.
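The two cases above — direct addition when shapes match, a 1×1 projection when they differ — can be sketched in numpy. All shapes are illustrative, and the projection weights are randomly initialized here purely for demonstration.

```python
# Identity skip connection with an optional 1x1 projection on the shortcut.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def skip_add(bn_out, shortcut, rng=np.random.default_rng(0)):
    """bn_out, shortcut: (channels, length) feature maps."""
    if shortcut.shape != bn_out.shape:
        # A 1x1 convolution is a per-position linear map across channels.
        proj = rng.standard_normal((bn_out.shape[0], shortcut.shape[0]))
        shortcut = proj @ shortcut
    return relu(bn_out + shortcut)          # add, then activate

bn_out = np.ones((8, 100))            # output of the batch-norm layer
same = np.full((8, 100), -2.0)        # shortcut with matching shape
diff = np.ones((4, 100))              # shortcut with fewer channels
print(skip_add(bn_out, same).max())   # 0.0: 1 + (-2) = -1, clipped by ReLU
print(skip_add(bn_out, diff).shape)   # (8, 100) after the 1x1 projection
```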
Still further, with reference to fig. 3, the processing, by the full-link layer, of the lung sound signal feature vectors finally output by the convolution units includes spreading the lung sound signal feature vectors into one-dimensional feature vectors; the full connection layer comprises an input layer, a hidden layer and an output layer, and the one-dimensional characteristic vector is input to the input layer and then passes through the hidden layer to the output layer; and the output layer and the input layer are fully connected, and finally, a classification result of the lung sound signal feature vector is obtained.
The fully connected layer classifies the features extracted by the multi-layer convolution units.
Still further, the forward propagation formula of the full connection layer is as follows:
z^{l+1}(j) = Σ_{i=1}^{n} w_{ij}^{l} a^{l}(i) + b_j^{l}

where z^{l+1}(j) is the logits value of the jth neuron in layer l+1, layer l+1 being the layer after a given layer l; n is the total number of neurons in layer l; j is the index of the neuron in layer l+1; w_{ij}^{l} is the weight between the ith neuron of layer l and the jth neuron of layer l+1; a^{l}(i) is the activation value of the ith neuron of layer l; and b_j^{l} is the bias of the neurons of layer l toward the jth neuron of layer l+1;
the activation function is ReLU when layer l+1 is the hidden layer, and Softmax when layer l+1 is the output layer.
In this embodiment, l denotes any layer within the fully connected network. The purpose of the Softmax activation function is to transform the input neurons into a probability distribution summing to 1:
y_j = e^{z_j} / Σ_{k=1}^{n1} e^{z_k}

where y_j is the output of the jth neuron in the output layer and n1 is the number of final classes; for three-class classification, n1 = 3.
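The fully connected forward propagation and Softmax formulas above take only a few lines of numpy. Layer sizes (128 flattened features, 3 classes as in the three-class example) and the random weights are illustrative.

```python
# Dense layer logits plus Softmax output, matching the two formulas above.
import numpy as np

rng = np.random.default_rng(0)

def dense(a, w, b):
    """z(j) = sum_i w[i, j] * a(i) + b(j) -- the logits of layer l+1."""
    return a @ w + b

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

a = rng.standard_normal(128)        # activations of layer l (flattened features)
w = rng.standard_normal((128, 3))   # weights to a 3-class output layer (n1 = 3)
b = rng.standard_normal(3)
y = softmax(dense(a, w, b))
print(y.sum())  # 1.0: a probability distribution over the 3 classes
```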
As an example, the activation function adopted by the activation layer in the convolution unit is the ReLU function.
The purpose of the activation layer is to activate the output of the previous layer with an activation function. An activation function runs on a neuron of the artificial neural network and maps the neuron's input to its output. Three functions are commonly used: the Sigmoid function, the Tanh function, and the ReLU function. The first two become limiting as the number of network layers grows, since gradient diffusion can occur and reduce the training effect, so this embodiment uses the ReLU function:

ReLU(x) = max(0, x),

where x is the activation layer input signal and ReLU(x) is the activation layer output signal.
Comparing fig. 4 with fig. 5 shows that the acquired original signal contains considerable noise and is not suitable for direct processing and analysis; the original signal is therefore filtered first, giving fig. 6. As can be seen in fig. 6, most of the noise is removed by this preliminary filtering, achieving a good effect. The remaining fine noise is then filtered out by other means before the next step of classification training.
And (3) neural network classification training:
A neural network is a widely interconnected network of simple adaptive units organized in parallel, which simulates the interactive response of the biological nervous system to real-world objects. The neural network system in this embodiment is composed of many simple modules, as shown in fig. 7.
x_i is the input from the ith neuron, w_i is the connection weight of the ith neuron, θ is the threshold, the circle pointed to by the arrows is the current neuron (simple module), and y is its output, where

y = f(Σ_i w_i x_i − θ)

with f(·) being the activation function.
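The simple module of fig. 7 can be written out directly: a weighted sum of the inputs minus the threshold θ, passed through an activation f. A step function is used for f here, as in the classic M-P neuron; that choice, and the toy weights, are assumptions for illustration.

```python
# One simple neuron module: y = f(sum_i w_i * x_i - theta).
import numpy as np

def neuron(x, w, theta, f=lambda s: 1.0 if s >= 0 else 0.0):
    """Weighted sum minus threshold, passed through activation f."""
    return f(np.dot(w, x) - theta)

x = np.array([1.0, 0.0, 1.0])   # inputs from three upstream neurons
w = np.array([0.5, 0.5, 0.5])   # connection weights
print(neuron(x, w, theta=0.8))  # 1.0: weighted sum 1.0 - 0.8 = 0.2 >= 0
print(neuron(x, w, theta=1.2))  # 0.0: weighted sum 1.0 - 1.2 < 0
```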
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that features described in different dependent claims and herein may be combined in ways different from those described in the original claims. It is also to be understood that features described in connection with individual embodiments may be used in other described embodiments.

Claims (9)

1. A lung sound signal processing method of electronic auscultation is characterized by comprising the following steps:
sequentially performing band-pass filtering, down-sampling and normalization processing on the acquired original lung sound signals to obtain lung sound signals to be trained;
processing the lung sound signal to be trained by adopting a plurality of convolution units to obtain a lung sound signal feature vector; the connection mode of the convolution units comprises sequential connection and jump connection;
and processing the lung sound signal feature vectors finally output by the convolution unit by adopting a full connection layer to obtain a classification result.
2. The method for processing electronically auscultated lung sound signals according to claim 1,
the performing band-pass filtering includes:
performing 20 Hz to 1800 Hz band-pass filtering on the original lung sound signal with a Chebyshev band-pass filter to obtain the filtered lung sound signal H_a(jΩ):

|H_a(jΩ)|² = 1 / (1 + ε² C_N²((Ω² − Ω₀²) / (Ω(Ω_pu − Ω_pl))))

In the formula, ε is the passband ripple coefficient; C_N(·) denotes the Chebyshev polynomial; Ω is the original lung sound signal frequency; Ω_pu is the upper passband cutoff frequency; Ω_pl is the lower passband cutoff frequency; and Ω₀ is an intermediate variable,

Ω₀ = √(Ω_pu · Ω_pl)
3. the method for processing electronically auscultated lung sound signals according to claim 2,
the sampling frequency after down-sampling is 4000 Hz, and the original lung sound signal sampling frequency is 44100 Hz.
4. The method for processing electronically auscultated lung sound signals according to claim 3,
the normalization processing of the lung sound signal obtained after the down sampling comprises the following steps:
and increasing the number of the lung sound signals obtained after down-sampling by using a data set enhancing means to obtain a plurality of sections of samples, and then performing 0-1 normalization processing on each section of sample to obtain the lung sound signals to be trained.
5. The method for processing electronically auscultated lung sound signals according to claim 4,
the convolution unit sequentially comprises a convolution layer, a pooling layer, a batch normalization layer and an activation layer;
for the first convolution unit, the input signal of the first convolution unit is the lung sound signal to be trained, and the input signals of other convolution units comprise the output signal of any preceding stage convolution unit;
the convolution layer uses convolution kernel to carry out convolution operation on the input signal section by section to obtain corresponding lung sound signal characteristics;
the pooling layer performs down-sampling on the lung sound signal characteristics to obtain down-sampled characteristics;
the batch normalization layer performs batch normalization on the down-sampled features to obtain lung sound signal data with a mean value of 0 and a variance of 1;
and the activation layer activates the lung sound signal data by adopting an activation function to obtain a lung sound signal characteristic vector as an output signal of the current convolution unit.
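One such convolution unit can be sketched in PyTorch, keeping the claimed order convolution → pooling → batch normalization → activation. The kernel size, pooling factor, and channel counts are assumptions not given in the claim.

```python
import torch
import torch.nn as nn

class ConvUnit(nn.Module):
    """One unit in the claimed order: conv -> pool -> batch norm -> activation."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=7, padding=3)
        self.pool = nn.MaxPool1d(2)        # down-samples the feature sequence by 2
        self.bn = nn.BatchNorm1d(out_ch)   # per-channel mean 0, variance 1
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.bn(self.pool(self.conv(x))))

x = torch.randn(4, 1, 8000)                # batch of 1-channel lung sound segments
y = ConvUnit(1, 16)(x)
print(y.shape)                             # torch.Size([4, 16, 4000])
```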
6. The method for processing electronically auscultated lung sound signals according to claim 5,
the sequential connection means that adjacent convolution units pass the lung sound signal feature vector on in turn for processing;
the jump connection means that the activation layer of every later-stage convolution unit, except the first-stage convolution unit, can receive the lung sound signal feature vector output by any earlier-stage convolution unit; that activation layer adds the received earlier-stage feature vector to the lung sound signal data obtained from its own batch normalization layer and then applies the activation function.
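The jump (skip) connection can be sketched as a residual unit: the activation layer adds the feature vector received from an earlier unit to its own batch-normalized output before applying ReLU. The two tensors must have matching shapes, so this sketch omits the pooling step; the kernel size and channel count are assumptions.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Jump connection: add an earlier unit's features before activation."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv1d(ch, ch, kernel_size=7, padding=3)
        self.bn = nn.BatchNorm1d(ch)
        self.act = nn.ReLU()

    def forward(self, x, skip):
        # sum happens inside the activation step, as the claim describes
        return self.act(self.bn(self.conv(x)) + skip)

x = torch.randn(4, 16, 4000)
y = ResidualUnit(16)(x, x)                 # here the skip comes from the input itself
print(y.shape)                             # torch.Size([4, 16, 4000])
```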
7. The method for processing electronically auscultated lung sound signals according to claim 6,
the full connection layer processes the lung sound signal feature vector finally output by the convolution units, flattening it into a one-dimensional feature vector; the full connection layer comprises an input layer, a hidden layer and an output layer, and the one-dimensional feature vector is fed to the input layer and passes through the hidden layer to the output layer; the layers are fully connected to one another, and the output layer finally gives the classification result for the lung sound signal feature vector.
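A sketch of the fully connected head: flatten, then input → hidden → output with a Softmax at the output. The layer sizes and the number of classes (4) are assumptions; only the flatten → input → hidden → output structure comes from the claim.

```python
import torch
import torch.nn as nn

# Fully connected head (sizes and class count are assumptions)
head = nn.Sequential(
    nn.Flatten(),                  # spread feature maps into a 1-D vector
    nn.Linear(16 * 4000, 64),      # input layer -> hidden layer
    nn.ReLU(),
    nn.Linear(64, 4),              # hidden layer -> output layer
    nn.Softmax(dim=1),             # class probabilities
)
probs = head(torch.randn(2, 16, 4000))
print(probs.shape)                 # torch.Size([2, 4]); each row sums to 1
```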
8. The method for processing electronically auscultated lung sound signals according to claim 7,
the forward propagation formula of the full connection layer is as follows:

z^{l+1}(j) = Σ_{i=1}^{n} w^{l}_{ij}·a^{l}(i) + b^{l}(j)

in the formula, z^{l+1}(j) is the logits value of the j-th neuron in layer l+1, layer l+1 being the layer following an arbitrary layer l; n represents the total number of neurons in layer l, and j denotes the index of a neuron in layer l+1; w^{l}_{ij} is the weight between the i-th neuron of layer l and the j-th neuron of layer l+1; a^{l}(i) represents the activation value of the i-th neuron of layer l; b^{l}(j) represents the bias from layer l to the j-th neuron of layer l+1;
the activation function is ReLU when layer l+1 is the hidden layer, and Softmax when layer l+1 is the output layer.
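The forward propagation formula can be checked numerically with a small NumPy example; the layer sizes and random values are arbitrary.

```python
import numpy as np

# z_{l+1}(j) = sum_i w_{ij} * a_l(i) + b(j), then ReLU (hidden) or Softmax (output)
rng = np.random.default_rng(0)
n, m = 5, 3                        # neurons in layer l and in layer l+1
a = rng.standard_normal(n)         # activation values a_l(i) of layer l
W = rng.standard_normal((n, m))    # W[i, j]: weight from neuron i to neuron j
b = rng.standard_normal(m)         # bias toward each layer-(l+1) neuron

z = a @ W + b                      # logits of layer l+1
relu = np.maximum(z, 0)            # hidden-layer activation (ReLU)
softmax = np.exp(z) / np.exp(z).sum()   # output-layer activation (Softmax)
print(z.shape, softmax.sum())      # Softmax values sum to 1
```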
9. The method for processing electronically auscultated lung sound signals according to claim 8,
the activation function adopted by the activation layer in the convolution unit is the ReLU function.
CN202010297231.9A 2020-04-15 2020-04-15 Electronic auscultation lung sound signal processing method Pending CN111466947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010297231.9A CN111466947A (en) 2020-04-15 2020-04-15 Electronic auscultation lung sound signal processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010297231.9A CN111466947A (en) 2020-04-15 2020-04-15 Electronic auscultation lung sound signal processing method

Publications (1)

Publication Number Publication Date
CN111466947A true CN111466947A (en) 2020-07-31

Family

ID=71753498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010297231.9A Pending CN111466947A (en) 2020-04-15 2020-04-15 Electronic auscultation lung sound signal processing method

Country Status (1)

Country Link
CN (1) CN111466947A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107818366A * 2017-10-25 2018-03-20 成都力创昆仑网络科技有限公司 Lung sound classification method, system and use based on a convolutional neural network
US20190008475A1 * 2017-07-04 2019-01-10 Tata Consultancy Services Limited Systems and methods for detecting pulmonary abnormalities using lung sounds
CN109389584A * 2018-09-17 2019-02-26 成都信息工程大学 Multi-scale nasopharyngeal tumour segmentation method based on CNN
CN110532424A * 2019-09-26 2019-12-03 西南科技大学 Lung sound feature classification system and method based on deep learning and a cloud platform
CN110970042A (en) * 2019-12-13 2020-04-07 苏州美糯爱医疗科技有限公司 Artificial intelligent real-time classification method, system and device for pulmonary rales of electronic stethoscope and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU, HAO: "Gesture image classification based on residual neural network", 2018 3RD INTERNATIONAL CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (IEA 2018) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668556A (en) * 2021-01-21 2021-04-16 广州联智信息科技有限公司 Breath sound identification method and system
CN112668556B (en) * 2021-01-21 2024-06-07 广东白云学院 Breathing sound identification method and system
CN113679413A (en) * 2021-09-15 2021-11-23 北方民族大学 VMD-CNN-based lung sound feature identification and classification method and system
CN113679413B (en) * 2021-09-15 2023-11-10 北方民族大学 VMD-CNN-based lung sound feature recognition and classification method and system

Similar Documents

Publication Publication Date Title
Wu et al. A study on arrhythmia via ECG signal classification using the convolutional neural network
Ma et al. Lungbrn: A smart digital stethoscope for detecting respiratory disease using bi-resnet deep learning algorithm
Shi et al. Lung sound recognition algorithm based on vggish-bigru
CN111329445B (en) Atrial fibrillation identification method based on group convolution residual error network and long-term and short-term memory network
CN112508110A (en) Deep learning-based electrocardiosignal graph classification method
CN105841961A (en) Bearing fault diagnosis method based on Morlet wavelet transformation and convolutional neural network
CN110755108A (en) Heart sound classification method, system and device based on intelligent stethoscope and readable storage medium
CN110970042B (en) Artificial intelligence real-time classification method, system and device for pulmonary rales of electronic stethoscope and readable storage medium
CN111956208B (en) ECG signal classification method based on ultra-lightweight convolutional neural network
CN108567418A (en) A kind of pulse signal inferior health detection method and detecting system based on PCANet
CN111291727B (en) Method and device for detecting signal quality by using photoplethysmography
CN111222498A (en) Identity recognition method based on photoplethysmography
CN113749657A (en) Brain wave emotion recognition method based on multitask capsules
CN112101096A (en) Suicide emotion perception method based on multi-mode fusion of voice and micro-expression
CN115919330A (en) EEG Emotional State Classification Method Based on Multi-level SE Attention and Graph Convolution
CN111466947A (en) Electronic auscultation lung sound signal processing method
Wu et al. A novel approach to diagnose sleep apnea using enhanced frequency extraction network
CN110192864B (en) Cross-domain electrocardiogram biological characteristic identity recognition method
CN113990303A (en) Environmental sound identification method based on multi-resolution cavity depth separable convolution network
CN113627391A (en) Cross-mode electroencephalogram signal identification method considering individual difference
CN113128353A (en) Emotion sensing method and system for natural human-computer interaction
CN117158997A (en) Deep learning-based epileptic electroencephalogram signal classification model building method and classification method
CN113349801A (en) Imaginary speech electroencephalogram signal decoding method based on convolutional neural network
Chinmayi et al. Emotion Classification Using Deep Learning
Tiwari et al. Deep lung auscultation using acoustic biomarkers for abnormal respiratory sound event detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200731