CN112998690B - Pulse wave multi-feature fusion-based respiration rate extraction method - Google Patents


Publication number
CN112998690B
CN112998690B (application CN202110344562.8A)
Authority
CN
China
Prior art keywords
pulse wave
pulse
channel
feature
time
Prior art date
Legal status
Expired - Fee Related
Application number
CN202110344562.8A
Other languages
Chinese (zh)
Other versions
CN112998690A (en)
Inventor
王一歌
邓伟芬
韦岗
曹燕
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202110344562.8A priority Critical patent/CN112998690B/en
Publication of CN112998690A publication Critical patent/CN112998690A/en
Application granted granted Critical
Publication of CN112998690B publication Critical patent/CN112998690B/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816 Measuring devices for examining respiratory frequency
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7235 Details of waveform analysis
    • A61B5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pulmonology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a respiration rate extraction method based on pulse wave multi-feature fusion, comprising the following steps: acquiring signals from a user's wrist and fingers with an array pulse wave collector to obtain a multi-channel pulse wave file, and preprocessing the file to obtain a multi-channel fused pulse wave; extracting multiple time-frequency features from the fused pulse wave; establishing a credibility measure for each time-frequency feature through time-domain analysis; using an attention mechanism to apply each feature's credibility measure as its learning weight in the neural network, and extracting respiratory feature maps with a convolutional neural network; and fusing the extracted respiratory feature maps and inputting the fused result into a VGG regression model to obtain the final respiration rate. By combining the credibility measures of multiple features with a neural-network attention mechanism, the method extracts the respiratory feature map more reliably and improves the accuracy of pulse wave respiration rate extraction on large data sets.

Description

Pulse wave multi-feature fusion-based respiration rate extraction method
Technical Field
The invention relates to the technical field of respiratory rate monitoring, in particular to a respiratory rate extraction method based on pulse wave multi-feature fusion.
Background
The respiration rate is an important parameter characterizing respiratory function and a key physiological index of an individual's health. Chronic respiratory disease is one of the four chronic diseases with the highest global burden; whether a disease originates in the respiratory system or in other vital organs, once it progresses far enough it can affect the body's respiratory center. Continuous, accurate respiration rate monitoring therefore helps prevent pathological changes in the lungs, airways and other organs.
Current respiration rate detection methods fall mainly into electrical-signal, acoustic-signal and photoelectric-signal approaches. Clinical practice relies mostly on electrical-signal detection, which includes thermosensitive, impedance and capacitive measurement methods and derives the respiration rate from changes in the electrical signal. Electrical measurement, however, requires electrode pads wired to the body and is easily disturbed by limb movement. Acoustic detection requires placing a collector in front of the nose, which is uncomfortable for the subject; neither method is suitable for daily monitoring. Photoelectric detection obtains pulse waves by photoplethysmography and extracts the respiration rate from them; it needs no complex, bulky hardware, makes data easy to acquire, and is therefore better suited to daily monitoring.
Existing photoplethysmography-based respiration rate extraction mainly exploits the modulation of the pulse wave baseline by respiration, using low-pass filtering, EMD decomposition, wavelet decomposition or other baseline-signal processing methods. However, because the human body is a complex system and pulse wave acquisition is easily disturbed, the baseline component of the pulse wave is not a single signal: it also contains low-frequency components related to temperature regulation and nervous-system regulation. The respiratory signal extracted by these methods is therefore uncertain, and extraction accuracy is low on large data sets.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing a respiration rate extraction method based on pulse wave multi-feature fusion. The method extracts multiple time-frequency domain features containing respiratory information from the pulse wave, establishes a credibility measure for each feature, lets a neural network learn a different weight for each feature according to its credibility measure, extracts a respiratory feature map, and thereby improves the accuracy of respiration rate extraction.
The purpose of the invention can be achieved by adopting the following technical scheme:
a respiratory rate extraction method based on pulse wave multi-feature fusion comprises the following steps:
s1, collecting signals of wrists and fingers of a user by using an array pulse wave collector to obtain a multi-channel pulse wave file, and carrying out data preprocessing on the multi-channel pulse wave file to obtain multi-channel fused pulse waves, wherein the data preprocessing comprises time slicing, high-frequency noise removal, channel screening, multi-channel signal fusion and normalization;
s2, extracting various pulse wave time-frequency characteristics containing respiratory information from the multi-channel fused pulse wave obtained in the step S1;
s3, establishing credible measures for the wave time frequency characteristics of each pulse wave obtained in the step S2;
s4, inputting the pulse wave time-frequency characteristics obtained in the step S2 and the credibility measure obtained in the step S3 into a respiratory characteristic map extraction network, wherein the respiratory characteristic map extraction network comprises an attention mechanism module and a convolution neural network module: the attention mechanism module sets different learning weights for the time-frequency characteristics of each pulse wave obtained in the step S2 according to the credibility measure obtained in the step S3; the convolutional neural network module is a convolutional neural network consisting of a plurality of convolutional layers, activation layers and pooling layers and is used for extracting a respiratory characteristic diagram from the pulse wave time-frequency characteristics with the learning weight obtained by the attention mechanism module;
and S5, performing feature fusion on the respiration feature map obtained in the step S4, and inputting the result obtained by fusion into a VGG regression model to obtain the respiration rate of the user.
Further, the step S1 process is as follows:
s101, collecting signals of wrists and fingers of a user by using an array pulse wave collector to obtain a multi-channel pulse wave file, performing data conversion on the multi-channel pulse wave file to obtain N-channel pulse waves, performing time slicing on the N-channel pulse waves at intervals of t, and taking the N-channel pulse waves in the same time slice as a group;
s102, sequentially passing the pulse wave signals of the N channels of each group through a cut-off frequency flThe FIR low-pass filter removes high-frequency noise of signals, carries out channel screening on N channel pulse waves after noise removal, judges whether each channel pulse wave has distortion and weak signal conditions according to the amplitude and slope change of the pulse waves, detects whether the signals have sudden change conditions by a sliding window method, detects whether the signals have non-periodic conditions by autocorrelation calculation, and removes the pulse waves of the channel if any one of the conditions exists;
and S103, carrying out channel fusion on the pulse waves of the channels obtained in the step S102.
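As an illustration of the high-frequency noise removal in step S102, a windowed-sinc FIR low-pass filter can be sketched in plain numpy. The function name, tap count and synthetic test signal below are assumptions for demonstration, not the patent's implementation:

```python
import numpy as np

def fir_lowpass(signal, fs, cutoff, numtaps=301):
    """Windowed-sinc FIR low-pass filter (Hamming window), sketching the
    cut-off-frequency f_l denoising stage of step S102."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    fc = cutoff / fs                       # normalized cutoff (cycles/sample)
    h = 2 * fc * np.sinc(2 * fc * n)       # ideal low-pass impulse response
    h *= np.hamming(numtaps)               # Hamming window
    h /= h.sum()                           # unit DC gain
    return np.convolve(signal, h, mode="same")

# usage: remove high-frequency noise from a synthetic 500 Hz pulse trace
fs = 500.0
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 1.5 * t)               # ~1.5 Hz pulse component
noisy = clean + 0.3 * np.sin(2 * np.pi * 80 * t)  # 80 Hz noise
filtered = fir_lowpass(noisy, fs, cutoff=25.0)
```

The 1-3 Hz pulse component passes almost unchanged while the 80 Hz interference falls well inside the stopband.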
Further, the step S2 process is as follows:
s201, obtaining a main wave peak point set for the multi-channel fused pulse wave obtained through the preprocessing of the step S1;
s202, obtaining envelope characteristics of the main wave crest by performing cubic spline interpolation on the main wave crest point set;
s203, sequentially traversing the main wave peak points, and solving the time difference between two adjacent peak points to obtain the pulse period change characteristic Tci=pi-pi-1Wherein p isiA time point corresponding to the ith main wave crest point of the pulse wave;
s204, sequentially traversing the main wave peak points, and obtaining the difference value of the amplitudes between two adjacent points to divide the time difference between the two points to obtain the pulse amplitude change rate characteristic
Figure BDA0002997070160000031
Wherein p isiIs the time point corresponding to the ith main wave crest point of the pulse wave, aiThe amplitude corresponding to the ith main wave crest point of the pulse wave is obtained;
s205, passing the multichannel fused pulse wave obtained through the preprocessing in the step S1 through a cut-off frequency f2The FIR low-pass filter of (1) obtains the low-frequency characteristics of the pulse wave.
Further, the process of establishing confidence measures for the respective features in step S3 is as follows:
s301, solving autocorrelation functions of envelope characteristics of main wave peaks, pulse period change characteristics, pulse amplitude change rate characteristics and low-frequency characteristics of pulse waves, and collecting maximum value points of the autocorrelation functions;
s302, setting a minimum interval threshold value between maximum value points as TminMaximum interval threshold of TmaxSequentially detecting the interval between each maximum value point by a sliding window method, wherein the calculation interval is less than TminOr greater than TmaxN, corresponding to a confidence measure of
Figure BDA0002997070160000041
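Steps S301-S302 can be sketched as follows. The closed form R = 1 - n/N is an assumed reconstruction (the patent figure only implies that out-of-range intervals lower credibility), and all names are illustrative:

```python
import numpy as np

def credibility(feature, fs, tmin, tmax):
    """Sketch of S301-S302: autocorrelate the feature, find local maxima,
    count inter-maximum intervals outside [tmin, tmax] seconds, and score
    R = 1 - n/N (assumed closed form)."""
    x = feature - np.mean(feature)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    # interior local maxima of the autocorrelation function
    peaks = np.where((ac[1:-1] > ac[:-2]) & (ac[1:-1] > ac[2:]))[0] + 1
    if len(peaks) < 2:
        return 0.0
    intervals = np.diff(peaks) / fs
    n = np.sum((intervals < tmin) | (intervals > tmax))
    return 1.0 - n / len(intervals)

# usage: a clean quasi-periodic feature (4 s period) scores full credibility
fs = 10.0
t = np.arange(0, 60, 1 / fs)
regular = np.sin(2 * np.pi * 0.25 * t)
r = credibility(regular, fs, tmin=2.0, tmax=6.0)
```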
Furthermore, the respiratory feature map extraction network comprises an attention mechanism module and a convolutional neural network module. The attention mechanism module takes the pulse wave time-frequency features containing respiratory information as input and uses the credibility measure R of each time-frequency feature as its learning weight in the convolutional neural network: the credibility measure R of each time-frequency feature is point-multiplied with the corresponding feature to obtain the weighted pulse wave time-frequency features. The convolutional neural network module feeds the weighted features through convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2 and a fully connected layer in sequence; the fully connected layer outputs a feature expression of the respiratory information, i.e. the respiratory feature map.
Further, in step S5 all respiratory feature maps are fused into a single-path feature and fed to the VGG regression model; the VGG regression model uses a dynamic learning rate, a stochastic gradient descent (SGD) optimizer, and mean square error (MSE) as its loss function.
Furthermore, the VGG regression model comprises a Dropout layer, a full connection layer, a ReLU layer, and a full connection layer in sequence from the input layer to the output layer, and the final output is the breathing rate of the user.
Compared with the prior art, the invention has the following advantages and effects:
1. according to the invention, based on the modulation effect of respiration on the frequency and amplitude of a pulse signal and the low-frequency characteristic of the respiration signal, various characteristics capable of describing the respiration signal, such as a main wave envelope characteristic, a pulse period change characteristic, a pulse amplitude change rate characteristic, a low-frequency characteristic and the like of the pulse wave, are extracted, so that a neural network can extract respiration information from multiple aspects, and the problem of uncertainty of a single characteristic caused by the influence of temperature regulation and nervous system regulation on the pulse wave is avoided;
2. based on the quasi-periodicity of the respiratory signal, the method constructs a credibility measure for the time-frequency characteristics of the pulse wave, and has quantitative evaluation indexes for each characteristic;
3. according to the invention, the credibility measure is used as the weight for learning each feature by the neural network by using the attention mechanism, the multidimensional breathing feature map is extracted, and the accuracy of breathing rate extraction is improved.
Drawings
Fig. 1 is a system block diagram of a pulse wave multi-feature fusion-based respiration rate extraction method disclosed in the embodiment of the present invention;
FIG. 2 is a diagram illustrating the effect of pulse wave time-frequency feature extraction according to an embodiment of the present invention;
FIG. 3 is a flow chart illustrating the process of constructing the confidence measure of the time-frequency characteristics of the pulse waves according to the embodiment of the present invention;
FIG. 4 is a block diagram of a respiratory feature map extraction network in an embodiment of the present invention;
FIG. 5 is a flow chart of a regression network for respiration rate extraction in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
The respiration rate extraction method based on pulse wave multi-feature fusion extracts multi-dimensional pulse wave time-frequency features based on the amplitude- and frequency-modulation effects of respiratory activity on the pulse wave and the low-frequency character of the respiratory wave, computes a credibility measure for each feature, and, combined with a convolutional neural network, extracts a respiratory feature map from each feature under a different learning weight so as to improve the accuracy of the respiration rate.
As shown in fig. 1, the method comprises the following steps:
s1, preprocessing the pulse wave data, which comprises the following steps:
s101, an array pulse collector developed by a national mobile ultrasonic detection center is adopted in the embodiment to collect signals of wrists and fingers of a user to obtain a 13-channel pulse file, and meanwhile, a PC-3000 multi-parameter monitor manufactured by Shanghai Likang biomedical corporation is adopted to obtain a respiration rate as a label for learning a neural network model and a comparison standard of a respiration rate extraction result.
Firstly, format conversion is carried out on a pulse file to obtain 13-channel pulse waves, the 13-channel pulse waves are sequentially sliced according to the interval of 1 minute, the 13-channel pulse waves in the same time segment are used as a group, and the real respiration rate corresponding to the pulse waves in the time segment is obtained from data recorded by a PC-3000 multi-parameter monitor and used as a label for model training.
Because signal acquisition is affected by photoelectric conversion, body movement, power-line interference and other factors, the collected pulse wave is often accompanied by power-frequency interference, myoelectric noise, baseline drift and motion artifacts, so the signal must be denoised. In this embodiment the 13 channel pulse waves of each group are passed in turn through an FIR low-pass filter to remove high-frequency noise. Since the pulse sampling rate is 500 Hz and the pulse frequency is 1-3 Hz, the FIR low-pass filter is set to a cut-off frequency of 25 Hz, order 300, with a Hamming window function;
s102, performing channel screening on each group of 13-channel pulse waves obtained in the step S101 after the high-frequency noise is removed, and removing channels with poor quality;
firstly, the distortion condition of the pulse wave of each channel is judged. The signal acquisition amplitude range of the pulse acquisition device adopted in the embodiment is 0-3.3, and when the amplitude is too large, the signal distortion exists, namely the phenomenon that the main wave crest is cut off. In the embodiment, the change of the slope in 40 sampling points at the left and the right of the main wave peak in each cardiac cycle is obtained to obtain the change S of the left slopeleftAnd right slope change SrightTo SleftAnd SrightRespectively solving the variance to obtain a left variance VleftAnd right difference VrightExperiments show that when V isright<0.1 and Vleft/Vright>There is distortion in the pulse wave at 10. In this embodiment, when the distortion condition in the pulse wave signal of each channel is detected, the number of pulse cycles M is obtained by calculating the number of main wave peaks, and the number of pulse cycles N with distortion is calculated by the above methodvIf the distortion ratio s is greater than the threshold thsWhen the channel pulse wave is seriously distorted, the channel pulse wave needs to be eliminated,the calculation formula is shown below.
Figure BDA0002997070160000071
Secondly, it is judged whether each channel's pulse wave is weak. The difference between the average main wave peak value and the average starting-point value of the pulse wave is computed; when this difference is smaller than 0.2, the signal-to-noise ratio of the pulse wave signal is too low and the channel's pulse wave must be removed.
Then, it is judged whether each channel's pulse wave contains sudden changes, mainly using a sliding-window method. All main wave peak points and starting points of the channel pulse wave are found; a rectangular window of length winlen slides over the peak sequence with step size step. As the window slides, the height difference from the starting point to the main wave peak of each pulse period inside the window forms a set H, whose variance V_h is computed; if the ratio of V_h to the previous window's variance V_hpre exceeds 20, the window contains a sudden change and the channel's pulse wave must be removed.
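The sliding-window mutation check can be sketched as follows; the window length, step size and test sequences are assumptions for illustration, not the patent's parameters:

```python
import numpy as np

def has_mutation(beat_heights, winlen=8, step=4, ratio_th=20.0):
    """Sliding-window sudden-change check sketched from the channel
    screening step: compare the variance of per-beat peak heights in each
    window with the previous window's variance; a ratio above ratio_th
    flags a sudden change."""
    h = np.asarray(beat_heights, dtype=float)
    prev = None
    for start in range(0, len(h) - winlen + 1, step):
        v = np.var(h[start:start + winlen])
        if prev is not None and prev > 0 and v / prev > ratio_th:
            return True
        prev = v
    return False

# usage: a steady beat sequence passes, one with abrupt height jumps fails
steady = [1.0, 1.02, 0.98, 1.01] * 6
jumpy = steady[:12] + [1.0, 5.0, 0.2, 4.0, 0.1, 6.0] * 2
```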
And finally, judging the periodicity condition of the pulse waves of each channel. In the process of pulse signal acquisition, due to the conditions of human body activity and photoelectric interference, acquired pulse waves are disordered. In this embodiment, the autocorrelation function of the pulse wave is obtained, and it is detected whether there is an obvious peak value in the autocorrelation function in a normal pulse period, and if not, it is determined that the pulse wave of the channel has no obvious periodicity, and the pulse wave of the channel needs to be removed.
And S103, channel fusion is performed on the channel pulse waves finally retained in step S102. Because the channel pulse waves are signals received simultaneously by different photosensitive sensors, the screened pulse waves are fused linearly, i.e. the channel pulse waves are summed and averaged. In addition, to improve model precision, the channel-fused pulse wave is normalized to the interval 0-1.
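The linear fusion and normalization of step S103 can be sketched in a few lines (names are illustrative):

```python
import numpy as np

def fuse_and_normalize(channels):
    """Step S103 sketch: average the screened channel pulse waves
    sample-wise (linear fusion), then min-max rescale to [0, 1]."""
    fused = np.mean(np.asarray(channels, dtype=float), axis=0)
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo)

# usage with two hypothetical screened channels
out = fuse_and_normalize([[0.0, 2.0, 4.0], [2.0, 4.0, 6.0]])
```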
S2, extracting the pulse wave time-frequency characteristics including the respiratory information, wherein the extraction result of each characteristic is shown in figure 2, and the specific process of the characteristic extraction is as follows:
s201, calculating a main wave peak point of each pulse period of each normalized pulse wave obtained through preprocessing in the step S1;
s202, obtaining a main wave crest enveloping characteristic by carrying out cubic spline interpolation on the main wave crest point obtained in the step S201;
s203, sequentially traversing the main wave peak points obtained in the step S201, and solving a time difference between two adjacent points to obtain a pulse period change characteristic;
s204, sequentially traversing the main wave peak points obtained in the step S201, and obtaining the difference value of the amplitude between two adjacent points to divide the time difference between the two points to obtain the pulse amplitude change rate characteristic;
s205, passing each normalized pulse wave obtained through the preprocessing in the step S1 through an FIR low-pass filter with the cut-off frequency of 1Hz to obtain low-frequency characteristics;
s3, establishing credibility measure process for each pulse wave time-frequency characteristic as shown in figure 3; since respiration is a quasi-periodic signal, the cyclostationarity of a feature is taken as a reliable measure of the feature. Firstly, solving an autocorrelation function of the wave-time-frequency characteristics of each pulse wave, and taking all maximum value points of the autocorrelation function; then setting the minimum interval between maximum value points as TminMaximum interval of TmaxSequentially detecting the interval between each maximum value point by a sliding window method, wherein the calculation interval is less than TminOr greater than TmaxIf the corresponding confidence measure formula is as follows:
Figure BDA0002997070160000081
and S4, extracting a breathing characteristic diagram, wherein the process is shown in figure 4.
A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to units within a local receptive field, giving it a natural advantage for image feature extraction; a CNN can therefore extract a respiratory feature map from the pulse time-frequency features. Because the interference varies during pulse signal acquisition, the extracted time-frequency features carry respiratory information to differing degrees; to extract the respiratory features better, an attention mechanism is used to set a learning weight for each pulse wave time-frequency feature.
The respiratory feature map extraction network shown in fig. 4 comprises an attention mechanism module and a convolutional neural network module. The attention mechanism module takes the pulse wave time-frequency features containing respiratory information as input and uses the credibility measure R of each time-frequency feature as its learning weight: each feature is point-multiplied by its credibility measure R to obtain the weighted pulse wave time-frequency features. These weighted features are fed into the convolutional neural network module, which comprises, from input to output, convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2 and a fully connected layer; the fully connected layer outputs the feature expression of the respiratory information. Convolutional layer 1 processes the attention module's output with a 50 × 1 kernel and 16 channels. Pooling layer 1 applies max pooling with a 50 × 1 kernel to the output of convolutional layer 1. Convolutional layer 2 processes the output of pooling layer 1 with a 10 × 1 kernel and 32 channels. Pooling layer 2 processes the output of convolutional layer 2 with max pooling, kernel size 6 × 1, and a ReLU activation function, yielding the respiratory feature map as output.
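The convolution-ReLU-pooling building block used above can be illustrated on a 1-D signal; this toy sketch (tiny kernel and input, not the patent's 50 × 1 / 16-channel layers) shows the mechanics only:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1-D valid convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel)
                     for i in range(len(x) - k + 1)])

def maxpool1d(x, size):
    """Non-overlapping max pooling."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# usage: edge-detecting kernel, ReLU activation, then pooling
x = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 1.0])
feat = maxpool1d(np.maximum(0.0, conv1d_valid(x, np.array([1.0, -1.0]))), 2)
```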
S5, training a VGG regression model, and as shown in FIG. 5, extracting the respiration rate as follows:
s501, tiling and fusing all the respiratory feature maps obtained in the step S4 into a single-path feature, sending the single-path feature into a VGG model with the depth of 16, using the learning rate of 0.001 during initial training, and reducing the learning rate by 10 percent every time an epoch is passed when the loss is reduced to 10, wherein the optimizer is an SGD optimizer, and the loss uses MSE;
and S502, inputting the result obtained in the step S501 into a regression network to obtain the respiration rate, wherein the regression network enables the characteristic expression output in the step S501 to sequentially pass through a Dropout layer 1, a full connection layer 1, a ReLU layer 1, a Dropout layer 2, a full connection layer 2, a ReLU layer 2 and a full connection layer 3. The p of the Dropout layer is 0.5, the outputs of the full connection layer 1 and the full connection layer 2 are 2048, the output of the full connection layer 3 is the respiration rate, and the comparison of the result of the respiration rate extracted by the method and the result of the respiration rate extracted by the filtering extraction method and the envelope extraction method is shown in table 1 under the condition of 1 ten thousand orders of magnitude.
TABLE 1 Comparison of respiration rate extraction results

Method                                Average relative error   Root mean square error (RMSE)
Main wave envelope                    33.76%                   7.063
Low-pass filtering                    20.31%                   4.91
Feature fusion + VGG (training set)   14.3%                    2.8
Feature fusion + VGG (test set)       11.4%                    2.64
As described above, the present invention can be readily implemented and achieves the aforementioned technical effects.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (6)

1. A respiration rate extraction method based on pulse wave multi-feature fusion is characterized by comprising the following steps:
S1, collecting signals from the user's wrist and fingers with an array pulse wave collector to obtain a multi-channel pulse wave file, and performing data preprocessing on the multi-channel pulse wave file to obtain the multi-channel fused pulse wave, wherein the data preprocessing comprises time slicing, high-frequency noise removal, channel screening, multi-channel signal fusion, and normalization;
S2, extracting multiple pulse wave time-frequency features containing respiratory information from the multi-channel fused pulse wave obtained in step S1, wherein the process of step S2 is as follows:
S201, obtaining the set of main-wave peak points of the multi-channel fused pulse wave obtained from the preprocessing in step S1;
S202, obtaining the main-wave peak envelope feature by performing cubic spline interpolation on the main-wave peak point set;
S203, sequentially traversing the main-wave peak points and taking the time difference between two adjacent peak points to obtain the pulse period change feature Tc_i = p_i - p_{i-1}, where p_i is the time point corresponding to the i-th main-wave peak point of the pulse wave;
S204, sequentially traversing the main-wave peak points and dividing the amplitude difference between two adjacent points by the time difference between them to obtain the pulse amplitude change rate feature
(a_i - a_{i-1}) / (p_i - p_{i-1}),
where p_i is the time point corresponding to the i-th main-wave peak point of the pulse wave and a_i is the amplitude corresponding to the i-th main-wave peak point;
S205, passing the multi-channel fused pulse wave obtained from the preprocessing in step S1 through an FIR low-pass filter with cut-off frequency f2 to obtain the low-frequency feature of the pulse wave;
S3, establishing a confidence measure for each pulse wave time-frequency feature obtained in step S2;
S4, inputting the pulse wave time-frequency features obtained in step S2 and the confidence measures obtained in step S3 into a respiration feature map extraction network, wherein the respiration feature map extraction network comprises an attention mechanism module and a convolutional neural network module: the attention mechanism module sets a different learning weight for each pulse wave time-frequency feature obtained in step S2 according to the confidence measure obtained in step S3; the convolutional neural network module is a convolutional neural network composed of several convolutional, activation, and pooling layers, and extracts the respiration feature map from the weighted pulse wave time-frequency features produced by the attention mechanism module;
and S5, performing feature fusion on the respiration feature map obtained in the step S4, and inputting the result obtained by fusion into a VGG regression model to obtain the respiration rate of the user.
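As an illustration of steps S203 and S204 of claim 1, the two interval-based features can be computed directly from detected peak times and amplitudes. The peak values below are invented for the example, and the function names are not from the patent; the peak detector itself is outside this sketch.

```python
# Minimal sketch of the S203/S204 features, assuming the main-wave peaks
# have already been detected: peak times p (seconds) and amplitudes a.

def pulse_period_change(p):
    """Tc_i = p_i - p_{i-1}: interval between adjacent main-wave peaks."""
    return [p[i] - p[i - 1] for i in range(1, len(p))]

def pulse_amplitude_change_rate(p, a):
    """(a_i - a_{i-1}) / (p_i - p_{i-1}): amplitude slope between peaks."""
    return [(a[i] - a[i - 1]) / (p[i] - p[i - 1]) for i in range(1, len(p))]

# Example: four peaks whose spacing slowly lengthens and whose amplitude
# rises then falls, as respiratory modulation would produce.
p = [0.00, 0.80, 1.65, 2.55]
a = [1.00, 1.10, 1.05, 0.95]
Tc = pulse_period_change(p)             # ≈ [0.80, 0.85, 0.90]
Ac = pulse_amplitude_change_rate(p, a)
```

Both features are one element shorter than the peak list, since each value describes a pair of adjacent peaks.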
2. The respiration rate extraction method based on pulse wave multi-feature fusion according to claim 1, wherein the process of step S1 is as follows:
S101, collecting signals from the user's wrist and fingers with an array pulse wave collector to obtain a multi-channel pulse wave file, performing data conversion on the file to obtain N-channel pulse waves, time-slicing the N-channel pulse waves at intervals of t, and taking the N-channel pulse waves within the same time slice as one group;
S102, sequentially passing each group of N-channel pulse wave signals through an FIR low-pass filter with cut-off frequency fl to remove high-frequency noise, and performing channel screening on the denoised N-channel pulse waves: judging from the amplitude and slope changes of each channel's pulse wave whether it is distorted or weak, detecting abrupt changes by a sliding-window method, and detecting aperiodicity by autocorrelation calculation; if any of these conditions exists, the pulse wave of that channel is removed;
and S103, performing channel fusion on the pulse waves of the channels obtained in the step S102.
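One of the screening tests in step S102, the autocorrelation-based aperiodicity check, can be sketched as follows. The normalisation, the lag range, and the 0.5 threshold are illustrative assumptions, not values given in the patent.

```python
# Hedged sketch of the periodicity screen from S102: a periodic pulse
# channel shows a strong secondary autocorrelation peak; an aperiodic
# channel (e.g. an isolated transient) does not.
import math

def is_periodic(x, min_lag=1, threshold=0.5):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    def r(lag):  # autocorrelation at one lag, normalised by total variance
        return sum((x[i] - mean) * (x[i + lag] - mean)
                   for i in range(n - lag)) / var
    # best normalised autocorrelation over all candidate lags
    best = max(r(lag) for lag in range(min_lag, n // 2))
    return best >= threshold

# Example channels: a clean sinusoidal "pulse" vs. an isolated spike.
periodic = [math.sin(2 * math.pi * i / 20) for i in range(100)]
spike = [1.0] + [0.0] * 99   # single transient: no periodic structure
```

A channel failing this check (or the distortion, weak-signal, or abrupt-change checks) would be removed before the channel fusion of step S103.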
3. The respiration rate extraction method based on pulse wave multi-feature fusion according to claim 1, wherein the process of establishing a confidence measure for each feature in step S3 is as follows:
S301, computing the autocorrelation function of each of the main-wave peak envelope feature, the pulse period change feature, the pulse amplitude change rate feature, and the pulse wave low-frequency feature, and collecting the maximum points of each autocorrelation function;
S302, setting a minimum interval threshold Tmin and a maximum interval threshold Tmax between maximum points, sequentially checking the interval between adjacent maximum points by a sliding-window method, and counting the number n of intervals smaller than Tmin or larger than Tmax, the corresponding confidence measure being
Figure FDA0003446020070000021
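The counting step of S302 can be sketched as follows. The claim's mapping from the count n to the confidence measure is reproduced only as an image, so only the interval check is shown; the threshold values and maxima times in the example are illustrative.

```python
# Sketch of S302: given the times of the autocorrelation maxima, count the
# number n of adjacent-maximum intervals falling outside [Tmin, Tmax].
# The patent's formula mapping n to the confidence measure R is not
# reproduced here.

def count_bad_intervals(maxima_times, t_min, t_max):
    intervals = [maxima_times[i] - maxima_times[i - 1]
                 for i in range(1, len(maxima_times))]
    return sum(1 for d in intervals if d < t_min or d > t_max)

# Example: maxima at irregular spacings, with Tmin = 0.5 s, Tmax = 1.5 s.
times = [0.0, 0.9, 1.2, 3.5, 4.4]         # intervals: 0.9, 0.3, 2.3, 0.9
n = count_bad_intervals(times, 0.5, 1.5)  # 0.3 and 2.3 are out of range
```

A larger n indicates a less regular autocorrelation structure, so the resulting confidence measure (and the attention weight derived from it in claim 4) would be lower.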
4. The respiration rate extraction method based on pulse wave multi-feature fusion according to claim 1, wherein the respiration feature map extraction network comprises an attention mechanism module and a convolutional neural network module; the attention mechanism module takes the plurality of pulse wave time-frequency features containing respiratory information as input and, taking the confidence measure R of each time-frequency feature as its learning weight for the convolutional neural network, point-multiplies each time-frequency feature by its confidence measure R to obtain time-frequency features carrying learning weights; the convolutional neural network module passes the weighted time-frequency features sequentially through convolutional layer 1, pooling layer 1, convolutional layer 2, pooling layer 2, and a fully connected layer, and the fully connected layer outputs a feature expression of the respiratory information, i.e., the respiration feature map.
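The attention weighting of claim 4 amounts to scaling each time-frequency feature by its confidence measure R before convolution. A minimal sketch on plain lists follows; a real implementation would use framework tensors, and the feature values and function name are illustrative.

```python
# Sketch of the attention step in claim 4: each time-frequency feature is
# point-multiplied by its confidence measure R, so low-confidence features
# contribute less to the convolutional network that follows.

def apply_confidence_weights(features, confidences):
    """features: list of feature vectors; confidences: one R per feature."""
    return [[r * v for v in feat] for feat, r in zip(features, confidences)]

# Example: three features (envelope, period change, amplitude change rate)
# with confidence measures 1.0, 0.5 and 0.25.
feats = [[2.0, 4.0], [2.0, 4.0], [2.0, 4.0]]
weighted = apply_confidence_weights(feats, [1.0, 0.5, 0.25])
# → [[2.0, 4.0], [1.0, 2.0], [0.5, 1.0]]
```

Identical inputs thus enter the convolutional layers at different scales, which is how the network is steered toward the more trustworthy features.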
5. The respiration rate extraction method based on pulse wave multi-feature fusion according to claim 1, wherein in step S5, all the respiration feature maps are fused into a single feature and fed into the VGG regression model; the VGG regression model uses a dynamic learning rate, a stochastic gradient descent (SGD) optimizer, and mean square error (MSE) as its loss function.
6. The respiration rate extraction method based on pulse wave multi-feature fusion according to claim 1, wherein, from input to output, the VGG regression model comprises a Dropout layer, a fully connected layer, a ReLU layer, and a further fully connected layer, and the final output is the respiration rate of the user.
CN202110344562.8A 2021-03-29 2021-03-29 Pulse wave multi-feature fusion-based respiration rate extraction method Expired - Fee Related CN112998690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110344562.8A CN112998690B (en) 2021-03-29 2021-03-29 Pulse wave multi-feature fusion-based respiration rate extraction method

Publications (2)

Publication Number Publication Date
CN112998690A CN112998690A (en) 2021-06-22
CN112998690B true CN112998690B (en) 2022-05-24

Family

ID=76409491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110344562.8A Expired - Fee Related CN112998690B (en) 2021-03-29 2021-03-29 Pulse wave multi-feature fusion-based respiration rate extraction method

Country Status (1)

Country Link
CN (1) CN112998690B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113892939A (en) * 2021-09-26 2022-01-07 燕山大学 Method for monitoring respiratory frequency of human body in resting state based on multi-feature fusion
CN114052675B (en) * 2021-11-18 2023-08-22 广东电网有限责任公司 Pulse abnormality judging method and system based on fused attention mechanism
CN115624322B (en) * 2022-11-17 2023-04-25 北京科技大学 Non-contact physiological signal detection method and system based on efficient space-time modeling

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013103072A (en) * 2011-11-16 2013-05-30 Ntt Docomo Inc Device, system, method and program for mental state estimation and mobile terminal
CN104665768B (en) * 2013-10-03 2019-07-23 塔塔咨询服务有限公司 The monitoring of physiological parameter
CN106073783B (en) * 2016-06-23 2024-02-20 桂林航天工业学院 Method for extracting respiration rate from photoplethysmography wave
CN106333648A (en) * 2016-09-18 2017-01-18 京东方科技集团股份有限公司 Sleep asphyxia monitoring method based on wearable device and wearable device
CN106983501A (en) * 2017-03-29 2017-07-28 汪欣 Pulse wave and respiratory wave diagnostic device and method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220524