CN114913587B - Non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning - Google Patents

Non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning

Info

Publication number
CN114913587B
CN114913587B CN202210669910.3A CN202210669910A
Authority
CN
China
Prior art keywords
heart rate
uncertainty
data
predicted
color difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210669910.3A
Other languages
Chinese (zh)
Other versions
CN114913587A (en)
Inventor
宋仁成
王晗
夏豪杰
成娟
李畅
陈勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202210669910.3A priority Critical patent/CN114913587B/en
Publication of CN114913587A publication Critical patent/CN114913587A/en
Application granted granted Critical
Publication of CN114913587B publication Critical patent/CN114913587B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02416Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Cardiology (AREA)
  • Physiology (AREA)
  • Computational Linguistics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Fuzzy Systems (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Algebra (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)

Abstract

The invention discloses a non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning, which comprises the following steps: 1, processing video to obtain color difference signals and constructing data sets; 2, applying the Monte Carlo dropout method to non-contact heart rate measurement and constructing a heart rate uncertainty quantization network based on it; 3, determining a loss function and training the network to obtain an optimal model; 4, quantifying the uncertainty of the predicted heart rate; and 5, evaluating and calibrating the obtained uncertainty. The method directly obtains the prediction uncertainty alongside the predicted heart rate, and this uncertainty is highly correlated with the true error, so the predicted heart rate can be evaluated quantitatively even when no reference value is available, providing a solution for the practical application of video-based long-term physiological parameter monitoring.

Description

Non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning
Technical Field
The invention relates to the technical field of non-contact physiological signal detection and analysis, in particular to a non-contact heart rate measurement uncertainty quantification method based on Bayesian deep learning.
Background
Heart rate is an important indicator of human health. Existing heart rate measurement methods are either contact or non-contact: contact methods measure accurately, but relying on sensors attached to the subject's skin may cause discomfort and is not suitable for long-term monitoring. Non-contact heart rate measurement obtains heart-rate-related information by recording, with a camera, the changes in facial skin color caused by the heartbeat. In recent years, various measurement methods based on remote photoplethysmography (rPPG) have been proposed, such as conventional methods based on blind source separation and on skin-reflection models. With the wide application of deep learning in many fields, numerous deep-learning-based rPPG measurement methods have also been proposed. Although deep learning achieves better results than conventional methods, it has limitations: deep learning methods often lack uncertainty estimates and blindly assume that their results are reliable. This is not always the case, especially in safety-critical, high-risk applications such as medical diagnosis and autonomous driving, where relying entirely on a deep model for decision making can lead to catastrophic results. Uncertainty quantization has likewise received little attention in work on the rPPG problem. Estimating uncertainty mainly means modeling the data uncertainty and the model uncertainty separately: data uncertainty is the noise inherent in the captured data and cannot be eliminated by adding training data, whereas model uncertainty stems from the model's insufficient knowledge, is uncertainty about the model parameters, and can be reduced or even eliminated given enough data.
Because the rPPG signal is easily disturbed by noise, and the physiological parameters derived from it are important for the validity of subsequent diagnosis and treatment, it is necessary to assess the quality of the obtained physiological indices. Conventional evaluation of rPPG waveforms and physiological indices generally relies on comparison with a reference value, but in practical scenarios such as long-term monitoring a contact PPG reference signal is usually unavailable. Existing reference-free evaluation research is still in its infancy, relies on characteristics of the predicted waveform, and the related methods are only qualitative. Quantitatively evaluating the obtained prediction results without relying on a reference signal is therefore one of the difficulties that must be solved to realize long-term video-based rPPG physiological monitoring.
Disclosure of Invention
The invention aims to overcome the above technical defects and provides a non-contact heart rate measurement uncertainty quantification method based on Bayesian deep learning, so that the uncertainty of the heart rate prediction can be obtained directly while accurate heart rate values are predicted. In the absence of a reference signal, this uncertainty is then used to quantitatively evaluate the quality of the rPPG prediction results, providing a solution for the practical application of video-based rPPG long-term physiological monitoring.
The invention adopts the following scheme for solving the technical problems:
the invention discloses a non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning, which is characterized by comprising the following steps of:
step one, data generation:
Step 1.1, acquiring J frames of face video images, defining the left and right cheeks of the face in the images as the region of interest, and averaging the pixels of each channel within the region of interest in the J frames, thereby obtaining the R-channel, G-channel and B-channel signals of the J frames of face video images; a non-contact heart rate measurement algorithm is applied to the R-channel, G-channel and B-channel signals to extract a color difference signal X of length J, which serves as the network input;
Step 1.2, obtaining a reference heart rate signal of length J corresponding to the J frames of face video images and calculating its dominant frequency, thereby obtaining the reference heart rate Y; the color difference signal X and the reference heart rate Y form one sample, and the plurality of samples obtained in this way are divided into a training data set D, a test data set S and a calibration data set C;
Step two, constructing an uncertainty quantization network U(W) for heart rate measurement based on the Monte Carlo dropout method, the network comprising an encoder, a decoder, a fully connected layer and two output channels;
Step 2.1, the encoder and the decoder each have P layers, where the p-th layer of the encoder consists of a one-dimensional convolution layer, a PReLU activation function and a dropout layer, and the p-th layer of the decoder consists of a one-dimensional deconvolution layer, a PReLU activation function and a dropout layer; the output of the p-th layer of the encoder is connected to the input of the p-th layer of the decoder by a skip connection, p∈[1,P];
Step 2.2, the i-th color difference signal X_i in the training data set D is passed through the uncertainty quantization network U(W), which outputs the predicted heart rate ŷ_i of the i-th color difference signal X_i and its data uncertainty σ_i, i∈[1,N], where N represents the number of training samples;
step three, determining a loss function:
A loss function L_U is established by formula (1):
L_U = α·L_NLL + β·L_1   (1)
In formula (1), L_NLL denotes the negative log-likelihood loss, obtained by formula (2) and used to learn the data uncertainty σ_i; L_1 denotes the mean absolute error loss, obtained by formula (3); α is the weight of the negative log-likelihood loss and β is the weight of the mean absolute error loss;
In formulas (2) and (3), y_i and ŷ_i respectively denote the reference heart rate value and the predicted heart rate of the i-th color difference signal X_i, and ‖·‖_1 denotes the L_1 norm;
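The patent's images for formulas (2) and (3) are not reproduced in this text. A standard heteroscedastic Gaussian negative log-likelihood and mean absolute error, consistent with the surrounding description but assumed rather than quoted from the patent, would take the following form, where σ_i² is the predicted variance representing the data uncertainty:

```latex
% Assumed standard forms of formulas (2) and (3); the patent's own formula images are not reproduced here.
L_{NLL} = \frac{1}{N}\sum_{i=1}^{N}\left[\frac{(y_i-\hat{y}_i)^2}{2\sigma_i^2}+\frac{1}{2}\ln\sigma_i^2\right] \tag{2}
L_{1} = \frac{1}{N}\sum_{i=1}^{N}\left\lVert y_i-\hat{y}_i\right\rVert_1 \tag{3}
```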
The uncertainty quantization network U(W) is trained on the i-th color difference signal X_i, and the loss function L_U is used to continuously optimize the network parameters W, so that the network learns the heart rate and its data uncertainty, yielding the optimal heart rate prediction model U(W*);
Step four, the m-th test color difference signal X'_m in the test data set S is input into the optimal heart rate prediction model U(W*) and predicted K times with the dropout layers active, thereby obtaining the final predicted heart rate and the uncertainty of the predicted heart rate, the uncertainty comprising the total uncertainty, the data uncertainty and the model uncertainty:
Step 4.1, the m-th test color difference signal X'_m is input into the optimal heart rate prediction model U(W*) for the k-th time, yielding the k-th predicted heart rate ŷ_{m,k} and data uncertainty σ_{m,k}; the m-th predicted heart rate ŷ'_m corresponding to X'_m and its model uncertainty σ_m^(E) are then calculated by formula (4) and formula (5), respectively; the data uncertainty σ_m^(A) of the m-th predicted heart rate is calculated by formula (6); and the total uncertainty σ_m^(T) of the m-th predicted heart rate is obtained by formula (7);
In formulas (4)-(7), m∈[1,M], M represents the number of test samples, k∈[1,K], and K represents the number of repeated predictions;
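The patent's images for formulas (4)-(7) are likewise not reproduced here. The standard Monte Carlo dropout decomposition, which is consistent with the description above but is an assumed reconstruction, reads as follows, where σ_{m,k} denotes the data uncertainty (predicted variance) from the k-th stochastic pass:

```latex
% Assumed standard Monte Carlo dropout forms of formulas (4)-(7); not quoted from the patent.
\hat{y}'_m = \frac{1}{K}\sum_{k=1}^{K}\hat{y}_{m,k} \tag{4}
\sigma_m^{(E)} = \frac{1}{K}\sum_{k=1}^{K}\left(\hat{y}_{m,k}-\hat{y}'_m\right)^{2} \tag{5}
\sigma_m^{(A)} = \frac{1}{K}\sum_{k=1}^{K}\sigma_{m,k} \tag{6}
\sigma_m^{(T)} = \sigma_m^{(A)} + \sigma_m^{(E)} \tag{7}
```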
fifthly, evaluating and calibrating uncertainty of the predicted heart rate:
Step 5.1, obtaining, according to step four, the predicted heart rates {ŷ'_m}, the total uncertainties {σ_m^(T)}, the data uncertainties {σ_m^(A)} and the model uncertainties {σ_m^(E)} of all samples in the test data set S, and evaluating the uncertainty:
Step 5.1.1, calculating the true absolute errors {ε_m | m∈[1,M]} between the predicted heart rates {ŷ'_m | m∈[1,M]} of all samples in the test data set S and their reference heart rates {Y'_m | m∈[1,M]}, where ε_m represents the true absolute error between the predicted heart rate ŷ'_m and the reference heart rate Y'_m corresponding to the m-th test color difference signal X'_m;
Calculating the correlation coefficient R^(T) between the total uncertainties {σ_m^(T)} of all samples in the test data set S and the true absolute errors {ε_m | m∈[1,M]};
Calculating the correlation coefficient R^(A) between the data uncertainties {σ_m^(A)} and the true absolute errors {ε_m | m∈[1,M]};
Calculating the correlation coefficient R^(E) between the model uncertainties {σ_m^(E)} and the true absolute errors {ε_m | m∈[1,M]};
Step 5.1.2, obtaining, from the predicted heart rates {ŷ'_m}, the total uncertainties {σ_m^(T)}, the data uncertainties {σ_m^(A)} and the model uncertainties {σ_m^(E)} of all samples in the test data set S, three corresponding heart rate prediction confidences and the heart rate prediction confidence intervals corresponding to each of the three confidences;
Calculating the true probabilities that the reference heart rates {Y'_m | m∈[1,M]} fall within the respective corresponding heart rate prediction confidence intervals;
Using a reliability diagram to evaluate whether each of the three heart rate prediction confidences matches its corresponding true probability; if they match, the total uncertainties {σ_m^(T)}, data uncertainties {σ_m^(A)} and model uncertainties {σ_m^(E)} of the predicted heart rates of all samples in the test data set S are taken as the final uncertainty; otherwise, step 5.2 is executed;
step 5.2, calibrating the prediction uncertainty obtained by the test dataset S by the calibration dataset C:
Step 5.2.1, inputting the calibration data set C into the optimal heart rate prediction model U(W*) and, following the procedure of step four, obtaining the predicted heart rates {ŷ_l}, the total uncertainties {σ_l^(T)}, the data uncertainties {σ_l^(A)} and the model uncertainties {σ_l^(E)} of all samples in the calibration data set, where ŷ_l represents the l-th predicted heart rate corresponding to the l-th calibration color difference signal X_l, σ_l^(T) represents the total uncertainty corresponding to the l-th predicted heart rate ŷ_l, σ_l^(A) represents its corresponding data uncertainty, σ_l^(E) represents its corresponding model uncertainty, and L represents the number of calibration samples;
Step 5.2.2, obtaining, by formula (8), formula (9) and formula (10) respectively, the calibration coefficient λ_T of the total uncertainty, the calibration coefficient λ_A of the data uncertainty and the calibration coefficient λ_E of the model uncertainty of the test data set S;
Step 5.2.3, obtaining, by formula (11), formula (12) and formula (13) respectively, the calibrated data uncertainty, model uncertainty and total uncertainty of the m-th predicted heart rate ŷ'_m in the test data set S.
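The patent's images for formulas (8)-(13) are not reproduced in this text. One plausible multiplicative calibration scheme consistent with the wording above, offered purely as an assumed sketch and not as the patent's own formulas, uses the ratio between the observed calibration-set errors ε_l and the corresponding predicted uncertainties, and then rescales the test-set uncertainties:

```latex
% Assumed multiplicative calibration scheme for formulas (8)-(13); not quoted from the patent.
\lambda_T = \frac{\sum_{l=1}^{L}\varepsilon_l}{\sum_{l=1}^{L}\sigma_l^{(T)}},\qquad
\lambda_A = \frac{\sum_{l=1}^{L}\varepsilon_l}{\sum_{l=1}^{L}\sigma_l^{(A)}},\qquad
\lambda_E = \frac{\sum_{l=1}^{L}\varepsilon_l}{\sum_{l=1}^{L}\sigma_l^{(E)}} \tag{8-10}
\tilde{\sigma}_m^{(A)} = \lambda_A\,\sigma_m^{(A)},\qquad
\tilde{\sigma}_m^{(E)} = \lambda_E\,\sigma_m^{(E)},\qquad
\tilde{\sigma}_m^{(T)} = \lambda_T\,\sigma_m^{(T)} \tag{11-13}
```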
Compared with the prior art, the invention has the beneficial effects that:
1. The invention provides an uncertainty quantization method that uses Monte Carlo dropout for non-contact heart rate prediction; it directly obtains the uncertainty of the predicted heart rate, namely the total uncertainty, the data uncertainty and the model uncertainty, alongside the predicted heart rate itself, allowing the quality of the predicted heart rate to be evaluated quantitatively and providing a solution for the practical application of video-based rPPG long-term physiological monitoring;
2. The invention uses the correlation coefficient and the reliability diagram to evaluate the quality of the uncertainty, and calibrates the uncertainty to improve its quality;
3. The uncertainty obtained in the invention is highly correlated with the true absolute error of the predicted heart rate, so the uncertainty quantization overcomes the lack of reference signals in practical application scenarios and provides a method for reference-free evaluation of the predicted heart rate quality.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a diagram of an overall framework for uncertainty quantization based on the Monte Carlo dropout method of the present invention;
FIG. 3 is a diagram of a network architecture of the present invention;
FIG. 4a is a graph of predicted heart rate versus total uncertainty versus reference heart rate deviation in accordance with the present invention;
FIG. 4b is a graph of reference heart rate versus total uncertainty in accordance with the present invention;
FIG. 4c is a graph of reference heart rate versus data uncertainty in accordance with the present invention;
FIG. 4d is a graph of reference heart rate versus model uncertainty in accordance with the present invention;
FIG. 5 is a graph showing the trend of true absolute error and uncertainty of heart rate prediction according to the present invention.
Detailed Description
In this embodiment, the non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning inputs a noisy color difference signal X into the uncertainty quantization network U(W), and the predicted heart rate and the data uncertainty σ are both outputs of U(W); the Monte Carlo dropout method, the most widely used of the Bayesian approximation methods, is applied to obtain the uncertainty. The specific flow is shown in fig. 1 and comprises the following steps:
step one, data generation:
Step 1.1, the invention uses the VIPL-HR database as the training set, and the MAHNOB-HCI database as the test data set and the calibration data set; J=300 frames of face video images are acquired, the left and right cheeks of the face in the images are defined as the region of interest, and the pixels of each channel within the region of interest are averaged over the J=300 frames, thereby obtaining the R-channel, G-channel and B-channel signals of the J=300 frames of face video images; a non-contact heart rate measurement algorithm is applied to the R-channel, G-channel and B-channel signals to obtain a color difference signal X of length J, which serves as the network input; the non-contact heart rate measurement algorithm in this example is the CHROM algorithm, a sketch of which is given below;
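The following minimal Python sketch illustrates how a CHROM-style color difference signal can be derived from per-frame ROI-averaged RGB values. The projection follows the published CHROM method; the frame rate, detrending, band-pass filtering and windowing details are not specified in the text above and are therefore assumptions.

```python
import numpy as np

def chrom_signal(rgb_means):
    """Compute a CHROM-style chrominance (color difference) signal from
    per-frame mean RGB values of the facial region of interest.

    rgb_means: array of shape (J, 3) with the spatially averaged
               R, G, B values of the ROI for each of the J frames.
    Returns a 1-D color difference signal of length J.
    """
    rgb = np.asarray(rgb_means, dtype=float)
    # Normalize each channel by its temporal mean to suppress the illumination level.
    rgb_n = rgb / rgb.mean(axis=0, keepdims=True)
    r, g, b = rgb_n[:, 0], rgb_n[:, 1], rgb_n[:, 2]
    # Standard CHROM projection axes.
    xs = 3.0 * r - 2.0 * g
    ys = 1.5 * r + g - 1.5 * b
    # Combine with the standard-deviation ratio to cancel specular/motion components.
    alpha = xs.std() / (ys.std() + 1e-8)
    return xs - alpha * ys
```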
Step 1.2, a reference heart rate signal of length J=300 corresponding to the J=300 frames of face video images is obtained, and its dominant frequency is calculated, thereby obtaining the reference heart rate Y; the color difference signal X and the reference heart rate Y form one sample, and the samples obtained in this way are divided into a training data set D, a test data set S and a calibration data set C (a sketch of the dominant-frequency computation follows);
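A minimal sketch of estimating the reference heart rate Y as the dominant frequency of the reference pulse signal. The restriction of the spectral search to a physiological band (here 0.7-4 Hz, i.e. 42-240 bpm) and the frame rate of 30 fps are assumptions, not values stated in the text above.

```python
import numpy as np

def dominant_frequency_bpm(signal, fps=30.0, f_min=0.7, f_max=4.0):
    """Estimate the reference heart rate (beats per minute) as the dominant
    frequency of a reference pulse signal of length J."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= f_min) & (freqs <= f_max)   # keep only physiological frequencies
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak                            # Hz -> bpm
```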
The overall framework for quantifying the non-contact heart rate prediction uncertainty is shown in fig. 2;
Step two, constructing an uncertainty quantization network U(W) for heart rate measurement based on the Monte Carlo dropout method; the network structure is shown in fig. 3 and comprises an encoder, a decoder, a fully connected layer and two output channels;
Step 2.1, the encoder and the decoder each have P layers, where the p-th layer of the encoder consists of a one-dimensional convolution layer, a PReLU activation function and a dropout layer, and the p-th layer of the decoder consists of a one-dimensional deconvolution layer, a PReLU activation function and a dropout layer; the output of the p-th layer of the encoder is connected to the input of the p-th layer of the decoder by a skip connection, p∈[1,P];
Step 2.2, the i-th color difference signal X_i in the training data set D is passed through the uncertainty quantization network U(W), which outputs the predicted heart rate ŷ_i of the i-th color difference signal X_i and its data uncertainty σ_i, i∈[1,N], where N represents the number of training samples;
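A minimal PyTorch sketch of a network with the components named above: a 1-D encoder-decoder with dropout in every layer, skip connections, a fully connected layer and two output channels (heart rate and data uncertainty). The layer count, channel widths, kernel sizes, dropout rate and the log-variance parameterization of the data uncertainty are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class UncertaintyRPPGNet(nn.Module):
    """Sketch of the uncertainty quantization network U(W)."""

    def __init__(self, depth=4, channels=16, p_drop=0.2, length=300):
        super().__init__()
        self.enc = nn.ModuleList()
        self.dec = nn.ModuleList()
        c_in = 1
        for _ in range(depth):
            self.enc.append(nn.Sequential(
                nn.Conv1d(c_in, channels, kernel_size=3, padding=1),
                nn.PReLU(),
                nn.Dropout(p_drop)))
            c_in = channels
        for _ in range(depth):
            self.dec.append(nn.Sequential(
                nn.ConvTranspose1d(channels, channels, kernel_size=3, padding=1),
                nn.PReLU(),
                nn.Dropout(p_drop)))
        # Fully connected layer producing two output channels.
        self.head = nn.Linear(channels * length, 2)

    def forward(self, x):
        # x: (batch, 1, J) color difference signal.
        skips = []
        for layer in self.enc:
            x = layer(x)
            skips.append(x)
        for p, layer in enumerate(self.dec):
            # Skip connection: output of the p-th encoder layer feeds the p-th decoder layer input.
            x = layer(x + skips[p])
        out = self.head(x.flatten(1))
        hr = out[:, 0]        # predicted heart rate
        log_var = out[:, 1]   # log-variance (data uncertainty), kept positive via exp
        return hr, log_var
```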
step three, determining a loss function:
A loss function L_U is established by formula (1):
L_U = α·L_NLL + β·L_1   (1)
In formula (1), L_NLL denotes the negative log-likelihood loss, obtained by formula (2) and used to learn the data uncertainty σ_i; L_1 denotes the mean absolute error loss, obtained by formula (3); α is the weight of the negative log-likelihood loss and β is the weight of the mean absolute error loss;
In formulas (2) and (3), y_i and ŷ_i respectively denote the reference heart rate value and the predicted heart rate of the i-th color difference signal X_i, and ‖·‖_1 denotes the L_1 norm; the data uncertainty σ_i is learned without requiring any label;
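A sketch of the combined loss L_U = α·L_NLL + β·L_1 as it could be implemented for the network above. The exact analytic forms of formulas (2) and (3) are not reproduced in this text, so the standard heteroscedastic Gaussian negative log-likelihood and mean absolute error used here are assumptions, and α = β = 1 are placeholder weights.

```python
import torch

def uncertainty_loss(hr_pred, log_var, hr_ref, alpha=1.0, beta=1.0):
    """Combined loss: heteroscedastic NLL (learns the data uncertainty without
    labels) plus mean absolute error on the heart rate."""
    # Negative log-likelihood of a Gaussian with learned variance (assumed form of formula (2)).
    l_nll = (0.5 * torch.exp(-log_var) * (hr_ref - hr_pred) ** 2
             + 0.5 * log_var).mean()
    # Mean absolute error (assumed form of formula (3)).
    l_mae = torch.abs(hr_ref - hr_pred).mean()
    return alpha * l_nll + beta * l_mae
```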
The network is trained with its dropout layers active: based on the i-th color difference signal X_i, the uncertainty quantization network U(W) is learned and the loss function L_U is used to continuously optimize the network parameters W, with the data uncertainty modeled inside U(W) by a Gaussian distribution, so that the network learns the heart rate and its data uncertainty and the optimal heart rate prediction model U(W*) is obtained;
Step four, the model uncertainty is obtained by performing variational inference on the posterior distribution of the model parameters; because exact variational inference is intractable, Monte Carlo dropout is used to perform approximate variational inference and obtain an approximate posterior over the model parameters. This procedure is equivalent to inputting the m-th test color difference signal X'_m in the test data set S into the optimal heart rate prediction model U(W*) and predicting K times with the dropout layers active, thereby obtaining the final predicted heart rate and the uncertainty of the predicted heart rate, the uncertainty comprising the total uncertainty, the data uncertainty and the model uncertainty:
Step 4.1, the m-th test color difference signal X'_m is input into the optimal heart rate prediction model U(W*) for the k-th time, yielding the k-th predicted heart rate ŷ_{m,k} and data uncertainty σ_{m,k}; the m-th predicted heart rate ŷ'_m corresponding to X'_m and its model uncertainty σ_m^(E) are then calculated by formula (4) and formula (5), respectively; the data uncertainty σ_m^(A) of the m-th predicted heart rate is calculated by formula (6); and the total uncertainty σ_m^(T) of the m-th predicted heart rate is obtained by formula (7);
In formulas (4)-(7), m∈[1,M], M represents the number of test samples, k∈[1,K], and K represents the number of repeated predictions;
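A minimal sketch of step four with the assumed standard Monte Carlo dropout estimators: K stochastic forward passes with dropout kept active at test time, the mean prediction as the final heart rate, the variance across passes as the model (epistemic) uncertainty, the mean predicted variance as the data (aleatoric) uncertainty, and their sum as the total uncertainty. K = 50 is an illustrative value, not one stated in the text.

```python
import torch

def mc_dropout_predict(model, x, k=50):
    """K stochastic passes of the trained model on input x (Monte Carlo dropout)."""
    model.train()  # keep dropout sampling; in practice freeze any BatchNorm layers if present
    hrs, data_vars = [], []
    with torch.no_grad():
        for _ in range(k):
            hr, log_var = model(x)
            hrs.append(hr)
            data_vars.append(torch.exp(log_var))
    hrs = torch.stack(hrs)               # (K, batch)
    data_vars = torch.stack(data_vars)   # (K, batch)
    hr_mean = hrs.mean(dim=0)            # final predicted heart rate (assumed formula (4))
    model_unc = hrs.var(dim=0)           # model uncertainty (assumed formula (5))
    data_unc = data_vars.mean(dim=0)     # data uncertainty (assumed formula (6))
    total_unc = model_unc + data_unc     # total uncertainty (assumed formula (7))
    return hr_mean, data_unc, model_unc, total_unc
```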
Step five, evaluating and calibrating the uncertainty of the predicted heart rate:
Step 5.1, obtaining, according to step four, the predicted heart rates {ŷ'_m}, the total uncertainties {σ_m^(T)}, the data uncertainties {σ_m^(A)} and the model uncertainties {σ_m^(E)} of all samples in the test data set S, and evaluating the uncertainty:
Step 5.1.1, calculating the true absolute errors {ε_m | m∈[1,M]} between the predicted heart rates {ŷ'_m | m∈[1,M]} of all samples in the test data set S and their reference heart rates {Y'_m | m∈[1,M]}, where ε_m represents the true absolute error between the predicted heart rate ŷ'_m and the reference heart rate Y'_m corresponding to the m-th test color difference signal X'_m;
Calculating the correlation coefficient R^(T) between the total uncertainties {σ_m^(T)} of all samples in the test data set S and the true absolute errors {ε_m | m∈[1,M]};
Calculating the correlation coefficient R^(A) between the data uncertainties {σ_m^(A)} and the true absolute errors {ε_m | m∈[1,M]};
Calculating the correlation coefficient R^(E) between the model uncertainties {σ_m^(E)} and the true absolute errors {ε_m | m∈[1,M]}; a sketch of this computation is given below;
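A minimal sketch of step 5.1.1. The use of the Pearson correlation coefficient is an assumption; the text above only specifies a correlation coefficient between each uncertainty estimate and the true absolute error.

```python
import numpy as np

def uncertainty_error_correlation(pred_hr, ref_hr, total_unc, data_unc, model_unc):
    """Correlation coefficients R^(T), R^(A), R^(E) between each uncertainty
    estimate and the true absolute error over the test set."""
    eps = np.abs(np.asarray(pred_hr) - np.asarray(ref_hr))   # true absolute errors
    r_t = np.corrcoef(total_unc, eps)[0, 1]
    r_a = np.corrcoef(data_unc, eps)[0, 1]
    r_e = np.corrcoef(model_unc, eps)[0, 1]
    return r_t, r_a, r_e
```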
Step 5.1.2 predicting heart rate from all samples in the test dataset SIs the total uncertainty of (2)Data uncertainty +.>Model uncertainty->Obtaining three corresponding heart rate prediction confidence degrees and heart rate prediction confidence intervals corresponding to the three heart rate prediction confidence degrees;
calculate the reference heart rate { Y' m |m∈[1,M]Fall on each ofThe true probability in the corresponding heart rate prediction confidence interval;
using a credibility map to evaluate whether the three heart rate prediction confidence coefficients are matched with the true probabilities corresponding to the confidence coefficients respectively; if they match, it means that all samples in the test dataset S predict heart rateIs the total uncertainty of (2)Data uncertainty +.>Model uncertainty->The final uncertainty is the final uncertainty; otherwise, executing the step 5.2;
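A sketch of step 5.1.2: for each confidence level, the Gaussian confidence interval implied by the predicted heart rate and its uncertainty is built, and the empirical probability that the reference heart rate falls inside it is measured; plotting confidence against empirical probability gives the reliability diagram, where a well-calibrated model lies on the diagonal. Treating each uncertainty as a Gaussian variance and the particular set of confidence levels are assumptions.

```python
import numpy as np
from scipy import stats

def reliability_curve(pred_hr, ref_hr, unc, confidences=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Empirical coverage of Gaussian confidence intervals built from one
    uncertainty estimate (total, data or model)."""
    pred_hr, ref_hr, unc = map(np.asarray, (pred_hr, ref_hr, unc))
    std = np.sqrt(unc)                            # interpret the uncertainty as a variance
    empirical = []
    for c in confidences:
        z = stats.norm.ppf(0.5 + c / 2.0)         # half-width multiplier for confidence c
        inside = np.abs(ref_hr - pred_hr) <= z * std
        empirical.append(inside.mean())           # true probability of falling in the interval
    return np.array(confidences), np.array(empirical)
```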
step 5.2, calibrating the prediction uncertainty obtained by the test dataset S by the calibration dataset C:
Step 5.2.1, inputting the calibration data set C into the optimal heart rate prediction model U(W*) and, following the procedure of step four, obtaining the predicted heart rates {ŷ_l}, the total uncertainties {σ_l^(T)}, the data uncertainties {σ_l^(A)} and the model uncertainties {σ_l^(E)} of all samples in the calibration data set, where ŷ_l represents the l-th predicted heart rate corresponding to the l-th calibration color difference signal X_l, σ_l^(T) represents the total uncertainty corresponding to the l-th predicted heart rate ŷ_l, σ_l^(A) represents its corresponding data uncertainty, σ_l^(E) represents its corresponding model uncertainty, and L represents the number of calibration samples;
Step 5.2.2, obtaining, by formula (8), formula (9) and formula (10) respectively, the calibration coefficient λ_T of the total uncertainty, the calibration coefficient λ_A of the data uncertainty and the calibration coefficient λ_E of the model uncertainty of the test data set S;
Step 5.2.3, obtaining, by formula (11), formula (12) and formula (13) respectively, the calibrated data uncertainty, model uncertainty and total uncertainty of the m-th predicted heart rate ŷ'_m in the test data set S;
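A sketch of step 5.2 under a simple assumed scheme: each calibration coefficient λ is estimated on the calibration data set C as the ratio between the observed absolute errors and the corresponding predicted uncertainties, and the test-set uncertainties are then rescaled by it. The exact formulas (8)-(13) are not reproduced in this text, so this ratio-of-means scheme is an assumption, not the patent's own formulation.

```python
import numpy as np

def calibration_coefficient(pred_hr, ref_hr, unc):
    """One multiplicative calibration coefficient per uncertainty type,
    estimated from the calibration data set."""
    eps = np.abs(np.asarray(pred_hr) - np.asarray(ref_hr))
    return eps.mean() / (np.asarray(unc).mean() + 1e-8)

# Usage sketch (assumed multiplicative calibration of the test-set uncertainties):
# lam_t = calibration_coefficient(hr_cal, ref_cal, total_unc_cal)
# lam_a = calibration_coefficient(hr_cal, ref_cal, data_unc_cal)
# lam_e = calibration_coefficient(hr_cal, ref_cal, model_unc_cal)
# total_unc_test_calibrated = lam_t * total_unc_test
# data_unc_test_calibrated  = lam_a * data_unc_test
# model_unc_test_calibrated = lam_e * model_unc_test
```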
The results are shown in FIGS. 4a-4d. FIG. 4a shows the relationship between the predicted heart rate and the reference heart rate: the farther a sample point lies from the diagonal, the larger the heart rate prediction error, and the points with large error closely match the points with high uncertainty. FIG. 4b shows the reference heart rate versus the total uncertainty; samples with higher total uncertainty are concentrated mainly in the ranges of reference heart rate > 80 and reference heart rate < 55. FIG. 4c shows the reference heart rate versus the data uncertainty, and FIG. 4d shows the reference heart rate versus the model uncertainty. The total uncertainty in FIG. 4b derives mainly from the data uncertainty in FIG. 4c, indicating that noise in the data is the primary source of prediction error;
FIG. 5 shows the correlation between the total uncertainty {σ_m^(T) | m∈[1,M]} and the true absolute error {ε_m | m∈[1,M]} over all samples of the test data set constructed from MAHNOB-HCI; it can be seen that the total uncertainty is highly correlated with the true error, and higher uncertainty is obtained where the true error is higher.

Claims (1)

1. A non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning is characterized by comprising the following steps:
step one, data generation:
Step 1.1, acquiring J frames of face video images, defining the left and right cheeks of the face in the images as the region of interest, and averaging the pixels of each channel within the region of interest in the J frames, thereby obtaining the R-channel, G-channel and B-channel signals of the J frames of face video images; a non-contact heart rate measurement algorithm is applied to the R-channel, G-channel and B-channel signals to extract a color difference signal X of length J, which serves as the network input;
Step 1.2, obtaining a reference heart rate signal of length J corresponding to the J frames of face video images and calculating its dominant frequency, thereby obtaining the reference heart rate Y; the color difference signal X and the reference heart rate Y form one sample, and the plurality of samples obtained in this way are divided into a training data set D, a test data set S and a calibration data set C;
Step two, constructing an uncertainty quantization network U(W) for heart rate measurement based on the Monte Carlo dropout method, the network comprising an encoder, a decoder, a fully connected layer and two output channels;
Step 2.1, the encoder and the decoder each have P layers, where the p-th layer of the encoder consists of a one-dimensional convolution layer, a PReLU activation function and a dropout layer, and the p-th layer of the decoder consists of a one-dimensional deconvolution layer, a PReLU activation function and a dropout layer; the output of the p-th layer of the encoder is connected to the input of the p-th layer of the decoder by a skip connection, p∈[1,P];
Step 2.2, the i-th color difference signal X_i in the training data set D is passed through the uncertainty quantization network U(W), which outputs the predicted heart rate ŷ_i of the i-th color difference signal X_i and its data uncertainty σ_i, i∈[1,N], where N represents the number of training samples;
step three, determining a loss function:
A loss function L_U is established by formula (1):
L_U = α·L_NLL + β·L_1   (1)
In formula (1), L_NLL denotes the negative log-likelihood loss, obtained by formula (2) and used to learn the data uncertainty σ_i; L_1 denotes the mean absolute error loss, obtained by formula (3); α is the weight of the negative log-likelihood loss and β is the weight of the mean absolute error loss;
In formulas (2) and (3), y_i and ŷ_i respectively denote the reference heart rate value and the predicted heart rate of the i-th color difference signal X_i, and ‖·‖_1 denotes the L_1 norm;
The uncertainty quantization network U(W) is trained on the i-th color difference signal X_i, and the loss function L_U is used to continuously optimize the network parameters W, so that the network learns the heart rate and its data uncertainty, yielding the optimal heart rate prediction model U(W*);
Step four, the m-th test color difference signal X'_m in the test data set S is input into the optimal heart rate prediction model U(W*) and predicted K times with the dropout layers active, thereby obtaining the final predicted heart rate and the uncertainty of the predicted heart rate, the uncertainty comprising the total uncertainty, the data uncertainty and the model uncertainty:
Step 4.1, the m-th test color difference signal X'_m is input into the optimal heart rate prediction model U(W*) for the k-th time, yielding the k-th predicted heart rate ŷ_{m,k} and data uncertainty σ_{m,k}; the m-th predicted heart rate ŷ'_m corresponding to X'_m and its model uncertainty σ_m^(E) are then calculated by formula (4) and formula (5), respectively; the data uncertainty σ_m^(A) of the m-th predicted heart rate is calculated by formula (6); and the total uncertainty σ_m^(T) of the m-th predicted heart rate is obtained by formula (7);
In formulas (4)-(7), m∈[1,M], M represents the number of test samples, k∈[1,K], and K represents the number of repeated predictions;
fifthly, evaluating and calibrating uncertainty of the predicted heart rate:
Step 5.1, obtaining, according to step four, the predicted heart rates {ŷ'_m}, the total uncertainties {σ_m^(T)}, the data uncertainties {σ_m^(A)} and the model uncertainties {σ_m^(E)} of all samples in the test data set S, and evaluating the uncertainty:
Step 5.1.1, calculating the true absolute errors {ε_m | m∈[1,M]} between the predicted heart rates {ŷ'_m | m∈[1,M]} of all samples in the test data set S and their reference heart rates {Y'_m | m∈[1,M]}, where ε_m represents the true absolute error between the predicted heart rate ŷ'_m and the reference heart rate Y'_m corresponding to the m-th test color difference signal X'_m;
Calculating the correlation coefficient R^(T) between the total uncertainties {σ_m^(T)} of all samples in the test data set S and the true absolute errors {ε_m | m∈[1,M]};
Calculating the correlation coefficient R^(A) between the data uncertainties {σ_m^(A)} and the true absolute errors {ε_m | m∈[1,M]};
Calculating the correlation coefficient R^(E) between the model uncertainties {σ_m^(E)} and the true absolute errors {ε_m | m∈[1,M]};
Step 5.1.2, obtaining, from the predicted heart rates {ŷ'_m}, the total uncertainties {σ_m^(T)}, the data uncertainties {σ_m^(A)} and the model uncertainties {σ_m^(E)} of all samples in the test data set S, three corresponding heart rate prediction confidences and the heart rate prediction confidence intervals corresponding to each of the three confidences;
Calculating the true probabilities that the reference heart rates {Y'_m | m∈[1,M]} fall within the respective corresponding heart rate prediction confidence intervals;
Using a reliability diagram to evaluate whether each of the three heart rate prediction confidences matches its corresponding true probability; if they match, the total uncertainties {σ_m^(T)}, data uncertainties {σ_m^(A)} and model uncertainties {σ_m^(E)} of the predicted heart rates of all samples in the test data set S are taken as the final uncertainty; otherwise, step 5.2 is executed;
step 5.2, calibrating the prediction uncertainty obtained by the test dataset S by the calibration dataset C:
Step 5.2.1, inputting the calibration data set C into the optimal heart rate prediction model U(W*) and, following the procedure of step four, obtaining the predicted heart rates {ŷ_l}, the total uncertainties {σ_l^(T)}, the data uncertainties {σ_l^(A)} and the model uncertainties {σ_l^(E)} of all samples in the calibration data set, where ŷ_l represents the l-th predicted heart rate corresponding to the l-th calibration color difference signal X_l, σ_l^(T) represents the total uncertainty corresponding to the l-th predicted heart rate ŷ_l, σ_l^(A) represents its corresponding data uncertainty, σ_l^(E) represents its corresponding model uncertainty, and L represents the number of calibration samples;
Step 5.2.2, obtaining, by formula (8), formula (9) and formula (10) respectively, the calibration coefficient λ_T of the total uncertainty, the calibration coefficient λ_A of the data uncertainty and the calibration coefficient λ_E of the model uncertainty of the test data set S;
Step 5.2.3, obtaining, by formula (11), formula (12) and formula (13) respectively, the calibrated data uncertainty, model uncertainty and total uncertainty of the m-th predicted heart rate ŷ'_m in the test data set S.
CN202210669910.3A 2022-06-14 2022-06-14 Non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning Active CN114913587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669910.3A CN114913587B (en) 2022-06-14 2022-06-14 Non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210669910.3A CN114913587B (en) 2022-06-14 2022-06-14 Non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning

Publications (2)

Publication Number Publication Date
CN114913587A CN114913587A (en) 2022-08-16
CN114913587B true CN114913587B (en) 2024-02-13

Family

ID=82769781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210669910.3A Active CN114913587B (en) 2022-06-14 2022-06-14 Non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning

Country Status (1)

Country Link
CN (1) CN114913587B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN113868957A (en) * 2021-10-11 2021-12-31 北京航空航天大学 Residual life prediction and uncertainty quantitative calibration method under Bayes deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3578100A1 (en) * 2018-06-05 2019-12-11 Koninklijke Philips N.V. Method and apparatus for estimating a trend in a blood pressure surrogate

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN113868957A (en) * 2021-10-11 2021-12-31 北京航空航天大学 Residual life prediction and uncertainty quantitative calibration method under Bayes deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Weixing; Li Bin. Prediction of impoverished college students using machine learning algorithms in the smart campus. Journal of Sanmenxia Polytechnic. 2019, (01), full text. *

Also Published As

Publication number Publication date
CN114913587A (en) 2022-08-16

Similar Documents

Publication Publication Date Title
Heo et al. Uncertainty-aware attention for reliable interpretation and prediction
CN111493935B (en) Artificial intelligence-based automatic prediction and identification method and system for echocardiogram
JP2021502650A (en) Time-invariant classification
TWI825240B (en) Method and system for determining concentration of an analyte in a sample of a bodily fluid, and method and system for generating a software-implemented module
CN110269605B (en) Electrocardiosignal noise identification method based on deep neural network
CN113096818B (en) Method for evaluating occurrence probability of acute diseases based on ODE and GRUD
KR102095959B1 (en) Artificial Neural Network Model-Based Methods for Analyte Analysis
CN114358435A (en) Pollution source-water quality prediction model weight influence calculation method of two-stage space-time attention mechanism
CN109325065B (en) Multi-sampling-rate soft measurement method based on dynamic hidden variable model
CN114913587B (en) Non-contact heart rate measurement uncertainty quantization method based on Bayesian deep learning
CN117598700A (en) Intelligent blood oxygen saturation detection system and method
CN110801228B (en) Brain effect connection measurement method based on neural network prediction
Xu et al. Personalized pain detection in facial video with uncertainty estimation
CN113080847B (en) Device for diagnosing mild cognitive impairment based on bidirectional long-short term memory model of graph
CN114504298B (en) Physiological characteristic discriminating method and system based on multisource health perception data fusion
Caicedo et al. Weighted LS-SVM for function estimation applied to artifact removal in bio-signal processing
CN115376638A (en) Physiological characteristic data analysis method based on multi-source health perception data fusion
CN115115038A (en) Model construction method based on single lead electrocardiosignal and gender identification method
CN111383764B (en) Correlation detection system for mechanical ventilation driving pressure and ventilator related event
CN115063349A (en) Method and device for predicting brain age based on sMRI (magnetic resonance imaging) multidimensional tensor morphological characteristics
Van der Plas et al. A novel reject option applied to sleep stage scoring
CN107731306A (en) A kind of contactless heart rate extracting method based on thermal imaging
Phetrittikun et al. Temporal Fusion Transformer for forecasting vital sign trajectories in intensive care patients
van Gorp et al. Aleatoric Uncertainty Estimation of Overnight Sleep Statistics Through Posterior Sampling Using Conditional Normalizing Flows
Chen et al. IoT-enabled intelligent dynamic risk assessment of acute mountain sickness based on data from wearable devices

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant