CN117898691A - Non-contact heart rate detection method based on KDN-WTRFA - Google Patents

Non-contact heart rate detection method based on KDN-WTRFA

Info

Publication number
CN117898691A
CN117898691A
Authority
CN
China
Prior art keywords
network
heart rate
rate detection
wtrfa
kdn
Prior art date
Legal status
Pending
Application number
CN202410208340.7A
Other languages
Chinese (zh)
Inventor
曹鸿博
Current Assignee
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202410208340.7A priority Critical patent/CN117898691A/en
Publication of CN117898691A publication Critical patent/CN117898691A/en
Pending legal-status Critical Current

Abstract

The invention discloses a non-contact heart rate detection method based on KDN-WTRFA. A knowledge distillation network is adopted as the main network, so the method is not limited by hardware: in actual deployment, a relatively simple network structure meets the requirements on delay, time and power consumption. A wavelet transform mechanism decomposes the image into high-frequency and low-frequency parts, so the local characteristics of the data are better captured; in non-contact heart rate detection, the wavelet transform serves as a preprocessing step that extracts the important features of the image or signal for more efficient subsequent tasks. An RFAConv structure with a receptive-field attention mechanism makes the network focus on the characteristics of local areas, which is important for processing local structures in sequential image data. By enhancing the perception of local features, the network better captures the local patterns and structure of the input data, improving the accuracy of the conversion to BVP signals.

Description

Non-contact heart rate detection method based on KDN-WTRFA
Technical Field
The invention belongs to the technical field of non-contact heart rate detection, and particularly relates to a non-contact heart rate detection method based on KDN-WTRFA.
Background
In the prevention and treatment of respiratory diseases, non-contact heart rate detection can effectively reduce the infection risk of medical staff and improve patient comfort. Heart rate is an important physiological indicator that reflects the functional state of the cardiovascular system and is affected by many factors, so a non-contact detection technique must offer high accuracy and stability. In recent years many researchers have studied the technology intensively; Verkruysse et al. first proposed acquiring facial skin information under natural light with a color video camera and extracting human physiological signals from it. That work demonstrated the feasibility of obtaining physiological indices with remote photoplethysmography (rPPG) and laid the foundation for subsequent research. Mature non-contact heart rate detection algorithms currently comprise two steps: first, the video data are preprocessed to eliminate noise and interference; second, a blood volume pulse (BVP) signal is extracted from the processed video and the heart rate is calculated from it. The key to rPPG is therefore to acquire the BVP signal accurately and process it.
Some scholars have improved the video preprocessing and proposed various methods for selecting the face ROI. Face-detection-based selection is the simplest and most intuitive, but it depends on the performance of the face detector and is easily disturbed by pose, expression, illumination and occlusion. Facial-landmark-based methods adapt to different poses and expressions, but they require higher computational complexity and accurate landmarks. Deep-learning-based methods can automatically learn the optimal features of the face ROI, but they need a large amount of labeled data and their training is time-consuming.
The performance of non-contact heart rate monitoring is mainly limited by three factors. First, training data sets are few and high-quality samples from different scenarios are lacking, so the generalization ability of deep networks is insufficient. Second, the BVP signal reflects the periodic variation, caused by the heart beat, in the amount of light of different wavelengths absorbed or reflected by a specific body part; it is extremely sensitive to illumination, and illumination changes occur easily in real video recordings, so the video must be preprocessed. Third, after preprocessing, the video information must still be converted into an accurate BVP signal.
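The second of the two algorithm steps, computing a heart rate from the extracted BVP signal, is commonly done by locating the dominant spectral peak. The sketch below is not part of the patent; the 0.7–4 Hz physiological band and the 30 fps sampling rate are illustrative assumptions:

```python
import numpy as np

def heart_rate_from_bvp(bvp, fs):
    """Estimate heart rate (bpm) as the dominant spectral peak of a BVP signal.

    Only frequencies in an assumed physiological band of 0.7-4 Hz
    (42-240 bpm) are considered.
    """
    bvp = np.asarray(bvp, dtype=float)
    bvp = bvp - bvp.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(bvp))
    freqs = np.fft.rfftfreq(len(bvp), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)      # plausible heart-rate band
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq                     # Hz -> beats per minute

# A synthetic 75-bpm pulse sampled at 30 fps:
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
bvp = np.sin(2 * np.pi * 1.25 * t)              # 1.25 Hz = 75 bpm
print(round(heart_rate_from_bvp(bvp, fs)))      # → 75
```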
Disclosure of Invention
The aim of the invention is to solve the above problems with a non-contact heart rate detection method based on KDN-WTRFA.
The technical scheme adopted by the invention is as follows: a non-contact heart rate detection method based on KDN-WTRFA, comprising the following steps:
S1: Perform data preprocessing, mainly comprising time-phase selection, face detection and alignment, and image-size normalization. In general, a BVP waveform is a physiologically constrained phased sequence containing a series of temporal phases: onset, apex, offset and end. The facial blood changes of different people are almost all periodic, so the phases that represent dynamic change can be used to obtain distinguishing features from the video;
S2: Input the video into the teacher network and perform preliminary feature extraction with a 3 × 16 convolution;
S3: Apply a wavelet transform, with the Haar wavelet as the basis function of a two-dimensional discrete wavelet decomposition. The Haar wavelet processes data by computing sums and differences of adjacent elements. First a one-dimensional wavelet transform is applied to the pixel values of each row, giving the average and the detail coefficients of each row; the one-dimensional transform is then applied to each column of the transformed rows;
S4: Pass through the residual-network part of the teacher network to perform accurate feature extraction and identification, providing accurate parameters for the subsequent student-network learning;
S5: Apply receptive-field attention convolution (RFAConv), which fully considers the importance of each feature in the receptive field and enhances the spatial feature map;
S6: Realize two tasks through a double-branch convolution layer. Task 1: accurately locate the facial action units (AUs). Task 2: extract the BVP signal;
S7: Input the facial feature map into the student network for preliminary training and feature learning;
S8: Gradually optimize the performance of the student network through a composite loss, finally achieving better non-contact heart rate detection and completing the whole KDN-WTRFA detection process.
In a preferred embodiment, in step S3 the Haar wavelet is applied first to adjacent horizontal elements and then to adjacent vertical elements.
In a preferred embodiment, in step S3 the discrete Haar wavelet transform is as follows:
Given a discrete signal f with values f_1, f_2, …, f_N, a pair of sub-signals, the average A and the ripple (detail) D, is computed from the values of f by
a_m = (f_{2m−1} + f_{2m}) / √2,  d_m = (f_{2m−1} − f_{2m}) / √2,  where m = 1, 2, …, N/2.
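The row-then-column Haar decomposition described above can be sketched directly in code. This is an illustrative implementation of the standard one-level Haar step (function names are our own, not from the patent):

```python
import numpy as np

def haar_1d(signal):
    """One level of the 1-D discrete Haar transform.

    Returns the average (trend) sub-signal a_m = (f_{2m-1} + f_{2m}) / sqrt(2)
    and the detail sub-signal d_m = (f_{2m-1} - f_{2m}) / sqrt(2).
    """
    f = np.asarray(signal, dtype=float)
    a = (f[0::2] + f[1::2]) / np.sqrt(2)
    d = (f[0::2] - f[1::2]) / np.sqrt(2)
    return a, d

def haar_2d(image):
    """One level of the 2-D decomposition: transform every row, then every
    column of the transformed rows, yielding LL, LH, HL and HH sub-bands."""
    x = np.asarray(image, dtype=float)
    # 1-D transform along each row
    a = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    d = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # 1-D transform along each column of both row results
    ll = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)   # low-frequency approximation
    lh = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)   # horizontal detail
    hl = (d[0::2, :] + d[1::2, :]) / np.sqrt(2)   # vertical detail
    hh = (d[0::2, :] - d[1::2, :]) / np.sqrt(2)   # diagonal detail
    return ll, lh, hl, hh
```

For a constant image all detail sub-bands are zero, which is the property that lets the transform separate slow blood-volume color changes from high-frequency texture.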
In a preferred embodiment, in step S5, RFAConv learns the attention map by letting the receptive-field feature information interact, which improves network performance; however, interacting with every receptive-field feature would incur additional computational overhead.
In a preferred embodiment, in step S5, AvgPool is used to aggregate the global information of each receptive-field feature in order to minimize the computational overhead and the number of parameters; a 1 × 1 group convolution is then used to let the information interact.
In a preferred embodiment, in step S5, softmax is used to emphasize the importance of each receptive-field feature. The RFA computation is:
F = Softmax(g^{1×1}(AvgPool(X))) × ReLU(Norm(g^{k×k}(X))) = A_rf × F_rf
where g^{1×1} denotes a group convolution of size 1 × 1, k denotes the size of the convolution kernel, Norm denotes normalization, X denotes the input feature map, and F is obtained by multiplying the attention map A_rf by the transformed receptive-field spatial feature F_rf.
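The RFA formula can be illustrated with a toy single-channel sketch. Everything below is an assumption-laden simplification: the learned 1 × 1 group convolution is replaced by identity weights and Norm by mean-centering, so it shows only the data flow (unfold receptive fields, pool, softmax-weight, recombine), not a trained operator:

```python
import numpy as np

def softmax(z, axis):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def rfa(x, k=3):
    """Toy receptive-field attention for a single-channel map x of shape (H, W).

    Unfolds k*k receptive-field features per position, pools them for a cheap
    attention map A_rf, normalizes the features to F_rf, and returns the
    weighted combination A_rf * F_rf summed over the receptive field.
    """
    H, W = x.shape
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    # receptive-field features: shape (H, W, k*k)
    rf = np.stack([xp[i:i + H, j:j + W]
                   for i in range(k) for j in range(k)], axis=-1)
    pooled = rf.mean(axis=(0, 1), keepdims=True)        # AvgPool over space
    a_rf = softmax(np.broadcast_to(pooled, rf.shape), axis=-1)  # attention map
    f_rf = np.maximum(rf - rf.mean(), 0.0)              # ReLU(centred features)
    return (a_rf * f_rf).sum(axis=-1)                   # weighted receptive field
```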
In a preferred embodiment, in step S4 there are three groups of residual modules, with four blocks in each group. A residual block with identity mapping can be expressed as:
x_{l+1} = x_l + F(x_l, w_l)
where x_l and x_{l+1} are the input and output of the l-th unit, F is the residual function to be learned, and w_l are the parameters of the block. In the network, F consists of two consecutive 3 × 3 convolutions with batch normalization and ReLU activation.
In a preferred embodiment, in step S6, after the residual network there is a two-branch convolution layer that reshapes the high-dimensional input into a vector. The network has two branches: one for AU identification and one for BVP-signal extraction. The loss of the model is calculated as:
L_teacher = φ1·L1 + φ2·L2
where L_teacher is the main loss of the proposed deep pre-trained teacher network, L1 is the loss for AU recognition, L2 is the loss for extracting the BVP signal, and φ1 and φ2 are two manually set parameters that balance the two subtasks. L1 is a multi-label cross-entropy loss: with N samples, C AUs, ground truth Y = {y_nc} and predicted values Ŷ = {ŷ_nc},
L1 = −(1/N) Σ_{n=1}^{N} Σ_{c=1}^{C} [ y_nc·log ŷ_nc + (1 − y_nc)·log(1 − ŷ_nc) ].
L2 is a softmax cross-entropy loss: with N samples, K BVP targets, ground truth Y = {y_nk} and predicted values Ŷ = {ŷ_nk},
L2 = −(1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} y_nk·log ŷ_nk.
In a preferred embodiment, in step S8 the loss of the student model is as follows:
L_student = α1·Loss1 + α2·Loss2
where L_student is the main loss of the student network, Loss1 is the loss of the student-network learning guided by the deep pre-trained teacher network, Loss2 is the loss of identifying the BVP signal, and α1 and α2 are two manually set parameters that balance the two subtasks.
Loss1 is a regression loss that penalizes the difference between F_teacher and F_student in the sense of a smooth L2 loss, weighting all spatial locations of the feature map equally:
Loss1 = γ·‖F_teacher − F_student‖²
where γ is a manually set parameter, the learning rate of the knowledge extraction.
Loss2 has the same softmax cross-entropy form as L2: with N samples, K BVP targets, ground truth Y = {y_nk} and predicted values Ŷ = {ŷ_nk},
Loss2 = −(1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} y_nk·log ŷ_nk.
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
1. A knowledge distillation network is adopted as the main network, simplifying the network model and avoiding hardware limitations: in actual deployment, a relatively simple network structure meets the requirements on delay, time and power consumption, so the instrument used for non-contact heart rate detection can be as small and simple as possible.
2. A wavelet transform mechanism decomposes the image into high-frequency and low-frequency parts, so the local characteristics of the data are better captured. In non-contact heart rate detection, the wavelet transform serves as a preprocessing step that extracts the important features of the image or signal for more efficient subsequent tasks. At the same time, the wavelet transform provides multi-scale analysis, capturing features at different scales simultaneously while remaining sensitive to subtle color changes.
3. An RFAConv structure with a receptive-field attention mechanism makes the network focus on the characteristics of local areas, which is important for processing local structures in sequential image data. By enhancing the perception of local features, the network better captures the local patterns and structure of the input data, improving the accuracy of the conversion to BVP signals.
Drawings
FIG. 1 is a flowchart of an algorithm of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
With reference to FIG. 1, a non-contact heart rate detection method based on KDN-WTRFA comprises the following steps:
Step 1: split the face videos of the MAHNOB-HCI and UBFC-rPPG data sets into video frames, crop the frames and align the faces, using the OpenFace toolkit for face extraction and tracking. All video frames are aligned to the base plane by an affine transformation, and each frame is resized to 132 × 132 pixels.
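Frame normalization to 132 × 132 can be sketched with a simple nearest-neighbour resize. The actual pipeline uses the OpenFace toolkit for detection and alignment; this helper is illustrative only:

```python
import numpy as np

def resize_frame(frame, size=132):
    """Nearest-neighbour resize of an aligned face crop to size x size pixels.

    Works for grayscale (H, W) or color (H, W, 3) arrays. Face detection and
    alignment themselves are done upstream and are not reproduced here.
    """
    h, w = frame.shape[:2]
    rows = np.arange(size) * h // size      # source row index per output row
    cols = np.arange(size) * w // size      # source column index per output column
    return frame[rows][:, cols]
```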
Step 2: input the video into the teacher network and perform preliminary feature extraction with a 3 × 16 convolution.
Step 3: perform the wavelet transform, with the Haar wavelet as the basis function of the two-dimensional discrete wavelet decomposition. The Haar wavelet processes data by computing sums and differences of adjacent elements. First a one-dimensional wavelet transform is applied to the pixel values of each row, giving the average and the detail coefficients of each row; the one-dimensional transform is then applied to each column of the transformed rows.
Step 4: pass through the residual-network part of the teacher network, which contains three groups of residual modules with four blocks in each group. A residual block with identity mapping can be expressed as:
x_{l+1} = x_l + F(x_l, w_l)
where x_l and x_{l+1} are the input and output of the l-th unit, F is the residual function to be learned, and w_l are the parameters of the block. In the network, F consists of two consecutive 3 × 3 convolutions with batch normalization and ReLU activation.
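A minimal sketch of the residual unit x_{l+1} = x_l + F(x_l, w_l) follows. A per-map standardization stands in for batch normalization, and a single channel is used; both are illustrative assumptions, since the patent's blocks operate on multi-channel feature maps:

```python
import numpy as np

def conv3x3(x, w):
    """'Same'-padded 2-D convolution (cross-correlation) with a 3x3 kernel w."""
    H, W = x.shape
    xp = np.pad(x, 1, mode="constant")
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * xp[i:i + H, j:j + W]
    return out

def residual_block(x, w1, w2):
    """x_{l+1} = x_l + F(x_l, w_l): two 3x3 convolutions, each followed by
    normalization and ReLU, plus the identity shortcut."""
    def norm_relu(z):
        z = (z - z.mean()) / (z.std() + 1e-5)   # stand-in for batch norm
        return np.maximum(z, 0.0)
    return x + norm_relu(conv3x3(norm_relu(conv3x3(x, w1)), w2))
```

With zero weights the residual branch F vanishes and the block reduces to the identity, which is the property that makes deep residual stacks easy to train.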
Step 5: apply RFAConv, which fully considers the importance of each feature in the receptive field. Channel and spatial attention are not applied in separate steps but are weighted simultaneously, so the attention map obtained on each channel can differ, enhancing both the spatial and the channel feature maps.
Step 6: after the residual network there is a two-branch convolution layer that reshapes the high-dimensional input into a vector. The network has two branches: one for AU identification and one for BVP-signal extraction. The loss of the model is calculated as:
L_teacher = φ1·L1 + φ2·L2
where L_teacher is the main loss of the proposed deep pre-trained teacher network, L1 is the loss for AU recognition, L2 is the loss for extracting the BVP signal, and φ1 and φ2 are two manually set parameters that balance the two subtasks. L1 is a multi-label cross-entropy loss: with N samples, C AUs, ground truth Y = {y_nc} and predicted values Ŷ = {ŷ_nc},
L1 = −(1/N) Σ_{n=1}^{N} Σ_{c=1}^{C} [ y_nc·log ŷ_nc + (1 − y_nc)·log(1 − ŷ_nc) ].
L2 is a softmax cross-entropy loss: with N samples, K BVP targets, ground truth Y = {y_nk} and predicted values Ŷ = {ŷ_nk},
L2 = −(1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} y_nk·log ŷ_nk.
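The teacher loss L_teacher = φ1·L1 + φ2·L2 can be sketched as follows; the function names and the clipping constant are our own illustrative choices, and the BVP branch is treated as a K-way softmax target as the text describes:

```python
import numpy as np

def multilabel_bce(y, p, eps=1e-12):
    """L1: multi-label cross entropy over N samples and C action units."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def softmax_ce(y, logits):
    """L2: softmax cross entropy over N samples and K BVP targets."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean((y * logp).sum(axis=1))

def teacher_loss(y_au, p_au, y_bvp, bvp_logits, phi1=1.0, phi2=1.0):
    """L_teacher = phi1 * L1 + phi2 * L2 (phi1, phi2 manually set)."""
    return phi1 * multilabel_bce(y_au, p_au) + phi2 * softmax_ce(y_bvp, bvp_logits)
```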
Step 7: input the image into the student network, which consists of a convolution layer and three fully connected layers. Analogously to F_teacher, the output features of the last two fully connected layers are used as F_student.
Step 8: gradually optimize the performance of the student network through the composite loss, finally achieving a better result. The loss of the student model is as follows:
L_student = α1·Loss1 + α2·Loss2
where L_student is the main loss of the student network, Loss1 is the loss of the student-network learning guided by the deep pre-trained teacher network, Loss2 is the loss of identifying the BVP signal, and α1 and α2 are two manually set parameters that balance the two subtasks.
Loss1 is a regression loss that penalizes the difference between F_teacher and F_student in the sense of a smooth L2 loss, weighting all spatial locations of the feature map equally:
Loss1 = γ·‖F_teacher − F_student‖²
where γ is a manually set parameter, the learning rate of the knowledge extraction.
Loss2 has the same softmax cross-entropy form as L2: with N samples, K BVP targets, ground truth Y = {y_nk} and predicted values Ŷ = {ŷ_nk},
Loss2 = −(1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} y_nk·log ŷ_nk.
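Likewise, an illustrative sketch of the student loss L_student = α1·Loss1 + α2·Loss2, assuming Loss1 is a plain γ-scaled mean-squared difference between teacher and student features (the patent only describes it qualitatively as a smooth L2 penalty):

```python
import numpy as np

def distill_loss(f_teacher, f_student, gamma=1.0):
    """Loss1: gamma-scaled L2 penalty between teacher and student features,
    weighting every spatial location equally."""
    return gamma * np.mean((f_teacher - f_student) ** 2)

def bvp_ce(y, logits):
    """Loss2: softmax cross entropy on the predicted BVP signal."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean((y * logp).sum(axis=1))

def student_loss(f_t, f_s, y, logits, alpha1=1.0, alpha2=1.0):
    """L_student = alpha1 * Loss1 + alpha2 * Loss2."""
    return alpha1 * distill_loss(f_t, f_s) + alpha2 * bvp_ce(y, logits)
```

When the student features exactly match the teacher features, Loss1 vanishes and only the BVP recognition term drives training.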
Analysis of the data in Table 1 shows significant differences between the methods on the MAE, RMSE, R and SNR metrics. On MAE, Meta-rPPG, SAMC and the present method perform best. SAMC enhances hidden rPPG information and uses an attention network to improve rPPG quality, so it maintains high heart-rate accuracy under small disturbances such as facial-expression changes and video compression; Meta-rPPG adapts the model to the test data with transductive inference, improving generalization. The KDN-WTRFA method addresses the shortage of samples with the knowledge distillation network, which benefits heart-rate calculation, and the multi-scale analysis of the wavelet transform captures features at different scales while remaining sensitive to fine color changes; together these improve the correlation between the generated and the real signal, bringing the estimate closer to the true value. The present method performs best on RMSE with a value of 2.83, followed by Meta-rPPG and SAMC with 3.68 and 6.23 respectively. On the R value, Meta-rPPG, STVEN-rPPGNet and the present method perform best, all exceeding 0.85.
TABLE 1
Table 2 shows that on the UBFC-rPPG data set the KDN-WTRFA method performs better than other advanced rPPG signal-processing methods. It also performs better on the R and SNR metrics, meaning the generated signal has higher quality and less noise, which helps improve the estimation accuracy of the physiological parameters. This again demonstrates the reliability and superiority of the proposed method on the UBFC-rPPG database.
TABLE 2
According to the invention, a knowledge distillation network is adopted as the main network, simplifying the network model and avoiding hardware limitations: in actual deployment, a relatively simple network structure meets the requirements on delay, time and power consumption, so the instrument used for non-contact heart rate detection can be as small and simple as possible.
A wavelet transform mechanism decomposes the image into high-frequency and low-frequency parts, so the local characteristics of the data are better captured. In non-contact heart rate detection, the wavelet transform serves as a preprocessing step that extracts the important features of the image or signal for more efficient subsequent tasks. At the same time, the wavelet transform provides multi-scale analysis, capturing features at different scales simultaneously while remaining sensitive to subtle color changes.
An RFAConv structure with a receptive-field attention mechanism makes the network focus on the characteristics of local areas, which is important for processing local structures in sequential image data. By enhancing the perception of local features, the network better captures the local patterns and structure of the input data, improving the accuracy of the conversion to BVP signals.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A non-contact heart rate detection method based on KDN-WTRFA, characterized in that the detection method comprises the following steps:
S1: performing data preprocessing, mainly comprising time-phase selection, face detection and alignment, and image-size normalization; in general, a BVP waveform is a physiologically constrained phased sequence containing a series of temporal phases: onset, apex, offset and end; the facial blood changes of different people are almost all periodic, so the phases that represent dynamic change can be used to obtain distinguishing features from the video;
S2: inputting the video into the teacher network and performing preliminary feature extraction with a 3 × 16 convolution;
S3: applying a wavelet transform, with the Haar wavelet as the basis function of a two-dimensional discrete wavelet decomposition; the Haar wavelet processes data by computing sums and differences of adjacent elements; first a one-dimensional wavelet transform is applied to the pixel values of each row, giving the average and the detail coefficients of each row; the one-dimensional transform is then applied to each column of the transformed rows;
S4: passing through the residual-network part of the teacher network to perform accurate feature extraction and identification, providing accurate parameters for the subsequent student-network learning;
S5: applying receptive-field attention convolution (RFAConv), which fully considers the importance of each feature in the receptive field and enhances the spatial feature map;
S6: realizing two tasks through a double-branch convolution layer; task 1: accurately locating the facial action units (AUs); task 2: extracting the BVP signal;
S7: inputting the facial feature map into the student network for preliminary training and feature learning;
S8: gradually optimizing the performance of the student network through a composite loss, finally achieving better non-contact heart rate detection and completing the whole KDN-WTRFA detection process.
2. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 1, wherein: in step S3, the Haar wavelet is applied first to adjacent horizontal elements and then to adjacent vertical elements.
3. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 1, wherein: in step S3, the discrete Haar wavelet transform is as follows:
given a discrete signal f with values f_1, f_2, …, f_N, a pair of sub-signals, the average A and the ripple (detail) D, is computed from the values of f by a_m = (f_{2m−1} + f_{2m}) / √2 and d_m = (f_{2m−1} − f_{2m}) / √2, where m = 1, 2, …, N/2.
4. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 1, wherein: in step S5, RFAConv learns the attention map by letting the receptive-field feature information interact, which improves network performance; however, interacting with every receptive-field feature would incur additional computational overhead.
5. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 1, wherein: in step S5, AvgPool is used to aggregate the global information of each receptive-field feature in order to minimize the computational overhead and the number of parameters; a 1 × 1 group convolution is then used to let the information interact.
6. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 1, wherein: in step S5, softmax is used to emphasize the importance of each receptive-field feature; the RFA computation is:
F = Softmax(g^{1×1}(AvgPool(X))) × ReLU(Norm(g^{k×k}(X))) = A_rf × F_rf
where g^{1×1} denotes a group convolution of size 1 × 1, k denotes the size of the convolution kernel, Norm denotes normalization, X denotes the input feature map, and F is obtained by multiplying the attention map A_rf by the transformed receptive-field spatial feature F_rf.
7. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 1, wherein: in step S4, there are three groups of residual modules with four blocks in each group; a residual block with identity mapping can be expressed as:
x_{l+1} = x_l + F(x_l, w_l)
where x_l and x_{l+1} are the input and output of the l-th unit, F is the residual function to be learned, and w_l are the parameters of the block; in the network, F consists of two consecutive 3 × 3 convolutions with batch normalization and ReLU activation.
8. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 1, wherein: in step S6, after the residual network there is a two-branch convolution layer that reshapes the high-dimensional input into a vector; the network has two branches: one for AU identification and one for BVP-signal extraction; the loss of the model is calculated as:
L_teacher = φ1·L1 + φ2·L2
wherein L_teacher is the main loss of the proposed deep pre-trained teacher network, L1 is the loss for AU recognition, L2 is the loss for extracting the BVP signal, and φ1 and φ2 are two manually set parameters that balance the two subtasks; L1 is a multi-label cross-entropy loss: with N samples, C AUs, ground truth Y = {y_nc} and predicted values Ŷ = {ŷ_nc},
L1 = −(1/N) Σ_{n=1}^{N} Σ_{c=1}^{C} [ y_nc·log ŷ_nc + (1 − y_nc)·log(1 − ŷ_nc) ];
L2 is a softmax cross-entropy loss: with N samples, K BVP targets, ground truth Y = {y_nk} and predicted values Ŷ = {ŷ_nk},
L2 = −(1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} y_nk·log ŷ_nk.
9. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 1, wherein: in step S8, the loss of the student model is as follows:
L_student = α1·Loss1 + α2·Loss2
wherein L_student is the main loss of the student network, Loss1 is the loss of the student-network learning guided by the deep pre-trained teacher network, Loss2 is the loss of identifying the BVP signal, and α1 and α2 are two manually set parameters that balance the two subtasks;
wherein Loss1 is a regression loss that penalizes the difference between F_teacher and F_student in the sense of a smooth L2 loss, weighting all spatial locations of the feature map equally:
Loss1 = γ·‖F_teacher − F_student‖².
10. The non-contact heart rate detection method based on KDN-WTRFA as claimed in claim 9, wherein: γ is a manually set parameter, the learning rate of the knowledge extraction;
wherein Loss2 has the same softmax cross-entropy form as L2: with N samples, K BVP targets, ground truth Y = {y_nk} and predicted values Ŷ = {ŷ_nk},
Loss2 = −(1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} y_nk·log ŷ_nk.
CN202410208340.7A 2024-02-26 2024-02-26 Non-contact heart rate detection method based on KDN-WTRFA Pending CN117898691A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410208340.7A CN117898691A (en) 2024-02-26 2024-02-26 Non-contact heart rate detection method based on KDN-WTRFA


Publications (1)

Publication Number Publication Date
CN117898691A true CN117898691A (en) 2024-04-19

Family

ID=90694799



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination