CN116012820A - Non-contact type driver heart rate detection method - Google Patents

Non-contact type driver heart rate detection method Download PDF

Info

Publication number
CN116012820A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN202211623118.0A
Other languages
Chinese (zh)
Inventor
刘鹏泽
陈昌川
夏开羿
代少升
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202211623118.0A priority Critical patent/CN116012820A/en
Publication of CN116012820A publication Critical patent/CN116012820A/en
Pending legal-status Critical Current

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention relates to the field of remote photoplethysmography (rPPG), and in particular to a non-contact driver heart rate detection method. The method comprises: acquiring face image video of a driver, and obtaining an rPPG signal matrix by means of the KLT algorithm, guided filtering and related steps; performing dimension reduction, ICA and a fast Fourier transform on the rPPG signal matrix to obtain the heart rate measurement value at the current moment; and calculating, with a Kalman filter, the heart rate predicted value from the previous moment, then obtaining the heart rate predicted value at the current moment from the current measurement and the previous prediction.

Description

Non-contact type driver heart rate detection method
Technical Field
The invention relates to the field of remote photoplethysmography (rPPG), and in particular to a non-contact driver heart rate detection method.
Background
Heart rate is the number of heart beats per minute and is one of the indicators of human physiological and psychological health. The heart rate of a person at rest, without strenuous exercise, is called the resting heart rate. Heart rate detection techniques can be divided into contact and non-contact types. Contact detection, such as photoplethysmography (PPG) measured through devices such as ECG monitors, requires close contact with the skin; this can cause considerable discomfort, irritation or pain, which in turn affects the measurement, and it is particularly unsuitable for groups such as infants and burn patients.
With the spread of computer technology and cameras, researchers have proposed remote photoplethysmography (rPPG). rPPG rests on the same principle as conventional PPG: each heartbeat causes periodic changes in the blood and micro-vessels of the skin tissue, which modulate the absorption and reflection of light. These changes are invisible to the human eye, but they can be recovered by analysing video captured by a high-definition camera, enabling heart rate detection; the method is therefore also referred to as imperceptible heart rate detection. Since the subject does not need to wear any equipment, it is suitable for long-term physiological monitoring. Traffic accidents remain frequent in today's society; besides well-known causes such as drunk driving and speeding, the driver's state of health is a major contributing factor. As heart rate is a gold-standard indicator of physiological health, non-contact heart rate estimation based on video streams can effectively monitor driver health, but it is easily disturbed by motion artifacts caused by head movement and by illumination changes.
Disclosure of Invention
In order to solve the above problems, the present invention provides a non-contact driver heart rate detection method, characterized by comprising the following steps:
s1, installing a camera in front of the head of a driver, and acquiring a face image video of the driver by using the camera;
s2, detecting a face rectangle of each frame of face image in the face image video by adopting a dlib face detector in OpenCV, and simultaneously positioning coordinates of 68 facial feature points of the face in the face rectangle;
s3, tracking the head movement condition of a driver through the displacement condition of 68 facial feature points in the face image video by adopting a KLT algorithm, and outputting a stable face image video;
s4, conducting guided filtering on each frame of stable face image in the stable face image video to obtain a filtered face image video;
s5, calculating average value vectors of the forehead, the left cheek, the right cheek and the nose of each frame of the filtered face image in the filtered face image video, and carrying out weighted average on the average value vectors of the four areas of each frame of the filtered face image to obtain an rPPG intensity signal of each frame of the filtered face image;
s6, summarizing all the rPPG intensity signals to obtain an rPPG signal matrix, and performing dimension reduction processing on the rPPG signal matrix to obtain a low-dimension rPPG signal matrix;
s7, separating the low-dimensional rPPG signal matrix through ICA to obtain blood static and dynamic signal vectors, and performing a fast Fourier transform on the blood dynamic signal vector to obtain the heart rate measurement value at the current moment;
s8, calculating the heart rate predicted value at the previous moment through a Kalman filter, and obtaining the heart rate predicted value at the current moment according to the heart rate measured value at the current moment and the heart rate predicted value at the previous moment.
Further, in step S4, each frame of stable face image in the stable face image video is subjected to guided filtering to obtain a filtered face image; the local linear relationship between the guide image and the filtered face image is expressed as:

q_i = a_k I_i + b_k

In order to determine the linear coefficients between the guide image and the filtered face image while minimizing the difference between the stable face image and the filtered face image, the guided filtering process is converted into an optimization problem, expressed as:

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right]

Solving the optimization problem yields:

a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}

b_k = \bar{p}_k - a_k \mu_k

wherein I_i, p_i and q_i represent the gray values of the i-th pixel of the guide image I, the stable face image p and the filtered face image q in the local window; \omega_k represents the local window centred on pixel k; a_k and b_k represent the first and second constant linear coefficients of the window \omega_k; |\omega| represents the total number of pixels in \omega_k; \epsilon represents the regularization coefficient; \bar{b}_k represents the mean of all second constant linear coefficients over the windows; \mu_k and \sigma_k^2 represent the mean and variance of the guide image I calculated over all pixels in \omega_k; and \bar{p}_k represents the mean gray value of the stable face image p over all pixels in \omega_k.
Further, solving the optimization problem yields the determined linear relationship between the filtered face image and the guide image, expressed as:

q_i = \bar{a}_i I_i + \bar{b}_i

wherein \bar{a}_i and \bar{b}_i represent the averages, over all local windows covering pixel i, of the coefficients a_k and b_k, so that q_i is the average gray value of the i-th pixel of the filtered face image q in the local window.
Further, in step S6, the rPPG signal matrix is multiplied by a dimension-reduction matrix to obtain the low-dimensional rPPG signal matrix, the dimension-reduction matrix being expressed as:

[equation image in the original; not recoverable from the text]

where T represents the transpose of the matrix.
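The dimension-reduction matrix itself survives only as an equation image, so its entries cannot be recovered from this text. Purely as a hedged illustration of the idea described later — projecting the 3-channel signal onto the subspace orthogonal to a common illumination direction, here assumed to be (1, 1, 1)^T, which is an assumption and not the patent's matrix — a NumPy sketch:

```python
import numpy as np

def illumination_removal_matrix():
    """A 2x3 projection whose rows are unit vectors orthogonal to (1,1,1)^T.

    Illustrative only: the patent's actual dimension-reduction matrix is an
    equation image that is not reproduced in the text.
    """
    return np.array([[1.0, -1.0,  0.0],
                     [1.0,  1.0, -2.0]]) / np.sqrt([2.0, 6.0])[:, None]

def reduce_rppg(rppg_matrix):
    """Project a (3 x N) rPPG signal matrix down to (2 x N)."""
    return illumination_removal_matrix() @ rppg_matrix
```

A uniform brightness change, which shifts all three channels equally, lies along (1, 1, 1)^T and is therefore mapped to zero by this projection.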
Further, step S7 specifically comprises:
s11, defining the low-dimensional rPPG signal matrix as S, and calculating the de-meaned low-dimensional rPPG signal matrix \tilde{S} by the formula:

\tilde{S} = S - E\{S\}

wherein E\{S\} represents the mean (expectation) of the low-dimensional rPPG signal matrix S;
s12, calculating a covariance matrix of the mean value removing low-dimensional rPPG signal matrix, and carrying out eigenvalue decomposition on the covariance matrix to obtain a whitening matrix;
s13, adopting ICA to process the whitening matrix to obtain the blood static and dynamic signal vectors; these comprise two distinct components, a blood static signal component and a blood dynamic signal component;
s14, performing a fast Fourier transform on the blood static and dynamic signal vectors, wherein the component with the lower peak value after the transform is the blood static signal component;
s15, processing the blood dynamic signal component by adopting a band-pass filter, and taking the peak frequency point of the processed blood dynamic signal component as a heart rate measurement value at the current moment.
Further, in step S12 the covariance matrix of the de-meaned low-dimensional rPPG signal matrix is calculated and eigendecomposed:

Cov(S) = Q \Lambda Q^T

Cov(S) = S S^T = P \, Rov(S) \, P^T = E

\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)

wherein Q represents the orthogonal matrix composed of eigenvectors, Λ represents the diagonal matrix composed of the eigenvalues λ_i, Cov(S) represents the covariance matrix of the low-dimensional rPPG signal matrix S, P represents the switching matrix, Rov(S) represents the autocorrelation matrix of the low-dimensional rPPG signal matrix S, and E represents the identity matrix.
Further, the formulas for calculating the heart rate predicted value by Kalman filtering are:

k_n = \frac{P_{n-1}}{P_{n-1} + R}

E_n = E_{n-1} + k_n (M_n - x_{n-1})

P_n = (1 - k_n) P_{n-1} + Q

wherein P_n represents the error covariance, k_n represents the Kalman gain, R represents the measurement noise covariance, E_n represents the optimal estimate at time n, M_n represents the measured value, x_{n-1} represents the a priori estimate, and Q represents the process covariance.
The invention has the following beneficial effects:
The accuracy of heart rate detection based on facial video is susceptible to illumination changes, rigid head motion and other factors. To address these problems, facial images are acquired by combining face detection with a video tracking algorithm, which effectively reduces the influence of rigid head motion. When rPPG signals are extracted from the face, their strength depends on the choice of the facial region of interest and the illumination distribution over that region; a single region of interest is vulnerable to interfering factors that weaken the rPPG signal — for example, choosing the forehead as the region of interest is easily affected by hair and glasses — and the effective area of the selected region also affects signal strength. Taking a weighted average over four regions of interest therefore mitigates the influence that any single region has on the heart rate calculation. Furthermore, an uneven, concave-convex skin surface leads to uneven skin pigment deposition, which weakens the extracted rPPG signal; guided filtering smooths the facial skin, alleviating the uneven distribution and effectively improving the signal-to-noise ratio of the rPPG signal.
Drawings
FIG. 1 is a flow chart of a heart rate measurement method of the present invention;
FIG. 2 is a comparison of images before and after guided filtering in accordance with the present invention;
FIG. 3 is a chart showing the initial estimated heart rate obtained by performing fast Fourier transform on the blood dynamic signal vector;
FIG. 4 is a graph showing the estimated heart rate obtained by Kalman filtering according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a non-contact type driver heart rate detection method, which is shown in fig. 1 and comprises the following steps:
s1, installing a camera in front of the head of a driver, and acquiring a face image video of the driver by using the camera;
s2, detecting a face rectangle of each frame of face image in the face image video by adopting a dlib face detector in OpenCV, and simultaneously positioning coordinates of 68 facial feature points of the face in the face rectangle;
specifically, face key point detection is implemented using the 68-feature-point scheme: on the basis of the detected face, the feature positions of the eyes, nose, mouth and facial contour are located and marked automatically.
S3, tracking the head movement condition of a driver through the displacement condition of 68 facial feature points in the face image video by adopting a KLT algorithm, and outputting a stable face image video;
in particular, early rPPG-based heart rate detection studies were typically performed with the measured subject stationary, because motion changes the distance and angle between the region of interest (ROI) and the camera and light source, producing strong interference and noise. The invention targets video of a moving driver, where the acquired images are disturbed by motion; the KLT algorithm effectively tracks the 68 facial feature points, countering the influence of motion artifacts on the rPPG signal and yielding a better rPPG signal under motion.
S4, conducting guided filtering on each frame of stable face image in the stable face image video to obtain a filtered face image video; as shown in fig. 2, the concave areas of the face are smoothed away through guided filtering to obtain a smooth face image.
S5, calculating average value vectors of the forehead, the left cheek, the right cheek and the nose of each frame of the filtered face image in the filtered face image video, and carrying out weighted average on the average value vectors of the four areas of each frame of the filtered face image to obtain an rPPG intensity signal of each frame of the filtered face image;
specifically, the human face contains tissues other than skin, and only light reflected after passing through the skin carries physiological information. If the effective skin area selected is too small, the accuracy of heart rate estimation suffers; moreover, direct illumination makes the illumination distribution over the facial skin uneven, which also degrades the heart rate calculation if those areas participate in it. The forehead, left cheek, right cheek and nose are therefore selected, and their respective mean vectors are calculated.
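As a minimal sketch of this step — the ROI boxes and weights below are illustrative placeholders, not values from the patent — the per-frame weighted ROI average might look like:

```python
import numpy as np

def roi_mean_vectors(frame, rois):
    """Mean RGB vector of each region of interest.

    frame: (H, W, 3) array; rois: dict mapping a region name to a
    (y0, y1, x0, x1) box. Boxes here are placeholders, not the patent's.
    """
    return {name: frame[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
            for name, (y0, y1, x0, x1) in rois.items()}

def rppg_intensity(frame, rois, weights):
    """Weighted average of the ROI mean vectors -> one rPPG sample (RGB)."""
    means = roi_mean_vectors(frame, rois)
    w = np.array([weights[name] for name in rois])
    m = np.stack([means[name] for name in rois])
    return (w[:, None] * m).sum(axis=0) / w.sum()
```

Stacking one such sample per frame over time produces the rPPG signal matrix that step S6 reduces in dimension.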
S6, summarizing all the rPPG intensity signals to obtain an rPPG signal matrix, and performing dimension reduction processing on the rPPG signal matrix to obtain a low-dimension rPPG signal matrix;
s7, separating the low-dimensional rPPG signal matrix through ICA to obtain the blood static and dynamic signal vectors, and performing a fast Fourier transform on the blood dynamic signal vector to obtain the heart rate measurement value at the current moment; as shown in fig. 3, as images are acquired, the heart rate measurement of the human body at the current moment is obtained by applying the fast Fourier transform to the blood static and dynamic signal vectors.
S8, calculating the heart rate predicted value at the previous moment through a Kalman filter, and obtaining the heart rate predicted value at the current moment according to the current heart rate measurement and the previous prediction; as shown in fig. 4, an increasingly accurate human heart rate prediction is obtained through Kalman filtering as images are acquired.
Specifically, when the camera sensor receives reflected light from the face, irregular protrusions and pigments are formed because the face is not an ideal reflecting surface, such as scars and acne marks on the face. The pigments and the bulges can generate noise during rPPG signal acquisition, so that the accuracy of heart rate estimation is affected, other noise except illumination and motion artifact can be effectively reduced by adopting a guided filtering mode, and the rPPG signal extraction is facilitated.
In one embodiment, let the input image (the stable face image) be p, the output image (the filtered face image) be q, and the guide image be I; the subscript i denotes one pixel of an image.
In this embodiment, an image with feature points as boundaries is selected as the guide image I. The local linear model assumes that a point on a function is linearly related to its neighbouring points, so a complex function can be represented by many local linear functions; to obtain the value of the function at a point, one simply averages the values of all linear functions covering that point. Viewing the image as a two-dimensional function, q and I then satisfy, in a local window ω_k centred on pixel k, the local linear relationship:
q_i = a_k I_i + b_k

wherein a_k represents the first constant linear coefficient of the local window ω_k, and b_k represents the second constant linear coefficient of the local window ω_k.
Meanwhile, the output image q is an input image p from which noise is removed, and the two have the following relationship:
q_i = p_i - n_i

wherein n_i represents the noise of the output image q at the i-th pixel.
In order to determine the linear coefficients between the output image and the guide image while minimizing the difference between the input image and the output image, the problem is converted into an optimization problem, expressed as:

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right]

Solving the optimization problem yields:

a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}

b_k = \bar{p}_k - a_k \mu_k

According to the local linear relationship between the filtered face image and the guide image:

\bar{a}_i = \frac{1}{|\omega|} \sum_{k : i \in \omega_k} a_k, \qquad \bar{b}_i = \frac{1}{|\omega|} \sum_{k : i \in \omega_k} b_k

q_i = \bar{a}_i I_i + \bar{b}_i
wherein I_i, p_i and q_i represent the gray values of the i-th pixel of the guide image I, the stable face image p and the filtered face image q in the local window; ω_k represents the local window centred on pixel k; a_k and b_k represent the first and second constant linear coefficients of the window ω_k; |ω| represents the total number of pixels in ω_k; and ε represents the regularization coefficient. During guided filtering, the local window slides over the image until the result converges, producing many coefficient values in the process; \bar{b}_k represents the mean of all second constant linear coefficients over the windows, μ_k represents the mean of the guide image I over the local window ω_k, σ_k² represents the variance of the guide image I over ω_k, and \bar{p}_k represents the mean gray value of the stable face image p over ω_k.
Human skin absorbs and reflects visible light. The camera's photosensitive sensor receives the light reflected from the skin, converts it into an electrical signal, and then into a digital signal through A/D conversion. Because the light passes through skin tissue before being reflected back to the sensor, the illumination is attenuated to some extent. The absorption of light by muscle, bone, veins and other connective tissue is essentially constant; however, because blood flow in the arteries is not uniform, the hemoglobin concentration changes periodically with the beating of the heart, so the absorption of light also changes periodically and thereby reflects heart rate information.
Specifically, the illumination reflection model considers an image to be composed of the illumination intensity, a blood static signal component and a blood dynamic signal component. Heart rate information resides in the blood dynamic signal component, and the illumination intensity is uncorrelated with both blood signal components; this embodiment therefore filters out the illumination intensity by multiplying with a dimension-reduction matrix, reducing the rPPG signal matrix to a low-dimensional rPPG signal matrix. The dimension-reduction matrix is expressed as:

[equation image in the original; not recoverable from the text]
specifically, ICA is used to process the low-dimensional rPPG signal matrix and separate the blood static signal component from the blood dynamic signal component, as follows:
s11, defining the low-dimensional rPPG signal matrix as S, and calculating the de-meaned low-dimensional rPPG signal matrix \tilde{S} by the formula:

\tilde{S} = S - E\{S\}

wherein E\{S\} represents the mean (expectation) of the low-dimensional rPPG signal matrix S;
s12, calculating a covariance matrix of the mean value removing low-dimensional rPPG signal matrix, and carrying out eigenvalue decomposition on the covariance matrix to obtain a whitening matrix;
specifically, the covariance matrix of the de-meaned low-dimensional rPPG signal matrix is calculated and eigendecomposed as:

Cov(S) = Q \Lambda Q^T

Cov(S) = S S^T = P \, Rov(S) \, P^T = E

\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)

wherein Q represents the orthogonal matrix composed of eigenvectors, Λ represents the diagonal matrix composed of the eigenvalues λ_i, Cov(S) represents the covariance matrix of the low-dimensional rPPG signal matrix S, P represents the switching matrix, Rov(S) represents the autocorrelation matrix of the low-dimensional rPPG signal matrix S, and E represents the identity matrix.
Specifically, the specific process of obtaining the whitening matrix is as follows:
the data is first centered, i.e. the observation vector minus its mean:
C=Cov(S)-E1
wherein E1 is the mean value of covariance matrix Cov (S) of low-dimensional rPPG signal matrix S, and C is the matrix after centering of low-dimensional rPPG signal matrix S;
whitening means making the observation vectors mutually uncorrelated, with unit variance, through a linear transformation. Let the matrix of the linear transformation be X; the whitening matrix V is then expressed as:

V = XC

so that the vectors in V are mutually uncorrelated and E\{VV^T\} = I, where X = E^{-1/2} U^T, E = \mathrm{diag}(d_1, d_2, \ldots, d_n), and d_i is the i-th eigenvalue of the correlation matrix R_X = E\{XX^T\}. If the signals extracted from the mixed signal by ICA are mutually independent and none of them follows a Gaussian distribution, the extracted signals are called the target signals.
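The centring-plus-whitening step — eigendecomposition of the covariance, then scaling by the inverse square roots of the eigenvalues — can be sketched compactly in NumPy; variable names here are illustrative:

```python
import numpy as np

def whiten(S):
    """Centre the rows of S, then whiten so the covariance becomes identity.

    S: (channels, samples) signal matrix. Returns V with V V^T / N = I.
    """
    Sc = S - S.mean(axis=1, keepdims=True)      # remove the mean E{S}
    cov = Sc @ Sc.T / Sc.shape[1]               # covariance matrix
    d, Q = np.linalg.eigh(cov)                  # cov = Q diag(d) Q^T
    return (Q / np.sqrt(d)).T @ Sc              # V = diag(d)^(-1/2) Q^T Sc
```

Whitening removes second-order correlations, leaving ICA only the task of rotating the components until they are statistically independent.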
let the blood static signal be Y. According to the basic design requirements of ICA, the target signals are mutually independent and do not follow a Gaussian distribution; then:

Y = W^{T1} V

wherein V represents the whitening matrix, W represents the coefficient matrix of the mixed-signal decomposition, and T1 represents the period length of the decomposed data.
The kurtosis of Y should deviate as far as possible from that of a Gaussian distribution: the further the kurtosis deviates from the Gaussian value, the greater the non-Gaussianity, and maximizing the non-Gaussianity yields the target signal Y.
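The non-Gaussianity criterion can be illustrated with excess kurtosis: a periodic, pulse-like component deviates strongly from the Gaussian value, while Gaussian noise does not. A hedged sketch (the patent does not prescribe this exact statistic):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

def most_non_gaussian(components):
    """Index of the component whose kurtosis deviates most from Gaussian."""
    return int(np.argmax([abs(excess_kurtosis(c)) for c in components]))
```

A pure sinusoid has excess kurtosis near -1.5, so a periodic pulse component stands out clearly against near-Gaussian noise components.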
S14, performing fast Fourier transform on the blood static and dynamic signal vector, wherein a component with a low peak value in the transformed blood static and dynamic signal vector is a blood static signal component;
s15, processing the blood dynamic signal component by adopting a band-pass filter, and taking the peak frequency point of the processed blood dynamic signal component as a heart rate measurement value at the current moment.
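Steps S14-S15 — band-limiting the spectrum and taking the peak frequency as the heart rate — can be sketched as follows. The band-pass here is a crude frequency-domain mask over a plausible heart-rate band (0.7-4 Hz, an assumed range, not taken from the patent):

```python
import numpy as np

def heart_rate_bpm(signal, fs, lo=0.7, hi=4.0):
    """Peak frequency of the signal inside [lo, hi] Hz, converted to BPM."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                         # drop the DC component
    spec = np.abs(np.fft.rfft(sig))                # magnitude spectrum
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)  # bin frequencies in Hz
    band = (freqs >= lo) & (freqs <= hi)           # crude band-pass mask
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

With a 30 fps camera and a 20 s window, the frequency resolution is 0.05 Hz, i.e. 3 BPM; longer windows sharpen the estimate at the cost of responsiveness.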
Specifically, considering that the measurement at a single moment is still disturbed by slight motions of the subject, such as head swings and shakes, and may therefore deviate, the heart rate value is gradually converged to an accurate value through Kalman filtering, on the premise that the subject's heart rate does not change abruptly.
The formulas for calculating the heart rate predicted value by Kalman filtering are:

k_n = \frac{P_{n-1}}{P_{n-1} + R}

E_n = E_{n-1} + k_n (M_n - x_{n-1})

P_n = (1 - k_n) P_{n-1} + Q

wherein P_n represents the error covariance, k_n represents the Kalman gain, R represents the measurement noise covariance, E_n represents the optimal estimate at time n, M_n represents the measured value, x_{n-1} represents the a priori estimate, and Q represents the process covariance.
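A scalar version of the recursion above, with an identity state model (the heart rate is assumed slowly varying, so the prior estimate is simply the previous posterior); the initial state x0, p0 and the R, Q values are illustrative, not from the patent:

```python
def kalman_heart_rate(measurements, x0=70.0, p0=1.0, r=4.0, q=0.01):
    """Scalar Kalman filter smoothing a stream of heart-rate measurements.

    measurements: per-frame heart-rate measurements M_n (BPM).
    Returns the sequence of optimal estimates E_n.
    """
    x, p = x0, p0
    estimates = []
    for m in measurements:
        p = p + q                  # predict: error covariance grows by Q
        k = p / (p + r)            # Kalman gain k_n
        x = x + k * (m - x)        # update with the measurement residual
        p = (1.0 - k) * p          # posterior error covariance P_n
        estimates.append(x)
    return estimates
```

With a steady stream of consistent measurements the estimate converges smoothly toward the true rate instead of jumping with each frame's noise, matching the behaviour shown in fig. 4.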
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "configured," "connected," "secured," "rotated," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intermediaries, or in communication with each other or in interaction with each other, unless explicitly defined otherwise, the meaning of the terms described above in this application will be understood by those of ordinary skill in the art in view of the specific circumstances.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A non-contact driver heart rate detection method, comprising the steps of:
s1, installing a camera in front of the head of a driver, and acquiring a face image video of the driver by using the camera;
s2, detecting a face rectangle of each frame of face image in the face image video by adopting a dlib face detector in OpenCV, and simultaneously positioning coordinates of 68 facial feature points of the face in the face rectangle;
s3, tracking the head movement condition of a driver through the displacement condition of 68 facial feature points in the face image video by adopting a KLT algorithm, and outputting a stable face image video;
s4, conducting guided filtering on each frame of stable face image in the stable face image video to obtain a filtered face image video;
s5, calculating average value vectors of four areas of the forehead, the left cheek, the right cheek and the nose of each frame of the filtered face image video, and carrying out weighted average on the average value vectors of the four areas to obtain rPPG signal intensity of each frame of the filtered face image;
s6, summarizing all the rPPG intensity signals to obtain an rPPG signal matrix, and performing dimension reduction processing on the rPPG signal matrix to obtain a low-dimension rPPG signal matrix;
s7, separating the low-dimensional rPPG signal matrix through ICA to obtain blood static and dynamic signal vectors, and performing a fast Fourier transform on the blood dynamic signal vector to obtain the heart rate measurement value at the current moment;
s8, calculating the heart rate predicted value at the previous moment through a Kalman filter, and obtaining the heart rate predicted value at the current moment according to the heart rate measured value at the current moment and the heart rate predicted value at the previous moment.
2. The non-contact driver heart rate detection method according to claim 1, wherein in step S4 each frame of stable face image in the stable face image video is guided-filtered using a guide image to obtain a filtered face image, and the local linear relationship between the guide image and the filtered face image is expressed as:

q_i = a_k I_i + b_k

In order to determine the linear coefficients between the guide image and the filtered face image while minimizing the difference between the stable face image and the filtered face image, the guided filtering process is converted into an optimization problem, expressed as:

E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \epsilon a_k^2 \right]

Solving the optimization problem yields:

a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}

b_k = \bar{p}_k - a_k \mu_k

wherein I_i, p_i and q_i represent the gray values of the i-th pixel of the guide image I, the stable face image p and the filtered face image q in the local window; \omega_k represents the local window centred on pixel k; a_k and b_k represent the first and second constant linear coefficients of the window \omega_k; |\omega| represents the total number of pixels in \omega_k; \epsilon represents the regularization coefficient; \bar{b}_k represents the mean of all second constant linear coefficients over the windows; \mu_k and \sigma_k^2 represent the mean and variance of the guide image I calculated over all pixels in \omega_k; and \bar{p}_k represents the mean gray value of the stable face image p over all pixels in \omega_k.
3. The non-contact driver heart rate detection method according to claim 2, wherein the filtered face image is obtained from the linear coefficients solved in the optimization problem, expressed as:

q_i = \frac{1}{|\omega|} \sum_{k:\, i \in \omega_k} (a_k I_i + b_k) = \bar{a}_i I_i + \bar{b}_i

wherein \bar{a}_i and \bar{b}_i represent the averages of the linear coefficients a_k and b_k over all local windows \omega_k that contain pixel i.
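As a rough illustration of claims 2 and 3, the box-filter form of the guided filter can be sketched as follows. The window radius `r` and regularization `eps` are illustrative values, not taken from the patent, and the helper `box_mean` is a hypothetical name for the per-window averaging:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, via separable uniform convolution."""
    k = 2 * r + 1
    kernel = np.ones(k) / k
    pad = np.pad(x, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="valid"), 0, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="valid"), 1, tmp)

def guided_filter(I, p, r=4, eps=0.01):
    """Guided filtering of p with guide I: q_i = mean(a_k)*I_i + mean(b_k)."""
    mu = box_mean(I, r)                        # mean of guide I in each window
    sigma2 = box_mean(I * I, r) - mu ** 2      # variance of I in each window
    p_bar = box_mean(p, r)                     # mean of input p in each window
    a = (box_mean(I * p, r) - mu * p_bar) / (sigma2 + eps)  # coefficient a_k
    b = p_bar - a * mu                         # coefficient b_k
    return box_mean(a, r) * I + box_mean(b, r) # average coefficients over windows
```

On a flat image the filter returns the image unchanged, since the variance term drives a_k to zero and b_k to the local mean.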
4. The non-contact driver heart rate detection method according to claim 1, wherein in step S6, the rPPG signal matrix is multiplied by a dimension-reduction matrix to obtain the low-dimensional rPPG signal matrix; the dimension-reduction matrix is expressed as:

[formula image not reproduced in the source text]

where T represents the transpose of the matrix.
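The patent's specific dimension-reduction matrix is not reproduced in the source text. Purely as an assumed stand-in, one common choice is a PCA-style projection onto the top principal directions of the channel covariance; the sketch below is that assumption, not the patented matrix:

```python
import numpy as np

def reduce_dimension(S, d=3):
    """Project a (channels x frames) rPPG signal matrix onto its top-d
    principal directions, yielding a (d x frames) low-dimensional matrix.

    NOTE: the projection matrix W here (top-d eigenvectors of the channel
    covariance) is an illustrative assumption, not the patent's matrix.
    """
    Sc = S - S.mean(axis=1, keepdims=True)   # remove per-channel mean
    cov = Sc @ Sc.T / Sc.shape[1]            # channel covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :d]                 # top-d eigenvectors as columns
    return W.T @ Sc                          # low-dimensional signal matrix
```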
5. The non-contact driver heart rate detection method according to claim 1, wherein step S7 specifically comprises:

S11, defining the low-dimensional rPPG signal matrix as S, and calculating the de-meaned low-dimensional rPPG signal matrix \bar{S}, with the calculation formula:

\bar{S} = S - E\{S\}

wherein E\{S\} represents the mean of the low-dimensional rPPG signal matrix S;

S12, calculating the covariance matrix of the de-meaned low-dimensional rPPG signal matrix, and performing eigenvalue decomposition on the covariance matrix to obtain a whitening matrix;

S13, processing the whitened matrix with ICA to obtain the blood static and dynamic signal vectors, which comprise two distinct components: a blood static signal component and a blood dynamic signal component;

S14, performing a fast Fourier transform on the separated signal vectors; the component with the lower spectral peak after the transform is the blood static signal component, and the remaining component is the blood dynamic signal component;

S15, processing the blood dynamic signal component with a band-pass filter, and taking the peak frequency point of the processed blood dynamic signal component as the heart rate measurement at the current moment.
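Steps S14 and S15 (spectral peak of the blood dynamic component) can be sketched as follows. The ICA separation itself is omitted, and the band limits 0.7 to 4 Hz (roughly 42 to 240 BPM) are assumed values covering typical heart rates, not figures from the patent:

```python
import numpy as np

def heart_rate_from_component(x, fs, lo=0.7, hi=4.0):
    """Estimate heart rate (BPM) from a separated blood-dynamic component.

    Applies a crude band-pass by zeroing FFT bins outside [lo, hi] Hz,
    then takes the peak-frequency bin as the heart-rate measurement.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)   # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))             # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)   # bin frequencies in Hz
    spectrum = np.where((freqs >= lo) & (freqs <= hi), spectrum, 0.0)
    return freqs[np.argmax(spectrum)] * 60.0      # peak frequency -> BPM
```

For a 20 s clip at 30 fps containing a clean 1.2 Hz pulse component, this returns 72 BPM.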
6. The non-contact driver heart rate detection method according to claim 5, wherein in step S12, the covariance matrix of the de-meaned low-dimensional rPPG signal matrix is decomposed and whitened as:

Cov(S) = Q \Lambda Q^T

P = \Lambda^{-1/2} Q^T

Cov(PS) = P\, Cov(S)\, P^T = E

wherein Q represents the orthogonal matrix composed of eigenvectors, \Lambda represents the diagonal matrix composed of eigenvalues, Cov(S) represents the covariance matrix of the de-meaned low-dimensional rPPG signal matrix S, P represents the whitening matrix, and E represents the identity matrix.
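The eigendecomposition whitening of step S12 can be sketched directly from the formulas above; with P = Λ^{-1/2} Q^T, the covariance of the whitened matrix is the identity:

```python
import numpy as np

def whiten(S):
    """Whiten a signal matrix via eigendecomposition of its covariance.

    Cov(S) = Q Lambda Q^T; the whitening matrix is P = Lambda^{-1/2} Q^T,
    so that Cov(P S) = P Cov(S) P^T = identity.
    """
    Sc = S - S.mean(axis=1, keepdims=True)      # de-mean each channel
    cov = Sc @ Sc.T / Sc.shape[1]               # covariance matrix Cov(S)
    vals, Q = np.linalg.eigh(cov)               # eigenvalues and eigenvectors
    P = np.diag(1.0 / np.sqrt(vals)) @ Q.T      # whitening matrix
    return P @ Sc, P
```

The whitened output is the usual input to an ICA stage (step S13), since ICA then only needs to find a rotation.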
7. The non-contact driver heart rate detection method according to claim 1, wherein the formulas for calculating the heart rate with the Kalman filter are:

k_n = \frac{P_{n-1}}{P_{n-1} + R}

E_n = E_{n-1} + k_n \cdot (M_n - x_{n-1})

P_n = (1 - k_n) \cdot P_{n-1} + Q

wherein P_n represents the error covariance, k_n represents the Kalman gain, R represents the measurement noise covariance, E_n represents the optimal estimate at time n, M_n represents the heart rate measurement at time n, x_{n-1} represents the a-priori estimate, and Q represents the process noise covariance.
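A minimal scalar sketch of the Kalman update in claim 7, with the previous optimal estimate used as the a-priori prediction; the noise covariances R and Q are illustrative values, not taken from the patent:

```python
def kalman_update(x_prev, P_prev, M_n, R=4.0, Q=0.5):
    """One scalar Kalman step fusing predicted and measured heart rate.

    x_prev : previous optimal estimate, used as the a-priori prediction
    P_prev : previous error covariance
    M_n    : heart-rate measurement at the current moment (FFT peak)
    R, Q   : measurement / process noise covariances (assumed values)
    """
    k = P_prev / (P_prev + R)          # Kalman gain k_n
    x = x_prev + k * (M_n - x_prev)    # optimal estimate E_n
    P = (1.0 - k) * P_prev + Q         # updated error covariance P_n
    return x, P
```

For example, a prediction of 70 BPM with P_prev = 1 and a measurement of 80 BPM gives a gain of 0.2 and a fused estimate of 72 BPM.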
CN202211623118.0A 2022-12-16 2022-12-16 Non-contact type driver heart rate detection method Pending CN116012820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211623118.0A CN116012820A (en) 2022-12-16 2022-12-16 Non-contact type driver heart rate detection method

Publications (1)

Publication Number Publication Date
CN116012820A true CN116012820A (en) 2023-04-25

Family

ID=86020224


Country Status (1)

Country Link
CN (1) CN116012820A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758066A (en) * 2023-08-14 2023-09-15 中国科学院长春光学精密机械与物理研究所 Method, equipment and medium for non-contact heart rate measurement
CN116758066B (en) * 2023-08-14 2023-11-14 中国科学院长春光学精密机械与物理研究所 Method, equipment and medium for non-contact heart rate measurement


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination