CN113657345B - Non-contact heart rate variability feature extraction method based on realistic application scene - Google Patents

Non-contact heart rate variability feature extraction method based on realistic application scene

Info

Publication number
CN113657345B
Authority
CN
China
Prior art keywords
face
heart rate
image
point
extracting
Prior art date
Legal status
Active
Application number
CN202111011120.8A
Other languages
Chinese (zh)
Other versions
CN113657345A
Inventor
戴敏
王存栋
许阔达
Current Assignee
Tianjin University of Technology
Original Assignee
Tianjin University of Technology
Priority date
Filing date
Publication date
Application filed by Tianjin University of Technology
Priority to CN202111011120.8A
Publication of CN113657345A
Application granted
Publication of CN113657345B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 Feature extraction
    • G06F 2218/10 Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 Preprocessing
    • G06F 2218/04 Denoising

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

The invention discloses a non-contact heart rate variability (HRV) feature extraction method for realistic application scenarios, comprising the following steps: 1) collecting images containing a human face; 2) acquiring the face region of each image using a strategy that combines face detection and tracking; 3) Euler amplification to enhance skin-color variation; 4) color-space conversion and channel separation; 5) adaptive-threshold skin detection; 6) source-signal extraction; 7) EEMD denoising; 8) five-point sliding smoothing filtering; 9) peak-point detection and correction of abnormal peak points; 10) RR-interval calculation and extraction of time-domain, frequency-domain, and nonlinear HRV features. The method integrates a strategy for increasing feature-extraction speed and strategies for overcoming the influence of shaking and illumination into a single extraction method, so that HRV features can be extracted rapidly in a non-contact manner in a practical application environment while remaining consistent with contact-based extraction results.

Description

Non-contact heart rate variability feature extraction method based on realistic application scene
Technical Field
The invention relates to the technical field of computer vision and signal processing, in particular to a non-contact heart rate variability feature extraction method.
Background
Heart Rate Variability (HRV) is an important index for assessing autonomic nervous activity and the intrinsic dynamic mechanisms of the heart. The phenomenon of heart rate variability arises from the regulation of heart rate by the autonomic nervous system, with periodic changes in heart rate caused by the interaction between the sympathetic and parasympathetic nervous systems. Heart rate variability is widely used in research on heart disease, mental illness, emotion recognition, and the like.
Heart rate variability features are mainly extracted from electrocardiogram (ECG) signals acquired by contact methods. In a real application scenario, contact acquisition requires the subject to actively cooperate and wear the acquisition equipment, which brings considerable inconvenience in practice. In particular, when heart rate variability features are used for emotion recognition, contact acquisition introduces artificial contact factors during the acquisition process that themselves influence the extracted heart rate variability features of the subject.
Existing non-contact heart rate detection approaches mainly include laser Doppler, microwave or millimeter-wave Doppler radar, and thermal imaging. However, these technologies generally require expensive equipment, and long-term use may affect the human body, so they are not suitable for wide practical deployment. In recent years, imaging photoplethysmography (IPPG), developed from photoplethysmography (PPG), has enabled relatively accurate acquisition of human heart rate information, which also makes non-contact heart rate variability measurement based on this principle possible.
The general process of extracting heart rate variability features through IPPG is as follows: a camera first captures video containing a human face; face detection is performed on the video to delimit the face region; a specific sub-region of the face is selected as the region of interest (ROI); the ROI is separated into channels in a color space; the pixel mean of one or more channels is calculated for each frame; and a heart rate value is obtained from the frame-to-frame differences of these pixel means using signal processing methods. In a real application scenario, however, directly applying this process to HRV feature extraction has two problems. First, face detection must be performed frame by frame, so the extraction speed is slow and cannot meet practical requirements. Second, shaking of the subject and environmental factors such as varying illumination affect the accuracy of heart rate variability feature extraction, and the above process cannot handle these factors well.
Disclosure of Invention
To solve these problems, the invention provides a non-contact heart rate variability feature extraction method for realistic application scenarios whose results are consistent with contact-based extraction. The method addresses two issues: the low speed of non-contact heart rate variability feature extraction, and the large influence of varying illumination conditions and shaking on extraction accuracy. It is based on the IPPG principle and integrates image processing, signal processing, and feature extraction techniques.
To increase the extraction speed of heart rate variability features, the invention acquires the face region by combining face detection with face tracking: face detection first locates the face position, face tracking then follows it, and during tracking the face position is re-located by face detection at a fixed time interval to prevent tracking drift. The face offset produced by shaking is continuously corrected to prevent incomplete extraction of the face region, so the speed is increased without sacrificing the accuracy of face-region acquisition. In addition, image acquisition from the camera and image processing are placed in two threads that communicate through a shared queue and run simultaneously, further increasing the extraction speed.
To reduce the influence of varying illumination conditions on the extraction result, the invention combines channel separation, adaptive skin detection, and EEMD filtering. The face-region image is first converted into the LUV color space, the L channel reflecting brightness variation is separated out, and the U channel reflecting chromaticity variation is retained. The face-region image is also converted into the YCrCb color space, where skin detection is performed with a threshold determined adaptively from the luminance component Y and the Cb component under different illumination conditions, yielding the skin portion of the face-region image. The original heart rate signal is obtained by combining the skin-detection result with the U-channel image, which initially reduces the influence of illumination changes; the EEMD method then denoises the original heart rate signal to further reduce this influence.
To reduce the influence of the subject's shaking on the extraction result, the invention provides a peak-point extraction strategy that corrects peak points affected by shaking. Signal peak points are first calculated; each extracted peak point is then judged to be normal or abnormal according to whether the slope between adjacent peak points and the distance between peak points lie within threshold ranges. An abnormal peak point is corrected by averaging over the positions of the normal peak points preceding it, yielding relatively accurate signal peak points and further reducing the influence of shaking on feature extraction.
To achieve the above purpose, the overall extraction process provided by the invention is as follows:
step one, acquiring an image containing a human face:
the face images are collected by the testee facing the camera according to the fixed frame rate of the camera of 30FPS, and the relatively accurate heart rate variability characteristics can be extracted only by continuously collecting the face images of the testee for 30 s.
Storage of the collected face images is performed in a sub-thread; that is, image acquisition and image reading run in two threads simultaneously, and the two threads share one image queue.
Step two, acquiring an image face area:
the face area is extracted by using a libfacedetection open source face detection library to perform face detection and a KLT (Kandade-Lucas-Tomasi) tracking method, the face detection obtains a face position and then tracks the face position, and meanwhile, the face position is re-used for positioning according to the 10s fixed time interval in the tracking process and then tracking is continued.
During face tracking, a minimum circumscribed rectangle is determined from the four vertices of the tracked face region, and the face affected by shaking is corrected through the center point and deflection angle of this rectangle, preventing incomplete extraction of the face region.
Step three, euler amplification:
and (3) carrying out skin color change enhancement on the face region by using an Euler amplification method, and enhancing information of a part related to physiological signals in the face image.
Step four, channel separation:
the face image obtained after Euler amplification in the third step is converted from an RGB color space to a LUV color space, so that an L channel reflecting brightness change is separated, and a U channel reflecting chromaticity change is used for extracting an original heart rate signal.
Step five, self-adaptive threshold skin detection:
and providing an adaptive threshold value for skin detection, adaptively determining the threshold value according to the luminance component Y and the Cb component under different illumination conditions, wherein the range of the threshold value is a skin pixel, the pixel point is set to 255 white, and the rest is set to 0 black. And performing AND operation on the skin detection image and the U channel, so as to remove the non-skin area and obtain the U channel face image with the non-skin area removed.
Step six, extracting source signals:
and in the process of extracting the HRV features of one round, calculating the pixel mean value of the U-channel face image of which the non-skin area is completely removed, which is obtained in the step five, obtaining a series of pixel mean value points, and then carrying out standardized calculation on the series of pixel points to form the original heart rate signal.
Step seven, EEMD denoising:
the EEMD (Ensemble Empirical Mode Decomposition) method is used for reducing noise of the original heart rate signal to further reduce the influence of different illumination conditions.
Step eight, five-point sliding:
removing high-frequency noise still contained in signal by five-point sliding smoothing filtering method
Step nine, extracting and correcting peak points:
and calculating signal peak points, and finding out and correcting the peak points affected by shaking by setting the distance between the two peak points and the threshold range of the slope, so as to obtain relatively accurate signal peak points.
Step ten, HRV feature extraction:
and D, calculating RR intervals and R point time by using the corrected peak points obtained in the step nine so as to extract HRV features, and extracting 27 HRV features in total from time domain features, frequency domain features and nonlinear features.
Wherein the time domain features include: Max, Min, Mean, Median, SDNN, RMSSD, hr-mean, hr-sd, NN40, pNN40, and HRVti;
the frequency domain features are obtained by spectral analysis using the Lomb-Scargle periodogram and include: aVLF, aLF, aHF, aTotal, pVLF, pLF, pHF, nLF, nHF, LFHF, peakVLF, peakLF, and peakHF;
and the nonlinear features include: SD1, SD2, and SD1/SD2.
The invention has the advantages and positive effects that:
the invention provides a non-contact heart rate variability feature extraction method based on a real application scene, which is specifically characterized in that a strategy for improving the extraction speed of the non-contact heart rate variability feature and a strategy for overcoming different influences of shaking and illumination conditions and improving the extraction accuracy of the non-contact heart rate variability feature are integrated into the extraction method. Under the actual application scene, the invention can realize the actual application functions of automatic switching, automatic detection, automatic calculation of HRV characteristics and the like of testers, and has strong actual application significance.
Drawings
Fig. 1 is a flow chart of the technical scheme of the invention.
Fig. 2 is a flow chart of the strategy for increasing feature-extraction speed by acquiring the face region through combined face detection and face tracking.
FIG. 3 is a flow chart of the strategy for reducing the influence of different illumination conditions by combining channel separation, adaptive skin detection, and EEMD filtering.
FIG. 4 is a flow chart of the peak-point extraction strategy that corrects peak points affected by shaking.
Fig. 5 shows the rotation coordinate system used for face correction.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention provides a non-contact heart rate variability feature extraction method that can rapidly and accurately extract heart rate variability features in a real application scenario. The extracted features can be used for emotion recognition to reflect the psychological stress level of a subject, for example to assist customs staff in screening suspicious passers-by.
Referring to fig. 2, the invention provides a strategy for increasing feature-extraction speed by acquiring the face region through combined face detection and face tracking. Face detection obtains the face position, which is then tracked; during tracking, the face position is re-located by face detection at a fixed time interval to prevent tracking drift, and the face offset produced by shaking is continuously corrected to prevent incomplete extraction of the face region, so the speed is increased without affecting the accuracy of face-region acquisition.
Referring to fig. 3, the present invention proposes a strategy for reducing the influence of different illumination conditions by combining channel separation, adaptive skin detection, and EEMD filtering. The face-region image is first converted into the LUV color space, and the L channel reflecting brightness change is separated out to obtain the U-channel image reflecting chromaticity change. The face-region image is also converted into the YCrCb color space, where skin detection is performed with a threshold determined adaptively from the luminance component Y and the Cb component under different illumination conditions, yielding the skin portion of the face-region image. The original heart rate signal is obtained by combining the skin-detection result with the U-channel image, which initially reduces the influence of different illumination conditions; the source signal is then denoised with the EEMD method to further reduce this influence.
Referring to fig. 4, the present invention proposes a peak-point extraction strategy that corrects peak points affected by shaking. Signal peak points are first calculated; each extracted peak point is then judged to be normal or abnormal according to whether the slope between adjacent peak points and the distance between peak points lie within threshold ranges, and an abnormal peak point is corrected by averaging the positions of the normal peak points preceding it. This yields relatively accurate signal peak points and further reduces the influence of shaking on feature extraction.
Referring to fig. 1, the non-contact heart rate variability feature extraction method proposed by the invention, which combines the above strategies, consists of four main parts: data acquisition, image processing, signal processing, and HRV feature extraction. These can be subdivided into 10 steps: collecting images containing a human face, acquiring the image face region, Euler amplification, channel separation, adaptive-threshold skin detection, source-signal extraction, EEMD denoising, five-point sliding smoothing, peak-point extraction and correction, and HRV feature extraction.
The method comprises the following specific steps:
step one, acquiring an image containing a human face:
the tested person acquires face images facing the camera according to the fixed frame rate of the camera, and the frame rate of a common USB camera on the market is 30FPS at present, so the invention assumes that the face images are acquired according to 30 FPS. The testee can extract relatively accurate heart rate variability features only by continuously collecting face images for 30 seconds.
Because the invention targets real application scenarios, it is not acceptable to record a face video in advance, as in the general flow, and then extract heart rate variability features from it afterwards. Therefore, three threads run simultaneously when the method is applied: one thread collects face images, one thread processes the face images, and one thread performs signal processing and feature extraction.
Because acquisition and processing are split across threads (one thread collects face images and another processes them), a shared space is needed so that one thread stores the collected face images in order while the other reads them in order for processing. Since these store and read operations must happen in sequence and must be fast, the shared space is most suitably implemented with a queue data structure.
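A minimal Python sketch of this producer/consumer arrangement is given below for illustration; the camera index, queue capacity, frame counts and sentinel handling are assumptions of the sketch rather than values fixed by the invention:

import queue
import threading

import cv2

frame_queue = queue.Queue(maxsize=256)      # shared space between the two threads
FPS, DURATION_S = 30, 30                    # 30 FPS for 30 s, as described above

def capture_thread(cam_index=0):
    # Sub-thread: grab frames from the camera and store them in the queue in order.
    cap = cv2.VideoCapture(cam_index)
    for _ in range(FPS * DURATION_S):
        ok, frame = cap.read()
        if not ok:
            break
        frame_queue.put(frame)              # blocks if the queue is full
    cap.release()
    frame_queue.put(None)                   # sentinel: no more frames

def processing_thread(process_frame):
    # Processing thread: read frames from the shared queue in order.
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        process_frame(frame)

t = threading.Thread(target=capture_thread, daemon=True)
t.start()
processing_thread(lambda frame: None)       # replace the lambda with the image pipeline
t.join()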
Step two, acquiring an image face area:
the traditional non-contact heart rate variability feature extraction flow has a relatively slow extraction speed. The traditional non-contact feature extraction can perform face detection once on each frame of image, and the mode can stably extract a face area, but has low speed and low efficiency. Facial region extraction is critical in the general flow of HRV feature extraction, and the facial region extraction is to process images, and compared with signal processing, emotion classification and other pure numerical calculation, the facial region extraction itself takes most of the time for program operation, so that the overall operation speed of the system can be greatly improved in the stage of optimization.
To address this problem, the invention extracts the face region by combining face detection, using the libfacedetection open-source face detection library, with KLT (Kanade-Lucas-Tomasi) tracking. This approach is fast, extracts the face region stably, and does not interfere with other functions (such as automatic switching between subjects, a key function that must be considered in a practical application scenario). In this scheme, face detection can be regarded as providing the tracking template for face tracking; to prevent the tracked position from drifting away from the true face position, face detection is also repeated at fixed time intervals during tracking to re-locate the face position for further tracking.
A practical application scenario must also consider shaking: with face detection plus face tracking, shaking can cause the face region to be extracted incompletely, so face correction is required. Traditional face correction extracts facial feature points, locates the pupils, and corrects the face using the deflection angle of the line connecting the two pupils. Although this effectively corrects the face position, it is slow, and placing it in the non-contact heart rate variability extraction flow would severely reduce the extraction speed. The invention therefore determines a minimum circumscribed rectangle from the four vertices of the face region obtained after face tracking and corrects the face using the rectangle's center point and deflection angle. Referring to fig. 5, the deflection angle of the rectangle is defined in a coordinate system: when the rectangle deflects to the right, the angle lies in the first quadrant and is measured from 0 degrees on the positive x-axis; when the rectangle deflects to the left, the angle lies in the second quadrant and is measured from 0 degrees on the positive y-axis. When the deflection angle is between 0 and 45 degrees, the rotation is clockwise by the current angle; when it is between 45 and 90 degrees, the rotation is counterclockwise by 90 degrees minus the angle. An affine transformation matrix is constructed from the rectangle's center point and deflection angle, the whole image is transformed by this matrix, and the image is then re-cropped around the rectangle's center point using the rectangle's length and width to obtain the corrected face.
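For illustration, the rectangle-based correction can be sketched with OpenCV as follows; the tracked corner points are assumed to come from the KLT tracker, and because the angle convention of cv2.minAreaRect differs between OpenCV versions, the clockwise/counter-clockwise mapping below follows the rule described above and should be checked against the installed version:

import cv2
import numpy as np

def correct_face(image, corners):
    # corners: 4x2 array of the face-region vertices returned by tracking
    (cx, cy), (w, h), angle = cv2.minAreaRect(np.float32(corners))
    # Map the deflection angle to a rotation as described above:
    # 0-45 degrees  -> rotate clockwise by the angle itself,
    # 45-90 degrees -> rotate counter-clockwise by (90 - angle).
    if angle <= 45:
        rot = -angle            # negative angle = clockwise in OpenCV
    else:
        rot = 90 - angle        # positive angle = counter-clockwise
    # Affine (rotation) matrix about the rectangle centre, applied to the whole image
    M = cv2.getRotationMatrix2D((cx, cy), rot, 1.0)
    rotated = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
    # Re-crop around the rectangle centre using its width and height
    x0, y0 = int(cx - w / 2), int(cy - h / 2)
    return rotated[max(y0, 0):int(y0 + h), max(x0, 0):int(x0 + w)]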
Step three, euler amplification:
according to the invention, the Euler amplification method is applied to enhance the skin color change of the face region, and the information of the part related to the physiological signal in the face image is enhanced.
The number of spatial decomposition layers in the Euler amplification method is 6, the temporal filtering frequency band is 1-2 Hz, and the image amplification factor is 200.
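A simplified, single-band sketch of this step is shown below: each frame is spatially decomposed by repeated down-sampling, an ideal temporal band-pass of 1-2 Hz is applied along the time axis, and the filtered band is amplified by a factor of 200 and added back. This is an approximation of full Eulerian video magnification under the stated parameters, not the exact implementation of the invention:

import cv2
import numpy as np

def euler_magnify(frames, fps=30.0, levels=6, f_lo=1.0, f_hi=2.0, alpha=200.0):
    # frames: list of equally sized BGR face images covering the whole recording
    low = []
    for f in frames:
        g = f.astype(np.float32)
        for _ in range(levels):          # spatial decomposition: 6 pyramid levels
            g = cv2.pyrDown(g)
        low.append(g)
    low = np.stack(low)                  # shape (T, h, w, 3)

    # Ideal temporal band-pass (1-2 Hz) along the time axis via the FFT
    freqs = np.fft.rfftfreq(low.shape[0], d=1.0 / fps)
    spec = np.fft.rfft(low, axis=0)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0
    band = np.fft.irfft(spec, n=low.shape[0], axis=0)

    out = []
    for i, f in enumerate(frames):
        up = (band[i] * alpha).astype(np.float32)   # amplify the filtered variation
        for _ in range(levels):
            up = cv2.pyrUp(up)
        up = cv2.resize(up, (f.shape[1], f.shape[0]))
        out.append(np.clip(f.astype(np.float32) + up, 0, 255).astype(np.uint8))
    return out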
Step four, channel separation:
in order to reduce the influence of different illumination conditions on HRV feature extraction, the face image obtained through Euler amplification in the third step is converted from an RGB color space to a LUV color space, so that an L channel reflecting brightness change is separated, and a U channel reflecting chromaticity change is used for extracting an original heart rate signal.
Step five, self-adaptive threshold skin detection:
because the face area image obtained through face detection and face tracking also has face parts of non-skin areas, the parts affect the accuracy of HRV feature extraction, so that the face non-skin areas need to be screened out, and the skin detection effect of a single threshold is poor due to different illumination conditions in a real application scene. And performing AND operation on the skin detection image and the U channel, so as to remove the non-skin area and obtain the U channel face image with the non-skin area removed.
The dynamic configuration rules are defined in the YCrCb color space as follows:
if (Y > 128): θ3 = 6; θ4 = -8;
if (Y ≤ 128): θ1 = 6; θ2 = 12;
A pixel is a skin pixel if its Cr value satisfies the following conditions:
Cr ≥ -2(Cb + 24); Cr ≥ -(Cb + 17);
Cr ≥ -4(Cb + 32); Cr ≥ 2.5(Cb + θ1);
Cr ≥ θ3; Cr ≥ 0.5(θ4 - Cb);
wherein Y is the luminance component, Cb is the blue chrominance component, Cr is the red chrominance component, and θ1 to θ4 are intermediate variables.
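A sketch of these rules is given below. Two points are assumptions of the sketch rather than statements of the patent: the Cb and Cr values are treated as zero-centred (OpenCV stores them offset by 128), and because the text specifies θ3 and θ4 only for Y > 128 and θ1 only for Y ≤ 128, the listed values are reused for all Y, which makes the thresholds constant here:

import cv2
import numpy as np

def adaptive_skin_mask(face_bgr):
    # Returns a per-pixel mask: 255 for skin pixels, 0 for non-skin pixels.
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    Cr = ycrcb[..., 1] - 128.0     # assumption: the rules use zero-centred chroma
    Cb = ycrcb[..., 2] - 128.0
    theta1, theta3, theta4 = 6.0, 6.0, -8.0   # listed values, reused for all Y (assumption)

    skin = (
        (Cr >= -2 * (Cb + 24)) & (Cr >= -(Cb + 17)) &
        (Cr >= -4 * (Cb + 32)) & (Cr >= 2.5 * (Cb + theta1)) &
        (Cr >= theta3) & (Cr >= 0.5 * (theta4 - Cb))
    )
    return np.where(skin, 255, 0).astype(np.uint8)

def remove_non_skin(u_channel, skin_mask):
    # AND operation between the skin mask and the U channel: non-skin pixels become 0.
    return cv2.bitwise_and(u_channel, u_channel, mask=skin_mask)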
Step six, extracting source signals:
in the invention, a series of pixel mean points are obtained by calculating the pixel mean value of the U-channel face image which is obtained in the fifth step and completely removes the non-skin area in a round of HRV feature extraction process, and then the series of pixel mean points are subjected to standardization processing to form an original heart rate signal.
Step seven, EEMD denoising:
under a realistic application scene, different illumination conditions have great influence on the extraction of the HRV features, and experimental results show that the lower the illumination is, the larger the extracted HRV feature error is. In order to reduce errors caused by different illumination conditions on HRV extraction, the method adopts a EEMD (Ensemble Empirical Mode Decomposition) method to perform noise reduction treatment. The EEMD method is applied to adaptively obtain IMF components with different resolutions under each scale, instantaneous frequency is calculated through Hilbert transformation, the IMF components with noise dominance and the IMF components with signal dominance are distinguished according to the instantaneous frequency, the IMF components with noise dominance are abandoned, and the reconstructed signals of the IMF components with signal dominance are reserved, so that the influence on HRV feature extraction under different illumination conditions is reduced.
Step eight, five-point sliding:
the five-point moving average filtering belongs to low-pass filtering, and can effectively remove high-frequency noise still contained in the signal. And (3) filtering the signal subjected to EEMD filtering in the step seven by applying five-point moving average filtering to enable the signal to be smoother.
The i-th new data point of the five-point moving average is calculated as:
y(i) = [f(i-2) + f(i-1) + f(i) + f(i+1) + f(i+2)]/5, (i = 3, 4, …, N-2)
where N is the signal length, f(j) is the signal value within the five-point sliding window, and y(i) is the new signal value determined by the five-point moving average.
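A direct implementation of the five-point moving average is shown below; the patent does not specify how the first and last two samples are handled, so leaving them unchanged is an assumption:

import numpy as np

def five_point_smooth(signal):
    s = np.asarray(signal, dtype=np.float64)
    y = s.copy()                             # the first/last two samples are kept as-is
    for i in range(2, len(s) - 2):
        y[i] = s[i - 2:i + 3].mean()         # mean of the five-point sliding window
    return y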
Step nine, extracting and correcting peak points:
when the head of a tested person shakes, the heart rate variability curve shakes, so that the peak value point of the heart rate variability curve is extracted inaccurately. Therefore, the invention provides an extraction peak point strategy capable of correcting the peak point affected by shaking, which comprises the steps of calculating signal peak points for the smooth signals obtained after the step eight, judging whether the extraction of the peak points is affected by shaking or not in a threshold range by the slope between adjacent peak points and the distance between the peak points, correcting the abnormal peak points by averaging the positions of normal peak points before the current abnormal peak points, and correcting all the abnormal peak points, thereby obtaining relatively accurate signal peak points.
The formula for determining whether the slope between adjacent peak points is within the threshold value range in the invention is as follows:
wherein h_i represents the height of the i-th peak point (i = 1, 2, …, n), t_i represents the time corresponding to the i-th peak point, and if the formula is not satisfied, the i-th peak point is an abnormal peak point.
The formula for determining whether the distance between adjacent peak points is within the threshold value range in the invention is as follows:
60/(HR+14) ≤ t_i - t_(i-1) ≤ 60/(HR-14), (i = 2, 3, …, n)
wherein t_i represents the time corresponding to the i-th peak point (i = 1, 2, …, n) and HR represents the average heart rate; if the formula is not satisfied, the i-th peak point is an abnormal peak point. The calculation formula of the average heart rate HR is as follows:
HR = 60 × count / t_all
wherein t_all represents the total duration of the detection, and count represents the total number of peak points detected.
For the detected abnormal peak point, the calculation formula for correcting the abnormal peak point is as follows:
t_F_new = t_(F-1) + [(t_(F-1) - t_(F-2)) + … + (t_2 - t_1)]/(F - 2)
wherein t_F_new is the corrected result, and t_(F-1), …, t_1 are the normal peak points (or already corrected peak points) preceding the abnormal peak point to be corrected.
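A sketch of the peak-detection and correction logic using SciPy is shown below. Only the inter-peak-distance criterion is reproduced, because numeric values for the slope thresholds are not given; treating the first two detected peaks as normal is also an assumption of the sketch:

import numpy as np
from scipy.signal import find_peaks

def detect_and_correct_peaks(signal, fs=30.0):
    peaks, _ = find_peaks(np.asarray(signal))
    t = peaks / fs                                   # peak times in seconds
    hr = 60.0 * len(t) / (len(signal) / fs)          # average heart rate HR in bpm
    lo, hi = 60.0 / (hr + 14), 60.0 / (hr - 14)      # admissible RR-interval range

    corrected = list(t[:2])                          # assume the first two peaks are normal
    for ti in t[2:]:
        if lo <= ti - corrected[-1] <= hi:
            corrected.append(ti)                     # normal peak point
        else:
            # Abnormal peak point: previous peak time plus the mean of all
            # preceding RR intervals (the correction formula given above).
            corrected.append(corrected[-1] + np.mean(np.diff(corrected)))
    return np.asarray(corrected)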
Step ten, HRV feature extraction:
and D, calculating RR intervals and R point time by using the corrected peak points obtained in the step nine so as to extract HRV features, and extracting 27 HRV features in total from time domain features, frequency domain features and nonlinear features.
Wherein the time domain features include: Max, Min, Mean, Median, SDNN, RMSSD, hr-mean, hr-sd, NN40, pNN40, and HRVti;
the frequency domain features are obtained by spectral analysis using the Lomb-Scargle periodogram and include: aVLF, aLF, aHF, aTotal, pVLF, pLF, pHF, nLF, nHF, LFHF, peakVLF, peakLF, and peakHF;
and the nonlinear features include: SD1, SD2, and SD1/SD2.
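For illustration, a few representative features from each of the three groups can be computed as follows; the definitions used are the standard HRV formulas (SDNN, RMSSD, pNN40, Lomb-Scargle band powers, Poincare SD1/SD2) and the frequency grid is an assumed choice, so this sketches the kind of computation rather than the invention's exact implementation:

import numpy as np
from scipy.signal import lombscargle

def hrv_features(peak_times):
    # peak_times: corrected peak (R-point) times in seconds
    rr = np.diff(peak_times) * 1000.0                        # RR intervals in ms
    diff_rr = np.diff(rr)
    feats = {
        "Mean": rr.mean(),
        "SDNN": rr.std(ddof=1),
        "RMSSD": np.sqrt(np.mean(diff_rr ** 2)),
        "pNN40": 100.0 * np.mean(np.abs(diff_rr) > 40.0),    # % of successive diffs > 40 ms
    }

    # Frequency domain: Lomb-Scargle periodogram of the unevenly sampled RR series
    t = peak_times[1:]                                       # timestamp of each RR interval
    freqs = np.linspace(0.01, 0.4, 400)                      # Hz (assumed analysis grid)
    power = lombscargle(t, rr - rr.mean(), 2.0 * np.pi * freqs)
    df = freqs[1] - freqs[0]
    aLF = power[(freqs >= 0.04) & (freqs < 0.15)].sum() * df # absolute LF band power
    aHF = power[(freqs >= 0.15) & (freqs < 0.4)].sum() * df  # absolute HF band power
    feats.update({"aLF": aLF, "aHF": aHF, "LFHF": aLF / aHF})

    # Nonlinear: Poincare plot descriptors
    sd1 = np.sqrt(0.5 * np.var(diff_rr, ddof=1))
    sd2 = np.sqrt(2.0 * np.var(rr, ddof=1) - 0.5 * np.var(diff_rr, ddof=1))
    feats.update({"SD1": sd1, "SD2": sd2, "SD1/SD2": sd1 / sd2})
    return feats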
The foregoing is a specific embodiment of the present invention.

Claims (9)

1. A non-contact heart rate variability feature extraction method based on a realistic application scenario, characterized by comprising the following steps:
step one, acquiring an image containing a human face: acquiring images of the person to be detected including the face through a camera according to a fixed frame rate;
step two, acquiring an image face area by using a mode of face detection and face tracking;
thirdly, performing skin color change enhancement on the face region by using an Euler amplification method, and enhancing information of a part related to physiological signals in the face image;
step four, performing skin detection according to luminance component Y and Cb component self-adaptive determination threshold values under different illumination conditions to obtain a U-channel image with non-skin areas removed;
step five, converting the face image amplified by Euler from RGB color space to LUV color space, in order to separate L channel reflecting brightness change, and extracting original heart rate signal by using U channel reflecting chromaticity change;
step six, pixel mean value is calculated for the U-channel image and standardized to obtain an original heart rate signal;
step seven, denoising the original heart rate signal by using an EEMD method;
step eight, removing high-frequency noise still contained in the signal by applying five-point sliding smoothing filtering;
step nine, calculating signal peak points, and finding out and correcting the peak points affected by shaking by setting the distance between the two peak points and the threshold range of the slope, so as to obtain relatively accurate signal peak points;
step ten, carrying out HRV feature extraction by using the corrected RR interval and R point time, and extracting 27 HRV features in total of time domain features, frequency domain features and nonlinear features;
step nine further comprises:
the formula for determining whether the slope between adjacent peak points is within the threshold range is as follows:
wherein h_i represents the height of the i-th peak point and t_i represents the time corresponding to the i-th peak point; if the formula is not satisfied, the i-th peak point is an abnormal peak point; i = 1, 2, …, n;
the formula for determining whether the distance between adjacent peak points is within the threshold value range is as follows:
60/(HR+14) ≤ t_i - t_(i-1) ≤ 60/(HR-14), i = 2, 3, …, n
wherein t_i represents the time corresponding to the i-th peak point, i = 1, 2, …, n, and HR represents the average heart rate; if the formula is not satisfied, the i-th peak point is an abnormal peak point; the calculation formula of the average heart rate HR is as follows:
HR = 60 × count / t_all
wherein t_all represents the total duration of detection, and count represents the total number of peak points detected;
for the detected abnormal peak point, the calculation formula for correcting the abnormal peak point is as follows:
t_F_new = t_(F-1) + [(t_(F-1) - t_(F-2)) + … + (t_2 - t_1)]/(F - 2)
wherein t_F_new is the corrected result, and t_(F-1), …, t_1 are the normal peak points (or already corrected peak points) preceding the abnormal peak point to be corrected.
2. The method for extracting the non-contact heart rate variability feature based on the realistic application scenario as set forth in claim 1, wherein the step two of acquiring the image face area by using the face detection and face tracking method comprises the specific steps of:
1) Extracting a face region by combining face detection with a KLT tracking method by using a libfacedetection open source face detection library;
2) Face detection obtains a face position and then tracks the face position;
3) Correcting the face affected by shaking in the tracking process;
4) During the tracking process, the face detection is reused for positioning the face position according to the fixed time interval of 10s, and then the tracking is continued.
3. The method for extracting the non-contact heart rate variability feature based on the realistic application scenario as set forth in claim 2, wherein the specific step of correcting the face affected by the shaking comprises the following steps:
1) Determining a minimum circumscribed rectangle through four vertexes of a face area determined after face tracking, and correcting the face through the center point and the deflection angle of the rectangle;
2) The deflection angle of the rectangle is determined by a coordinate system, when the rectangle is deflected rightwards, the deflection angle takes an x-axis positive half axis as a 0 degree reference in a first quadrant of the coordinate system, and when the rectangle is deflected leftwards, the angle is determined by a second quadrant, and the deflection angle takes a y-axis positive half axis as a 0 degree reference;
3) When the deflection angle of the rectangle is between 0 and 45 degrees, the rotation is clockwise by the current angle; when it is between 45 and 90 degrees, the rotation is counterclockwise by 90 degrees minus the angle;
4) Constructing an affine transformation matrix through the center point of the rectangle and the deflection angle of the rectangle, and carrying out affine transformation on the whole image according to the affine transformation matrix;
5) And re-intercepting the image according to the rectangular center point and the rectangular length and width to obtain the corrected face.
4. The method for extracting the non-contact heart rate variability feature based on the realistic application scenario as set forth in claim 1, wherein the step one and the step two are implemented by:
the storage of the acquired images containing the human face needs to be performed in one sub-thread, that is to say, the processing of the acquisition and the reading of the images in the program should be performed in two threads simultaneously, and the two threads share one image queue.
5. The method for extracting non-contact heart rate variability features based on the realistic application scene as claimed in claim 1, wherein in step three the Euler amplification method is applied to enhance the skin color change of the face region and the information related to physiological signals in the face image, with the following specific parameters: the number of spatial decomposition layers is 6, the temporal filtering frequency band is 1-2 Hz, and the image magnification factor is 200.
6. The method for extracting the non-contact heart rate variability feature based on the realistic application scenario as set forth in claim 1, wherein the fourth step further comprises:
according to the brightness component Y combined with the Cb component under different illumination conditions, a threshold is determined in a self-adaptive mode, the range of the threshold is a skin pixel, the pixel point is 255 white, the rest is 0 black, and the skin detection image and the U channel are subjected to AND operation, so that a non-skin area is removed, and a U channel face image with the non-skin area removed is obtained;
the dynamic configuration rules are defined in the YCrCb color space as follows:
if (Y > 128): θ3 = 6; θ4 = -8;
if (Y ≤ 128): θ1 = 6; θ2 = 12;
a pixel is a skin pixel if its Cr value satisfies the following conditions:
Cr ≥ -2(Cb + 24); Cr ≥ -(Cb + 17);
Cr ≥ -4(Cb + 32); Cr ≥ 2.5(Cb + θ1);
Cr ≥ θ3; Cr ≥ 0.5(θ4 - Cb);
wherein Y is the luminance component, Cb is the blue chrominance component, Cr is the red chrominance component, and θ1 to θ4 are intermediate variables.
7. The method for extracting the non-contact heart rate variability feature based on the realistic application scenario as claimed in claim 1, wherein the step seven of applying the EEMD method to reduce the noise of the original heart rate signal comprises the following specific steps:
1) Calculating the instantaneous frequency through Hilbert transformation;
2) Distinguishing between noise-dominant IMF components and signal-dominant IMF components according to instantaneous frequency;
3) Discarding the noise dominant IMF component obtained in 2), reconstructing the signal dominant IMF component.
8. The method for extracting the non-contact heart rate variability feature based on the realistic application scenario as set forth in claim 1, wherein the eighth step further comprises:
the calculation formula of the five-point sliding average is as follows:
y(i) = [f(i-2) + f(i-1) + f(i) + f(i+1) + f(i+2)]/5, (i = 3, 4, …, N-2)
where N is the signal length, f(j) is the signal value within the five-point sliding window, and y(i) is the new signal value determined by the five-point sliding average.
9. The method for extracting the non-contact heart rate variability feature based on the realistic application scenario as set forth in claim 1, wherein the step ten further includes:
the time domain features include: Max, Min, Mean, Median, SDNN, RMSSD, hr-mean, hr-sd, NN40, pNN40, and HRVti;
the frequency domain features are obtained by spectral analysis using the Lomb-Scargle periodogram and include: aVLF, aLF, aHF, aTotal, pVLF, pLF, pHF, nLF, nHF, LFHF, peakVLF, peakLF, and peakHF;
the nonlinear features include: SD1, SD2, and SD1/SD2.
CN202111011120.8A 2021-08-31 2021-08-31 Non-contact heart rate variability feature extraction method based on realistic application scene Active CN113657345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111011120.8A CN113657345B (en) 2021-08-31 2021-08-31 Non-contact heart rate variability feature extraction method based on realistic application scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111011120.8A CN113657345B (en) 2021-08-31 2021-08-31 Non-contact heart rate variability feature extraction method based on realistic application scene

Publications (2)

Publication Number Publication Date
CN113657345A CN113657345A (en) 2021-11-16
CN113657345B 2023-09-15

Family

ID=78493317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111011120.8A Active CN113657345B (en) 2021-08-31 2021-08-31 Non-contact heart rate variability feature extraction method based on realistic application scene

Country Status (1)

Country Link
CN (1) CN113657345B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114403838A (en) * 2022-01-24 2022-04-29 佛山科学技术学院 Portable raspberry pi-based remote heart rate detection device and method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102217931A (en) * 2011-06-09 2011-10-19 李红锦 Method and device for acquiring heart rate variation characteristic parameter
CN103824420A (en) * 2013-12-26 2014-05-28 苏州清研微视电子科技有限公司 Fatigue driving identification system based on heart rate variability non-contact measuring
EP2745770A1 (en) * 2012-12-18 2014-06-25 Werner Wittling Method and device for determining the variability of a creature's heart rate
CN104127194A (en) * 2014-07-14 2014-11-05 华南理工大学 Depression evaluating system and method based on heart rate variability analytical method
CN106333658A (en) * 2016-09-12 2017-01-18 吉林大学 Photoelectric volume pulse wave detector and photoelectric volume pulse wave detection method
CN109044322A (en) * 2018-08-29 2018-12-21 北京航空航天大学 A kind of contactless heart rate variability measurement method
CN111429345A (en) * 2020-03-03 2020-07-17 贵阳像树岭科技有限公司 Method for visually calculating heart rate and heart rate variability with ultra-low power consumption
CN111714144A (en) * 2020-07-24 2020-09-29 长春理工大学 Mental stress analysis method based on video non-contact measurement
CN112200099A (en) * 2020-10-14 2021-01-08 浙江大学山东工业技术研究院 Video-based dynamic heart rate detection method
CN112656393A (en) * 2020-12-08 2021-04-16 山东中科先进技术研究院有限公司 Heart rate variability detection method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7869631B2 (en) * 2006-12-11 2011-01-11 Arcsoft, Inc. Automatic skin color model face detection and mean-shift face tracking

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102217931A (en) * 2011-06-09 2011-10-19 李红锦 Method and device for acquiring heart rate variation characteristic parameter
EP2745770A1 (en) * 2012-12-18 2014-06-25 Werner Wittling Method and device for determining the variability of a creature's heart rate
CN103824420A (en) * 2013-12-26 2014-05-28 苏州清研微视电子科技有限公司 Fatigue driving identification system based on heart rate variability non-contact measuring
CN104127194A (en) * 2014-07-14 2014-11-05 华南理工大学 Depression evaluating system and method based on heart rate variability analytical method
CN106333658A (en) * 2016-09-12 2017-01-18 吉林大学 Photoelectric volume pulse wave detector and photoelectric volume pulse wave detection method
CN109044322A (en) * 2018-08-29 2018-12-21 北京航空航天大学 A kind of contactless heart rate variability measurement method
CN111429345A (en) * 2020-03-03 2020-07-17 贵阳像树岭科技有限公司 Method for visually calculating heart rate and heart rate variability with ultra-low power consumption
CN111714144A (en) * 2020-07-24 2020-09-29 长春理工大学 Mental stress analysis method based on video non-contact measurement
CN112200099A (en) * 2020-10-14 2021-01-08 浙江大学山东工业技术研究院 Video-based dynamic heart rate detection method
CN112656393A (en) * 2020-12-08 2021-04-16 山东中科先进技术研究院有限公司 Heart rate variability detection method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
R-wave detection method for non-invasive fetal electrocardiogram signals; Tian Wenlong; Dai Min; Journal of Tianjin University of Technology (No. 02); full text *

Also Published As

Publication number Publication date
CN113657345A (en) 2021-11-16

Similar Documents

Publication Publication Date Title
US11229372B2 (en) Systems and methods for computer monitoring of remote photoplethysmography based on chromaticity in a converted color space
Wang et al. A comparative survey of methods for remote heart rate detection from frontal face videos
CN109820499B (en) High anti-interference heart rate detection method based on video, electronic equipment and storage medium
Bobbia et al. Remote photoplethysmography based on implicit living skin tissue segmentation
Bousefsaf et al. Automatic selection of webcam photoplethysmographic pixels based on lightness criteria
CN111938622B (en) Heart rate detection method, device and system and readable storage medium
CN111281367A (en) Anti-interference non-contact heart rate detection method based on face video
CN113920119B (en) Heart rate and respiration analysis processing method based on thermal imaging technology
Ernst et al. Optimal color channel combination across skin tones for remote heart rate measurement in camera-based photoplethysmography
US11701015B2 (en) Computer-implemented method and system for direct photoplethysmography (PPG) with multiple sensors
CN113657345B (en) Non-contact heart rate variability feature extraction method based on realistic application scene
Kurihara et al. Non-contact heart rate estimation via adaptive rgb/nir signal fusion
US20200359922A1 (en) Computer-implemented method and system for contact photoplethysmography (ppg)
Tabei et al. A novel diversity method for smartphone camera-based heart rhythm signals in the presence of motion and noise artifacts
Bai et al. Real-time robust noncontact heart rate monitoring with a camera
Gupta et al. Serial fusion of Eulerian and Lagrangian approaches for accurate heart-rate estimation using face videos
Panigrahi et al. Non-contact HR extraction from different color spaces using RGB camera
Gu et al. Automatic Tongue Image Segmentation Based on Thresholding and an Improved Level Set Model
Park et al. Direct-global separation for improved imaging photoplethysmography
Geng et al. Motion resistant facial video based heart rate estimation method using head-mounted camera
Wang et al. KLT algorithm for non-contact heart rate detection based on image photoplethysmography
Gong et al. Heart Rate Estimation in Driver Monitoring System Using Quality-Guided Spectrum Peak Screening
Liu et al. A new approach for face detection based on photoplethysmographic imaging
Babušiak et al. Eye-blink artifact detection in the EEG
Wu et al. Roi selected by pixel2pixel for measuring heart rate based on ippg

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant