CN112507930A - Method for improving human face video heart rate detection by using illumination balancing method - Google Patents
- Publication number: CN112507930A (application CN202011489672.5A)
- Authority: CN (China)
- Prior art keywords: heart rate, human face, image, face video, illumination
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V40/165 — Human faces: detection, localisation, normalisation using facial parts and geometric relationships
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/56 — Extraction of image or video features relating to colour
- G06V40/10 — Human or animal bodies; body parts
- G06V40/15 — Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
Abstract
The invention relates to a method for improving face-video heart rate detection by means of an illumination equalization method, comprising the following steps: S1, acquiring a face video image with a visible-light camera; S2, detecting and locating the face with a multi-task convolutional neural network; S3, selecting a region of interest in the face video; S4, extracting the scene illumination component with a fast guided filtering algorithm, constructing an improved two-dimensional gamma function, and equalizing the illumination component of the face video image; S5, separating independent source signals from the mixed signals with the FastICA algorithm; and S6, applying a fast Fourier transform to the independent source signals and computing the heart rate value. By extracting the illumination component with fast guided filtering and adaptively correcting uneven illumination with the improved two-dimensional gamma function, the invention improves the brightness of over-bright and over-dark regions of the face image, reduces the mean error and standard deviation of the measured heart rate, and improves measurement accuracy.
Description
Technical Field
The invention relates to the technical field of image processing and non-contact heart rate detection, in particular to a method for improving human face video heart rate detection by using an illumination balancing method.
Background
With the improvement of modern living standards, people pay increasing attention to their health, and since heart rate is one of the most important vital signs of the human body, its detection receives corresponding attention. In recent years, heart rate detection devices on the market have developed rapidly; small size and convenient measurement are the inevitable trend of development. However, these devices all require direct physical contact with the subject: contact-sensor recording is cumbersome, can cause discomfort to patients, and is especially unsuitable for groups such as newborn babies. Non-contact heart rate detection based on the principle of photoplethysmography therefore has broad application prospects in the medical field, family health care and similar areas. At present, however, non-contact heart rate detection is accurate only in a fairly stable environment, and factors such as changes in lighting degrade the detection result.
Therefore, a method is needed that adaptively corrects uneven illumination in the face image, reduces the noise caused by light fluctuation, and thereby preserves the accuracy of the measurement result.
Disclosure of Invention
In order to solve the technical problems in the prior art, the invention provides a method for improving human face video heart rate detection by using an illumination equalization method.
The method is realized by adopting the following technical scheme: a method for improving human face video heart rate detection by using an illumination balancing method comprises the following steps:
s1, acquiring a human face video image by using a visible light camera;
s2, detecting and positioning the face, eyes, nose and mouth corner by utilizing a multitask convolution neural network;
s3, selecting a human face video image region of interest ROI according to the positioning information of the human face, the eyes, the nose and the mouth corner;
s4, decomposing each ROI frame of the face video into the hue (H), saturation (S), value (V) color space; for the value (V) channel, extracting the scene illumination component with a fast guided filtering algorithm, constructing an improved two-dimensional gamma function, and equalizing the illumination component of the face video image;
s5, performing blind source separation: taking the R, G, B channels of each ROI frame as observed mixed signals and separating independent source signals with the FastICA independent component analysis algorithm;
s6, applying the fast Fourier transform to the separated independent source signals; the periodic change of blood volume is inferred from the periodic change of skin-reflected light intensity to obtain heart rate information, the independent source signal with the largest power-spectrum amplitude is selected as the pulse source signal, and the heart rate value is computed from the pulse source signal amplitude.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention extracts the illumination component with a fast guided filtering algorithm and uses the improved light-equalization method, in which the two-dimensional gamma function adaptively corrects uneven illumination, to improve the brightness of over-bright and over-dark regions of the face image; this equalizes the uneven illumination, reduces the mean error and standard deviation of the measured heart rate, and improves measurement accuracy.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a luminance histogram before correction by the light equalization scheme;
FIG. 3 is a luminance histogram after correction by the light equalization scheme;
fig. 4 is a graph of the spectrum of the independent source signal with the strongest pulse wave signal.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
As shown in fig. 1, the method for improving the heart rate detection of the face video by using the illumination balancing method of the present invention comprises the following steps:
s1, acquiring the face video image with a visible-light camera; the subject stays still in front of the camera to reduce shaking, and the brightness of the capture environment is kept stable; the camera is a 12-megapixel high-definition camera with a maximum resolution of 1920 x 1080;
s2, according to the face video image collected in the step S1, detecting and positioning a face, eyes, a nose and a mouth corner are completed by utilizing a Multi-task convolutional neural network (MTCNN);
s3, according to the positioning information of the face, eyes, nose and mouth corners obtained in step S2, selecting a face-video ROI in which the raw pulse signal is strongly periodic and the noise is low;
s4, decomposing each ROI obtained in step S3 into the hue (H), saturation (S), value (V) color space; for the value (V) channel, extracting the scene illumination component quickly and accurately with the fast guided filtering algorithm, then constructing the improved two-dimensional gamma function to lower the brightness of over-bright areas and raise the brightness of over-dark areas, equalizing the illumination component of the face image and eliminating the influence of uneven illumination and light fluctuation;
s5, performing blind source separation, and separating independent source signals from R, G, B three-primary-color channel observation mixed signals of each frame ROI by utilizing an independent component analysis FastICA algorithm;
s6, applying the fast Fourier transform to the independent source signals obtained by the blind source separation of step S5; according to the principle of photoplethysmography, the change in light intensity is directly proportional to the change in blood volume, so the periodic change of blood volume can be inferred from the periodic change of skin-reflected light intensity and heart rate information obtained indirectly; the independent source signal with the largest power-spectrum amplitude is selected as the pulse source signal, and the current heart rate value is computed from the pulse source signal amplitude.
In this embodiment, the multi-task convolutional neural network of step S2 uses three cascaded networks: P-Net, R-Net and O-Net. P-Net is a region-proposal network for face detection composed of three convolutional layers, which quickly generates candidate face windows. R-Net adds a fully connected layer to P-Net and further screens and refines the candidate face windows produced by P-Net. O-Net is structurally more complex, with one more convolutional layer than R-Net; by extracting more features it identifies the facial region, regresses the facial landmarks, and finally outputs the facial feature points. The multi-task convolutional neural network balances detection performance and accuracy, and compared with the conventional sliding window plus classifier it avoids a large amount of computational cost.
In this embodiment, the step of selecting the face video region of interest ROI in step S3 includes the following steps:
s31, because the face may be tilted when the video image is captured, the deflection angle of the face must be corrected to obtain a normalized face. Let the pixel coordinates of the left and right eyes be (x₁, y₁) and (x₂, y₂). If the face is deflected to the left, i.e. y₁ > y₂, the deflection angle is:

α = -[arctan((y₁ - y₂)/(x₂ - x₁))/π]

If the face is deflected to the right, i.e. y₂ > y₁, the deflection angle is:

α = [arctan((y₁ - y₂)/(x₂ - x₁))/π];
s32, after the deflection angle is obtained, the image is rotated by it so that the left and right eyes lie on the same horizontal line; the distance between the eyes is set to 4d, and a rectangular region of length 8d and width 3d, starting 0.5d below the eyes in the face image, is selected as the region of interest.
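The ROI geometry of steps S31-S32 can be sketched as a small helper; this is an illustrative reading of the text, and the conversion of the deflection angle to degrees and the horizontal centering of the 8d-wide rectangle on the eye midpoint are assumptions not fixed by the source:

```python
import math

def roi_from_eyes(left_eye, right_eye):
    """Deflection angle and below-eye ROI per steps S31-S32 (sketch)."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    # Left deflection (y1 > y2) gives a negative angle, matching the sign
    # convention in the text; expressing it in degrees is an assumption.
    angle = -math.degrees(math.atan2(y1 - y2, x2 - x1))
    d = math.hypot(x2 - x1, y2 - y1) / 4.0  # the eye distance is defined as 4d
    cx = (x1 + x2) / 2.0                    # midpoint of the (aligned) eyes
    cy = (y1 + y2) / 2.0
    top = cy + 0.5 * d                      # ROI starts 0.5d below the eye line
    left = cx - 4.0 * d                     # 8d-wide rectangle, centered (assumption)
    return angle, (left, top, 8.0 * d, 3.0 * d)
```

For horizontal eyes at (0, 0) and (4, 0) this yields a zero deflection angle and the rectangle (-2.0, 0.5, 8.0, 3.0).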
In this embodiment, the specific steps of extracting the illumination component of the scene by using the fast guided filtering algorithm in step S4 include:
s401, the fast guided filtering algorithm recomputes each pixel value within a filtering window from the local linear relation between a guide image and the input image; the filtered output image is a local linear transform of the guide image, from which the illumination component is extracted. Let the input image be p, the guide image be I and the filtered output image be q; for any pixel k in the image, within the filtering window ω_k of radius r centered on it the local linear transform is:

q_i = a_k·I_i + b_k,  i ∈ ω_k

where a_k and b_k are the linear transform coefficients, constant within the filtering window ω_k; q_i is the i-th filtered output pixel; I_i is the i-th guide-image pixel;

s402, within each filtering window ω_k the linear transform coefficients (a_k, b_k) are computed so that the difference between the filtered output image q and the input image p is minimal; the cost function used in the filtering window ω_k is:

E(a_k, b_k) = Σ_{i∈ω_k} [(a_k·I_i + b_k - p_i)² + ε·a_k²]

where E(a_k, b_k) is the cost function and ε is the regularization parameter restricting the range of the linear coefficient a_k; solving by linear regression gives the optimal values:

a_k = ((1/|ω|)·Σ_{i∈ω_k} I_i·p_i - μ_k·p̄_k) / (σ_k² + ε)

b_k = p̄_k - a_k·μ_k

where |ω| is the number of pixels in the filtering window ω_k; μ_k is the mean and σ_k² the variance of the guide image I in ω_k; p_i is the i-th input pixel; p̄_k is the mean of the input image p in ω_k. Because a_k and b_k may take different values in the different filtering windows ω_k that contain the same pixel, the means of a_k and b_k over all windows covering that pixel are taken as parameters to obtain q_i:

q_i = ā_i·I_i + b̄_i

where ā_i is the mean of a_k and b̄_i the mean of b_k over those windows.
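A plain gray-scale guided filter implementing these equations can be sketched with NumPy/SciPy box filters; the *fast* guided filter of the patent additionally subsamples the images before the box filters, which is omitted here, and the border handling (`uniform_filter`'s default reflection) is an implementation choice rather than something the text specifies:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Gray-scale guided filter (sketch). With I == p it acts as an
    edge-preserving smoother, so q estimates the illumination layer of V."""
    size = 2 * r + 1                       # box filter over the (2r+1)^2 window
    mean_I = uniform_filter(I, size)       # mu_k
    mean_p = uniform_filter(p, size)       # p-bar_k
    var_I = uniform_filter(I * I, size) - mean_I**2        # sigma_k^2
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)             # a_k = cov(I, p) / (sigma_k^2 + eps)
    b = mean_p - a * mean_I                # b_k = p-bar_k - a_k * mu_k
    mean_a = uniform_filter(a, size)       # average a_k, b_k over overlapping windows
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b             # q_i = a-bar_i * I_i + b-bar_i
```

Applied to the V channel with itself as guide, `guided_filter(V, V, r, eps)` returns the estimated illumination component.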
In this embodiment, the specific steps of constructing the improved two-dimensional gamma function in step S4 include:
s411, using the illumination component of the face video image, the unevenly illuminated image is adaptively corrected through the improved two-dimensional gamma function; the expression of the improved two-dimensional gamma function is:

O(x, y) = 255 · (I(x, y)/255)^γ,  γ = η^[(m - L(x, y))/m]

where I(x, y) is the brightness of the input image; O(x, y) is the output image; L(x, y) is the value of the extracted illumination component at the current pixel (x, y); γ is the gamma correction parameter, which determines the strength of the image enhancement; η is the illumination coefficient, set to η = m/255; and m is the mean of the illumination map of the whole face image;
and S412, the corrected value (V) channel, the unchanged hue (H) channel and the saturation (S) channel are converted back to the RGB color space by color-space conversion.
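The correction of S411-S412 can be sketched as below. Note that the exact "improved" gamma expression is not reproduced in this text; the sketch assumes the classic adaptive form γ = η^((m - L)/m) with η = m/255, built only from the parameters the text names:

```python
import numpy as np

def gamma_balance(V, L):
    """Adaptive 2-D gamma correction of the V channel (sketch; the exponent
    form is an assumption based on the parameters named in the text)."""
    V = np.asarray(V, dtype=np.float64)
    L = np.asarray(L, dtype=np.float64)
    m = L.mean()                   # mean of the illumination map
    eta = m / 255.0                # illumination coefficient
    gamma = eta ** ((m - L) / m)   # >1 where L > m (darken), <1 where L < m (brighten)
    return 255.0 * (V / 255.0) ** gamma
```

Pixels whose illumination exceeds the mean get γ > 1 and are darkened; under-lit pixels get γ < 1 and are brightened, which is exactly the over-bright/over-dark balancing described in step S4.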
As shown in figs. 2-3, the illumination component is obtained by the fast guided filtering algorithm and then corrected by the improved two-dimensional gamma function. The corrected histogram changes markedly at the brightest and darkest ends: the low-brightness areas of the original image are raised, the over-bright areas are lowered, and the whole histogram concentrates around the middle brightness, effectively equalizing the uneven illumination.
In this embodiment, the specific step of separating the independent source signal in step S5 includes:
s51, the R, G, B channels of each ROI frame form three time series used as the observed mixed signals x₀(t), x₁(t), x₂(t); the three independent source signals are s₀(t), s₁(t), s₂(t); the observed mixed signal X(t) is a linear combination of the independent source signals S(t), i.e.
X(t)=A·S(t)
Wherein A is a mixing matrix;
s52, the FastICA algorithm takes the negentropy of the observed mixed signal as the objective function, whose expression is:

J(W) = [E{G(WᵀZ)} - E{G(V)}]²

where J(W) is the objective function; G(V) is the nonlinear function G(V) = V³, and likewise G(WᵀZ) = (WᵀZ)³; Z = VX(t) is the whitened observed mixed signal; V is the whitening matrix; W is the separation matrix and Wᵀ its transpose; E{·} denotes the mathematical expectation;
s53, maximizing the objective function and solving for the separation matrix W so that W ≈ A⁻¹; then Y(t) = W·X(t) approximates the independent source signals S(t), where Y(t) is the approximation of the independent source signals.
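The whitening and fixed-point iteration of S52-S53 can be sketched with NumPy; the cubic contrast matches G(u) = u³ above, while the deflation scheme, random initialization and convergence tolerance are standard FastICA choices, not taken from the text:

```python
import numpy as np

def fastica(X, n_iter=200, tol=1e-6, seed=0):
    """Deflation FastICA with the cubic nonlinearity G(u) = u^3 (sketch).
    X: (n_signals, n_samples) observed mixtures; returns recovered sources."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: Z = V X with cov(Z) = I
    d, E = np.linalg.eigh(np.cov(X))
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = V @ X
    n = X.shape[0]
    W = np.zeros((n, n))
    rng = np.random.default_rng(seed)
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            u = w @ Z
            # Fixed-point update: w+ = E[Z g(w'Z)] - E[g'(w'Z)] w, g(u) = u^3
            w_new = (Z * u**3).mean(axis=1) - 3.0 * w
            # Deflation: decorrelate against previously extracted rows
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < tol
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z   # Y(t) = W X(t), up to sign and permutation
```

Because W is orthonormal on the whitened data, the recovered rows are mutually decorrelated with unit variance, as blind source separation requires.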
In this embodiment, the separated independent source signals are transformed with the fast Fourier transform (FFT):

F(w) = ∫ f(t) e^(-jwt) dt

where F(w) is the independent source signal in the frequency domain, f(t) is the independent source signal in the time domain, j is the imaginary unit, t is time, and w is the angular frequency.
As shown in fig. 4, the current heart rate value is calculated according to the signal amplitude, and the heart rate calculation formula is as follows:
HR = F_max × 60

where HR is the heart rate value and F_max is the frequency corresponding to the maximum peak.
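The S6 computation (FFT, peak picking, HR = F_max × 60) can be sketched as follows; restricting the peak search to a physiologically plausible 0.7-4 Hz band is an assumption added for robustness, not a step from the text:

```python
import numpy as np

def heart_rate_bpm(signal, fs):
    """Heart rate from a pulse source signal: HR = F_max * 60 (sketch)."""
    sig = np.asarray(signal, dtype=np.float64)
    sig = sig - sig.mean()                     # drop the DC component
    spectrum = np.abs(np.fft.rfft(sig))        # amplitude spectrum
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)     # ~42-240 bpm (assumed band)
    f_max = freqs[band][np.argmax(spectrum[band])]
    return f_max * 60.0
```

For example, a clean 1.2 Hz sinusoid sampled at 30 fps yields 72 beats per minute.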
In this embodiment the illumination component is extracted by the fast guided filtering algorithm, and the uneven illumination of the face image is corrected with the improved two-dimensional gamma function. Five subjects were tested, each measured with and without the light-equalization correction, and a fingertip-clip pulse-oximeter heart-rate monitor provided the reference heart rate. The heart rate results without the light-equalization correction are shown in Table 1, and the results with the correction in Table 2:
TABLE 1 Heart Rate test results without light equalization scheme correction
Table 2 heart rate test results with optical equalization scheme correction
As the measurements in Tables 1 and 2 show, the mean error and standard deviation of the heart rate obtained with the light-equalization scheme are clearly reduced, indicating that the scheme markedly improves heart-rate measurement accuracy and verifying the effectiveness of the proposed method.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.
Claims (8)
1. A method for improving human face video heart rate detection by using an illumination balancing method is characterized by comprising the following steps:
s1, acquiring a human face video image by using a visible light camera;
s2, detecting and positioning the face, eyes, nose and mouth corner by utilizing a multitask convolution neural network;
s3, selecting a human face video image region of interest ROI according to the positioning information of the human face, the eyes, the nose and the mouth corner;
s4, decomposing each ROI frame of the face video into the hue (H), saturation (S), value (V) color space; for the value (V) channel, extracting the scene illumination component with a fast guided filtering algorithm, constructing an improved two-dimensional gamma function, and equalizing the illumination component of the face video image;
s5, performing blind source separation: taking the R, G, B channels of each ROI frame as observed mixed signals and separating independent source signals with the FastICA independent component analysis algorithm;
s6, applying the fast Fourier transform to the separated independent source signals; the periodic change of blood volume is inferred from the periodic change of skin-reflected light intensity to obtain heart rate information, the independent source signal with the largest power-spectrum amplitude is selected as the pulse source signal, and the heart rate value is computed from the pulse source signal amplitude.
2. The method for detecting the heart rate of the human face video according to claim 1, wherein the multitask convolutional neural network in the step S2 adopts a plurality of cascaded networks, and the network structure of the cascaded networks comprises P-Net, R-Net and O-Net.
3. The method for detecting the heart rate of the human face video according to claim 1, wherein the step of selecting the region of interest ROI in the human face video in step S3 includes the following steps:
s31, adjusting the face deflection angle: let the pixel coordinates of the left and right eyes be (x₁, y₁) and (x₂, y₂); if the face is deflected to the left, i.e. y₁ > y₂, the deflection angle is:

α = -[arctan((y₁ - y₂)/(x₂ - x₁))/π]

if the face is deflected to the right, i.e. y₂ > y₁, the deflection angle is:

α = [arctan((y₁ - y₂)/(x₂ - x₁))/π];
s32, after the deflection angle is obtained, the image is rotated by it so that the left and right eyes lie on the same horizontal line; the distance between the eyes is 4d, and a rectangular region of length 8d and width 3d, starting 0.5d below the eyes in the face image, is selected as the region of interest.
4. The method for detecting the heart rate of the human face video according to claim 1, wherein the step of extracting the illumination component of the scene by using the fast guided filtering algorithm in step S4 includes:
s401, the fast guided filtering algorithm recomputes each pixel value within a filtering window from the local linear relation between a guide image and the input image and extracts the illumination component; the input image is p, the guide image is I, the filtered output image is q, a pixel in the image is k, and within the filtering window ω_k of radius r centered on pixel k the local linear transform is:

q_i = a_k·I_i + b_k,  i ∈ ω_k

where a_k and b_k are the linear transform coefficients, constant within the filtering window ω_k; q_i is the i-th filtered output pixel; I_i is the i-th guide-image pixel;

s402, within each filtering window ω_k the linear transform coefficients (a_k, b_k) are computed so that the difference between the filtered output image q and the input image p is minimal; the cost function used in the filtering window ω_k is:

E(a_k, b_k) = Σ_{i∈ω_k} [(a_k·I_i + b_k - p_i)² + ε·a_k²]

where E(a_k, b_k) is the cost function and ε is the regularization parameter restricting the range of the linear coefficient a_k; solving by linear regression gives the optimal values:

a_k = ((1/|ω|)·Σ_{i∈ω_k} I_i·p_i - μ_k·p̄_k) / (σ_k² + ε)

b_k = p̄_k - a_k·μ_k

where |ω| is the number of pixels in the filtering window ω_k; μ_k is the mean and σ_k² the variance of the guide image I in ω_k; p_i is the i-th input pixel; p̄_k is the mean of the input image p in ω_k; taking the means of a_k and b_k over the multiple filtering windows ω_k that contain a given pixel as parameters, q_i is obtained as:

q_i = ā_i·I_i + b̄_i
5. The method for detecting the heart rate of human face videos as claimed in claim 4, wherein the specific step of constructing the improved two-dimensional gamma function in step S4 includes:
s411, using the illumination component of the face video image, the video image is adaptively corrected through the improved two-dimensional gamma function, whose expression is:

O(x, y) = 255 · (I(x, y)/255)^γ,  γ = η^[(m - L(x, y))/m]

where I(x, y) is the brightness of the input image; O(x, y) is the output image; L(x, y) is the value of the extracted illumination component at the current pixel (x, y); γ is the gamma correction parameter; η is the illumination coefficient; m is the mean of the illumination map of the whole face image;
s412, the corrected value (V) channel, the unchanged hue (H) channel and the saturation (S) channel are converted back to the RGB color space by color-space conversion.
6. The method for detecting human face video heart rate according to claim 1, wherein the specific step of separating the independent source signal in step S5 includes:
s51, the R, G, B channels of each ROI frame form three time series used as the observed mixed signals x₀(t), x₁(t), x₂(t); the three independent source signals are s₀(t), s₁(t), s₂(t); the observed mixed signal X(t) is a linear combination of the independent source signals S(t), with the specific expression:
X(t)=A·S(t)
wherein A is a mixing matrix;
s52, the FastICA algorithm takes the negative entropy of the observed mixed signal as an objective function, and the expression of the objective function is as follows:
J(W) = [E{G(WᵀZ)} - E{G(V)}]²

where J(W) is the objective function; G(V) is the nonlinear function G(V) = V³, and G(WᵀZ) = (WᵀZ)³; Z = VX(t) is the whitened observed mixed signal; V is the whitening matrix; W is the separation matrix and Wᵀ its transpose; E{·} denotes the mathematical expectation;
and S53, maximizing the objective function and solving the separation matrix W.
7. The method for detecting the heart rate of the human face video according to claim 1, wherein in step S6 the separated independent source signals are transformed with the fast Fourier transform (FFT):

F(w) = ∫ f(t) e^(-jwt) dt

where F(w) is the independent source signal in the frequency domain, f(t) is the independent source signal in the time domain, j is the imaginary unit, t is time, and w is the angular frequency.
8. The method for detecting human face video heart rate as claimed in claim 7, wherein the heart rate value is calculated according to the pulse source signal amplitude in step S6, and the calculation formula is as follows:
HR = F_max × 60

where HR is the heart rate value and F_max is the frequency corresponding to the maximum peak.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011489672.5A CN112507930B (en) | 2020-12-16 | 2020-12-16 | Method for improving human face video heart rate detection by utilizing illumination equalization method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112507930A true CN112507930A (en) | 2021-03-16 |
CN112507930B CN112507930B (en) | 2023-06-20 |
Family
ID=74972783
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011489672.5A Active CN112507930B (en) | 2020-12-16 | 2020-12-16 | Method for improving human face video heart rate detection by utilizing illumination equalization method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112507930B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105989357A (en) * | 2016-01-18 | 2016-10-05 | 合肥工业大学 | Human face video processing-based heart rate detection method |
CN110384491A (en) * | 2019-08-21 | 2019-10-29 | 河南科技大学 | A kind of heart rate detection method based on common camera |
CN110532849A (en) * | 2018-05-25 | 2019-12-03 | 快图有限公司 | Multi-spectral image processing system for face detection |
CN111027485A (en) * | 2019-12-11 | 2020-04-17 | 南京邮电大学 | Heart rate detection method based on face video detection and chrominance model |
CN111936040A (en) * | 2018-03-27 | 2020-11-13 | 皇家飞利浦有限公司 | Device, system and method for extracting physiological information indicative of at least one vital sign of a subject |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113255585A (en) * | 2021-06-23 | 2021-08-13 | 之江实验室 | Face video heart rate estimation method based on color space learning |
CN113255585B (en) * | 2021-06-23 | 2021-11-19 | 之江实验室 | Face video heart rate estimation method based on color space learning |
CN116823677A (en) * | 2023-08-28 | 2023-09-29 | 创新奇智(南京)科技有限公司 | Image enhancement method and device, storage medium and electronic equipment |
CN116823677B (en) * | 2023-08-28 | 2023-11-10 | 创新奇智(南京)科技有限公司 | Image enhancement method and device, storage medium and electronic equipment |
CN117455780A (en) * | 2023-12-26 | 2024-01-26 | 广东欧谱曼迪科技股份有限公司 | Enhancement method and device for dark field image of endoscope, electronic equipment and storage medium |
CN117455780B (en) * | 2023-12-26 | 2024-04-09 | 广东欧谱曼迪科技股份有限公司 | Enhancement method and device for dark field image of endoscope, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112507930B (en) | Method for improving human face video heart rate detection by utilizing illumination equalization method | |
CN107735015B (en) | Method and system for laser speckle imaging of tissue using a color image sensor | |
Feng et al. | Motion-resistant remote imaging photoplethysmography based on the optical properties of skin | |
US10997701B1 (en) | System and method for digital image intensity correction | |
Jobson et al. | A multiscale retinex for bridging the gap between color images and the human observation of scenes | |
Zheng et al. | A new metric based on extended spatial frequency and its application to DWT based fusion algorithms | |
CN110619301B (en) | Emotion automatic identification method based on bimodal signals | |
CN105147274B (en) | A kind of method that heart rate is extracted in the face video signal from visible spectrum | |
CN107451969A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN109325922A (en) | A kind of image self-adapting enhancement method, device and image processing equipment | |
CN104091309B (en) | Balanced display method and system for flat-plate X-ray image | |
CN114972067A (en) | X-ray small dental film image enhancement method | |
CN116664462B (en) | Infrared and visible light image fusion method based on MS-DSC and I_CBAM | |
US20100302399A1 (en) | High linear dynamic range imaging | |
Shimonomura et al. | Wide-dynamic-range APS-based silicon retina with brightness constancy | |
Zeng et al. | High dynamic range infrared image compression and denoising | |
CN111667446B (en) | image processing method | |
US20100061656A1 (en) | Noise reduction of an image signal | |
CN110852977B (en) | Image enhancement method for fusing edge gray level histogram and human eye visual perception characteristics | |
CN101430787B (en) | Digital sternum heart calcification image intensification method | |
CN111076815B (en) | Hyperspectral image non-uniformity correction method | |
Wu et al. | Quality enhancement based on retinex and pseudo-HDR synthesis algorithms for endoscopic images | |
Gadia et al. | Tuning Retinex for HDR images visualization | |
CN109993690A (en) | A kind of color image high accuracy grey scale method based on structural similarity | |
Lenka et al. | A study on retinex theory and illumination effects–i |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||