CN109029779B - Real-time human body temperature rapid detection method - Google Patents


Info

Publication number
CN109029779B
Authority
CN
China
Prior art keywords
line segment
value
infrared
visible light
angle
Prior art date
Legal status
Active
Application number
CN201810399016.2A
Other languages
Chinese (zh)
Other versions
CN109029779A (en)
Inventor
戴明郎
谢祯冏
Current Assignee
Huaying Technology (group) Co Ltd
Original Assignee
Huaying Technology (group) Co Ltd
Application filed by Huaying Technology (group) Co Ltd
Priority to CN201810399016.2A
Publication of CN109029779A
Application granted
Publication of CN109029779B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01K: MEASURING TEMPERATURE; MEASURING QUANTITY OF HEAT; THERMALLY-SENSITIVE ELEMENTS NOT OTHERWISE PROVIDED FOR
    • G01K13/00: Thermometers specially adapted for specific purposes
    • G01K13/20: Clinical contact thermometers for use with humans or animals

Abstract

The invention relates to a real-time human body temperature rapid detection method. First, simplified outlines of the objects in the visible-light and infrared images are obtained; the outlines are then cross-compared by dynamic programming, the infrared object is superimposed on the visible-light object with the highest similarity through perspective projection transformation, and the human body temperature is calculated. The invention fuses the infrared and visible-light images, acquires body temperature, identifies individual objects, improves identification efficiency, and realizes rapid detection of human body temperature.

Description

Real-time human body temperature rapid detection method
Technical Field
The invention relates to a real-time human body temperature rapid detection method.
Background
In existing infrared-based human body temperature detection, the image captured by an infrared camera differs from a visible-light image. Temperature monitoring with infrared equipment has therefore traditionally required capturing a visible-light image with a second camera and then detecting human body temperature through manual comparison and identification.
There are related documents for fusing visible light image and thermal image:
1. Robust face recognition with dynamic adjustment of the fusion ratio of visible-light and thermal images, Master's thesis
Discrete Wavelet Transform (DWT) is used to fuse visible-light and thermal images, with the fusion ratio adjusted according to the illumination level of the visible-light image. Linear Discriminant Analysis (LDA) is applied to each successive fused image to transform its feature parameters, and face recognition is finally performed using Euclidean distance.
2. Research on fusion of infrared and visible images in variable visual environments, Cheng Man Xian, Shih Hsin University, Master's thesis
Infrared and visible images are fused using Principal Component Analysis (PCA) embedded in a wavelet multi-resolution layer. The infrared and visible-light images are first fused with PCA and DWT separately; the two fused images are then analyzed, their respective strengths are extracted and combined, and the embedded-PCA wavelet multi-resolution fused image is obtained. Good fusion results are maintained under different visual environments.
3. Fast and Accurate Registration of Visible and Infrared Videos, Socheat Sonn, Guillaume-Alexandre Bilodeau, Philippe Galinier, IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 308-313, 2013
The problem of registering infrared and visible-light objects in a scene is solved using keypoint-based and temporal information. The outline of each object is first extracted and converted into a polygon, and the polygon vertices serve as keypoints for matching. A transformation matrix is computed from the matched keypoints and placed in a temporal buffer; the matrices accumulated in the buffer are used to transform and register the infrared image onto the visible-light image.
4. Automatic Image Registration in Infrared-Visible Videos using Polygon Vertices, Tanushri Chakravorty, Guillaume-Alexandre Bilodeau, Eric Granger, 2014
Objects are extracted by background subtraction, the shape of the target object is obtained by polygon approximation, keypoints on the shape are detected, and objects are matched according to the Euclidean distance between keypoints and the convex-hull parameters of the shape. Matched keypoints are stored in a temporal buffer, a transformation matrix is computed for each set of matched keypoints in the buffer, the infrared image is transformed with these matrices, and the overlap ratio between the infrared and visible-light images is calculated; if a new matrix outperforms the current one, the transformation matrix is updated.
In the prior art, feature points are compared using the angle and length of line segments linking the visible-light and infrared feature points as matching conditions; a homography matrix suitable for the whole picture is then computed from the matched points, and the infrared image is transformed by this matrix and fused with the visible-light image. However, when these methods perform automatic registration and superposition of visible-light and infrared images, they cannot automatically match more than one human body and register the infrared image to the correct position on the visible-light image, nor can they achieve rapid human body identification and temperature detection.
Disclosure of Invention
The invention aims to provide a real-time human body temperature rapid detection method that fuses infrared and visible-light images, acquires body temperature, identifies individual objects, improves identification efficiency, and realizes rapid detection of human body temperature.
To achieve this purpose, the technical scheme of the invention is as follows: a real-time human body temperature rapid detection method first obtains simplified outlines of the objects in the visible-light and infrared images, cross-compares the outlines by dynamic programming, superimposes the infrared object on the visible-light object with the highest similarity through perspective projection transformation, and calculates the human body temperature.
In an embodiment of the present invention, the method specifically includes the following steps:
step S1, capturing images with an infrared camera and a visible-light camera to obtain an infrared image and a visible-light image; then obtaining the outline of the moving object in the visible-light image using a Gaussian background subtraction method; converting the color space of the infrared image from RGB to HSV, and obtaining the outline of the moving object according to formula (1)
wherein F(x, y) is the foreground image, H(x, y) is the hue channel, and x, y are image coordinates;
step S2, after obtaining the outline of each moving object in the infrared and visible-light images, approximating each outline by polygon approximation, and then calculating, from the simplified outline, the parameters of each line segment in the outline, including: the angle A of the line segment, the length L of the line segment, the rotation angle PR from the angle of the previous line segment to that of the current line segment, and the rotation angle NR from the angle of the current line segment to that of the next line segment, where the two end points of a line segment are denoted a and b; the parameter calculation formulas are shown in (2) to (5):
A = arctan((y_b - y_a) / (x_b - x_a))   (2)
L = sqrt((x_b - x_a)^2 + (y_b - y_a)^2)   (3)
PR=|Current Segment Angle-Previous Segment Angle| (4)
NR=|Next Segment Angle-Current Segment Angle| (5)
wherein a and b are the two end points forming a line segment, and x, y are image coordinates;
then comparing the outlines using dynamic programming DP: a two-dimensional data table is established whose size is the number of visible-light moving-object line segments multiplied by the number of infrared moving-object line segments; each value in the table is the difference obtained by comparing a line segment of the visible-light object outline with a line segment of the infrared object outline, using comparison formula (6)
Datatable(x, y) = W1*|A_Tx - A_Ry| + W2*|L_Tx - L_Ry| + W3*|PR_Tx - PR_Ry| + W4*|NR_Tx - NR_Ry|   (6)
wherein formula (6) gives the value in column x, row y of the data table, namely the weighted sum of the differences, between the x-th line segment of the infrared moving object T and the y-th line segment of the visible-light moving object R, of the four parameters: angle A, length L, rotation angle PR from the previous segment, and rotation angle NR to the next segment; W1, W2, W3 and W4 are the corresponding weights;
then, according to the data table, calculating the total difference value of the outlines using dynamic programming DP, taking the outlines of each pair of moving objects as a group;
step S3: calculating a homography matrix for each moving-object outline according to the comparison result, extracting the infrared moving object separately, transforming it onto the visible-light moving object by perspective transformation, finally fusing the infrared moving object onto the visible-light moving object, and calculating the human body temperature.
In an embodiment of the present invention, in step S2, the specific implementation of calculating the total difference value of the contours by dynamic programming DP, taking the contours of each pair of moving objects as a group, is as follows:
step S21: initializing a DP table DPtable according to the following four condition definitions, where DPtable(x, y) represents the value in column x, row y of the DPtable, the visible-light moving object R has n line segments, the infrared moving object T has m line segments, A, B, C represent the three directions ↖, ←, ↑, and X represents no tracking:
(1) if x = 0 and y = 0, the value of DPtable(0,0) is equal to the value of Datatable(0,0), and the value of Pathtable(0,0) is X;
(2) if x = 0 and y ≠ 0, the value of DPtable(0, y) is equal to the value of DPtable(0, y-1) + Datatable(0, y), and the value of Pathtable(0, y) is C;
(3) if x ≠ 0 and y = 0, the value of DPtable(x, 0) is equal to the value of DPtable(x-1, 0) + Datatable(x, 0), and the value of Pathtable(x, 0) is B;
(4) if x ≠ 0 and y ≠ 0, the value of DPtable(x, y) is calculated as follows, recording the direction from which each cell originates into the path table Pathtable:
first, when x is 1, traverse y from 1 to m-1,
let K1, K2, K3 be the values of DPtable(x-1, y-1), DPtable(x-1, y), DPtable(x, y-1) respectively; then the value of DPtable(x, y) equals Min(K1, K2, K3) + Datatable(x, y), and if K1/K2/K3 is the minimum, the value of Pathtable(x, y) is A/B/C respectively;
similarly, when x is 2, 3, …, n-1, traverse y from 1 to m-1 and repeat the above process;
step S22: according to the path table Pathtable, the path is traced back as follows:
traverse x from n-1 to 0 and y from m-1 to 0:
if the value of Pathtable(x, y) is A, then x is decreased by 1, y is decreased by 1, and the current (x, y) pair is stored as a match (R_x, T_y);
if the value of Pathtable (x, y) is B, then x is decreased by 1;
if the value of Pathtable (x, y) is C, then y is decremented by 1.
Compared with the prior art, the invention has the following beneficial effects: during feature comparison, the length and angle of each line segment of the infrared and visible-light object outlines, the rotation angle from the preceding segment to the segment, and the rotation angle from the segment to the following segment are used as matching conditions. A two-dimensional table is built by dynamic programming to calculate the similarity of two outlines. Finally, a homography matrix is computed for each outline from the matched points, each infrared object image is perspective-projected with its homography matrix, and the transformed image is registered onto the visible-light object with the highest similarity, forming a complete image. By handling automatic registration and superposition of visible-light and infrared images in this way, the method can automatically match more than one human body, register the infrared image to the correct position on the visible-light image, and calculate the human body temperature.
Drawings
FIG. 1 shows the defined background.
FIG. 2 shows the read visible and infrared images.
FIG. 3 is a schematic diagram of a visible light and infrared moving object.
Fig. 4 is a data table according to an embodiment of the invention.
Fig. 5 shows (a) the DPtable and (b) the Pathtable obtained from fig. 4.
FIG. 6 is a path of the Pathtable.
FIG. 7 is a final alignment chart.
FIG. 8 is a graph showing the result of the superposition.
Detailed Description
The technical scheme of the invention is specifically explained below with reference to the accompanying drawings.
The invention relates to a real-time human body temperature rapid detection method: first, simplified outlines of the objects in the visible-light and infrared images are obtained; the outlines are then cross-compared by dynamic programming, the infrared object is superimposed on the visible-light object with the highest similarity through perspective projection transformation, and the human body temperature is calculated. The specific implementation steps are as follows:
step S1, capturing images with an infrared camera and a visible-light camera to obtain an infrared image and a visible-light image; then obtaining the outline of the moving object in the visible-light image using a Gaussian background subtraction method; converting the color space of the infrared image from RGB to HSV, and obtaining the outline of the moving object according to formula (1)
[Formula (1): the binary foreground mask F(x, y) is obtained by thresholding the hue channel H(x, y); original formula image not reproduced]
wherein F(x, y) is the foreground image, H(x, y) is the hue channel, and x, y are image coordinates;
step S2, after obtaining the outline of each moving object in the infrared and visible-light images, approximating each outline by polygon approximation, and then calculating, from the simplified outline, the parameters of each line segment in the outline, including: the angle A of the line segment, the length L of the line segment, the rotation angle PR from the angle of the previous line segment to that of the current line segment, and the rotation angle NR from the angle of the current line segment to that of the next line segment, where the two end points of a line segment are denoted a and b; the parameter calculation formulas are shown in (2) to (5):
A = arctan((y_b - y_a) / (x_b - x_a))   (2)
L = sqrt((x_b - x_a)^2 + (y_b - y_a)^2)   (3)
PR=|Current Segment Angle-Previous Segment Angle| (4)
NR=|Next Segment Angle-Current Segment Angle| (5)
wherein a and b are the two end points forming a line segment, and x, y are image coordinates;
then comparing the outlines using dynamic programming DP: a two-dimensional data table is established whose size is the number of visible-light moving-object line segments multiplied by the number of infrared moving-object line segments; each value in the table is the difference obtained by comparing a line segment of the visible-light object outline with a line segment of the infrared object outline, using comparison formula (6)
Datatable(x, y) = W1*|A_Tx - A_Ry| + W2*|L_Tx - L_Ry| + W3*|PR_Tx - PR_Ry| + W4*|NR_Tx - NR_Ry|   (6)
wherein formula (6) gives the value in column x, row y of the data table, namely the weighted sum of the differences, between the x-th line segment of the infrared moving object T and the y-th line segment of the visible-light moving object R, of the four parameters: angle A, length L, rotation angle PR from the previous segment, and rotation angle NR to the next segment; W1, W2, W3 and W4 are the corresponding weights;
then, according to the data table, calculating the total difference value of the outlines using dynamic programming DP, taking the outlines of each pair of moving objects as a group;
step S3: calculating a homography matrix for each moving-object outline according to the comparison result, extracting the infrared moving object separately, transforming it onto the visible-light moving object by perspective transformation, finally fusing the infrared moving object onto the visible-light moving object, and calculating the human body temperature.
In step S2 of the present invention, the steps of calculating the total difference value of the contours by dynamic programming DP, taking the contours of each pair of moving objects as a group, are as follows:
step S21: initializing a DP table DPtable according to the following four condition definitions, where DPtable(x, y) represents the value in column x, row y of the DPtable, the visible-light moving object R has n line segments, the infrared moving object T has m line segments, A, B, C represent the three directions ↖, ←, ↑, and X represents no tracking:
(1) if x = 0 and y = 0, the value of DPtable(0,0) is equal to the value of Datatable(0,0), and the value of Pathtable(0,0) is X;
(2) if x = 0 and y ≠ 0, the value of DPtable(0, y) is equal to the value of DPtable(0, y-1) + Datatable(0, y), and the value of Pathtable(0, y) is C;
(3) if x ≠ 0 and y = 0, the value of DPtable(x, 0) is equal to the value of DPtable(x-1, 0) + Datatable(x, 0), and the value of Pathtable(x, 0) is B;
(4) if x ≠ 0 and y ≠ 0, the value of DPtable(x, y) is calculated as follows, recording the direction from which each cell originates into the path table Pathtable:
first, when x is 1, traverse y from 1 to m-1,
let K1, K2, K3 be the values of DPtable(x-1, y-1), DPtable(x-1, y), DPtable(x, y-1) respectively; then the value of DPtable(x, y) equals Min(K1, K2, K3) + Datatable(x, y), and if K1/K2/K3 is the minimum, the value of Pathtable(x, y) is A/B/C respectively;
similarly, when x is 2, 3, …, n-1, traverse y from 1 to m-1 and repeat the above process;
step S22: according to the path table Pathtable, the path is traced back as follows:
traverse x from n-1 to 0 and y from m-1 to 0:
if the value of Pathtable(x, y) is A, then x is decreased by 1, y is decreased by 1, and the current (x, y) pair is stored as a match (R_x, T_y);
if the value of Pathtable (x, y) is B, then x is decreased by 1;
if the value of Pathtable (x, y) is C, then y is decremented by 1.
The following is a specific implementation of the present invention.
The method of the invention captures images with an infrared camera and a visible-light camera; after the moving objects in the images are obtained, the similarity between the object outlines is calculated by dynamic programming. The specific steps are as follows:
1. The background is defined as in fig. 1.
2. Reading the infrared and visible-light images, as shown in fig. 2; the outline of the moving object in the visible-light image is then obtained by a Gaussian background subtraction method. For the infrared image, the color space is converted from RGB to HSV, the hue channel (Hue) is extracted, and the outline of the moving object is obtained according to formula (1); the result is shown in fig. 3.
[Formula (1): the binary foreground mask F(x, y) is obtained by thresholding the hue channel H(x, y); original formula image not reproduced]
wherein F(x, y) is the foreground image, H(x, y) is the hue channel, and x, y are image coordinates;
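As an illustration of step S1's infrared foreground extraction, the following sketch thresholds the hue channel of an HSV image. The concrete threshold bounds t_low and t_high are assumptions here, since the patent's formula (1) image does not reproduce the exact range:

```python
import numpy as np

def hue_foreground(hsv_image, t_low, t_high):
    """Binary foreground mask F(x, y): 1 where hue H(x, y) lies in [t_low, t_high]."""
    hue = hsv_image[..., 0]  # H channel of an HSV image
    return ((hue >= t_low) & (hue <= t_high)).astype(np.uint8)

# toy 2x2 HSV image, each pixel is (hue, saturation, value)
img = np.array([[[10, 200, 200], [90, 200, 200]],
                [[30, 200, 200], [170, 200, 200]]], dtype=np.uint8)
mask = hue_foreground(img, 0, 40)  # pixels with hue 10 and 30 become foreground
```

In practice the threshold range would be tuned to the hue band that the thermal camera's colormap assigns to warm (body-temperature) regions.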
3. After obtaining the outline of each moving object, the outline is approximated by the polygon approximation method (Douglas-Peucker algorithm), and the parameters of each line segment are calculated from the simplified outline: the Angle (A) of the line segment, the Length (L) of the line segment, the rotation angle (PR) from the angle of the previous line segment to that of the current line segment, and the rotation angle (NR) from the angle of the current line segment to that of the next line segment. The end points of a line segment are denoted a and b, and the parameter calculation formulas are as in (2) to (5).
A = arctan((y_b - y_a) / (x_b - x_a))   (2)
L = sqrt((x_b - x_a)^2 + (y_b - y_a)^2)   (3)
PR=|Current Segment Angle-Previous Segment Angle| (4)
NR=|Next Segment Angle-Current Segment Angle| (5)
wherein a and b are the two end points forming a line segment, and x, y are image coordinates;
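The four per-segment parameters of formulas (2) to (5) can be sketched as follows for a closed polygonal contour. This is a minimal rendering, assuming atan2 for the segment angle so that all quadrants are handled (the patent's formula image shows only an arctangent):

```python
import math

def segment_params(contour):
    """For a closed polygon given as (x, y) vertices, compute for each segment:
    angle A (deg, formula 2), length L (formula 3),
    rotation PR from the previous segment (formula 4),
    rotation NR to the next segment (formula 5)."""
    n = len(contour)
    angles, lengths = [], []
    for i in range(n):
        ax, ay = contour[i]
        bx, by = contour[(i + 1) % n]          # segment i runs from vertex i to i+1
        angles.append(math.degrees(math.atan2(by - ay, bx - ax)))
        lengths.append(math.hypot(bx - ax, by - ay))
    params = []
    for i in range(n):
        pr = abs(angles[i] - angles[i - 1])            # |current - previous| angle
        nr = abs(angles[(i + 1) % n] - angles[i])      # |next - current| angle
        params.append((angles[i], lengths[i], pr, nr))
    return params

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square, four segments
p = segment_params(square)
```

For the unit square, the first segment has angle 0, length 1, and 90-degree rotations to both neighbors.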
Then, Dynamic Programming (DP) is used to compare the contours: a two-dimensional data table (the Datatable) is created whose size is the number of visible-light object line segments multiplied by the number of infrared object line segments; each value in the table is the difference obtained by comparing a line segment of the visible-light object contour with a line segment of the infrared object contour, using comparison formula (6).
Datatable(x, y) = W1*|A_Tx - A_Ry| + W2*|L_Tx - L_Ry| + W3*|PR_Tx - PR_Ry| + W4*|NR_Tx - NR_Ry|   (6)
Formula (6) gives the value in column x, row y of the Datatable, namely the weighted sum of the differences, between the x-th line segment of the infrared moving object T and the y-th line segment of the visible-light moving object R, of the four parameters: angle A, length L, rotation angle PR from the previous segment, and rotation angle NR to the next segment; W1, W2, W3 and W4 are the corresponding weights.
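Formula (6) can be sketched directly, with the weights W1 to W4 left as parameters (equal weights are an assumption; the patent does not give concrete values):

```python
def build_datatable(t_params, r_params, w=(1.0, 1.0, 1.0, 1.0)):
    """Datatable(x, y) per formula (6): weighted absolute differences of the four
    segment parameters (A, L, PR, NR) between infrared segment x and visible segment y."""
    table = []
    for tx in t_params:                         # x: segments of infrared object T
        row = [sum(wk * abs(tk - rk) for wk, tk, rk in zip(w, tx, ry))
               for ry in r_params]              # y: segments of visible object R
        table.append(row)
    return table

t = [(0.0, 1.0, 90.0, 90.0)]                               # one infrared segment
r = [(0.0, 1.0, 90.0, 90.0), (10.0, 2.0, 80.0, 90.0)]      # two visible segments
d = build_datatable(t, r)  # identical segments give difference 0
```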
Then, according to the data table, the total difference value of the profiles is calculated with DP in the following steps (Step 1 to Step 3), where the visible-light object is abbreviated R and the infrared object is abbreviated T.
Step 1. Initialize the DP table (DPtable), defining the initial values according to the following four conditions:
If x = 0 and y = 0, then DPtable(0,0) ← Datatable(0,0).
If x = 0 and y ≠ 0, then DPtable(0,y) ← DPtable(0,y-1) + Datatable(0,y).
If x ≠ 0 and y = 0, then DPtable(x,0) ← DPtable(x-1,0) + Datatable(x,0).
If x ≠ 0 and y ≠ 0, then DPtable(x,y) ← follow the next rule (Step 2).
Here "←" denotes assignment.
FIG. 4 shows a hypothetical Datatable used to illustrate the process:
Step 2. Calculate the DPtable: when x ≠ 0 and y ≠ 0, compute DPtable(x, y) according to the following algorithm and record the direction from which each cell originates into the path table (Pathtable). The visible-light moving object R has n line segments, the infrared moving object T has m line segments; A, B, C here represent the three directions ↖, ←, ↑; DPtable(x, y) represents the value in column x, row y of the DPtable:
for x = 1 to n-1:
    for y = 1 to m-1:
        K1 ← DPtable(x-1, y-1); K2 ← DPtable(x-1, y); K3 ← DPtable(x, y-1)
        DPtable(x, y) ← Min(K1, K2, K3) + Datatable(x, y)
        Pathtable(x, y) ← A if K1 is the minimum, B if K2, C if K3
Through the above process, the DPtable and Pathtable of fig. 5 are obtained from the Datatable of fig. 4:
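The initialization of Step 1 and the fill rule of Step 2 can be sketched together as a single pass over the Datatable. This is a minimal Python rendering of the described recurrence; the row/column orientation and the tie-breaking order A, B, C are assumptions:

```python
def fill_dp(datatable):
    """Fill DPtable and Pathtable from the Datatable (Steps 1-2).
    Directions: 'A' = diagonal, 'B' = from (x-1, y), 'C' = from (x, y-1), 'X' = origin."""
    n = len(datatable)           # number of rows (x index)
    m = len(datatable[0])        # number of columns (y index)
    dp = [[0.0] * m for _ in range(n)]
    path = [['X'] * m for _ in range(n)]
    for x in range(n):
        for y in range(m):
            if x == 0 and y == 0:
                dp[0][0] = datatable[0][0]
            elif x == 0:
                dp[0][y] = dp[0][y - 1] + datatable[0][y]
                path[0][y] = 'C'
            elif y == 0:
                dp[x][0] = dp[x - 1][0] + datatable[x][0]
                path[x][0] = 'B'
            else:
                k1, k2, k3 = dp[x - 1][y - 1], dp[x - 1][y], dp[x][y - 1]
                best = min(k1, k2, k3)
                dp[x][y] = best + datatable[x][y]
                path[x][y] = 'A' if best == k1 else ('B' if best == k2 else 'C')
    return dp, path

data = [[1, 4], [2, 1]]
dp, path = fill_dp(data)  # dp[-1][-1] is the total difference of the two contours
```

The bottom-right DPtable cell is the total difference value between the two outlines; the pair of outlines with the smallest total is then treated as the best match.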
Step 3. Trace the comparison result back through the Pathtable to obtain the path of fig. 6; the tracing algorithm is as follows:
x ← n-1; y ← m-1
while x > 0 or y > 0:
    if Pathtable(x, y) = A: x ← x-1; y ← y-1; record the match (R_x, T_y)
    else if Pathtable(x, y) = B: x ← x-1
    else if Pathtable(x, y) = C: y ← y-1
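The backtracking of Step 3 can be sketched as follows (a minimal rendering of the described rules; stopping at the origin cell is an assumption):

```python
def trace_path(path):
    """Walk the Pathtable from the bottom-right corner back toward (0, 0),
    recording a matched segment pair at every diagonal 'A' move."""
    x, y = len(path) - 1, len(path[0]) - 1
    matches = []
    while x > 0 or y > 0:
        if path[x][y] == 'A':     # diagonal: segments x and y are matched
            x -= 1
            y -= 1
            matches.append((x, y))
        elif path[x][y] == 'B':   # came from (x-1, y)
            x -= 1
        else:                     # 'C': came from (x, y-1)
            y -= 1
    return matches

path = [['X', 'C'], ['B', 'A']]
print(trace_path(path))
```

The recorded pairs (R_x, T_y) are the segment correspondences later used to compute the homography matrix.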
the final alignment is shown in FIG. 7.
4. Calculating a homography matrix for each contour according to the comparison result, extracting the infrared object separately, transforming it by perspective transformation onto the visible-light object, finally fusing the infrared object to the visible-light object and calculating the human body temperature; the superposition result is shown in fig. 8.
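The final step applies a 3x3 homography to the infrared object's points (perspective projection). A minimal sketch, assuming the homography H has already been estimated from the matched segment end points; in practice, OpenCV's findHomography and warpPerspective can perform the estimation and the warping of the full infrared object image:

```python
import numpy as np

def apply_homography(h_matrix, points):
    """Map 2-D points through a 3x3 homography (perspective projection)."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    mapped = pts @ h_matrix.T
    return mapped[:, :2] / mapped[:, 2:3]   # divide by the homogeneous coordinate

# hypothetical homography: identity plus a translation of (5, 3)
H = np.array([[1, 0, 5],
              [0, 1, 3],
              [0, 0, 1]], dtype=float)
out = apply_homography(H, [(0, 0), (2, 4)])
```

Once the infrared object is warped onto its best-matching visible-light object, the thermal values inside the fused region yield the body-temperature reading.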
The above are preferred embodiments of the present invention; all changes made according to the technical scheme of the present invention that produce equivalent functional effects without exceeding the scope of the technical scheme belong to the protection scope of the present invention.

Claims (2)

1. A real-time human body temperature rapid detection method, characterized in that: first, after simplified outlines of the objects in the visible-light and infrared images are obtained, the outlines are cross-compared by dynamic programming; the infrared object is then superimposed on the visible-light object with the highest similarity through perspective projection transformation, and the human body temperature is calculated; the specific implementation steps are as follows:
step S1, capturing images with an infrared camera and a visible-light camera to obtain an infrared image and a visible-light image; then obtaining the outline of the moving object in the visible-light image using a Gaussian background subtraction method; converting the color space of the infrared image from RGB to HSV, and obtaining the outline of the moving object according to formula (1)
[Formula (1): the binary foreground mask F(x, y) is obtained by thresholding the hue channel H(x, y); original formula image not reproduced]
wherein F(x, y) is the foreground image, H(x, y) is the hue channel, and x, y are image coordinates;
step S2, after obtaining the outline of each moving object in the infrared and visible-light images, approximating each outline by polygon approximation, and then calculating, from the simplified outline, the parameters of each line segment in the outline, including: the angle A of the line segment, the length L of the line segment, the rotation angle PR from the angle of the previous line segment to that of the current line segment, and the rotation angle NR from the angle of the current line segment to that of the next line segment, where the two end points of a line segment are denoted a and b; the parameter calculation formulas are shown in (2) to (5):
A = arctan((y_b - y_a) / (x_b - x_a))   (2)
L = sqrt((x_b - x_a)^2 + (y_b - y_a)^2)   (3)
PR=|Current Segment Angle-Previous Segment Angle| (4)
NR=|Next Segment Angle-Current Segment Angle| (5)
wherein a and b are the two end points forming a line segment, and x, y are image coordinates;
then comparing the outlines using dynamic programming DP: a two-dimensional data table is established whose size is the number of visible-light moving-object line segments multiplied by the number of infrared moving-object line segments; each value in the table is the difference obtained by comparing a line segment of the visible-light object outline with a line segment of the infrared object outline, using comparison formula (6)
Datatable(x, y) = W1*|A_Tx - A_Ry| + W2*|L_Tx - L_Ry| + W3*|PR_Tx - PR_Ry| + W4*|NR_Tx - NR_Ry|   (6)
wherein formula (6) gives the value in column x, row y of the Data table, namely the weighted sum of the differences, between the x-th line segment of the infrared moving object T and the y-th line segment of the visible-light moving object R, of the four parameters: angle A, length L, rotation angle PR from the previous segment, and rotation angle NR to the next segment; W1, W2, W3 and W4 are the corresponding weights;
then, according to the data table, calculating the total difference value of the outlines using dynamic programming DP, taking the outlines of each pair of moving objects as a group;
step S3: calculating a homography matrix for each moving-object outline according to the comparison result, extracting the infrared moving object separately, transforming it onto the visible-light moving object by perspective transformation, finally fusing the infrared moving object onto the visible-light moving object, and calculating the human body temperature.
2. The method as claimed in claim 1, wherein in step S2 the total difference value of the contours is calculated by dynamic programming DP, taking the contours of each pair of moving objects as a group, as follows:
step S21: initializing a DP table according to the following four condition definitions, where DP table(x, y) represents the value in column x, row y of the DP table, the visible-light moving object R has n line segments, the infrared moving object T has m line segments, A, B, C represent the three directions ↖, ←, ↑, and X represents no tracking:
(1) if x = 0 and y = 0, the value of DP table(0,0) is equal to the value of Data table(0,0), and the value of Path table(0,0) is X;
(2) if x = 0 and y ≠ 0, the value of DP table(0, y) is equal to the value of DP table(0, y-1) + Data table(0, y), and the value of Path table(0, y) is C;
(3) if x ≠ 0 and y = 0, the value of DP table(x, 0) is equal to the value of DP table(x-1, 0) + Data table(x, 0), and the value of Path table(x, 0) is B;
(4) if x ≠ 0 and y ≠ 0, the value of DP table(x, y) is calculated as follows, recording the direction from which each cell originates into the Path table:
first, when x is 1, traverse y from 1 to m-1,
let K1, K2, K3 be the values of DP table(x-1, y-1), DP table(x-1, y), DP table(x, y-1) respectively; then the value of DP table(x, y) equals Min(K1, K2, K3) + Data table(x, y), and if K1/K2/K3 is the minimum, the value of Path table(x, y) is A/B/C respectively;
similarly, when x is 2, 3, …, n-1, traverse y from 1 to m-1 and repeat the above process;
step S22: obtaining the backtracking path from the Path table according to the calculation result, as follows:
traverse x from n-1 down to 0 and y from m-1 down to 0:
if the value of Path table(x, y) is A, decrement x by 1 and y by 1, and store the resulting (x, y) pair as (Rx, Ty);
if the value of Path table(x, y) is B, decrement x by 1;
if the value of Path table(x, y) is C, decrement y by 1.
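The DP-table fill of step S21 and the backtracking of step S22 can be sketched as follows. The list-based tables and the function name `align_segments` are illustrative assumptions; the sketch follows the claim literally, including storing the (x, y) pair only after the diagonal decrement:

```python
def align_segments(data):
    """data[x][y] = difference between visible-light segment x and infrared segment y.

    Returns the total difference (bottom-right DP cell) and the list of
    matched segment pairs (Rx, Ty) recovered by backtracking.
    """
    n, m = len(data), len(data[0])
    dp = [[0.0] * m for _ in range(n)]
    path = [['X'] * m for _ in range(n)]   # 'X' = no backtracking (origin)

    # Step S21: fill the DP table under the four condition definitions.
    for x in range(n):
        for y in range(m):
            if x == 0 and y == 0:
                dp[0][0] = data[0][0]
            elif x == 0:                               # first row: only C (left)
                dp[0][y] = dp[0][y - 1] + data[0][y]
                path[0][y] = 'C'
            elif y == 0:                               # first column: only B (up)
                dp[x][0] = dp[x - 1][0] + data[x][0]
                path[x][0] = 'B'
            else:                                      # interior: best of A/B/C
                k1, k2, k3 = dp[x - 1][y - 1], dp[x - 1][y], dp[x][y - 1]
                best = min(k1, k2, k3)
                dp[x][y] = best + data[x][y]
                path[x][y] = 'A' if best == k1 else ('B' if best == k2 else 'C')

    # Step S22: backtrack from (n-1, m-1), recording pairs on diagonal moves.
    pairs = []
    x, y = n - 1, m - 1
    while not (x == 0 and y == 0):
        if path[x][y] == 'A':
            x -= 1
            y -= 1
            pairs.append((x, y))   # store (Rx, Ty) after the decrement, per the claim
        elif path[x][y] == 'B':
            x -= 1
        else:
            y -= 1
    pairs.reverse()
    return dp[n - 1][m - 1], pairs
```

For example, on a 3x3 difference matrix whose diagonal is zero, the minimal-cost path runs along the diagonal and the total difference is 0.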
CN201810399016.2A 2018-04-28 2018-04-28 Real-time human body temperature rapid detection method Active CN109029779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810399016.2A CN109029779B (en) 2018-04-28 2018-04-28 Real-time human body temperature rapid detection method


Publications (2)

Publication Number Publication Date
CN109029779A CN109029779A (en) 2018-12-18
CN109029779B true CN109029779B (en) 2020-02-14

Family

ID=64629614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810399016.2A Active CN109029779B (en) 2018-04-28 2018-04-28 Real-time human body temperature rapid detection method

Country Status (1)

Country Link
CN (1) CN109029779B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111174937A (en) * 2020-02-20 2020-05-19 中国科学院半导体研究所 Scanning type infrared body temperature detection device and method based on photoelectric cabin

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10335370B4 (en) * 2002-07-31 2013-06-27 Volkswagen Ag Method for applicative adaptation of a motor control and motor control obtained by the method
CN100472193C (en) * 2003-03-17 2009-03-25 财团法人工业技术研究院 Thermometer with integrated image display
US20040208230A1 (en) * 2003-04-16 2004-10-21 Tzong-Sheng Lee Thermometer with image display
US8963845B2 (en) * 2010-05-05 2015-02-24 Google Technology Holdings LLC Mobile device with temperature sensing capability and method of operating same
JP6517499B2 (en) * 2014-11-28 2019-05-22 マクセル株式会社 Imaging system
US10687713B2 (en) * 2016-08-05 2020-06-23 Optim Corporation Diagnostic apparatus
CN106981077B (en) * 2017-03-24 2020-12-25 中国人民解放军国防科学技术大学 Infrared image and visible light image registration method based on DCE and LSS
CN106919806A (en) * 2017-04-27 2017-07-04 刘斌 A kind of human body monitoring method, device and system and computer readable storage devices
CN107595254B (en) * 2017-10-17 2021-02-26 黄晶 Infrared health monitoring method and system
CN108549874B (en) * 2018-04-19 2021-11-23 广州广电运通金融电子股份有限公司 Target detection method, target detection equipment and computer-readable storage medium

Also Published As

Publication number Publication date
CN109029779A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
Shi et al. A framework for learning depth from a flexible subset of dense and sparse light field views
Chen et al. Pedestrian detection for autonomous vehicle using multi-spectral cameras
Ma et al. A novel two-step registration method for remote sensing images based on deep and local features
Uittenbogaard et al. Privacy protection in street-view panoramas using depth and multi-view imagery
CN107993258B (en) Image registration method and device
Liu et al. Fast directional chamfer matching
Zhang et al. Detecting and extracting the photo composites using planar homography and graph cut
JP4868530B2 (en) Image recognition device
Yang et al. Progressively complementary network for fisheye image rectification using appearance flow
US20140212048A1 (en) System and Method for Identifying Scale Invariant Features of Object Outlines on Images
US9552532B2 (en) System and method for describing image outlines
CN112801870B (en) Image splicing method based on grid optimization, splicing system and readable storage medium
St-Charles et al. Online multimodal video registration based on shape matching
Lv et al. Automatic registration of airborne LiDAR point cloud data and optical imagery depth map based on line and points features
CN103533332B (en) A kind of 2D video turns the image processing method of 3D video
CN111325828A (en) Three-dimensional face acquisition method and device based on three-eye camera
CN109029779B (en) Real-time human body temperature rapid detection method
CN113065506B (en) Human body posture recognition method and system
Cai et al. Improving CNN-based planar object detection with geometric prior knowledge
Zhang et al. Building a stereo and wide-view hybrid RGB/FIR imaging system for autonomous vehicle
CN115035281B (en) Rapid infrared panoramic image stitching method
Alzohairy et al. Image mosaicing based on neural networks
Song Optimization of the Progressive Image Mosaicing Algorithm in Fine Art Image Fusion for Virtual Reality
van de Wouw et al. Hierarchical 2.5-d scene alignment for change detection with large viewpoint differences
McCartney et al. Image registration for sequence of visual images captured by UAV

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant