CN113706707A - Human body three-dimensional surface temperature model construction method based on multi-source information fusion - Google Patents
Human body three-dimensional surface temperature model construction method based on multi-source information fusion
- Publication number: CN113706707A
- Application number: CN202110794691.7A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T17/20 — Three-dimensional [3D] modelling: finite element generation, e.g. wire-frame surface description, tessellation
- G06T7/33 — Image analysis: determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T7/80 — Image analysis: analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
- G06T2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
- G06T2207/10048 — Image acquisition modality: infrared image
- G06T2207/30196 — Subject of image: human being; person
Abstract
The invention discloses a human body three-dimensional surface temperature model construction method based on multi-source information fusion.
Description
Technical Field
The invention relates to the construction of a human body three-dimensional surface temperature model, in particular to a human body three-dimensional surface temperature model construction method based on multi-source information fusion.
Background
The surface temperature information of an object not only characterizes its surface state but also, to a certain extent, reflects its internal state. Infrared thermal imaging can accurately acquire the temperature information of an object's surface and remains robust in harsh working environments such as smoke and low illumination, so it is widely applied in military and civil fields such as security monitoring, power equipment inspection, fire prevention and fighting, medical diagnosis, and criminal tracking.
However, mainstream two-dimensional infrared thermal imaging suffers from problems such as loss of temperature information and difficult spatial localization. By reconstructing a three-dimensional temperature model of the object surface, three-dimensional surface temperature modeling retains the surface temperature information more completely and enables rapid localization of surface temperature features, greatly extending the application range and usability of infrared thermal imaging.
Existing dense three-dimensional reconstruction algorithms based on the small-displacement assumption easily fail when the camera moves rapidly; existing algorithms cannot resolve the misregistration of multi-source information in a moving state, which degrades both the precision of the three-dimensional temperature model and the accuracy of the temperature distribution; and existing reconstruction algorithms based on single-source information perform well only in certain fixed scenes and lack robustness.
Disclosure of Invention
In view of the above-mentioned defects of the prior art, the technical problem to be solved by the present invention is how to efficiently and accurately reconstruct a three-dimensional surface temperature model of a target object while the camera is moving rapidly.
In order to achieve the aim, the invention provides a human body three-dimensional surface temperature model construction method based on multi-source information fusion, which is characterized by comprising the following steps:
(1) inputting multi-source images obtained by a depth camera, a visible light camera and an infrared camera;
(2) initializing a spatial point pair matching relation based on an iterative nearest neighbor method, calculating a plane translation transformation relation between images by maximizing temperature consistency between the ith frame of infrared image and the (i-1) th frame of infrared image, and optimizing the spatial point pair matching relation;
(3) initializing the spatial pose of the camera by maximizing the temperature consistency between the i-th frame infrared image I_i and the (i-1)-th frame infrared image I_{i-1};
(4) estimating the spatial pose of the camera by maximizing the spatial geometric consistency between the i-th frame point cloud V_i and the (i-1)-th frame point cloud V_{i-1};
(5) further adjusting the camera pose estimated in step (4) based on the visible light luminosity consistency;
(6) calibrating external parameters of the camera in real time based on temperature consistency in an interframe registration mode, returning to the step (4) until the precision meets the requirement, and entering the step (7);
(7) updating the existing temperature three-dimensional model by using the information of the input image;
(8) and acquiring model data under the current camera view angle by using a ray casting algorithm and taking the model data as reference data for calculating the next frame.
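The eight steps above form one tracking-and-fusion iteration per incoming multi-source frame. A minimal structural sketch in Python — every step function here is a hypothetical stub standing in for the optimization it names; the names are illustrative, not from the patent:

```python
import numpy as np

# Hypothetical stubs: each stands for one optimization stage of the method.
def init_matches(ir_i, ir_prev):          # step (2): 2D translation via temperature consistency
    return np.zeros(2)

def init_pose(T_prev, matches):           # step (3): pose init from temperature consistency
    return T_prev.copy()

def refine_pose_geometric(T, cloud_i, cloud_prev):   # step (4): geometric consistency
    return T

def refine_pose_photometric(T, vis_i, vis_prev):     # step (5): visible-light fine-tuning
    return T

def calibrate_extrinsics(extr, ir_i, ir_prev):       # step (6): frame-to-frame extrinsic update
    return extr

def fuse_into_model(model, T, frame):                # step (7): temperature model update
    return model

def raycast_reference(model, T):                     # step (8): reference data for next frame
    return {"cloud": None, "ir": None, "vis": None}

def reconstruct(frames):
    T = np.eye(4)                 # camera pose
    extr = np.eye(4)              # infrared-vs-depth extrinsics
    model, ref, poses = {}, None, []
    for f in frames:              # f: dict with 'depth', 'ir', 'vis' images
        if ref is not None:
            w = init_matches(f["ir"], ref["ir"])
            T = init_pose(T, w)
            T = refine_pose_geometric(T, f["depth"], ref["cloud"])
            T = refine_pose_photometric(T, f["vis"], ref["vis"])
            extr = calibrate_extrinsics(extr, f["ir"], ref["ir"])
        model = fuse_into_model(model, T, f)
        ref = raycast_reference(model, T)   # frame-to-model tracking
        poses.append(T.copy())
    return poses, model

poses, model = reconstruct([{"depth": None, "ir": None, "vis": None}] * 3)
```

The loop makes the frame-to-model structure explicit: each frame is tracked against data ray-cast from the fused model, not against the raw previous frame.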
Further, in step (2), let the translation vector between the i-th frame and the (i-1)-th frame infrared images be ω_i = [t_u, t_v]^T, and construct the temperature consistency loss function:

E_T(ω_i) = Σ_{u∈Ω} [I_i(u + ω_i) − I_{i−1}(u)]²  (1)

where u is a pixel coordinate in the infrared image plane Ω, and I_i and I_{i−1} denote the i-th and (i−1)-th frame infrared images, respectively. Equation (1) is solved with the Gauss-Newton method; at the (k+1)-th iteration the translation vector ω_i is updated as:

ω_i^{k+1} = ω_i^k + Δω  (2)

where ω_i^{k+1} and ω_i^k are the results of the (k+1)-th and k-th iterations of ω_i. Substituting equation (2) into equation (1), the temperature consistency loss function E_T is linearized as:

E_T(Δω) ≈ Σ_{u∈Ω} [∇I_i(u + ω_i^k) · Δω + r_T^k(u)]²  (3)

where ∇I_i denotes the gradient maps of the i-th frame infrared image I_i in the u and v directions, computed with a 3×3 Sobel operator; r_T is the vector formed by the residual values of all pixels in the image, evaluated with the k-th iteration translation vector ω_i^k, and r_T^k is its value at the k-th iteration. Δω is then obtained from the linearized loss function:

Δω = −(J_T^T J_T)^{−1} J_T^T r_T^k  (4)

where J_T stacks the image gradients ∇I_i over all pixels.

The translation vector ω_i is then used to optimize the matching relation between spatial points of the i-th frame point cloud V_i and the (i−1)-th frame point cloud V_{i−1}. Here V_i(u_i) is the three-dimensional vertex of pixel u_i of point cloud V_i in the current camera coordinate system, and its matched vertex in point cloud V_{i−1} is V_{i−1}(h_{i−1}). During initialization, the camera spatial pose T_i of the i-th frame is first set to T_{i−1}; V_i(u_i) is transformed through T_{i−1} into the camera coordinate system of the (i−1)-th frame and projected into its pixel coordinate system, yielding a pixel coordinate u'_{i−1}; superposing the translation vector ω_i on u'_{i−1} gives h_{i−1}, so the point matched with V_i(u_i) is V_{i−1}(h_{i−1}) = V_{i−1}(u'_{i−1} + ω_i).
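A runnable sketch of equations (1)–(4), assuming pure translation between two frames stored as numpy arrays; the bilinear warp and the `np.gradient` call are simplified stand-ins for the patent's 3×3 Sobel scheme:

```python
import numpy as np

def translate(img, tu, tv):
    """Bilinearly sample img at (x + tu, y + tv), clamped at the borders."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    xs = np.clip(xs + tu, 0, W - 1)
    ys = np.clip(ys + tv, 0, H - 1)
    x0 = np.floor(xs).astype(int); y0 = np.floor(ys).astype(int)
    x1 = np.minimum(x0 + 1, W - 1); y1 = np.minimum(y0 + 1, H - 1)
    ax, ay = xs - x0, ys - y0
    return ((1 - ay) * ((1 - ax) * img[y0, x0] + ax * img[y0, x1])
            + ay * ((1 - ax) * img[y1, x0] + ax * img[y1, x1]))

def estimate_translation(I_i, I_prev, iters=30):
    """Gauss-Newton minimization of E_T(w) = sum (I_i(u + w) - I_prev(u))^2."""
    w = np.zeros(2)
    for _ in range(iters):
        warped = translate(I_i, w[0], w[1])
        gv, gu = np.gradient(warped)        # image gradients along v (rows), u (cols)
        r = (warped - I_prev).ravel()       # residual vector r_T
        J = np.stack([gu.ravel(), gv.ravel()], axis=1)
        dw = -np.linalg.solve(J.T @ J, J.T @ r)   # eq. (4)
        w += dw                                   # eq. (2)
        if np.linalg.norm(dw) < 1e-6:
            break
    return w

# Synthetic "infrared" frame: a smooth hot spot, shifted by a known amount.
y, x = np.mgrid[0:64, 0:64]
I_prev = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / (2 * 10.0 ** 2))
I_i = translate(I_prev, -2.0, 1.5)   # I_i(u) = I_prev(u - (2.0, -1.5))
w = estimate_translation(I_i, I_prev)  # w converges towards (2.0, -1.5)
```

On the smooth synthetic hot spot the estimate recovers the true 2D shift to sub-pixel accuracy, which is exactly the role ω_i plays when superposed on the projected pixel coordinates to form h_{i−1}.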
Further, in step (3), a temperature consistency constrained loss function is constructed:

E_fi(T_i) = Σ_{h∈P} [I_i(κ(ψ(T_i, v^g_{i−1}(h)))) − I_{i−1}(h)]²  (5)

where v^g_{i−1}(h) is a vertex of the point cloud map generated in the world coordinate system from the (i−1)-th frame infrared image, whose matched vertex in the i-th frame infrared image is V_i(h); M is the number of elements in the temperature-valid point set P. The rigid transformation of a three-dimensional vertex is

ψ(T_i, v) = T_i^{−1} · v  (6)

and κ(v) denotes the projection of a vertex v = (v_x, v_y, v_z) from three-dimensional space to pixel space:

κ(v) = ( f_x v_x / v_z + c_x , f_y v_y / v_z + c_y )^T  (7)

where f_x and f_y are the focal lengths of the infrared camera and (c_x, c_y) are the principal point coordinates of the infrared camera.

Using the Lie group / Lie algebra representation, the increment ΔT of the spatial transformation matrix T is expressed as a 6-dimensional vector ξ = (α, β, λ, t_x, t_y, t_z)^T. The k-th iteration then updates the pose as:

T_i^k = ΔT · T_i^{k−1} ≈ (I + ξ^) T_i^{k−1}  (8)

where T_i^{k−1} and T_i^k are the camera pose results of the (k−1)-th and k-th iterations, respectively, and ξ^ is the se(3) matrix form of the vector ξ:

ξ^ = [  0   −λ    β   t_x ;
        λ    0   −α   t_y ;
       −β    α    0   t_z ;
        0    0    0    0  ]  (9)

Substituting equations (8) and (9) into equation (5), the loss function is linearized as:

E_fi(ξ) ≈ Σ_{h∈P} [J_fi(h) · ξ + r_fi(h)]²  (10)

By the chain rule of differentiation:

J_fi = ∇I_i · J_κ(ψ) · J_ψ(ξ)  (11)

where J_κ(ψ) is the Jacobian matrix of the function κ(ψ), derived from equation (7), and J_ψ(ξ) is the Jacobian matrix of the function ψ(ξ), derived from equations (6), (8) and (9); J_fi and r_fi denote the Jacobian matrix and the residual term of the loss function E_fi(ξ), respectively. ξ is therefore solved according to equation (12) to minimize E_fi(ξ):

ξ = −(J_fi^T J_fi)^{−1} J_fi^T r_fi  (12)

T_i is updated through continuous iteration until the change of E_fi between the k-th and (k−1)-th iterations is smaller than a preset threshold, after which the method proceeds to step (4).
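Equations (8) and (9) can be sketched directly in Python; `xi_hat` below is the se(3) matrix form of ξ = (α, β, λ, t_x, t_y, t_z)^T used in the first-order pose update (names are illustrative):

```python
import numpy as np

def xi_hat(xi):
    """se(3) matrix form of xi = (alpha, beta, lam, tx, ty, tz), eq. (9)."""
    a, b, l, tx, ty, tz = xi
    return np.array([[0.0, -l,   b,  tx],
                     [l,   0.0, -a,  ty],
                     [-b,  a,  0.0,  tz],
                     [0.0, 0.0, 0.0, 0.0]])

def update_pose(T_prev, xi):
    """First-order pose update T^k = (I + xi^) T^{k-1}, eq. (8)."""
    return (np.eye(4) + xi_hat(xi)) @ T_prev

# For small xi the updated matrix stays close to a rigid transform:
T0 = np.eye(4)
T1 = update_pose(T0, np.array([0.0, 0.0, 0.01, 0.1, 0.0, 0.0]))
```

The approximation (I + ξ^) is only valid for small increments, which is why the pose is refined by many small Gauss-Newton steps rather than one large one.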
Further, in step (4), the temperature consistency between the i-th frame infrared image I_i and the (i−1)-th frame infrared image I_{i−1} is added to the optimization loss function in a weighted manner, and the optimization loss function is constructed as equation (13):

E_cr(ξ) = E_c(ξ) + w_f · E_f(ξ)  (13)

where E_c(ξ) and E_f(ξ) denote the geometric consistency loss function and the temperature consistency optimization function, respectively, and w_f denotes the weight of the temperature consistency constraint in the optimization loss function.
Further, the geometric consistency loss function is implemented by minimizing the point-to-plane distance of the matching point pairs between the current frame point cloud and the model point cloud.
Further, the spatial vertex V_i(h), after transformation by T_i, has the following distance to the model plane at its matching point v^g_{i−1}(h):

e = | (T_i · V_i(h) − v^g_{i−1}(h))^T · n^g_{i−1}(h) |  (14)

where n^g_{i−1}(h) is the normal at the matching point, taken from the model normal map N^g_{i−1} constructed under the (i−1)-th frame viewing angle. The geometric consistency loss function is then built as shown in equation (15):

E_c(T_i) = Σ_{(v_i, v^g_{i−1})∈O} [ (T_i · v_i − v^g_{i−1})^T · n^g_{i−1} ]²  (15)

where M is the number of elements in the matching point pair set O, and v_i = V_i(h) and v^g_{i−1} denote a pair of matching points in the current input point cloud and the (i−1)-th frame model point cloud, respectively.

Substituting equation (8) and equation (9) into equation (15) gives the linearization result of E_cr:

E_cr(ξ) ≈ Σ [ J_cr · ξ + r_cr ]²  (16)

where J_cr and r_cr denote the Jacobian matrix and the residual term of E_cr, respectively.

ξ is solved to minimize E_cr by equation (18):

ξ = −(J_cr^T J_cr)^{−1} J_cr^T r_cr  (18)

T_i is updated through continuous iteration until the change of E_cr between the k-th and (k−1)-th iterations is smaller than a preset threshold, after which the method proceeds to step (5).
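Equations (14)–(18) amount to Gauss-Newton steps on a point-to-plane error. A small self-contained sketch, restricted to the translation part for clarity (the full method also estimates rotation through ξ); with exact correspondences a single step recovers the offset exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Model point cloud: points on three orthogonal planes (a "corner"),
# with per-point normals, so translation is fully constrained.
pts, nrm = [], []
for axis in range(3):
    p = rng.uniform(0, 1, size=(50, 3))
    p[:, axis] = 0.0                      # plane {coordinate[axis] = 0}
    n = np.zeros(3); n[axis] = 1.0
    pts.append(p); nrm.append(np.tile(n, (50, 1)))
model = np.concatenate(pts); normals = np.concatenate(nrm)

# Input cloud: the model shifted by an unknown translation t_true.
t_true = np.array([0.03, -0.02, 0.05])
cloud = model + t_true

# One point-to-plane Gauss-Newton step (translation-only):
# residual r_j = (v_j - m_j) . n_j, Jacobian row J_j = n_j, per eq. (14)-(15).
r = np.einsum("ij,ij->i", cloud - model, normals)
J = normals
# t is the correction aligning the input cloud back onto the model: t = -t_true.
t = -np.linalg.solve(J.T @ J, J.T @ r)
```

Note that if all points lay on a single plane, J^T J would be rank-1 and in-plane translation unobservable; the point-to-plane formulation relies on sufficiently varied surface normals, which is one reason the patent combines it with the temperature and photometric terms.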
Further, in step (5), a multi-source joint optimization loss function is constructed as shown in equation (19):

E_fr(ξ) = E_c(ξ) + w_f · E_f(ξ) + w_v · E_v(ξ)  (19)

where E_c(ξ) is the spatial geometric consistency optimization objective; E_f(ξ) is the temperature consistency optimization objective and w_f its corresponding weight; E_v(ξ) is the visible light photometric consistency optimization objective and w_v its corresponding weight.

Further, the collected visible light image is converted from the three-primary-color (RGB) space to the YUV color space, in which the components U and V represent chrominance and the component Y represents luminance; Y is computed as:
Y=0.299R+0.587G+0.114B;
wherein R, G and B represent the three color components of red, green and blue, respectively, in the three primary color space;
After color conversion and luminance extraction, the quality of the visible light image is quantified by the average luminance B_v(Y) and the maximum luminance difference C_v(Y) of the visible light luminance image, computed as:

B_v(Y) = (1/|Ω|) Σ_{x∈Ω} Y(x)  (20)

C_v(Y) = max_{x∈Ω} Y(x) − min_{x∈Ω} Y(x)

where Ω is the pixel space of the visible light luminance image.

f(B) and g(C) are the weight components corresponding to the average luminance B_v(Y) and the maximum luminance difference C_v(Y), computed as shown in equations (21) and (22); k_B and k_C are the coefficients of f(B) and g(C), respectively; H is the maximum pixel value of the luminance image, i.e., 255.
The visible light photometric consistency loss over the valid point set Q is then:

E_v(T_i) = f(B) · g(C) · Σ_{h∈Q} [ Y_i(κ(ψ(T_i, v^g_{i−1}(h)))) − Y_{i−1}(h) ]²  (23)

where N is the number of elements in the visible light photometric valid point set Q, and Y_i and Y_{i−1} are the i-th and (i−1)-th frame visible light luminance images. Substituting equations (8) and (9) into equation (23) linearizes it as:

E_v(ξ) ≈ Σ_{h∈Q} [ J_v(h) · ξ + r_v(h) ]²  (24)

By the chain rule of differentiation:

J_v = ∇Y_i · J_κ(ψ) · J_ψ(ξ)  (25)

r_v(h) = Y_i(κ(ψ(T_i^{k−1}, v^g_{i−1}(h)))) − Y_{i−1}(h)  (26)

where J_κ(ψ) is the Jacobian matrix of the function κ(ψ), derived from equation (7), and J_ψ(ξ) is the Jacobian matrix of the function ψ(ξ), derived from equations (6), (8) and (9); J_v and r_v denote the Jacobian matrix and the residual term of the loss function E_v(ξ), respectively.

Substituting equations (24), (25) and (26) into equation (19), ξ is solved to minimize E_fr:

ξ = −(J_fr^T J_fr)^{−1} J_fr^T r_fr  (27)

T_i is updated through continuous iteration until the change of E_fr between the k-th and (k−1)-th iterations is smaller than a preset threshold, after which the method proceeds to step (6).
Further, in step (6), with the depth image as a reference, the point cloud generated from the depth image is transformed into the camera coordinate systems of the infrared camera and the visible light camera through a rigid spatial transformation by using the camera external parameters, and then is projected into the corresponding image coordinate systems to realize the correspondence between the images.
Further, denote the extrinsic parameters of the infrared camera relative to the depth camera as T_e, and the pose transformation of the depth camera from the depth trigger time t_di to the infrared trigger time t_ti as T_{di→ti}; then the real-time extrinsic parameters of the infrared camera are:

T̃_e^i = T_e · T_{di→ti}  (28)

The vertex of the point cloud map acquired at the i-th frame has coordinates v_i = V_i(u) in the current camera coordinate system; equation (29) transforms v_i into the infrared camera coordinate system of the (i−1)-th frame, yielding the point v_{i−1}:

v_{i−1} = T̃_e^{i−1} · T_{i−1}^{−1} · T_i · v_i  (29)

where T_{i−1} and T_i are the camera poses of the (i−1)-th and i-th frames, respectively, and T̃_e^{i−1} denotes the real-time extrinsic parameters of the (i−1)-th frame infrared camera. Using the intrinsic parameters of the infrared camera, v_{i−1} is projected onto the image plane to obtain the pixel point p of the (i−1)-th frame infrared image corresponding to v_i.

According to the real-time extrinsic parameters T̃_e^i of the i-th frame thermal imager, v_i is transformed into the i-th frame infrared camera coordinate system and then projected into the pixel coordinate system through the intrinsic matrix to obtain a point h; h and p form a pair of matching points in the two successive infrared frames, and the set of all M matching point pairs is denoted S. The real-time optimization objective function of the infrared camera extrinsic parameters is thus obtained:

E_et(T̃_e^i) = Σ_{(h,p)∈S} [ I_i(h) − I_{i−1}(p) ]²  (30)

The increment ΔT of the infrared camera extrinsic matrix is converted into a six-dimensional vector ξ_e = (α, β, λ, t_x, t_y, t_z)^T according to the Lie group / Lie algebra representation; the k-th iteration of the real-time extrinsic optimization then updates:

T̃_e^{i,k} = ΔT · T̃_e^{i,k−1} ≈ (I + ξ_e^) T̃_e^{i,k−1}  (31)

where T̃_e^{i,k−1} and T̃_e^{i,k} are the extrinsic results of the (k−1)-th and k-th iterations, respectively, and ξ_e^ is the se(3) matrix form of the vector ξ_e, analogous to equation (9)  (32).

Combining equations (29) and (30), E_et is linearized as:

E_et(ξ_e) ≈ Σ [ J_et · ξ_e + r_et ]²  (33)

where J_κ(ψ_1) is the Jacobian matrix of the function κ(ψ_1), derived from equation (7); ψ_1 denotes the transformation of v_i under the updated extrinsic parameters, and J_{ψ1}(ξ_e) is its Jacobian matrix, derived from equations (6), (31) and (32); J_et and r_et denote the Jacobian matrix and the residual term of E_et, respectively. ξ_e is solved according to equation (34) to minimize E_et(ξ_e):

ξ_e = −(J_et^T J_et)^{−1} J_et^T r_et  (34)

Through continuous iteration, the spatial pose transformation T_{di→ti} of the depth camera from t_di to t_ti is updated until the change of E_et between the k-th and (k−1)-th iterations is smaller than a preset threshold; the infrared camera extrinsic matrix is then given by equation (28), and the optimized extrinsic parameters are used to match the temperature image and the depth image.
Similarly, the spatial pose transformation of the depth camera from its trigger time t_di to the visible light camera trigger time t_vi is denoted T_{di→vi}, and the real-time extrinsic parameters of the visible light camera are T̃_v^i = T_v · T_{di→vi}, where T_v is the calibrated extrinsic matrix of the visible light camera relative to the depth camera.

Denote the i-th and (i−1)-th frame visible light luminance images as Y_i and Y_{i−1}, respectively, and let W be the set of N matching point pairs between the two images. The real-time extrinsic optimization objective E_ev of the visible light camera is constructed as:

E_ev(ξ_e) = Σ_{(h,p)∈W} [ Y_i(h) − Y_{i−1}(p) ]²  (35)

The linearization result of equation (35) is:

E_ev(ξ_e) ≈ Σ [ J_ev · ξ_e + r_ev ]²  (36)

ξ_e is solved according to equation (37) to minimize E_ev(ξ_e):

ξ_e = −(J_ev^T J_ev)^{−1} J_ev^T r_ev  (37)

The spatial pose transformation T_{di→vi} of the depth camera from t_di to t_vi is updated through continuous iteration until the change of the loss function E_ev between the k-th and (k−1)-th iterations is smaller than a preset threshold; the extrinsic matrix of the visible light camera is then obtained, and the optimized extrinsic parameters are used to match the visible light image and the depth image.
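The real-time extrinsic chain for both sensors reduces to 4×4 matrix composition: a calibrated static extrinsic composed with the estimated platform motion during the trigger gap. A hedged illustration under the assumed order T̃_e = T_e · T_{d→t} (all numeric values are placeholders, not from the patent):

```python
import numpy as np

def se3(rx, ry, rz, tx, ty, tz):
    """Small-angle rigid transform (I + xi^) with rotation (rx, ry, rz)."""
    T = np.eye(4)
    T[:3, :3] += np.array([[0, -rz, ry],
                           [rz, 0, -rx],
                           [-ry, rx, 0]], dtype=float)
    T[:3, 3] = [tx, ty, tz]
    return T

# Calibrated static extrinsics (infrared and visible relative to depth):
T_e_ir  = se3(0, 0, 0, 0.05, 0.0, 0.0)    # IR camera 5 cm to the right
T_e_vis = se3(0, 0, 0, -0.05, 0.0, 0.0)   # visible camera 5 cm to the left

# Estimated platform motion between the asynchronous trigger times:
T_d_to_t = se3(0, 0, 0.002, 0.001, 0, 0)   # depth trigger -> IR trigger
T_d_to_v = se3(0, 0, 0.001, 0.0005, 0, 0)  # depth trigger -> visible trigger

# Real-time extrinsics compensate the motion during the trigger gap.
T_rt_ir  = T_e_ir  @ T_d_to_t
T_rt_vis = T_e_vis @ T_d_to_v
```

When the platform is stationary, T_{d→t} is the identity and the real-time extrinsics reduce to the static calibration, which is why the optimization only matters during motion.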
The invention provides a complete three-dimensional reconstruction algorithm flow based on multi-source information fusion and the iterative closest point method, realizing fast and accurate reconstruction of a three-dimensional human body temperature model and improving the precision and robustness of the reconstruction algorithm. To handle extrinsic parameter changes during camera motion, the invention optimizes the camera extrinsic parameters in real time in a frame-to-frame manner and predicts the camera pose and extrinsic parameters with an alternating iteration strategy, further improving the accuracy of the reconstructed model while preserving the real-time performance of the reconstruction algorithm. The invention further proposes a temperature three-dimensional model built on the TSDF (Truncated Signed Distance Function) model and designs a model update strategy with adaptive voxel weights determined by the temperature measurement distance and angle, avoiding the blurring of temperature details that easily arises during multi-view data fusion and achieving accurate construction and storage of the human body three-dimensional temperature field.
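A minimal sketch of the weighted voxel update idea behind the extended TSDF model. The weight function here — inverse distance times view-angle cosine — is a plausible stand-in for the patent's adaptive strategy, not its exact formula:

```python
import numpy as np

def measurement_weight(distance, angle_rad, d0=1.0):
    """Adaptive weight: nearer, more frontal measurements count more.
    Hypothetical form; the patent's exact weighting is not reproduced."""
    return (d0 / max(distance, d0)) * max(np.cos(angle_rad), 0.0)

class TemperatureVoxel:
    """One voxel of an extended TSDF model: signed distance plus temperature,
    each fused as a weighted running average."""
    def __init__(self):
        self.tsdf = 1.0        # truncated signed distance
        self.temp = 0.0        # fused surface temperature (degrees C)
        self.weight = 0.0

    def integrate(self, tsdf_obs, temp_obs, w):
        tot = self.weight + w
        self.tsdf = (self.weight * self.tsdf + w * tsdf_obs) / tot
        self.temp = (self.weight * self.temp + w * temp_obs) / tot
        self.weight = tot

v = TemperatureVoxel()
v.integrate(0.0, 36.5, measurement_weight(0.5, 0.0))        # close, frontal view
v.integrate(0.2, 35.0, measurement_weight(3.0, np.pi / 3))  # far, oblique view
```

Because the oblique far observation receives a small weight, the fused temperature stays close to the reliable frontal reading — the mechanism that keeps multi-view fusion from blurring temperature details.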
The conception, specific structure and technical effects of the present invention are further described below with reference to the accompanying drawings, so that the objects, features and effects of the invention can be fully understood.
Drawings
FIG. 1 is a flow chart of a human body three-dimensional reconstruction algorithm of multi-source information fusion in a preferred embodiment of the invention;
FIG. 2 is a diagram of a small displacement assumption in the prior art;
FIG. 3 is a diagram illustrating the optimization effect of spatial point pairs on matching relationships in a preferred embodiment of the present invention;
FIG. 4 is a flow chart of a multi-source information joint pose estimation algorithm in a preferred embodiment of the present invention;
FIG. 5 is a schematic diagram of the temperature uniformity constraint loss function construction in a preferred embodiment of the present invention;
FIG. 6 is a diagram illustrating the construction of a constrained loss function for geometric consistency in accordance with a preferred embodiment of the present invention;
FIG. 7 is a diagram illustrating an average luminance weight function according to a preferred embodiment of the present invention;
FIG. 8 is a diagram illustrating a maximum luminance difference weighting function according to a preferred embodiment of the present invention;
FIG. 9 is a schematic diagram of the visible light luminosity uniformity loss function construction in a preferred embodiment of the present invention;
FIG. 10 is a diagram of the iterative process residual variation in a preferred embodiment of the present invention;
FIG. 11 is a schematic diagram of a platform multisource data collection process in a preferred embodiment of the invention;
FIG. 12 is a comparison graph of the multi-source image registration results at different motion speeds in a preferred embodiment of the present invention;
FIG. 13 is a schematic diagram of a platform single frame multi-source data acquisition in accordance with a preferred embodiment of the present invention;
FIG. 14 is a schematic diagram illustrating the optimization of temperature uniformity between frames in a preferred embodiment of the present invention;
FIG. 15 is a flow chart of an alternate iteration algorithm in a preferred embodiment of the present invention;
FIG. 16 is a schematic diagram of camera pose changes in alternate iterations in a preferred embodiment of the present invention;
FIG. 17 is a diagram of an extended TSDF model in accordance with a preferred embodiment of the present invention;
FIG. 18 is a schematic diagram of symbol distances in a preferred embodiment of the present invention;
FIG. 19 is a diagram of the sdf function in a preferred embodiment of the invention;
FIG. 20 is a diagram of the tsdf function in a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
The human body three-dimensional surface temperature model construction technology uses a three-dimensional reconstruction technology to obtain human body surface three-dimensional geometric information, fuses the three-dimensional geometric information according to the matching relation of the three-dimensional geometric information and the temperature information, stores the three-dimensional geometric information in a human body temperature three-dimensional model, and improves the integrity of the temperature three-dimensional model construction in a multi-view data fusion mode. In a specific embodiment according to the invention, a set of three-dimensional reconstruction process shown in fig. 1 is provided based on a multi-source information fusion technology, and the rapid and accurate construction of a human body three-dimensional temperature field is completed.
Generally, the multi-source information has the following characteristics: temperature information has good temporal consistency but coarse texture, so matching is easy but of limited precision; visible light information has rich texture but is easily disturbed, so matching is difficult but precise; depth information is highly reliable but lacks temporal consistency.
Therefore, the plane transformation relation between the input infrared image and the reference infrared image is calculated by optimizing their photometric consistency, completing the initialization of the point-pair matching relation between the two point clouds (step 100 in fig. 1) and mitigating reconstruction failure when the overlap between the input point cloud and the reference point cloud is low. The spatial pose of the platform in the current state is then solved with a joint pose estimation algorithm for the temperature measurement platform based on multi-source information fusion (step 200 in fig. 1). Based on the characteristics of the multi-source information, the platform pose estimation process is divided into three stages — fast pose initialization based on temperature consistency (step 201 in fig. 1), coarse pose estimation under spatial geometric consistency constraints (step 202 in fig. 1), and pose fine-tuning based on visible light photometric consistency and geometric consistency (step 203 in fig. 1) — achieving fast and accurate estimation of the platform pose in a coarse-to-fine manner. Because the data acquisition times of the cameras are asynchronous, platform motion changes the camera extrinsic parameters during construction of the human body three-dimensional temperature field, which in turn degrades the accuracy of the reconstructed temperature model. The method therefore calibrates the camera extrinsic parameters in real time through temperature consistency optimization in an inter-frame registration manner (step 300 in fig. 1) and cyclically alternates platform pose estimation with camera extrinsic optimization (step 400 in fig. 1) to suppress the multi-source information mismatch caused by extrinsic parameter changes. The existing temperature three-dimensional model is then updated with the information of the input image according to the fusion rules, based on the result of the joint platform pose optimization (step 500 in fig. 1). Finally, model data under the current platform viewing angle is obtained with a Ray Casting algorithm and used as reference data for the next frame's calculation (step 600 in fig. 1), realizing a frame-to-model reconstruction process and further improving the precision and robustness of the human body three-dimensional temperature field construction.
Step 100:
The basic assumption of the iterative nearest neighbor method (Iterative Closest Point, ICP) is that the camera displacement between the input current frame image I_i and the previous frame image I_{i−1} is small, so the camera spatial pose T_{i−1} corresponding to the previous frame can be used as the initial value of the current frame camera spatial pose T_i.
As shown in fig. 2, when the camera spatial pose does not change much, the points P_i and P_{i-1} corresponding to the same pixel coordinate in images I_i and I_{i-1} are close to each other on the actual object, so the camera pose can be adjusted by iterative calculation to complete the alignment of the input point cloud and the model data point cloud. However, when the camera pose change is large and the overlap between the input point cloud and the model data point cloud is small, the three-dimensional point P_{i-1} obtained from the model data point cloud by the initialized matching relation may have no counterpart in the input point cloud P_i, and the correct camera pose is difficult to find in subsequent iterations, leading to failure of the reconstruction algorithm. When the surface structure of the reconstructed object is simple and geometrically repetitive, or is overly complex, the probability of algorithm failure increases further. To mitigate this problem and improve the robustness of the reconstruction algorithm, the invention optimizes the initialization of the spatial point-pair matching relationship based on the two-dimensional input image and the model data.
At room temperature, the temperature distribution of the human body remains relatively stable, and the measured surface temperature is little affected by illumination conditions and measurement angle. The gray value of the same physical point therefore stays consistent across two consecutive infrared frames, so the two frames can be used to optimize the initial matching relation of the point clouds.
The invention uses a translation transformation to approximate the planar transformation between the two consecutive infrared frames; this is sufficient to provide enough effective initial matching point pairs and thereby improves the robustness of the human body three-dimensional reconstruction algorithm.
When calculating the parameters t_u and t_v of the image translation vector, the invention maximizes the temperature consistency between the infrared images of frame i and frame i-1. Let the translation vector between the two images be ω_i = [t_u, t_v]^T; the temperature consistency loss function of equation (5) can then be constructed:

E_T(ω_i) = Σ_{u∈Ω} [I_i(u + ω_i) - I_{i-1}(u)]²   (5)
where u is a pixel coordinate in the infrared image plane Ω, and I_i and I_{i-1} respectively represent the infrared images of the i-th and (i-1)-th frames. The loss function E_T is a nonlinear least-squares function that can be solved using the Gauss-Newton method. At iteration k+1, the translation vector ω_i is updated as:

ω_i^{k+1} = ω_i^k + Δω   (6)
where ω_i^{k+1} and ω_i^k are the results of the translation vector ω_i at the (k+1)-th and k-th iterations respectively. Combining equations (5) and (6), the temperature consistency loss function E_T is linearized:
where ∇I_i^u and ∇I_i^v represent the gradient maps of the i-th frame infrared image I_i in the u and v directions; the method uses a 3×3 Sobel operator to calculate the image gradients. r_T is the vector formed by the residuals of all pixel points in the image, calculated with the k-th iteration translation vector ω_i^k, and J_{r_T}^k is the Jacobian matrix of r_T obtained at the k-th iteration. According to the Gauss-Newton principle, the increment Δω can then be calculated from the linearized loss function:

Δω = -((J_{r_T}^k)^T J_{r_T}^k)^{-1} (J_{r_T}^k)^T r_T
After determining the planar transformation relation between the i-th and (i-1)-th frame infrared images, the invention uses the translation vector ω_i to optimize the matching relation of the spatial points in the i-th frame point cloud V_i and the (i-1)-th frame point cloud V_{i-1}.
Fig. 3 compares the matching relation of two adjacent images before and after optimization. The left image is the i-th frame infrared image and the right image is the (i-1)-th frame infrared image. Under the small-displacement assumption, the pixel point q in the i-th frame directly corresponds to the pixel point p in the (i-1)-th frame, a correspondence that is clearly wrong, as the images show. After adding the calculated translation vector ω_i, the correct counterpart of pixel point q can be found in the (i-1)-th frame infrared image, so the matching point pairs generated between the input point cloud and the point cloud to be matched during initialization are physically closer. This reduces the difficulty of the reconstruction task, allows the algorithm to handle cases with little point cloud overlap, such as large displacements or rapid motion, and greatly improves the robustness of the reconstruction algorithm.
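The Gauss-Newton translation estimate of step 100 can be sketched as follows (a minimal NumPy sketch, not the patent's implementation: central differences stand in for the 3×3 Sobel operator, and the synthetic image and all names are illustrative):

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly sample img at continuous coordinates (x, y), clamped to the border."""
    h, w = img.shape
    x = min(max(x, 0.0), w - 1.001)
    y = min(max(y, 0.0), h - 1.001)
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def estimate_translation(I_i, I_prev, iters=30):
    """Gauss-Newton estimate of omega = [t_u, t_v] minimizing
    E_T = sum_u (I_i(u + omega) - I_prev(u))^2 over the image plane."""
    h, w = I_prev.shape
    gy, gx = np.gradient(I_i.astype(float))  # stands in for the 3x3 Sobel gradients
    omega = np.zeros(2)
    for _ in range(iters):
        J, r = [], []
        for v in range(1, h - 1):
            for u in range(1, w - 1):
                xs, ys = u + omega[0], v + omega[1]
                r.append(bilinear(I_i, xs, ys) - I_prev[v, u])          # residual r_T
                J.append([bilinear(gx, xs, ys), bilinear(gy, xs, ys)])  # Jacobian row
        J, r = np.asarray(J), np.asarray(r)
        delta = np.linalg.solve(J.T @ J, -J.T @ r)  # Gauss-Newton increment
        omega += delta
        if np.linalg.norm(delta) < 1e-6:
            break
    return omega
```

On a smooth synthetic pair shifted by a known amount, the loop recovers the shift to sub-pixel accuracy.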
Step 200:
After the initialization of the matching relation between the current input point cloud and the model point cloud is completed, the pose of the temperature measurement platform can be estimated. The iterative closest point method used in the prior art considers only a single spatial geometric consistency constraint, and when reconstructing a scene with complex geometric relationships it easily falls into a local optimum, causing the reconstruction algorithm to fail. In the reconstruction of the human body three-dimensional temperature field, the multi-source data acquisition platform must circle the target human body through 360 degrees to acquire data; the change of view angle is usually accompanied by changes of surface color and moving shadows, so the visible light information is difficult to keep consistent during data acquisition. The prior art provides a more robust loss function using the consistency of object surface temperature, but the sparsity of surface-temperature texture information reduces the reconstruction accuracy of the model to a certain extent.
Existing three-dimensional reconstruction algorithms use fixed loss functions when estimating the camera's spatial pose. They do not account for the fact that, in reconstruction tasks of different scenes, the various information sources contribute differently to the camera pose estimate, and that within the same pose estimation the influence of each source varies across iteration steps; such algorithms therefore reconstruct only some scenes well. Introducing multi-source information lets the human body three-dimensional temperature field reconstruction system use the appropriate information source under different conditions, improving the performance of the three-dimensional reconstruction system.
In order to design a multi-source information fusion strategy, the invention analyzes the characteristics of different kinds of information and optimizes the pose estimation algorithm process and the constraint item construction of the multi-source data acquisition platform.
The object surface temperature information is less influenced by the change of illumination conditions, measurement angles and environmental conditions, good consistency can be kept in a long image sequence, the difficulty of pose estimation is reduced to a certain extent, and therefore the object surface temperature information can be used as guiding information to be added to the initial stage of space pose estimation of a three-dimensional thermal imaging platform, and the calculation result is quickly converged to the adjacent area of the global optimal solution through temperature consistency constraint.
The depth information provides spatial position information and surface geometric information of a measured object, and the construction of the spatial geometric consistency constraint of two cloud clusters of points through the depth information is proved to be a reliable and effective camera spatial pose estimation mode and is also a mainstream loss function construction method when an iterative nearest neighbor method is used at present. However, due to the limitation of the precision of the depth camera and the influence of noise, certain errors exist in the object space geometric information acquired by the multi-source information data acquisition platform, and the pose estimation precision is reduced. Therefore, space geometric constraint is added after the rapid convergence stage is finished, and a reliable camera pose rough estimation result can be further obtained.
The visible light images usually contain rich color texture information, and details in a scene can be well restored, so that high-precision estimation of the camera pose can be realized by constructing visible light luminosity consistency constraint among the images. However, the problem of weak iteration guidance is brought by abundant detailed information, so that the number of iterations in the solving process is increased sharply, and the solution is easy to fall into local optimization. Meanwhile, the measurement result of the visible light information on the surface of the object is easily influenced by the illumination condition and the measurement angle, the consistency of the visible light information can be only kept in a shorter image sequence, and the capability of coping with long baseline transformation is slightly insufficient. In order to avoid adverse effects caused by visible light information on the basis of improving the reconstruction precision of the three-dimensional model of the human body temperature, the rough estimation result of the spatial pose of the camera can be added into the loss function after being obtained, so that the iteration result approaches to a more accurate numerical value.
Based on the above analysis, the invention divides the human body three-dimensional temperature field reconstruction algorithm into three stages as shown in fig. 4 from the calculation flow: the method comprises the steps of platform pose fast initialization (201) based on temperature consistency, platform pose rough estimation (202) based on space geometric consistency and platform pose fine adjustment (203) dominated by visible light luminosity consistency.
Step 201:
In the platform pose fast initialization stage, the invention estimates the spatial pose of the platform by maximizing the temperature consistency between the i-th frame infrared image and the (i-1)-th frame infrared image. As shown in fig. 5, V̂_{i-1}(h) is a vertex of the point cloud generated in the world coordinate system under the (i-1)-th frame view angle, and its matching vertex in the i-th frame is V_i(h); this is expressed by equation (20).
where M is the number of elements in the temperature-effective point set P, ψ(ξ) denotes the rigid transformation of a three-dimensional vertex, and κ(v) denotes the projection of a vertex v = (v_x, v_y, v_z) from three-dimensional space to pixel space.
where f_x and f_y are the focal lengths of the thermal infrared imager and (c_x, c_y) are the principal point coordinates of the thermal infrared imager.
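The projection κ(v) with intrinsics f_x, f_y, c_x, c_y is the standard pinhole model; a minimal sketch with illustrative parameter values:

```python
def project(v, fx, fy, cx, cy):
    """kappa(v): perspective-project a camera-space point v = (vx, vy, vz)
    onto the image plane of the thermal infrared imager."""
    vx, vy, vz = v
    return (fx * vx / vz + cx, fy * vy / vz + cy)

def back_project(u, v, z, fx, fy, cx, cy):
    """Inverse: lift pixel (u, v) with measured depth z back to a 3-D camera-space point."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```

The round trip project → back_project reproduces the original point whenever the depth z is known.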
Since equation (20) is a nonlinear least-squares problem, it can be solved using the Gauss-Newton method. The spatial rigid transformation matrix T is a nonlinear variable; to calculate its derivative, the invention uses the Lie group and Lie algebra to express the increment ΔT of the spatial transformation matrix T as a 6-dimensional vector ξ = (α, β, λ, t_x, t_y, t_z)^T. The k-th iteration of the pose estimation then updates as:
T_i^k = ΔT · T_i^{k-1} ≈ (I + ξ^) T_i^{k-1}   (23)
where T_i^{k-1} and T_i^k are the platform pose results of the (k-1)-th and k-th iterations respectively, and ξ^ is the se(3) Lie-algebra matrix form of the vector ξ:
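The update of equation (23) can be sketched directly. The component order (α, β, λ, t_x, t_y, t_z) follows the text; the sign convention of the hat operator ξ^ is the usual se(3) one and is an assumption here:

```python
import numpy as np

def hat(xi):
    """xi^ : map xi = (alpha, beta, lam, tx, ty, tz) to its 4x4 se(3) matrix form
    (sign convention assumed; the patent's equation (24) is not reproduced here)."""
    a, b, l, tx, ty, tz = xi
    return np.array([[0.0, -l,   b,  tx],
                     [l,   0.0, -a,  ty],
                     [-b,  a,   0.0, tz],
                     [0.0, 0.0, 0.0, 0.0]])

def update_pose(T_prev, xi):
    """One Gauss-Newton pose update: T^k ~= (I + xi^) T^{k-1}, per equation (23)."""
    return (np.eye(4) + hat(xi)) @ T_prev
```

For small ξ this first-order update stays close to the SE(3) manifold, which is what makes the linearization of the following equations valid.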
by substituting equations (23) and (24) into equation (20), the loss function of the platform pose fast initialization stage can be linearized as:
According to the chain rule of differentiation:
where J_κ(ψ) is the Jacobian matrix of the function κ(ψ), which can be derived from equation (22), and J_ψ(ξ) is the Jacobian matrix of the function ψ(ξ), which can be derived from equations (21), (23) and (24). J_fi and r_fi respectively represent the Jacobian matrix and residual terms of the loss function E_fi(ξ), so ξ can be solved according to equation (27) to minimize the optimization function E_fi(ξ).
The platform pose T_i is updated through continuous iteration until the change of the loss function E_fi between the k-th and (k-1)-th iterations is less than a certain threshold, at which point the platform pose fast initialization stage ends and the platform pose rough estimation stage begins.
Step 202:
In the platform pose rough estimation stage, the invention estimates the spatial pose of the platform mainly by maximizing the spatial geometric consistency between the i-th frame point cloud V_i and the model point cloud under the (i-1)-th frame view angle. The fast initialization stage has already iterated the platform pose close to the global optimal solution; to prevent it from drifting in subsequent iterative calculations, the invention adds the temperature consistency between the i-th frame infrared image and the (i-1)-th frame infrared image to the optimization objective function with a certain weight, establishing a joint geometric-temperature optimization mechanism. The joint optimization loss function is shown in equation (28):
where E_sg and E_t respectively represent the geometric consistency loss function and the temperature consistency optimization function, and ω represents the weight of the temperature consistency constraint in the loss function, set to 0.1.
The spatial geometric consistency constraint is realized by minimizing the point-to-plane distance of the matching point pairs between the current frame point cloud and the model point cloud. As shown in fig. 6, let N̂_{i-1} be the model normal map under the previous frame view angle; the distance from the spatial vertex V_i(h) to the model plane at its matching point V̂_{i-1}(h) is then:
where M is the number of elements in the matching point pair set O, n̂_{i-1} is taken from the model normal map under the previous frame view angle, and v̂_{i-1} and v_i = V_i(h) represent a pair of matching points in the current input point cloud and the (i-1)-th frame model point cloud. Substituting equations (23) and (24) into equation (30) gives the linearized spatial geometric consistency optimization objective function:
where J_sg and r_sg respectively represent the Jacobian matrix and residual terms of the spatial geometric consistency loss function.
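The point-to-plane distance accumulated over the match set O can be sketched as follows (illustrative NumPy, not the patent's solver):

```python
import numpy as np

def point_to_plane_residuals(src, dst, normals, T):
    """r_j = n_j . (T v_j - v^_j) for each matched pair (v_j, v^_j)."""
    R, t = T[:3, :3], T[:3, 3]
    warped = src @ R.T + t  # apply the rigid transform to the input cloud
    return np.einsum("ij,ij->i", normals, warped - dst)

def geometric_energy(src, dst, normals, T):
    """Sum of squared point-to-plane distances over all matching point pairs."""
    r = point_to_plane_residuals(src, dst, normals, T)
    return float(r @ r)
```

With a perfect pose the energy vanishes; shifting the source cloud along the model normals raises it by the squared offset per pair.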
The temperature consistency optimization objective function in the platform pose rough estimation stage is constructed similarly to that of the pose fast initialization stage.
ξ is solved via equation (33) to minimize the joint optimization function E_cr:
The platform pose T_i is updated through continuous iteration until the change of the loss function E_cr between the k-th and (k-1)-th iterations is less than a certain threshold, at which point the platform pose rough estimation stage is considered complete.
Step 203:
In the platform pose fine-tuning stage, the rich texture information in the visible light image is used to adjust the platform pose more finely through the visible light photometric consistency constraint. To ensure the correctness of the iteration direction and the reliability of the iteration result, the invention retains both the spatial geometric consistency constraint and the temperature consistency constraint in this stage's optimization, constructing the multi-source joint optimization loss function shown in equation (34):
where the three terms are the spatial geometric consistency optimization objective function, the temperature consistency optimization objective function, and the visible light photometric consistency optimization objective function, each with its corresponding weight.
The effectiveness of the visible light information depends mainly on the illumination conditions of the reconstructed scene. To avoid reconstruction failure caused by poor or changing illumination and to improve the robustness of the reconstruction algorithm, the quality of the visible light image is first assessed before the visible light photometric consistency loss function is calculated, and the weight of that loss function is then adjusted according to the assessment.
The visible light information reflected from an object's surface contains both brightness information and color information. The color information is determined by the reflection characteristics of the surface together with the band, incidence angle and radiant intensity of the incident light, so it is considerably less stable than the brightness information and easily causes point cloud mismatches in the camera pose calculation. To extract the brightness component of the visible light information and complete the conversion from visible light image to brightness image, the invention converts the visible light images collected by the human body three-dimensional thermal imaging platform from the three-primary-color (RGB) space to the YUV color space and extracts the luminance channel.
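The RGB-to-luminance step can be sketched with the standard BT.601 YUV luma weighting (the patent does not give its exact coefficients, so these are assumptions):

```python
def rgb_to_luma(r, g, b):
    """BT.601 luminance: Y = 0.299 R + 0.587 G + 0.114 B; the U and V channels
    are discarded, since only brightness is kept for photometric consistency."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_image(rgb_rows):
    """Convert a row-major grid of (R, G, B) pixels into a luminance image."""
    return [[rgb_to_luma(*px) for px in row] for row in rgb_rows]
```

Using only Y makes saturated but equally bright colors indistinguishable, which is exactly the stability property the pose optimization wants.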
The present invention uses the average brightness B_v(Y) and the maximum brightness difference C_v(Y) of the visible luminance image to quantify the quality of the visible light image; they are calculated as follows:
C_v(Y) = max_{x∈Ω} Y(x) - min_{x∈Ω} Y(x)   (37)
where Ω is the pixel space of the visible luminance image.
The average brightness B_v(Y) reflects the overall brightness level of the visible image; when B_v(Y) falls outside the effective brightness range (B_tl, B_th), the weight of the visible light photometric consistency loss function should be reduced to prevent reconstruction failure.
The maximum brightness difference C_v(Y) reflects the spread of the brightness distribution in the image; when C_v(Y) exceeds the threshold C_t, the weight of the visible light photometric consistency loss function is positively correlated with C_v(Y). Accordingly, in the platform spatial pose fine-tuning optimization objective function, the visible light photometric consistency constraint weight is calculated as follows:
where f(B) and g(C) are the weight components corresponding to the image average brightness B_v(Y) and the maximum brightness difference C_v(Y), calculated as shown in equations (39) and (40). k_B and k_C are the coefficients of f(B) and g(C) respectively; since the average brightness has the greater influence on image quality, the invention empirically sets k_B and k_C to 4 and 2.
where H is the maximum pixel value of the luminance image, set to 255 in the present invention. Figs. 7 and 8 respectively show the functional relationship between the weight component f(B) and the average brightness B_v(Y), and between the weight component g(C) and the maximum brightness difference C_v(Y).
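Equations (38)-(40) are not reproduced in this excerpt, so the sketch below only assumes shapes consistent with the description: f(B) is high inside the effective range (B_tl, B_th) and falls off outside it, g(C) is zero at or below the threshold C_t and grows with C_v(Y), with k_B = 4, k_C = 2 and H = 255 as stated; the range and threshold values are made up:

```python
H = 255.0  # maximum luminance pixel value (per the text)

def f_B(B, B_tl=60.0, B_th=200.0):
    """Assumed brightness component: 1 inside the effective range, tapering outside."""
    if B_tl <= B <= B_th:
        return 1.0
    return max(0.0, B / B_tl) if B < B_tl else max(0.0, (H - B) / (H - B_th))

def g_C(C, C_t=30.0):
    """Assumed contrast component: 0 at or below the threshold C_t, rising above it."""
    return 0.0 if C <= C_t else min(1.0, (C - C_t) / (H - C_t))

def visible_weight(B, C, k_B=4.0, k_C=2.0):
    """Assumed combination of the two components with the stated coefficients."""
    return k_B * f_B(B) + k_C * g_C(C)
```

The qualitative behavior matches figs. 7 and 8: a dark or washed-out image drives the visible light term toward zero, letting the geometric and temperature terms dominate.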
Fig. 9 shows the construction of the visible light photometric consistency optimization objective function. Using the i-th frame platform pose T_i, a vertex of the point cloud generated in the world coordinate system under the (i-1)-th frame view angle is converted into the i-th frame coordinate system through the spatial rigid transformation, then projected through the camera intrinsics into the i-th frame image coordinate system to obtain the pixel point u'. The objective function is constructed by constraining the brightness consistency between the (i-1)-th frame visible brightness image Y_{i-1} and the i-th frame visible brightness image Y_i, namely:
where N is the number of elements in the visible-photometric effective point set Q. Substituting equations (23) and (24), this is linearized as:
According to the chain rule of differentiation:
where J_κ(ψ) is the Jacobian matrix of the function κ(ψ), which can be derived from equation (22), and J_ψ(ξ) is the Jacobian matrix of the function ψ(ξ), which can be derived from equations (21), (23) and (24). The remaining terms respectively represent the Jacobian matrix and residual terms of the visible light photometric consistency loss function.
The spatial geometric consistency optimization objective function and the temperature consistency optimization objective function in the platform pose fine-tuning stage are constructed as in the platform pose rough estimation stage.
ξ is solved through linearization to minimize the multi-source information joint optimization function E_fr of the platform pose fine-tuning stage:
The platform pose T_i is updated through continuous iteration until the change of the loss function E_fr between the k-th and (k-1)-th iterations is less than a certain threshold, at which point the platform pose fine-tuning stage is considered complete.
Fig. 10 shows the residual variation of various camera pose estimation algorithms in the iterative computation process for the same set of data acquired by the human three-dimensional thermal imaging platform. As can be seen from the figure, the T-ICP algorithm using the temperature information and the space geometric information can realize the rapid convergence of the platform pose, but the final solving precision is not high; the RGB-ICP algorithm combining visible light information and geometric information has high calculation precision, but the iteration times required by result convergence are obviously increased; the camera pose estimation algorithm based on multi-source information fusion can realize rapid reduction of residual errors in the early stage and obtain more accurate results within iteration times far less than RGB-ICP.
Step 300:
the pose estimation of the multi-source information fusion platform requires a good matching relation between multi-source information, but simultaneous triggering between multi-source sensors is difficult to realize, so that external parameters among the multi-source sensors in the data acquisition process are changed, and errors are introduced into the registration result of the multi-source image. As shown in fig. 11 and 12, when the three-dimensional thermal imaging platform of the human body is fixed or moves slowly, the external parameter change caused by asynchronous triggering can be ignored, so that the registration between the multi-source images can still be effectively completed by using the calibrated external parameters of the camera. However, when the moving speed of the human body three-dimensional thermal imaging platform is increased, the platform generates larger pose change in the same trigger time difference, and at the moment, the calibrated external parameters of the camera are not enough to obtain a multi-source image registration result meeting the precision requirement.
In order to solve the problem, the invention provides a method for optimizing external parameters of a camera in real time by using an inter-frame registration mode so as to reduce the problem of multi-source image mismatching caused by camera motion.
Fig. 13 shows the acquisition process of the i-th frame multi-source image set of the human body three-dimensional thermal imaging platform. The trigger time of the depth camera is t_di, that of the thermal infrared imager is t_ti, and that of the visible light camera is t_vi. Because the visible light camera and the thermal infrared imager behave similarly during platform data acquisition, the invention uses the thermal infrared imager as the representative for analyzing the extrinsic parameter change problem in multi-source data acquisition. Denote the relative pose of the depth camera and the thermal infrared imager in the i-th frame under platform motion by a single transform. As shown in fig. 13, this relative pose consists of two parts: the extrinsic parameters of the thermal infrared imager relative to the depth camera determined when the platform was set up, and the pose transformation of the depth camera from t_di to t_ti. Then:
In this formula, the calibrated extrinsic term is a fixed value independent of the platform's motion speed and spatial position and can be obtained through the camera calibration process of chapter 2; solving the spatial pose transformation of the depth camera from t_di to t_ti is therefore the key to optimizing the infrared camera extrinsics in real time.
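The composition of the fixed calibrated extrinsic with the depth camera's motion over the trigger delay can be illustrated with 4×4 homogeneous matrices (composition order, intrinsics and motion values are assumptions for illustration):

```python
import numpy as np

def translation_T(tx, ty, tz):
    """Homogeneous 4x4 transform with translation only (illustrative helper)."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def realtime_extrinsic(T_calib, dT_depth):
    """Compose the calibrated thermal-vs-depth extrinsic with the depth camera's
    pose change between t_di and t_ti (composition order assumed)."""
    return T_calib @ dT_depth

# Effect of ignoring the trigger delay: project one depth-frame point into the
# thermal image with the stale and with the corrected extrinsic.
fx, cx = 500.0, 320.0
p = np.array([0.1, 0.0, 1.0, 1.0])       # homogeneous point in depth coords
T_calib = translation_T(0.05, 0.0, 0.0)  # calibrated extrinsic (made up)
dT = translation_T(0.01, 0.0, 0.0)       # platform moved 1 cm during the delay
u_stale = fx * (T_calib @ p)[0] / (T_calib @ p)[2] + cx
q = realtime_extrinsic(T_calib, dT) @ p
u_fixed = fx * q[0] / q[2] + cx          # the corrected pixel column
```

With a 1 cm platform shift during the delay, the stale extrinsic mis-registers this example point by 5 pixels, which the composed extrinsic removes; the error grows with platform speed, matching the observation in figs. 11 and 12.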
In order to improve the real-time performance of the algorithm, the invention provides that the inter-frame registration of the infrared images is completed by using the front and rear frames of infrared images and the depth image of the current frame through temperature consistency constraint, so that the real-time optimization of the external parameters of the thermal infrared imager is completed.
Fig. 14 shows the construction of the inter-frame temperature consistency constraint optimization objective function. Let v_i = V_i(u) be the coordinates, in the current platform coordinate system, of a vertex in the point cloud acquired by the platform at the i-th frame. Equation (48) converts v_i into the (i-1)-th frame thermal infrared imager camera coordinate system to obtain the point v_{i-1}:
where T_{i-1} and T_i are the platform poses of the (i-1)-th and i-th frames respectively, and the remaining term represents the real-time extrinsic parameters of the (i-1)-th frame thermal infrared imager. According to the intrinsic parameters of the thermal infrared imager, v_{i-1} is projected onto the image plane to obtain the pixel point p corresponding to v_i in the (i-1)-th frame infrared image.
Then, according to the real-time extrinsic parameters of the i-th frame thermal infrared imager, v_i is converted into the i-th frame thermal infrared imager camera coordinate system and projected through the intrinsic matrix into the pixel coordinate system to obtain the point h. The points h and p form a pair of matching points in the two consecutive infrared images, and the set of all M matching point pairs is denoted S. The real-time extrinsic optimization objective function of the thermal infrared imager is thus obtained:
It is linearized according to the Lie group and Lie algebra:
where J_κ(ψ_1) is the Jacobian matrix of the function κ(ψ_1), which can be derived from equation (22), and J_{ψ1}(ξ_e) is the Jacobian matrix of the function ψ_1(ξ_e), which can be derived from equations (21), (50) and (51). J_et and r_et respectively represent the Jacobian matrix and residual terms of the loss function E_et, so a linearization method can be used to solve ξ_e and minimize the optimization function E_et(ξ_e).
The spatial pose transformation of the depth camera from t_di to t_ti is updated through continuous iteration until the change of the loss function E_et between the k-th and (k-1)-th iterations is less than a certain threshold; this yields the extrinsic matrix of the thermal infrared imager, and pixel-level matching of the temperature image and the depth image can be achieved using the optimized extrinsics.
Step 400:
The real-time camera extrinsic optimization based on inter-frame registration assumes that the pose of the depth camera is unchanged, i.e., it is taken as accurate and fixed while the extrinsics of the thermal infrared imager and the visible light camera are optimized in real time. According to the current algorithm flow, however, the registered depth, temperature and visible light images produced by the real-time extrinsic optimization participate only in the update of the temperature three-dimensional model and are not used for platform pose estimation. To make full use of the multi-source information in the reconstruction of the human body temperature three-dimensional model, the invention adopts an alternating iteration strategy to effectively combine the camera pose estimation based on multi-source information fusion with the real-time camera extrinsic optimization.
The specific flow of the alternating iteration algorithm designed by the invention is shown in fig. 15. After the i-th frame of multi-source information of the optimization of the initial matching relation of the space point pair is input, firstly, the platform pose estimation result is quickly converged to be near the true value through the platform pose quick initialization stage, and then the relatively accurate platform pose T is obtained through the platform pose rough estimation stage and the platform pose fine adjustment stagei 0. If T isi 0And if the accuracy of the infrared thermal imager and the external parameters of the visible light camera cannot meet the requirements, the camera pose obtained by current calculation is used for optimizing the external parameters of the infrared thermal imager and the visible light camera in real time, the matching relation of the temperature image, the visible light image and the depth image is optimized according to the calculation result, and the optimized multi-source image is input into the calculation flow of rough estimation of the platform pose, so that the alternate iteration of the platform pose estimation and the camera external parameter optimization is realized.
Fig. 16 visualizes the pose changes of the depth camera, infrared camera and visible light camera during the p-th alternating iteration, where A denotes the true camera poses of the i-th frame; owing to the asynchronous trigger times, the spatial poses of the depth camera, thermal infrared imager and visible light camera are T_di, T_ti and T_vi respectively. The current poses of the thermal infrared imager and the visible light camera are calculated from their corresponding extrinsic parameters and the current pose of the depth camera:
As can be seen from fig. 16, the alternating iteration strategy does not affect the calculation process of the real-time camera extrinsic optimization algorithm. However, because the extrinsic changes of the thermal infrared imager and the visible light camera are introduced, the matching relations between the pixel points of the two consecutive temperature images and of the visible light images in the multi-source-fusion platform pose estimation algorithm change, so the temperature consistency and visible light photometric consistency optimization objective functions in the algorithm must be corrected.
The modified temperature consistency optimization objective function is as follows:
The modified visible light luminosity consistency optimization objective function is as follows:
Step 500:
when initializing, updating, outputting and the like the temperature three-dimensional model, the spatial characteristics, the temperature characteristics and the visible light characteristics of the temperature three-dimensional model need to be represented and stored through a certain spatial representation model. Therefore, the invention improves the technology of the TSDF space representation model based on the space voxel model, and provides an extended TSDF space representation model which is completely suitable for the reconstruction of the human body temperature three-dimensional model.
The extended TSDF spatial representation model proposed by the invention is shown in fig. 17. In the initialization stage of the spatial model, the space of the reconstructed target object is specified as L x W x H according to the requirements of human body three-dimensional temperature field construction; the length, width and height of the target space are then each divided into N equal parts to obtain an N x N x N voxel space, completing the voxelization of the target space. The spatial position and size of each voxel in the voxel space are determined by the spatial coordinates of its center point. Each voxel simultaneously stores a truncated signed distance value tsdf representing the geometric information of the model space, temperature information I_T, color information I_C(R, G, B), and their corresponding geometric weight w_G, temperature weight w_T and color weight w_C.
The extended TSDF model represents the surface geometric information of the three-dimensional model with a Truncated Signed Distance Function (TSDF) according to the distance between a point and the model surface. As shown in fig. 18, when the surface geometry of the three-dimensional model is updated with a depth image, let p be the center point coordinate of a spatial voxel and let P be the point on the spatial three-dimensional model surface acquired by the depth camera that is closest to p; the signed distance of the voxel at p is then:
sdf(p) = d(P) - d(p) (63)
In the formula, d(P) denotes the depth coordinate of the acquired spatial point P in the world coordinate system, and d(p) the depth coordinate of the voxel center p.
As can be seen from equation (63), the closer the sdf value is to 0, the closer the point is to the three-dimensional model surface. When the point lies outside the three-dimensional model surface, the sdf value of the corresponding voxel is positive; when the point lies inside the three-dimensional model, the sdf value is negative. The voxels whose sdf values cross zero can therefore be extracted from the model and interpolated to determine the spatial position and geometric characteristics of the three-dimensional model surface. In practice, points far from the model surface have very limited characterization capability: their large absolute sdf values occupy computing resources while easily introducing errors through boundary effects. A threshold judgment mechanism is therefore introduced on the basis of the sdf function shown in fig. 19: when the absolute sdf value of a voxel exceeds the distance uncertainty mu, it is set to +mu or -mu according to the sign of sdf, and all sdf values are then normalized, yielding the tsdf function image shown in fig. 20.
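The truncation and normalization step just described can be sketched directly; the value of mu below is an arbitrary example.

```python
# Truncated signed distance: sdf values beyond the uncertainty band +/-mu are
# clamped, then normalized so the band maps onto [-1, 1]. A minimal sketch of
# the thresholding described above.

def tsdf(sdf, mu):
    if sdf >= mu:
        sdf = mu
    elif sdf <= -mu:
        sdf = -mu
    return sdf / mu   # normalize: surface -> 0, far outside/inside -> +/-1
```

Points on the surface give 0, and points far outside or inside saturate at +1 and -1 respectively.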
The temperature information of a voxel in the model is stored as a single-precision floating point number I_T, recording the temperature in degrees Celsius of the space the voxel represents. Since temperature information is gray-scale information that is inconvenient for human observation, the color information of the three-dimensional model is also recorded when the spatial model is constructed, so that features of the three-dimensional temperature model can be quickly located on the real object and the visualization of the three-dimensional temperature field reconstruction result is improved. The color information of each spatial voxel is recorded as I_C(R, G, B), whose three channels store the red, green and blue components of the model surface color to preserve the authenticity of the object color.
Once the target spatial region has been voxelized, the temperature information I_T and color information I_C of each voxel are initialized to 0 and (0, 0, 0) respectively, the tsdf value is set to -1, and the geometric weight w_G, temperature weight w_T and color weight w_C in each voxel are zeroed, yielding the initialized extended TSDF spatial representation model. After the platform pose and camera extrinsic parameters of each frame are obtained by the alternating iteration algorithm, the currently acquired model surface, temperature and color information is first converted into a temporary TSDF model according to the calculation result, and the temporary model is then fused into the global extended TSDF model to update the three-dimensional model data.
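A minimal sketch of the initialization just described, using NumPy arrays for an N x N x N grid; the field names and the resolution are illustrative assumptions.

```python
import numpy as np

# Initialize the extended TSDF voxel grid: tsdf set to -1, temperature and
# color zeroed, and all three weights zeroed, as described above.

def init_extended_tsdf(n=64):
    return {
        "tsdf": np.full((n, n, n), -1.0, dtype=np.float32),  # tsdf = -1
        "I_T":  np.zeros((n, n, n), dtype=np.float32),       # temperature 0
        "I_C":  np.zeros((n, n, n, 3), dtype=np.uint8),      # color (0,0,0)
        "w_G":  np.zeros((n, n, n), dtype=np.float32),       # geometric weight
        "w_T":  np.zeros((n, n, n), dtype=np.float32),       # temperature weight
        "w_C":  np.zeros((n, n, n), dtype=np.float32),       # color weight
    }

model = init_extended_tsdf(32)
```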
Marking the platform pose estimation result of the i-th frame as T_i, the tsdf value of the spatial voxel P with center point coordinate p is calculated as:
In the formula, the floor brackets indicate rounding of the vector u, and D_i(x) indicates the depth value of the pixel point corresponding to point p in the i-th frame depth map. Because existing sensors measure depth inaccurately at object edges, the invention sets the current geometric weight of voxels corresponding to object edges in the point cloud map to 0. The contours in the depth map are extracted using equation (67):
In the formula, the neighborhood is a 5 x 5 window around the pixel u, and delta is the edge determination threshold. The geometric weight of the spatial voxel P is:
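The 5 x 5 neighborhood contour test of equation (67) can be sketched as follows; the brute-force loop, the example depth step and the value of delta are illustrative assumptions.

```python
import numpy as np

# A pixel is treated as a depth edge when the depth range inside its 5x5
# neighborhood exceeds the threshold delta; its geometric weight is then 0.

def depth_edges(depth, delta):
    h, w = depth.shape
    edge = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - 2), min(h, y + 3)   # 5x5 window, clipped
            x0, x1 = max(0, x - 2), min(w, x + 3)
            win = depth[y0:y1, x0:x1]
            edge[y, x] = (win.max() - win.min()) > delta
    return edge

depth = np.ones((8, 8), dtype=np.float32)
depth[:, 4:] = 2.0                 # a depth step down the middle
edges = depth_edges(depth, delta=0.5)
```

Pixels whose window straddles the step are flagged; interior pixels on either side are not.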
Marking the temperature image collected at the i-th frame, the temperature information corresponding to the spatial voxel P with center point coordinate p is:
According to the known prior art, factors such as measurement distance and temperature measurement angle affect the result of infrared radiation temperature measurement. The invention therefore corrects for measurement distance and measurement angle when updating the temperature three-dimensional model, and discards temperature information acquired outside the working distance range of the platform during model updating:
According to the known prior art, the long-wave infrared surface emissivity of non-electrolyte materials remains essentially unchanged at normal angles below 60 degrees and decreases sharply at normal angles above 60 degrees, so the invention retains only the temperature information measured at normal angles smaller than 60 degrees:
In the formula, V_i and N_i represent the point cloud map and the normal vector map of the i-th frame, respectively.
Under the assumption that the object surface temperature is consistent, the temperature values of the same spatial point observed from different viewing angles remain relatively stable. Temperature mutation points are therefore removed when the temperature of the three-dimensional model is updated, reducing the influence of temperature measurement noise on the model temperature information:
In the formula, delta_T is the temperature jump threshold, which the invention empirically sets to 10 degrees Celsius. The temperature weight of the spatial voxel P can then be derived:
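A hedged sketch of the temperature-weight gating combining the three conditions above (working distance, the 60-degree normal angle limit, the 10 degrees Celsius jump threshold). The working-distance bounds and the cosine falloff are assumptions, since the exact weight formula is not reproduced in this text.

```python
import math

# A measurement contributes only when inside the working distance range, at a
# normal angle below 60 degrees, and within delta_t of the stored model
# temperature. d_min/d_max and the cos() falloff are illustrative assumptions.

def temperature_weight(distance, normal_angle_deg, t_measured, t_model,
                       d_min=0.5, d_max=3.0, delta_t=10.0):
    if not (d_min <= distance <= d_max):
        return 0.0                       # outside platform working distance
    if normal_angle_deg >= 60.0:
        return 0.0                       # emissivity drops sharply past 60 deg
    if t_model is not None and abs(t_measured - t_model) > delta_t:
        return 0.0                       # reject temperature mutation / noise
    # Weight decays with the normal angle; cos() is one plausible choice.
    return math.cos(math.radians(normal_angle_deg))
```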
Marking the visible light image collected at the i-th frame as I_ci, the color information corresponding to the spatial voxel P with center point coordinate p is:
Because a change of measurement angle is usually accompanied by a change of illumination, which strongly affects the measured surface color of the object, only measurements whose normal angle is smaller than 30 degrees are selected when updating the color information of the three-dimensional model, ensuring its accuracy:
To reduce the influence of measurement noise on the model color information, the invention also eliminates color mutation points:
In the formula, the three components represent R, G and B of the i-th frame visible light image. The color weight of the spatial voxel P can then be derived:
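The color gating can be sketched the same way; the per-channel jump threshold delta_c is an assumption, since the patent's exact mutation criterion is not reproduced in this text.

```python
# Color is fused only when the normal angle is under 30 degrees, and sudden
# per-channel jumps against the stored model color are rejected as mutation
# points. delta_c is an illustrative assumption.

def color_weight(normal_angle_deg, rgb_measured, rgb_model, delta_c=60):
    if normal_angle_deg >= 30.0:
        return 0.0                      # lighting varies too much when oblique
    if rgb_model is not None:
        for m, g in zip(rgb_measured, rgb_model):
            if abs(m - g) > delta_c:    # per-channel R, G, B mutation test
                return 0.0
    return 1.0
```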
After the temporary TSDF model calculated from the multi-source information of the i-th frame is obtained, it is fused into the global TSDF model through the following formula, realizing the update of the three-dimensional model data.
In the formula, w_eta is the maximum voxel weight, set to increase the robustness of the reconstruction system and prevent data overflow; according to the practical requirements of human body three-dimensional temperature field reconstruction, the invention sets it to 2000.
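The fusion into the global model is the usual weighted running average with the accumulated weight capped at w_eta; a minimal sketch for a single voxel attribute (the same update applies to tsdf, temperature and color).

```python
# Fuse one frame's temporary value into the global model by a weighted
# running average, capping the stored weight at w_eta = 2000 as above.

W_ETA = 2000.0

def fuse(global_val, global_w, frame_val, frame_w):
    if frame_w <= 0.0:
        return global_val, global_w           # frame contributes nothing
    new_w = global_w + frame_w
    new_val = (global_val * global_w + frame_val * frame_w) / new_w
    return new_val, min(new_w, W_ETA)         # cap the stored weight

t, w = 36.0, 1.0
t, w = fuse(t, w, 38.0, 1.0)   # running average pulls toward the new sample
```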
The invention provides a complete three-dimensional reconstruction algorithm flow based on multi-source information fusion and the iterative nearest neighbor method, realizing fast and accurate reconstruction of the human body temperature three-dimensional model. A fusion strategy for multi-source information is designed, improving the precision and robustness of the three-dimensional reconstruction algorithm. To address extrinsic parameter change during camera motion, the invention optimizes the camera extrinsic parameters in real time in a frame-to-frame manner and predicts the camera pose and extrinsic parameters with an alternating iteration strategy, further improving the accuracy of the reconstructed model while preserving the real-time performance of the reconstruction algorithm. On the basis of the TSDF (truncated signed distance function) model, the invention proposes a temperature three-dimensional model and designs a model updating strategy with adaptive voxel weights determined by temperature measurement distance and angle, avoiding the blurring of temperature details that easily arises in multi-view data fusion and realizing accurate construction and storage of the human body three-dimensional temperature field.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.
Claims (10)
1. A human body three-dimensional surface temperature model construction method based on multi-source information fusion is characterized by comprising the following steps:
(1) inputting multi-source images obtained by a depth camera, a visible light camera and an infrared camera;
(2) initializing a spatial point pair matching relation based on an iterative nearest neighbor method, calculating a plane translation transformation relation between images by maximizing temperature consistency between the ith frame of infrared image and the (i-1) th frame of infrared image, and optimizing the spatial point pair matching relation;
(3) initializing the spatial pose of the camera by maximizing the temperature consistency between the i-th frame infrared image and the (i-1)-th frame infrared image;
(4) estimating the spatial pose of the camera by maximizing the spatial geometric consistency between the i-th frame point cloud V_i and the (i-1)-th frame point cloud V_{i-1};
(5) further adjusting the camera pose estimated in step (4) based on the visible light luminosity consistency;
(6) calibrating external parameters of the camera in real time based on temperature consistency in an interframe registration mode, returning to the step (4) until the precision meets the requirement, and entering the step (7);
(7) updating the existing temperature three-dimensional model by using the information of the input image;
(8) and acquiring model data under the current camera view angle by using a ray casting algorithm and taking the model data as reference data for calculating the next frame.
2. The method for constructing the human body three-dimensional surface temperature model based on multi-source information fusion of claim 1, wherein in step (2), letting the translation vector between the i-th frame and (i-1)-th frame infrared images be omega_i = [t_u, t_v]^T, a temperature consistency loss function is constructed:
wherein u is the pixel coordinate in the infrared image plane Omega, and I_i and I_{i-1} represent the infrared images of the i-th and (i-1)-th frames respectively; the function is solved by the Gauss-Newton method, and at the (k+1)-th iteration the translation vector omega_i is updated as:
In the formula, omega_i^{k+1} and omega_i^k respectively represent the results of the (k+1)-th and k-th iterations of the translation vector omega_i; combining equation (1) and equation (2), the temperature consistency loss function E_T is linearized:
In the formula, the gradient term represents the gradient maps of the i-th frame infrared image I_i in the u and v directions, computed with a 3 x 3 Sobel operator; r_T is the vector formed by the residual values of all pixel points computed with the k-th iteration translation vector omega_i^k, and r_T^k is the r_T obtained at the k-th iteration; the translation increment is calculated from the linearized loss function:
The matching relation between the spatial points of the i-th frame point cloud V_i and the (i-1)-th frame point cloud V_{i-1} is optimized using the translation vector omega_i, where V_i(u_i) is the three-dimensional vertex of pixel u_i of point cloud V_i in the current camera coordinate system and its matched vertex in point cloud V_{i-1} is V_{i-1}(h_{i-1}); in the initialization process, the camera spatial pose T_i of the i-th frame is first set to T_{i-1}; V_i(u_i) is then transformed into the (i-1)-th frame camera coordinate system through T_{i-1} and projected into the (i-1)-th frame pixel coordinate system to obtain a pixel point, whose pixel coordinate is superposed with the translation vector omega_i to obtain h_{i-1}; the point matched with V_i(u_i) is then V_{i-1}(h_{i-1}).
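Claim 2's translation estimate can be illustrated with a small Gauss-Newton alignment. Here np.gradient stands in for the 3 x 3 Sobel operator, and the bilinear sampler, synthetic "temperature" image and iteration count are all illustrative assumptions.

```python
import numpy as np

# Gauss-Newton estimate of a 2D shift omega = (t_u, t_v) minimizing the
# squared temperature difference between two infrared frames.

def sample(img, u, v):
    """Bilinear sample of img at (row + v, col + u) with border clamping."""
    h, w = img.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    r = np.clip(rows + v, 0, h - 1)
    c = np.clip(cols + u, 0, w - 1)
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    r1, c1 = np.minimum(r0 + 1, h - 1), np.minimum(c0 + 1, w - 1)
    fr, fc = r - r0, c - c0
    top = img[r0, c0] * (1 - fc) + img[r0, c1] * fc
    bot = img[r1, c0] * (1 - fc) + img[r1, c1] * fc
    return top * (1 - fr) + bot * fr

def estimate_shift(i_prev, i_cur, iters=30):
    u = v = 0.0
    gy, gx = np.gradient(i_cur)                 # image gradients (Sobel proxy)
    for _ in range(iters):
        warped = sample(i_cur, u, v)
        r = (i_prev - warped).ravel()           # per-pixel residual
        J = np.stack([sample(gx, u, v).ravel(),
                      sample(gy, u, v).ravel()], axis=1)
        du, dv = np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
        u, v = u + du, v + dv
    return u, v

# Synthetic smooth "temperature" image, and a view shifted by (1.5, -0.5) px.
yy, xx = np.mgrid[0:32, 0:32].astype(float)
base = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 40.0)
prev = sample(base, 1.5, -0.5)
u, v = estimate_shift(prev, base)
```

The recovered shift approaches the true (1.5, -0.5) subpixel translation.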
3. The human body three-dimensional surface temperature model construction method based on multi-source information fusion as claimed in claim 2, wherein in step (3), a temperature consistency constrained loss function is constructed:
In the formula, the first vertex is a vertex of the point cloud map generated in the world coordinate system from the (i-1)-th frame, and the vertex matched with it in the i-th frame is V_i(h); M is the number of elements in the valid temperature point set P; psi(xi) denotes the rigid transformation of a three-dimensional vertex, and kappa(v) denotes the projection of vertex v = (v_x, v_y, v_z) from three-dimensional space to pixel space:
In the formula, f_x and f_y are the focal lengths of the infrared camera, and (c_x, c_y) is the principal point coordinate of the infrared camera;
the increment of the spatial transformation matrix T is represented through the Lie group/Lie algebra as a 6-dimensional vector xi = (alpha, beta, lambda, t_x, t_y, t_z)^T; the update of the k-th iteration is:
T_i^k = Delta T * T_i^{k-1} ~= (I + xi^) T_i^{k-1} (8);
In the formula, T_i^{k-1} and T_i^k are the camera pose results of the (k-1)-th and k-th iterations respectively, and xi^ is the se(3) Lie algebra form of the vector xi:
Substituting equations (8) and (9) into equation (5), the loss function is linearized as:
According to the chain rule of derivation:
In the formula, J_kappa(psi) is the Jacobian matrix of the function kappa(psi), derived from equation (7), and J_psi(xi) is the Jacobian matrix of the function psi(xi), derived from equations (6), (8) and (9); J_fi and r_fi respectively represent the Jacobian matrix and residual term of the loss function E_fi(xi), so xi is solved according to equation (12) to achieve the minimization of E_fi(xi);
T_i is updated through continuous iteration until the change of E_fi between the k-th and (k-1)-th iterations is smaller than a preset threshold, whereupon step (4) is entered.
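The se(3) update of equations (8) and (9) can be sketched directly. The component ordering follows the claim's vector xi = (alpha, beta, lambda, t_x, t_y, t_z)^T; reading the first three components as the rotation part is an interpretation of the garbled source.

```python
import numpy as np

# Map a 6-vector xi to its se(3) matrix form xi^ and apply the first-order
# pose update T_k = (I + xi^) T_{k-1} of equation (8).

def hat(xi):
    a, b, l, tx, ty, tz = xi
    return np.array([[0.0, -l,   b,  tx],
                     [l,   0.0, -a,  ty],
                     [-b,  a,   0.0, tz],
                     [0.0, 0.0, 0.0, 0.0]])

def update_pose(T_prev, xi):
    return (np.eye(4) + hat(xi)) @ T_prev   # first-order exponential update

T = np.eye(4)
T = update_pose(T, [0.0, 0.0, 0.0, 0.1, 0.0, 0.0])   # pure x-translation
```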
4. The human body three-dimensional surface temperature model construction method based on multi-source information fusion of claim 3, wherein in step (4), the temperature consistency between the i-th frame infrared image and the (i-1)-th frame infrared image is added to the optimization loss function in a weighted manner, the optimization loss function being constructed according to equation (13):
5. The human body three-dimensional surface temperature model construction method based on multi-source information fusion of claim 4, wherein the geometric consistency loss function is realized by minimizing the point-to-plane distance of the matching point pairs between the current frame point cloud and the model point cloud.
6. The human body three-dimensional surface temperature model construction method based on multi-source information fusion of claim 5, wherein the point-to-plane distance from the spatial vertex V_i(h) to the model plane at its matching point is:
In the formula, the normal vector is taken from the model normal vector map under the (i-1)-th frame view angle; the geometric consistency loss function is constructed as shown in equation (15):
In the formula, M is the number of elements in the matching point pair set O; v_i = V_i(h) and its matched model vertex respectively represent a pair of matching points between the current input point cloud and the (i-1)-th frame model point cloud;
Substituting equations (8) and (9) into equation (15) gives the linearization result:
In the formula, the two quantities respectively represent the Jacobian matrix and the residual term of E_cr;
xi is solved by equation (18) to achieve the minimization of E_cr:
7. The human body three-dimensional surface temperature model building method based on multi-source information fusion of claim 6, wherein in step (5), a multi-source joint optimization loss function shown as formula (19) is built:
In the formula, the three terms are respectively the spatial geometric consistency optimization objective function, the temperature consistency optimization objective function with its corresponding weight, and the visible light luminosity consistency optimization objective function with its corresponding weight.
8. The human body three-dimensional surface temperature model construction method based on multi-source information fusion of claim 7, wherein the collected visible light image is converted from a three-primary color space to a YUV color space, wherein components U and V represent chrominance, and component Y represents luminance, and the calculation method is as follows:
Y=0.299R+0.587G+0.114B;
wherein R, G and B represent the three color components of red, green and blue, respectively, in the three primary color space;
After color conversion and luminance extraction, the quality of the visible light image is quantified by the average brightness B_v(Y) and the maximum luminance difference C_v(Y) of the visible light luminance image, calculated as follows:
C_v(Y) = max_{x in Omega} Y(x) - min_{x in Omega} Y(x);
in the formula, Ω is a pixel space of the visible light brightness image;
wherein f(B) and g(C) are the weight components corresponding to the average brightness B_v(Y) and the maximum luminance difference C_v(Y) of the image, calculated as shown in equations (21) and (22); k_B and k_C are the coefficients of f(B) and g(C) respectively;
In the formula, H is the maximum pixel value of the luminance image, taken as 255;
In the formula, N is the number of elements in the visible light luminosity valid point set Q; this loss function is linearized by substituting equations (8) and (9) into it:
According to the chain rule of derivation:
In the formula, J_kappa(psi) is the Jacobian matrix of the function kappa(psi), derived from equation (7), and J_psi(xi) is the Jacobian matrix of the function psi(xi), derived from equations (6), (8) and (9); the remaining two quantities respectively represent the Jacobian matrix and the residual term of the loss function;
Substituting equations (24), (25) and (26) into equation (19), xi is solved to achieve the minimization of E_fr:
T_i is updated through continuous iteration until the change of E_fr between the k-th and (k-1)-th iterations is smaller than a preset threshold, whereupon step (6) is entered.
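Claim 8's luminance conversion and image-quality weighting can be sketched as follows. Since equations (21) and (22) are not reproduced in this text, the forms of f(B) and g(C) below (penalizing distance from mid-gray, rewarding contrast) are illustrative assumptions, as are k_B and k_C.

```python
import numpy as np

# Luminance Y from RGB (claim 8's formula), mean brightness B_v(Y), luminance
# range C_v(Y), and assumed weight components f(B) and g(C).

H = 255.0  # maximum pixel value of the luminance image

def luminance(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def quality_weight(rgb, k_b=1.0, k_c=1.0):
    y = luminance(rgb.astype(np.float64))
    b_v = y.mean()                       # average brightness B_v(Y)
    c_v = y.max() - y.min()              # maximum luminance difference C_v(Y)
    f = k_b * (1.0 - abs(b_v - H / 2) / (H / 2))   # assumed form of f(B)
    g = k_c * (c_v / H)                            # assumed form of g(C)
    return f * g

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, 2:] = 255                         # half black, half white
w = quality_weight(img)
```

A half-black, half-white image is both mid-gray on average and maximally contrasted, so it scores the top weight; a flat image has zero contrast and scores zero.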
9. The method for constructing the human body three-dimensional surface temperature model based on multi-source information fusion of claim 8, wherein in step (6), the point cloud generated from the depth image is transformed by rigid spatial transformation, using the camera extrinsic parameters, into the camera coordinate systems of the infrared camera and the visible light camera, and then projected into the corresponding image coordinate systems to realize the correspondence between the images.
10. The human body three-dimensional surface temperature model construction method based on multi-source information fusion of claim 9, wherein, marking the extrinsic parameters of the infrared camera relative to the depth camera and the pose transformation of the depth camera from t_di to t_ti, there are:
the coordinate of a vertex of the point cloud map acquired at the i-th frame in the current camera coordinate system is v_i = V_i(u); v_i is transformed into the (i-1)-th frame infrared camera coordinate system by equation (29) to obtain the point v_{i-1}:
In the formula, T_{i-1} and T_i are the camera poses of the (i-1)-th and i-th frames respectively, and the remaining term represents the real-time extrinsic parameters of the (i-1)-th frame infrared camera; v_{i-1} is projected onto the image plane according to the internal parameters of the infrared camera, obtaining the pixel point p of the (i-1)-th frame infrared image corresponding to v_i;
According to the real-time extrinsic parameters of the i-th frame thermal infrared imager, v_i is transformed into the i-th frame infrared camera coordinate system and then projected into the pixel coordinate system through the intrinsic matrix to obtain the point h; h and p form a pair of matching points between the two consecutive infrared images, and the set of all M matching point pairs is marked S; the real-time optimization objective function of the infrared camera extrinsic parameters is thus obtained:
the increment of the infrared camera extrinsic matrix is converted, according to the Lie group/Lie algebra, into a six-dimensional vector xi_e = (alpha, beta, lambda, t_x, t_y, t_z)^T; the k-th iteration of the real-time extrinsic optimization of the infrared camera is then updated as:
In the formula, the two matrices respectively represent the variations of the infrared camera extrinsic parameters obtained at the (k-1)-th and k-th iterations, and xi_e^ is the se(3) Lie algebra form of the vector xi_e:
Combining equations (29) and (30), E_et is linearized as:
In the formula, J_kappa(psi_1) is the Jacobian matrix of the function kappa(psi_1), derived from equation (7), and J_{psi_1}(xi_e) is the Jacobian matrix of the function psi_1(xi_e), derived from equations (6), (31) and (32); J_et and r_et respectively represent the Jacobian matrix and residual term of the loss function E_et; xi_e is solved according to equation (34) to achieve the minimization of E_et(xi_e);
the spatial pose transformation of the depth camera from t_di to t_ti is updated through continuous iteration until the change of E_et between the k-th and (k-1)-th iterations is smaller than the preset threshold, yielding the extrinsic matrix of the infrared camera; the optimized extrinsic parameters are then used to match the temperature image and the depth image;
marking the spatial pose transformation of the depth camera from its trigger time t_di to the trigger time t_vi of the visible light camera, the real-time extrinsic parameters of the visible light camera are:
the visible light luminance images of the i-th and (i-1)-th frames are marked respectively, and the set formed by the N pairs of matching points in the two images is marked W; the real-time optimization objective function E_ev of the visible light camera extrinsic parameters is constructed as:
the linearization result of equation (35) is:
xi_e is solved according to equation (37) to achieve the minimization of E_ev(xi_e):
the spatial pose transformation of the depth camera from t_di to t_vi is updated through continuous iteration until the change of the loss function E_ev between the k-th and (k-1)-th iterations is smaller than the preset threshold, yielding the extrinsic matrix of the visible light camera; the optimized extrinsic parameters are then used to match the visible light image and the depth image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110794691.7A CN113706707B (en) | 2021-07-14 | 2021-07-14 | Human body three-dimensional surface temperature model construction method based on multi-source information fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113706707A true CN113706707A (en) | 2021-11-26 |
CN113706707B CN113706707B (en) | 2024-03-26 |
Family
ID=78648522
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114998514A (en) * | 2022-05-16 | 2022-09-02 | 聚好看科技股份有限公司 | Virtual role generation method and equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107833270A (en) * | 2017-09-28 | 2018-03-23 | 浙江大学 | Real-time object dimensional method for reconstructing based on depth camera |
CN109993113A (en) * | 2019-03-29 | 2019-07-09 | 东北大学 | A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information |
CN113012212A (en) * | 2021-04-02 | 2021-06-22 | 西北农林科技大学 | Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |