CN111242990A - 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching - Google Patents


Info

Publication number: CN111242990A
Application number: CN202010010168.6A
Authority: CN (China)
Prior art keywords: camera, dimensional, phase, point cloud, dense matching
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111242990B (en)
Inventors: 熊召龙, 赖作镁, 吴元
Assignee (original and current): Southwest Electronic Technology Institute No 10 Institute of CETC
Application filed by Southwest Electronic Technology Institute No 10 Institute of CETC

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T7/85: Stereo camera calibration
    • G06T17/30: Polynomial surface description
    • G06T7/33: Image registration using feature-based methods
    • G06T7/40: Analysis of texture
    • G06T7/50: Depth or shape recovery
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Abstract

The invention discloses a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching: a method for rapidly reconstructing the 360-degree three-dimensional point cloud of a measured object and nonlinearly optimizing the reconstruction result. The method is realized by the following scheme. First, the digital projector and the cameras are calibrated and the corresponding structured-light deformation images are acquired; the phase levels of the deformed-fringe pixel points are calculated and, at the same time, the epipolar lines of those pixel points on the imaging planes of the different cameras of the camera array are determined, thereby establishing a joint epipolar-geometry and equiphase constraint, from which the dense matching of the structured-light images at different viewing angles is computed, generating the dense matching relation between the deformed-fringe phases at different angles. Next, the camera transformation matrices and the initial points of the three-dimensional point cloud are initialized using the phase dense-matching relation and the triangulation principle, and an objective function and its graph optimization model are constructed and solved. Finally, triangulated surface reconstruction is performed on the optimized three-dimensional point cloud to obtain a complete 360-degree three-dimensional reconstruction model of the measured target.

Description

360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
Technical Field
The invention relates to a three-dimensional reconstruction technology, in particular to a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching.
Background
Three-dimensional reconstruction is widely applied in industrial production, reverse engineering, aerial survey, virtual reality, and other fields. Three-dimensional reconstruction based on structured-light projection and image information can obtain the three-dimensional information of a measured object with extremely high precision; at the same time, the multi-view geometry of a multi-camera system or a moving camera can be used to obtain a more complete three-dimensional model over a wider angular range. A structured-light reconstruction method using multiple cameras is therefore an effective means of obtaining a high-precision, complete 360-degree three-dimensional model. In structured-light reconstruction, registration of the three-dimensional point cloud and optimization of the reconstruction result are the two key technologies governing the quality of the reconstructed three-dimensional model.
The registration of the three-dimensional point cloud largely determines the reconstruction accuracy of the three-dimensional model and has received wide attention from those skilled in the art. Generally, to obtain a complete three-dimensional model of an object, data sets from different viewing angles must be transformed into the same coordinate system; this process is called three-dimensional data registration. Registration between different viewing angles is particularly important, as it directly affects the reconstruction precision and the degree of automation of the three-dimensional reconstruction. When reconstructing a three-dimensional model, the limitations of the observation direction and of the object's shape mean that the three-dimensional data of the object surface must be acquired separately from different angles, so as to obtain a three-dimensional object surface with real, natural texture that can be rendered under arbitrary illumination and viewing angles. Point cloud data is the collection of three-dimensional data points, acquired by various three-dimensional acquisition devices, that represents the surface information and spatial distribution of the object; it is usually represented as unstructured three-dimensional points, i.e. spatially discrete geometric points. The most basic constituent elements of a point cloud are these spatially discrete points and their associated surface attributes. The difficulty of registration lies in acquiring the correspondence between the points of the two three-dimensional point cloud sets.
In the structured-light reconstruction method, a complete three-dimensional model is mainly generated by fusing multiple local three-dimensional point clouds. Structured-light three-dimensional measurement projects grating fringes, modulated by a periodic function, onto the surface of the measured object through a projection device; changes in the surface height of the object shift the phase of the grating fringes at each point, from which the three-dimensional information of the object surface can be acquired. Because of the limited visibility of the optical scanning system, single-view scanning suffers from blind areas caused by occlusion; to obtain a complete model, multiple point clouds must be registered and fused after multiple scans. That is, point clouds collected at different views, having certain overlapping areas, are registered together according to the consistency of those overlapping areas, so that they can be fused into a whole in the same coordinate system. The key technology of three-dimensional point cloud fusion is three-dimensional point cloud registration: finding the mapping relation between the point clouds at different viewing angles and, with an appropriate algorithm, applying rigid-body transformations such as rotation and translation to match and align the point clouds lying in different coordinate systems. The key is to obtain the coordinate transformation parameters, i.e. the rotation matrix and the translation vector, that transform the source point cloud into the same coordinate system as the target point cloud. Many interference factors affect structured-light three-dimensional point cloud registration.
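The rigid-body transformation described above, a rotation matrix and a translation vector mapping the source point cloud into the coordinate system of the target point cloud, can be sketched in numpy. This is a generic illustration rather than the patent's own registration algorithm: when point correspondences are already known, the optimal rotation and translation follow in closed form from the Kabsch/Umeyama SVD solution (all function and variable names here are illustrative assumptions).

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t with dst ~= src @ R.T + t,
    given known point correspondences (Kabsch/Umeyama method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if the determinant is negative
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Given correspondences, this closed-form step replaces the repeated traversal searches that make correspondence-free methods such as iterative closest point expensive.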
First, registration noise: during structured-light reconstruction, the point cloud data contains much small-amplitude noise and many outliers caused by human interference, the influence of ambient illumination, and abrupt changes in the object's surface shape, making the reconstructed model rough and disordered. Second, the computational load is huge: in large-scale data processing, the size of the point cloud strongly affects later processing efficiency, and differences between the initial state of the scan data and the model may cause the registration to diverge. Point clouds generally contain thousands to millions of points; if all of them participate in the computation and in repeated traversal searches, low efficiency is inevitable. Three-dimensional point cloud registration requires feature matching between corresponding point sets and therefore consumes a large amount of computation time.
Optimization of the reconstruction result further fine-tunes the three-dimensional point cloud registration during structured-light reconstruction, so that the reconstructed three-dimensional model and the camera poses reach the globally minimal error. Three-dimensional point cloud registration yields the rigid-body transformations relating multiple point clouds from different viewing angles; however, because each individual point cloud contains various fine fluctuation noises, obtaining a complete reconstruction model of higher precision requires a fine spatial adjustment of every point in every point cloud, and this adjustment cannot be realized by a rigid-body transformation. An optimization model targeting the noise of the complete reconstructed point cloud must therefore be constructed to optimize the reconstruction result. Designing this optimization algorithm is also a technical difficulty in obtaining a higher-precision three-dimensional reconstruction model. When the three-dimensional data of the object at different viewing angles are registered into the same reference coordinate system to obtain a model of the whole object, registration errors accumulate as the reference viewing angle changes; globally optimizing the data registration as a whole can reduce these errors. In the optimization of the reconstruction result, the construction of an error function, or cost function, is the key step in designing the optimization algorithm.
In multi-view passive three-dimensional reconstruction based on feature matching, the difference between the pixel coordinates of the same spatial point imaged under different viewing conditions is measured by the reprojection error, and the two-norm of the overall reprojection error serves as the key quantity in constructing the error function. Solving the error function is an iterative search for the optimum of a nonlinear optimization: a perturbation model is used to obtain the derivative of each error term with respect to the quantities to be optimized, the iteration is continued until one or more minima are obtained, and the quantities corresponding to the global minimum are judged to be the global optimum. In practical applications, point clouds range from thousands to millions of points, and iteratively solving the error function requires a large amount of time and space; it is therefore also necessary to design an error function that converges quickly, together with optimization initial values that make it converge quickly.
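The reprojection error of a single spatial point, whose squared two-norm is summed into the error function described here, can be sketched as follows (a hedged numpy illustration; the names K, R, t, X, and x_obs are assumptions, not the patent's notation):

```python
import numpy as np

def reprojection_error(K, R, t, X, x_obs):
    """Squared two-norm of the reprojection error: project the estimated
    3D point X into a camera with intrinsics K and pose (R, t), and
    compare with the observed pixel x_obs."""
    Xc = R @ X + t                  # point in camera coordinates
    x_proj = (K @ Xc)[:2] / Xc[2]   # perspective division to pixel coordinates
    return float(np.sum((x_proj - x_obs) ** 2))
```

Summing this quantity over all points and all cameras gives the scalar cost whose minimization drives the iterative optimization.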
Disclosure of Invention
The invention aims to provide a method that can quickly realize 360-degree reconstruction of the three-dimensional point cloud of a measured object and nonlinearly optimize the reconstruction result, with the advantages of high reconstruction precision, low texture dependence, few rotations of the measured object, non-contact operation, and mutually independent per-point computations.
The invention provides a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching, with the following technical characteristics. In structured-light projection and camera array 1 acquisition, the digital projector 2 and the camera array 1 are first calibrated; the calibrated digital projector 2 projects structured-light fringes, the calibrated camera array 1 photographs the deformed fringes from different angles, and the corresponding structured-light deformation images are acquired. The phase levels of the deformed-fringe pixel points are then calculated and, at the same time, the epipolar lines of those pixel points on the imaging planes of the different cameras of the camera array 1 are determined, thereby establishing a joint epipolar-geometry and equiphase constraint; the dense matching of the structured-light images at different viewing angles is computed, generating the dense matching relation between the deformed-fringe phases at different angles. The camera transformation matrices and the three-dimensional initial points are initialized using the phase dense-matching relation and the triangulation principle; a globally optimized objective function representing the overall error is designed, and its graph optimization model is constructed and solved. The optimal solutions for the different camera poses and for the whole three-dimensional point cloud are calculated through iteration, completing the iterative optimization of the objective function. Finally, triangulated surface reconstruction is performed on the optimized three-dimensional point cloud to obtain a complete 360-degree reconstruction model of the measured target, completing the generation of the complete three-dimensional target model.
The invention has the following advantages compared with the prior art.
The method uses the calibrated camera array 1 and digital projector to simultaneously acquire corresponding structured-light deformation images from different angles; it then calculates the dense matching of the structured-light images at different viewing angles using the joint epipolar-geometry and equiphase constraints of the camera array 1, and calculates the initial value of the optimization iteration using the triangulation principle; it designs an objective function representing the overall error, constructs its graph optimization model, and iteratively calculates the optimal solutions for the different camera poses and the overall three-dimensional point cloud; finally, it performs triangulated surface reconstruction on the optimized three-dimensional model to obtain a complete 360-degree reconstruction model of the measured three-dimensional target. Through the four processes of structured-light projection and camera array 1 acquisition, different-viewing-angle continuous phase dense matching, objective function construction and iterative optimization calculation, and complete three-dimensional target model generation, rapid 360-degree reconstruction of the three-dimensional point cloud of a measured object can be realized. Experimental results show that the method offers high reconstruction precision, low texture dependence, few rotations of the measured object, non-contact operation, and mutually independent per-point computations. Compared with the traditional iterative-closest-point three-dimensional registration method, it is more efficient, accurate, and stable.
The method simultaneously acquires corresponding structured-light deformation images from different angles, calculates the phase levels of the deformed-fringe pixel points, determines the epipolar lines of those pixel points, and establishes joint epipolar-geometry and phase constraints; it then uses the joint epipolar-geometry and structured-light equiphase constraints of the camera array 1 to calculate the dense matching of the deformed-fringe phases at different viewing angles, while designing a globally optimized objective function, constructing its graph optimization model, and solving it. The objective function simultaneously takes into account the transformation matrices representing the camera poses and the spatial positions of the three-dimensional point cloud, realizing global optimization of the structured-light three-dimensional reconstruction process. The dense matching relation of continuous phases at different viewing angles and the accurate initial-value design of the objective function greatly improve the precision of the optimization result and reduce the time consumed by the computation.
In the different-viewing-angle continuous phase dense matching process, the calibrated camera array and digital projector are used to simultaneously acquire corresponding structured-light deformation images from different angles; the dense matching of the structured-light images at different viewing angles is then calculated using the joint epipolar-geometry and equiphase constraints of the camera array, and the initial value of the optimization iteration is calculated using the triangulation principle. This gives the method high reconstruction precision, low texture dependence, few rotations of the measured object, non-contact operation, and mutually independent per-point computations.
Drawings
FIG. 1 is a schematic flow chart of a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching according to the present invention;
FIG. 2 is a schematic diagram of a 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching according to the present invention;
FIG. 3 is a schematic diagram of the joint epipolar-geometry and equiphase constraint condition;
FIG. 4 is a schematic diagram of an optimized representation of an objective function graph.
In the figures: 1, camera array; 2, digital projector; 3, projected structured-light fringes; 4, measured three-dimensional target; 5, first camera imaging plane; 6, second camera imaging plane; 7, equiphase line on the three-dimensional target surface; 8, camera pose vertex; 9, three-dimensional point cloud vertex.
It should be understood that the above-described figures are merely schematic and are not drawn to scale.
Detailed Description
An exemplary embodiment of the 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching according to the present invention is described in detail below. It should be noted that the following examples are for illustration only and should not be construed as limiting the scope of the present invention; those skilled in the art may make modifications and adaptations of the present invention without departing from that scope.
See fig. 1. According to the invention, the three-dimensional reconstruction optimization method comprises four processes: structured-light projection and camera array 1 acquisition, different-viewing-angle continuous phase dense matching, objective function construction and iterative optimization calculation, and complete three-dimensional object model generation. In structured-light projection and camera array 1 acquisition, the digital projector 2 and the camera array 1 are first calibrated; the calibrated digital projector 2 projects structured-light fringes, the calibrated camera array 1 photographs the deformed fringes from different angles, and the corresponding structured-light deformation images are acquired. The phase levels of the deformed-fringe pixel points are then calculated and, at the same time, the epipolar lines of those pixel points on the imaging planes of the different cameras of the camera array 1 are determined, establishing a joint epipolar-geometry and equiphase constraint; the dense matching of the structured-light images at different viewing angles is computed, generating the dense matching relation between the deformed-fringe phases at different angles. The camera transformation matrices and the three-dimensional initial points are initialized using the phase dense-matching relation and the triangulation principle; a globally optimized objective function representing the overall error is designed, its graph optimization model is constructed and solved, and the optimal solutions for the different camera poses and the whole three-dimensional point cloud are calculated through iteration, completing the objective function construction and iterative optimization calculation. Finally, triangulated surface reconstruction is performed on the optimized three-dimensional point cloud to obtain a complete 360-degree reconstruction model of the measured target, completing the generation of the complete three-dimensional target model.
See figs. 2-3. The structured-light three-dimensional reconstruction optimization device based on continuous phase dense matching comprises: a camera array 1 composed of M_K x N_K cameras with inter-camera spacing d_cc, and a digital projector 2 arranged on the central axis of the plane of the camera array 1. The optical axes of the digital projector 2 and of the cameras of the camera array 1 converge on the measured three-dimensional target 4, and the projector generates projected structured-light fringes 3 covering the measured three-dimensional target 4. The first and second cameras have optical centers O_1 and O_2, corresponding to the first camera imaging plane 5 and the second camera imaging plane 6, respectively.
In three-dimensional space, a point P on the surface of the measured object forms pixel points p_1 and p_2 on the first camera imaging plane I_1 and the second camera imaging plane I_2, respectively. O_1, O_2, and the spatial point P form a triangle whose apex P lies on an equiphase line 7 of the surface of the measured three-dimensional target; the equiphase line 7 is imaged by the first and second cameras respectively.
In the structured-light projection and camera array 1 acquisition process, a checkerboard calibration image is projected onto a plane by the digital projector, the checkerboard image is acquired by the camera array 1, the image corner points are detected, and the fundamental matrix between each camera and the digital projector is computed. Decomposing this matrix yields the rotation matrix R_kc and the translation matrix t_kc of the kc-th camera relative to the digital projector, where kc is the index number of the camera. The digital projector then projects structured-light fringes P_i(u, v) with a sinusoidal intensity distribution, satisfying:

    P_i(u, v) = A_p(u, v) + B_p(u, v) cos(2πu/T + 2πi/N)    (1)

where (u, v) is any pixel coordinate of the projector pixel coordinate system, A_p(u, v) is the intensity of the DC component, B_p(u, v) is the fringe amplitude, 2πi/N is the phase shift of the i-th fringe, T is the fringe period, and N is the total number of phase-shift steps of the structured-light fringes, an integer greater than or equal to 4. At the same time, the camera array captures the measured three-dimensional target, obtaining the reflected deformed-fringe images I_i(x, y), which satisfy:

    I_i(x, y) = A_c(x, y) + B_c(x, y) cos(φ(x, y) + 2πi/N)    (2)

where (x, y) is any pixel coordinate of the camera pixel coordinate system, A_c(x, y) is the background intensity of the measured object, and B_c(x, y) is the acquired fringe amplitude. Denoting by φ(x, y) the phase function of the deformed fringes modulated by the object surface, i.e. the truncated phase, it can be solved as:

    φ(x, y) = arctan[ Σ_{i=1}^{N} I_i(x, y) sin(2πi/N) / Σ_{i=1}^{N} I_i(x, y) cos(2πi/N) ]    (3)

The phase function φ(x, y) takes values in (-π, π]. The orders k(x, y) of the different phase periods are then determined using Gray-code projections of different frequencies, and unwrapping yields the corresponding absolute phase Φ(x, y):

    Φ(x, y) = φ(x, y) + 2π k(x, y)    (4)

where T is the phase period of the structured light and k(x, y) is the fringe order of the pixel recovered from the Gray codes. The absolute phase is continuous, and the equiphase-line direction coincides with the original projected fringe direction.
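The truncated-phase computation of equation (3) can be sketched in numpy: the sums of the N phase-shifted images weighted by the sine and cosine of the shifts recover the wrapped phase through the four-quadrant arctangent. This is a hedged sketch of the standard N-step phase-shifting formula; the function and variable names are illustrative.

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped (truncated) phase in (-pi, pi] from N
    phase-shifted fringe images I_i = A + B*cos(phi + 2*pi*i/N)."""
    N = images.shape[0]
    deltas = 2 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(deltas), images, axes=1)  # sum_i I_i sin(2*pi*i/N)
    den = np.tensordot(np.cos(deltas), images, axes=1)  # sum_i I_i cos(2*pi*i/N)
    # for I_i as above: num = -(N/2) B sin(phi), den = (N/2) B cos(phi)
    return np.arctan2(-num, den)
```

The background A and amplitude B cancel out of the ratio, which is why the method tolerates low-texture surfaces.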
See fig. 3. The epipolar geometric constraint of the matching points is determined first. From the first camera's viewpoint, p_1 is the projection of the spatial point P, whose possible projection positions in the second camera imaging plane lie on the line through e_2 and p_2, i.e. on the epipolar line L_2. With the intrinsic parameter matrix of the camera array 1 unified as K, the pixel points p_1 and p_2 satisfy, in homogeneous coordinates:

    p_1 = K P,  p_2 = K (R P + t)    (5)

The normalized image-plane coordinates x̂_1 = K^(-1) p_1 and x̂_2 = K^(-1) p_2 corresponding to the pixel points p_1 and p_2 satisfy the epipolar constraint:

    x̂_2^T E_12 x̂_1 = 0    (6)
where [·]^T denotes the matrix transpose and E_12 is the essential matrix between the two cameras. A pixel pair satisfying the epipolar geometric constraint determines the epipolar line on which the corresponding pixel must lie in the other pixel coordinate system, but the specific coordinates of that pixel cannot yet be determined. The structured-light equiphase constraint of the matching points is therefore determined next. As shown in FIG. 3, in the first camera's image I_1, the pixel point p_1 has the absolute phase Φ_1(x, y) obtained from equation (4). The absolute phase Φ_1(x, y) corresponds to the equiphase line 7 on the surface of the measured three-dimensional target in three-dimensional space, denoted the contour line S, on which the spatial point P lies. Projected by the second camera, the contour line also forms an equiphase curve S_2, which in the image I_2 can be expressed as:

    S_2(x_2, y_2) = Φ_1(x_1, y_1)    (7)

Thus, from the first camera's viewpoint, the reprojection of the pixel point p_1 of image I_1 onto the second camera imaging plane I_2 can only lie on the equiphase curve S_2.
Finally, the epipolar geometric constraint and the structured-light equiphase constraint are combined for the point P over the camera array 1. The intersection point p_2 of the epipolar line L_2 and the curve S_2 is solved in image I_2; its pixel coordinates are the matching coordinates of the point p_1 in image I_1, which gives the exact match of the point P on the imaging planes of the first and second cameras. Similarly, traversing all pixel points of image I_1 other than the matched ones and finding the correct correspondence for each within the coordinate range of image I_2 yields the dense matching between image I_1 and image I_2. The initial values of the three-dimensional point cloud and of the transformation matrix in each camera coordinate system are then obtained using the triangulation principle and a multipoint projection algorithm.
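The triangulation step that turns a dense pixel match into an initial 3D point can be sketched with the standard linear (DLT) two-view triangulation. This is an assumed generic implementation, not the patent's multipoint projection algorithm; all names are illustrative.

```python
import numpy as np

def project(P, X):
    """Project 3D point X (length 3) with 3x4 projection matrix P to pixels."""
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched pixel pair (x1, x2)
    seen by two cameras with projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # the null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]
```

For noise-free matches the recovery is exact; with noisy matches the SVD gives the least-squares homogeneous solution, which is exactly why these triangulated points serve only as initial values for the subsequent optimization.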
In the objective function construction and iterative optimization calculation process, the initial value of the three-dimensional point cloud obtained by the triangulation principle carries a large error, which is reduced to a minimum through iterative optimization. Likewise, the transformation matrices R_kc | t_kc of the different cameras are optimized to obtain the optimal three-dimensional reconstruction result of the measured object. The process first constructs the optimized objective function. Taking the distance between the observed and estimated values of the three-dimensional point cloud over all cameras as the optimization objective, the transformation matrix R | t is represented by a Lie algebra element ξ of se(3), whose exponential mapping onto SE(3) is written exp(ξ^) and satisfies:

    ξ^ = [ φ^  ρ ; 0^T  0 ]    (8)

    exp(ξ^) = [ exp(φ^)  Jρ ; 0^T  1 ]    (9)

where ρ is the front three dimensions of ξ, representing the translation in the three-dimensional point cloud transformation, φ is the back three dimensions of ξ, representing the rotation, φ^ is the skew-symmetric matrix of φ, and J is the left Jacobian of SO(3).
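Equations (8)-(9) can be sketched in numpy: the rotation block exp(φ^) is the Rodrigues formula on SO(3), and the translation block is Jρ with the left Jacobian J. This is a hedged sketch; the series coefficients follow the standard se(3) exponential, and all names are illustrative.

```python
import numpy as np

def hat(v):
    """Skew-symmetric matrix v^ with hat(v) @ w == cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def se3_exp(xi):
    """Exponential map se(3) -> SE(3) for xi = (rho, phi):
    rho = front three dimensions (translation), phi = back three (rotation)."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    Phi = hat(phi)
    if theta < 1e-10:
        R, J = np.eye(3), np.eye(3)   # small-angle limit
    else:
        a = np.sin(theta) / theta
        b = (1.0 - np.cos(theta)) / theta**2
        c = (theta - np.sin(theta)) / theta**3
        R = np.eye(3) + a * Phi + b * (Phi @ Phi)   # Rodrigues formula, exp(phi^)
        J = np.eye(3) + b * Phi + c * (Phi @ Phi)   # left Jacobian of SO(3)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = J @ rho
    return T
```

Parameterizing the pose by the six-vector ξ rather than the twelve entries of R | t keeps the optimization unconstrained, which is what makes the perturbation-model derivatives tractable.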
Denoting the set of estimated values of the three-dimensional point cloud by {Q_j(x, y, z)}, the objective function is set to:

    min_{ξ_kc, Q_j} Σ_kc Σ_{j=1}^{M} ‖ exp(ξ_kc^) Q_j − z_{kc,j} K^(-1) p_{kc,j} ‖_2^2    (10)

where z_{kc,j} is the depth distance of the j-th three-dimensional point in the kc-th camera coordinate system, p_{kc,j} is the homogeneous pixel coordinate of the j-th three-dimensional point in the kc-th camera, K^(-1) is the inverse of the intrinsic parameter matrix of each camera, and M is the total number of points in the three-dimensional point cloud. The process then solves the constructed objective function globally.
See fig. 4. The solution of the objective function is cast as a graph optimization problem. The solid triangles in fig. 4 represent camera pose vertices, and the dotted triangles represent the pose uncertainty of each vertex: the farther a dotted triangle lies from its solid triangle and the larger its angle, the larger the deviation of the observed camera pose from the true value. The solid circles represent three-dimensional point cloud vertices, and the dotted circles represent point cloud uncertainty: the larger a dotted circle, the larger the deviation of the corresponding observed point from the true value. The dashed straight lines connecting point cloud vertices and camera pose vertices represent the observation model.

The vertices of the graph are all the three-dimensional space points and the poses of the camera array 1, representing the optimization variables of the graph optimization problem; the edges of the graph connect vertices and represent the observation relationships between different vertices within a common-view region, i.e. the error terms of the graph optimization problem. The process optimizes the three-dimensional point cloud and the poses of the camera array 1 simultaneously during objective function construction and iterative optimization, setting up point cloud vertices and pose vertices respectively.

To solve the optimization problem in detail, the types of vertices and edges are defined first. A three-dimensional point cloud vertex has dimension 3,

Q_{j}(x, y, z) \in \mathbb{R}^{3},

and a camera pose vertex is a 6-dimensional Lie algebra element,

\xi_{k} \in \mathfrak{se}(3).

An edge is the concrete realization of the observation equation of each three-dimensional point in a given camera; in particular, the 6-dimensional Lie algebra pose vertices must undergo the Rodrigues transformation, and only after the transformation matrix R|t of each camera is obtained is the observation equation used for projection. A graph of the problem is then constructed: as shown in objective function formula (10) and fig. 4, the structure of the graph consists mainly of the observed three-dimensional coordinate values corresponding to the jth three-dimensional point in the kth camera coordinate system, and the initial values of the graph come from the camera array 1 calibration data and the triangulation of the dense matches. An optimization algorithm is then selected: the descent strategy of the Levenberg-Marquardt method is chosen, and the automatic differentiation of the g2o library is used, avoiding the hand derivation of the Jacobian matrix (first-order derivatives) and the Hessian matrix (second-order derivatives) of the high-dimensional problem. Meanwhile, the marginalization method from simultaneous localization and mapping (SLAM) is introduced to realize Schur complement elimination within the descent strategy and accelerate the computation of the optimization problem. Finally, an optimization threshold is set and the iteration results are analyzed until convergence.
In the complete three-dimensional target model generation process, the optimized three-dimensional point cloud is triangulated and surface-reconstructed to obtain the complete 360-degree three-dimensional reconstruction model of the measured three-dimensional target; meanwhile, using the optimized transformation matrix of each camera, the pose relation of the camera array 1 in the complete three-dimensional model coordinate system is determined from the pose of each camera relative to the point cloud.
Through four processes (structured light projection and camera array 1 acquisition; continuous-phase dense matching across viewing angles; objective function construction and iterative optimization calculation; and complete three-dimensional target model generation), the method achieves fast reconstruction of a 360-degree three-dimensional point cloud of the measured object, with the advantages of high reconstruction accuracy, low texture dependence, few rotations of the measured object, non-contact operation, and mutually independent computation of each point.
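For the first of these processes, the standard N-step phase-shifting demodulation (matching the fringe model I_i = A + B cos(φ + 2πi/N) used in the claims) can be sketched as follows. The synthetic fringe parameters are illustrative assumptions.

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped (truncated) phase from N phase-shifted fringe
    images I_i = A + B*cos(phi + 2*pi*i/N) via the standard N-step
    least-squares formula.

    images : array of shape (N, H, W). Returns phase in (-pi, pi].
    """
    N = images.shape[0]
    deltas = 2.0 * np.pi * np.arange(N) / N
    num = np.tensordot(np.sin(deltas), images, axes=1)  # sum_i I_i sin(d_i)
    den = np.tensordot(np.cos(deltas), images, axes=1)  # sum_i I_i cos(d_i)
    # For I_i = A + B cos(phi + d_i): num = -(N B / 2) sin(phi),
    # den = (N B / 2) cos(phi), hence the sign below.
    return -np.arctan2(num, den)

# Synthesize 4-step fringes over one period and recover the phase.
H, W, N = 8, 64, 4
u = np.arange(W)
phi_true = np.angle(np.exp(1j * (2.0 * np.pi * u / W)))  # wrapped phase
I = np.stack([120.0 + 90.0 * np.cos(phi_true + 2.0 * np.pi * i / N)
              for i in range(N)])[:, None, :].repeat(H, axis=1)
phi = wrapped_phase(I)
```

The recovered wrapped phase still needs the Gray-code order to be unwrapped into the absolute phase Φ(x, y) before the equiphase constraint can be applied across views.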
The present invention has been described in detail with reference to the drawings, but it should be understood that the above-described embodiments are merely preferred examples of the present invention, and not restrictive, and various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching, having the following technical features: in structured light projection and camera array (1) acquisition, the digital projector (2) and the camera array (1) are first calibrated; after calibration, the digital projector (2) projects structured light fringes and the camera array (1) captures the deformed fringes at different angles, acquiring the corresponding structured light deformation images; the phase orders of the deformed fringe pixel points are then calculated, the epipolar lines of the deformed fringe pixel points on the imaging planes of the different cameras of the camera array (1) are determined, an epipolar geometry and equiphase joint constraint is established, the dense matching of the structured light images from different views is calculated, and the dense matching relation of the deformed fringe phases at different angles is generated; the camera transformation matrices and the initial values of the three-dimensional point cloud are initialized using the phase dense matching relation and the triangulation principle, a globally optimized objective function representing the overall error is designed, and an objective function graph optimization model is constructed and solved; the optimal solutions of the different camera poses and of the overall three-dimensional point cloud are calculated through iteration to complete the iterative optimization calculation of the objective function; and the optimized three-dimensional point cloud is used to generate a complete three-dimensional point cloud, on which triangulated surface reconstruction is performed to obtain the complete 360-degree three-dimensional reconstruction model of the measured target, completing the generation of the complete three-dimensional target model.
2. The continuous-phase dense matching-based 360-degree three-dimensional reconstruction optimization method of claim 1, wherein: in three-dimensional space, a point P on the surface of the measured object forms pixel points p1 and p2 on the first camera imaging plane I1 and the second camera imaging plane I2 respectively; O1, O2 and the three-dimensional space point P form a triangle whose vertex lies on an equiphase line (7) of the measured three-dimensional target surface, and the equiphase line (7) is imaged by the first camera and the second camera respectively.
3. The continuous-phase dense matching-based 360-degree three-dimensional reconstruction optimization method of claim 1, wherein: in the structured light projection and camera array (1) acquisition process, a checkerboard calibration image is projected onto a plane using the digital projector (2), the checkerboard image is acquired by the camera array (1), the image corner points are detected, and the fundamental matrix between each camera and the digital projector is then calculated.
4. The continuous-phase dense matching-based 360-degree three-dimensional reconstruction optimization method according to claim 3, wherein: according to the index number kc of each camera, the fundamental matrix is decomposed to obtain the rotation matrix Rkc and the translation matrix tkc of the kc-th camera relative to the digital projector (2); then, according to the phase period T of the structured light, the total number of phase-shift steps N of the structured light fringes, the phase shift 2πi/N of the ith fringe at any pixel coordinate (u, v) of the projector pixel coordinate system, the direct-current component intensity A_p(u, v) and the fringe amplitude B_p(u, v), the structured light fringes P_i(u, v) projected by the digital projector and satisfying a sinusoidal intensity distribution are obtained:

P_{i}(u, v) = A_{p}(u, v) + B_{p}(u, v) \cos\left( \frac{2\pi u}{T} + \frac{2\pi i}{N} \right)

wherein N is an integer greater than or equal to 4.
5. The continuous-phase dense matching-based 360-degree three-dimensional reconstruction optimization method according to claim 4, wherein: the measured three-dimensional target is captured by the camera array; at any pixel coordinate (x, y) of the camera pixel coordinate system, the background intensity A_c(x, y) of the measured object, the fringe amplitude B_c(x, y) and the phase function φ(x, y) of the deformed fringes modulated by the object surface are obtained, yielding the reflected deformed fringe images I_i(x, y):

I_{i}(x, y) = A_{c}(x, y) + B_{c}(x, y) \cos\left( \varphi(x, y) + \frac{2\pi i}{N} \right)

Solving yields the truncated phase:

\varphi(x, y) = -\arctan \frac{ \sum_{i=0}^{N-1} I_{i}(x, y) \sin(2\pi i / N) }{ \sum_{i=0}^{N-1} I_{i}(x, y) \cos(2\pi i / N) }

wherein the arctangent is evaluated in a quadrant-aware manner so that the phase function φ(x, y) takes values in (-π, π].
6. The continuous-phase dense matching-based 360° three-dimensional reconstruction optimization method according to claim 5, wherein: the orders of the different phase periods are determined using Gray code projections of different frequencies, and unwrapping the orders yields the corresponding absolute phase Φ(x, y):

\Phi(x, y) = \varphi(x, y) + 2\pi \left\lfloor \frac{u}{T} \right\rfloor

wherein T is the phase period of the structured light and u is the truncated-phase coordinate corresponding to the current pixel; the absolute phase is continuous, and the equiphase line direction is consistent with the original projected fringe direction.
7. The continuous-phase dense matching-based 360° three-dimensional reconstruction optimization method according to claim 6, wherein the epipolar geometric constraint of the matching points is determined as follows: from the perspective of the first camera, p1 is the projection of the spatial point P, and its possible projection positions in the second camera imaging plane lie on the line through e2 and p2, i.e. on the epipolar line L2; the intrinsic parameter matrix of the camera array (1) is unified as K, and the pixel points p1 and p2 satisfy, in homogeneous coordinates: p1 = KP, p2 = K(RP + t);

the normalized plane coordinates corresponding to the pixel points p1 and p2,

\hat{x}_{1} = K^{-1} p_{1} \quad \text{and} \quad \hat{x}_{2} = K^{-1} p_{2},

satisfy the epipolar constraint:

\hat{x}_{2}^{T} E_{12} \hat{x}_{1} = 0

wherein [·]^T denotes the matrix transpose and E12 is the essential matrix between the two cameras.
8. The continuous-phase dense matching-based 360° three-dimensional reconstruction optimization method according to claim 7, wherein: in the image I1 corresponding to the first camera, the absolute phase Φ1(x, y) of the pixel point p1 is derived from the absolute phase Φ(x, y); Φ1(x, y) corresponds to an equiphase line (7) on the measured three-dimensional target surface in three-dimensional space, the spatial point P lies on the contour line S, and the contour line S, projected through the second camera, likewise forms an equiphase curve S2, expressed in image I2 as: S2(x2, y2) = Φ1(p1) = Φ1(x1, y1) (7);

finally, for the point P, the camera array (1) combines the epipolar geometric constraint with the structured light equiphase constraint.
9. The continuous-phase dense matching-based 360° three-dimensional reconstruction optimization method according to claim 7, wherein: in image I2, the intersection point p2 of the epipolar line L2 and the curve S2 is solved; its pixel coordinates are the matching coordinates of the point p1 in image I1, yielding the exact match of the point P on the imaging planes of the first camera and the second camera; similarly, traversing all pixel points of image I1 except the already matched points, correct correspondences are found for the points within the coordinate range of image I2, yielding dense matching between image I1 and image I2; the initial values of the three-dimensional point cloud and of the transformation matrix in each camera coordinate system are obtained using the triangulation principle and a multi-point projection algorithm; taking as the optimization objective the distance between the observed values and the estimated values of the three-dimensional point cloud corresponding to all cameras, the three-dimensional point cloud and the transformation matrix R|t are represented on the Lie algebra \mathfrak{se}(3), the domain of the transformation matrix R|t being the Lie group SE(3); the exponential map on se(3) is written exp(ξ^), and exp(ξ^) satisfies:

\exp(\xi^{\wedge}) = \begin{bmatrix} \exp(\phi^{\wedge}) & J\rho \\ 0^{T} & 1 \end{bmatrix}, \qquad \xi = \begin{bmatrix} \rho \\ \phi \end{bmatrix} \in \mathbb{R}^{6}

wherein ρ is the front three dimensions of ξ, representing the translation in the three-dimensional point cloud transformation, and φ is the rear three dimensions of ξ, representing the rotation in the three-dimensional point cloud transformation; the set of estimated values of the three-dimensional point cloud is denoted {Qj(x, y, z)}, and the objective function is set to:

\{\xi_{k}^{*}, Q_{j}^{*}\} = \arg\min_{\xi_{k}, Q_{j}} \frac{1}{2} \sum_{k} \sum_{j=1}^{M} \left\| z_{j}^{k} K^{-1} p_{j}^{k} - \exp(\xi_{k}^{\wedge}) Q_{j} \right\|_{2}^{2}

wherein z_j^k is the depth distance corresponding to the jth three-dimensional point in the kth camera coordinate system, p_j^k is the pixel coordinate corresponding to the jth three-dimensional point in the kth camera coordinate system, K^{-1} is the inverse of the intrinsic parameter matrix of each camera, and M is the total number of points in the three-dimensional point cloud.
10. A structured light three-dimensional reconstruction optimization device based on continuous phase dense matching, comprising: a camera array (1) composed of M_K × N_K cameras with an inter-camera spacing of d_cc, and a digital projector (2) arranged on the central axis of the plane in which the camera array (1) lies, characterized in that: the optical axes of the digital projector (2) and of the camera array (1) converge on the measured three-dimensional target (4) and generate projected structured light fringes (3) covering the measured three-dimensional target (4); the first camera and the second camera have optical centers O1 and O2 respectively, and the points O1 and O2 correspond to the first camera imaging plane (5) and the second camera imaging plane (6) respectively.
CN202010010168.6A 2020-01-06 2020-01-06 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching Active CN111242990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010010168.6A CN111242990B (en) 2020-01-06 2020-01-06 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010010168.6A CN111242990B (en) 2020-01-06 2020-01-06 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching

Publications (2)

Publication Number Publication Date
CN111242990A true CN111242990A (en) 2020-06-05
CN111242990B CN111242990B (en) 2024-01-30

Family

ID=70877630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010010168.6A Active CN111242990B (en) 2020-01-06 2020-01-06 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching

Country Status (1)

Country Link
CN (1) CN111242990B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053432A (en) * 2020-09-15 2020-12-08 成都贝施美医疗科技股份有限公司 Binocular vision three-dimensional reconstruction method based on structured light and polarization
CN112967342A (en) * 2021-03-18 2021-06-15 深圳大学 High-precision three-dimensional reconstruction method and system, computer equipment and storage medium
CN113074661A (en) * 2021-03-26 2021-07-06 华中科技大学 Projector corresponding point high-precision matching method based on polar line sampling and application thereof
CN113074667A (en) * 2021-03-22 2021-07-06 苏州天准软件有限公司 Global absolute phase alignment method based on mark points, storage medium and system
CN113205592A (en) * 2021-05-14 2021-08-03 湖北工业大学 Light field three-dimensional reconstruction method and system based on phase similarity
CN113256795A (en) * 2021-05-31 2021-08-13 中国科学院长春光学精密机械与物理研究所 Endoscopic three-dimensional detection method
CN113345039A (en) * 2021-03-30 2021-09-03 西南电子技术研究所(中国电子科技集团公司第十研究所) Three-dimensional reconstruction quantization structure optical phase image coding method
CN113432550A (en) * 2021-06-22 2021-09-24 北京航空航天大学 Large-size part three-dimensional measurement splicing method based on phase matching
CN113516775A (en) * 2021-02-09 2021-10-19 天津大学 Three-dimensional reconstruction method for acquiring stamp auxiliary image by mobile phone camera
CN113587816A (en) * 2021-08-04 2021-11-02 天津微深联创科技有限公司 Array type large-scene structured light three-dimensional scanning measurement method and device
CN114004878A (en) * 2020-07-28 2022-02-01 株式会社理光 Alignment device, alignment method, alignment system, storage medium, and computer device
CN114708316A (en) * 2022-04-07 2022-07-05 四川大学 Structured light three-dimensional reconstruction method and device based on circular stripes and electronic equipment
CN114863036A (en) * 2022-07-06 2022-08-05 深圳市信润富联数字科技有限公司 Data processing method and device based on structured light, electronic equipment and storage medium
CN114972544A (en) * 2022-07-28 2022-08-30 星猿哲科技(深圳)有限公司 Method, device and equipment for self-calibration of external parameters of depth camera and storage medium
WO2023000703A1 (en) * 2021-07-23 2023-01-26 北京百度网讯科技有限公司 Image acquisition system, three-dimensional reconstruction method and apparatus, device and storage medium
CN116778066A (en) * 2023-08-24 2023-09-19 先临三维科技股份有限公司 Data processing method, device, equipment and medium
CN117333649A (en) * 2023-10-25 2024-01-02 天津大学 Optimization method for high-frequency line scanning dense point cloud under dynamic disturbance
CN117635875A (en) * 2024-01-25 2024-03-01 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and terminal
CN117635875B (en) * 2024-01-25 2024-05-14 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and terminal

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130060540A1 (en) * 2010-02-12 2013-03-07 Eidgenossische Tehnische Hochschule Zurich Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information
CN104240289A (en) * 2014-07-16 2014-12-24 崔岩 Three-dimensional digitalization reconstruction method and system based on single camera
CN104331897A (en) * 2014-11-21 2015-02-04 天津工业大学 Polar correction based sub-pixel level phase three-dimensional matching method
JP2015158749A (en) * 2014-02-21 2015-09-03 株式会社リコー Image processor, mobile body, robot, device control method and program
CN104952075A (en) * 2015-06-16 2015-09-30 浙江大学 Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method
CN106600686A (en) * 2016-12-06 2017-04-26 西安电子科技大学 Three-dimensional point cloud reconstruction method based on multiple uncalibrated images
CN106683173A (en) * 2016-12-22 2017-05-17 西安电子科技大学 Method of improving density of three-dimensional reconstructed point cloud based on neighborhood block matching
CN107833181A (en) * 2017-11-17 2018-03-23 沈阳理工大学 A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision
CN108090960A (en) * 2017-12-25 2018-05-29 北京航空航天大学 A kind of Object reconstruction method based on geometrical constraint
CN108257089A (en) * 2018-01-12 2018-07-06 北京航空航天大学 A kind of method of the big visual field video panorama splicing based on iteration closest approach
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration
CN108898630A (en) * 2018-06-27 2018-11-27 清华-伯克利深圳学院筹备办公室 A kind of three-dimensional rebuilding method, device, equipment and storage medium
CN109064536A (en) * 2018-07-27 2018-12-21 电子科技大学 A kind of page three-dimensional rebuilding method based on binocular structure light
WO2019113531A1 (en) * 2017-12-07 2019-06-13 Ouster, Inc. Installation and use of vehicle light ranging system
CN109919876A (en) * 2019-03-11 2019-06-21 四川川大智胜软件股份有限公司 A kind of true face model building of three-dimensional and three-dimensional true face photographic system
CN110288642A (en) * 2019-05-25 2019-09-27 西南电子技术研究所(中国电子科技集团公司第十研究所) Three-dimension object fast reconstructing method based on camera array

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130060540A1 (en) * 2010-02-12 2013-03-07 Eidgenossische Tehnische Hochschule Zurich Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information
JP2015158749A (en) * 2014-02-21 2015-09-03 株式会社リコー Image processor, mobile body, robot, device control method and program
CN104240289A (en) * 2014-07-16 2014-12-24 崔岩 Three-dimensional digitalization reconstruction method and system based on single camera
CN104331897A (en) * 2014-11-21 2015-02-04 天津工业大学 Polar correction based sub-pixel level phase three-dimensional matching method
CN104952075A (en) * 2015-06-16 2015-09-30 浙江大学 Laser scanning three-dimensional model-oriented multi-image automatic texture mapping method
CN106600686A (en) * 2016-12-06 2017-04-26 西安电子科技大学 Three-dimensional point cloud reconstruction method based on multiple uncalibrated images
CN106683173A (en) * 2016-12-22 2017-05-17 西安电子科技大学 Method of improving density of three-dimensional reconstructed point cloud based on neighborhood block matching
CN107833181A (en) * 2017-11-17 2018-03-23 沈阳理工大学 A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision
WO2019113531A1 (en) * 2017-12-07 2019-06-13 Ouster, Inc. Installation and use of vehicle light ranging system
US20190179029A1 (en) * 2017-12-07 2019-06-13 Ouster, Inc. Monitoring of vehicles using light ranging systems
US20190178998A1 (en) * 2017-12-07 2019-06-13 Ouster, Inc. Installation and use of vehicle light ranging system
CN108090960A (en) * 2017-12-25 2018-05-29 北京航空航天大学 A kind of Object reconstruction method based on geometrical constraint
CN108257089A (en) * 2018-01-12 2018-07-06 北京航空航天大学 A kind of method of the big visual field video panorama splicing based on iteration closest approach
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration
CN108898630A (en) * 2018-06-27 2018-11-27 清华-伯克利深圳学院筹备办公室 A kind of three-dimensional rebuilding method, device, equipment and storage medium
CN109064536A (en) * 2018-07-27 2018-12-21 电子科技大学 A kind of page three-dimensional rebuilding method based on binocular structure light
CN109919876A (en) * 2019-03-11 2019-06-21 四川川大智胜软件股份有限公司 A kind of true face model building of three-dimensional and three-dimensional true face photographic system
CN110288642A (en) * 2019-05-25 2019-09-27 西南电子技术研究所(中国电子科技集团公司第十研究所) Three-dimension object fast reconstructing method based on camera array

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
DI JIA, MINGYUAN ZHAO: ""FDM: fast dense matching based on sparse matching"", 《SIGNAL, IMAGE AND VIDEO PROCESSING》 *
SERVIN M, GARNICA G, ESTRADA J C, ET AL.: ""High-resolution low-noise 360-degree digital solid reconstruction using phase stepping profilometry"", 《OPTICS EXPRESS HTTPS://DOI.ORG/10.1364/OE.22.010914》 *
XIONG Z L, WANG Q H, XING Y, ET AL.: ""Active integral imaging system based on multiple structured light method"", 《OPTICS EXPRESS DOI: 10.1364/OE.23.027094》 *
YI ZHOU, GUILLERMO GALLEGO, HENRI REBECQ, LAURENT KNEIP, HONGDONG LI, DAVIDE SCARAMUZZA: ""Semi-Dense 3D Reconstruction with a Stereo Event Camera"", 《EUROPEAN CONFERENCE ON COMPUTER VISION (ECCV)》 *
YANG ZHENFA; WAN GANG; CAO XUEFENG; LI FENG; XIE LIXIANG: ""Point cloud surface reconstruction method based on geometric structure features"", 《JOURNAL OF SYSTEM SIMULATION》 *
JIANG ZETAO; ZHENG BINA; WU MIN: "A three-dimensional reconstruction method based on dense matching of stereo image pairs", Proceedings of the 8th National Joint Academic Conference on Signal and Information Processing *
ZHAO BIXIA; ZHANG HUA: "Speckle three-dimensional reconstruction method based on Bayes theory", Computer Engineering, no. 12 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004878A (en) * 2020-07-28 2022-02-01 株式会社理光 Alignment device, alignment method, alignment system, storage medium, and computer device
CN112053432A (en) * 2020-09-15 2020-12-08 成都贝施美医疗科技股份有限公司 Binocular vision three-dimensional reconstruction method based on structured light and polarization
CN112053432B (en) * 2020-09-15 2024-03-26 成都贝施美医疗科技股份有限公司 Binocular vision three-dimensional reconstruction method based on structured light and polarization
CN113516775B (en) * 2021-02-09 2023-02-28 天津大学 Three-dimensional reconstruction method for acquiring stamp auxiliary image by mobile phone camera
CN113516775A (en) * 2021-02-09 2021-10-19 天津大学 Three-dimensional reconstruction method for acquiring stamp auxiliary image by mobile phone camera
CN112967342A (en) * 2021-03-18 2021-06-15 深圳大学 High-precision three-dimensional reconstruction method and system, computer equipment and storage medium
CN112967342B (en) * 2021-03-18 2022-12-06 深圳大学 High-precision three-dimensional reconstruction method and system, computer equipment and storage medium
CN113074667A (en) * 2021-03-22 2021-07-06 苏州天准软件有限公司 Global absolute phase alignment method based on mark points, storage medium and system
CN113074667B (en) * 2021-03-22 2022-08-23 苏州天准软件有限公司 Global absolute phase alignment method based on mark points, storage medium and system
CN113074661A (en) * 2021-03-26 2021-07-06 华中科技大学 Projector corresponding point high-precision matching method based on polar line sampling and application thereof
CN113074661B (en) * 2021-03-26 2022-02-18 华中科技大学 Projector corresponding point high-precision matching method based on polar line sampling and application thereof
CN113345039A (en) * 2021-03-30 2021-09-03 西南电子技术研究所(中国电子科技集团公司第十研究所) Three-dimensional reconstruction quantization structure optical phase image coding method
CN113345039B (en) * 2021-03-30 2022-10-28 西南电子技术研究所(中国电子科技集团公司第十研究所) Three-dimensional reconstruction quantization structure optical phase image coding method
CN113205592A (en) * 2021-05-14 2021-08-03 湖北工业大学 Light field three-dimensional reconstruction method and system based on phase similarity
CN113205592B (en) * 2021-05-14 2022-08-05 湖北工业大学 Light field three-dimensional reconstruction method and system based on phase similarity
CN113256795B (en) * 2021-05-31 2023-10-03 中国科学院长春光学精密机械与物理研究所 Endoscopic three-dimensional detection method
CN113256795A (en) * 2021-05-31 2021-08-13 中国科学院长春光学精密机械与物理研究所 Endoscopic three-dimensional detection method
CN113432550A (en) * 2021-06-22 2021-09-24 北京航空航天大学 Large-size part three-dimensional measurement splicing method based on phase matching
WO2023000703A1 (en) * 2021-07-23 2023-01-26 北京百度网讯科技有限公司 Image acquisition system, three-dimensional reconstruction method and apparatus, device and storage medium
CN113587816A (en) * 2021-08-04 2021-11-02 天津微深联创科技有限公司 Array type large-scene structured light three-dimensional scanning measurement method and device
CN114708316A (en) * 2022-04-07 2022-07-05 四川大学 Structured light three-dimensional reconstruction method and device based on circular stripes and electronic equipment
CN114863036B (en) * 2022-07-06 2022-11-15 深圳市信润富联数字科技有限公司 Data processing method and device based on structured light, electronic equipment and storage medium
CN114863036A (en) * 2022-07-06 2022-08-05 深圳市信润富联数字科技有限公司 Data processing method and device based on structured light, electronic equipment and storage medium
CN114972544B (en) * 2022-07-28 2022-10-25 星猿哲科技(深圳)有限公司 Method, device and equipment for self-calibration of external parameters of depth camera and storage medium
CN114972544A (en) * 2022-07-28 2022-08-30 星猿哲科技(深圳)有限公司 Method, device and equipment for self-calibration of external parameters of depth camera and storage medium
CN116778066A (en) * 2023-08-24 2023-09-19 先临三维科技股份有限公司 Data processing method, device, equipment and medium
CN116778066B (en) * 2023-08-24 2024-01-26 先临三维科技股份有限公司 Data processing method, device, equipment and medium
CN117333649A (en) * 2023-10-25 2024-01-02 天津大学 Optimization method for high-frequency line scanning dense point cloud under dynamic disturbance
CN117635875A (en) * 2024-01-25 2024-03-01 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and terminal
CN117635875B (en) * 2024-01-25 2024-05-14 深圳市其域创新科技有限公司 Three-dimensional reconstruction method, device and terminal

Also Published As

Publication number Publication date
CN111242990B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN111242990B (en) 360-degree three-dimensional reconstruction optimization method based on continuous phase dense matching
Murali et al. Indoor Scan2BIM: Building information models of house interiors
CN107767440A (en) Historical relic sequential images subtle three-dimensional method for reconstructing based on triangulation network interpolation and constraint
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
Zhou et al. A novel laser vision sensor for omnidirectional 3D measurement
Guehring Reliable 3D surface acquisition, registration and validation using statistical error models
Teutsch Model-based analysis and evaluation of point sets from optical 3D laser scanners
Garrido-Jurado et al. Simultaneous reconstruction and calibration for multi-view structured light scanning
Elstrom et al. Stereo-based registration of ladar and color imagery
Moussa et al. Automatic fusion of digital images and laser scanner data for heritage preservation
Lin et al. Vision system for fast 3-D model reconstruction
Sansoni et al. 3-D optical measurements in the field of cultural heritage: the case of the Vittoria Alata of Brescia
Dubreuil et al. Mesh-Based Shape Measurements with Stereocorrelation: Principle and First Results
Graebling et al. Optical high-precision three-dimensional vision-based quality control of manufactured parts by use of synthetic images and knowledge for image-data evaluation and interpretation
Li 3D indoor scene reconstruction and layout based on virtual reality technology and few-shot learning
Bräuer-Burchardt et al. On the accuracy of point correspondence methods in three-dimensional measurement systems using fringe projection
Trebuňa et al. 3D Scaning–technology and reconstruction
Hao et al. Review of key technologies for warehouse 3D reconstruction
Wang et al. Implementation and experimental study on fast object modeling based on multiple structured stripes
CN116433841A (en) Real-time model reconstruction method based on global optimization
Ali Reverse engineering of automotive parts applying laser scanning and structured light techniques
CN116295113A (en) Polarization three-dimensional imaging method integrating fringe projection
Maestro-Watson et al. LCD screen calibration for deflectometric systems considering a single layer refraction model
Cai et al. High-precision and arbitrary arranged projection moiré system based on an iterative calculation model and the self-calibration method
Wu et al. Automated large scale indoor reconstruction using vehicle survey data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant