CN115082617A - Pipeline three-dimensional reconstruction method and device based on multi-view optimization and storage medium - Google Patents

Pipeline three-dimensional reconstruction method and device based on multi-view optimization and storage medium Download PDF

Info

Publication number
CN115082617A
CN115082617A (application CN202210578640.5A)
Authority
CN
China
Prior art keywords
pipeline
image
dimensional
model
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210578640.5A
Other languages
Chinese (zh)
Inventor
董延超
李凌霄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202210578640.5A priority Critical patent/CN115082617A/en
Publication of CN115082617A publication Critical patent/CN115082617A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a pipeline three-dimensional reconstruction method based on multi-view optimization, which comprises the following steps: step 1) acquiring multi-angle images, captured by a camera, of a checkerboard template attached to a plane, detecting the feature points in the template images, and solving for the internal reference matrix of the camera; step 2) acquiring images of the pipeline interior; step 3) performing image feature extraction and primary matching on the pipeline interior images; step 4) performing multi-view optimization on the feature extraction and primary matching results based on a twin network and optical flow; step 5) performing sparse reconstruction and dense reconstruction of the pipeline model based on the multi-view optimization results; step 6) performing point cloud segmentation and geometric estimation on the pipeline model based on the reconstruction results; and step 7) performing texture reconstruction of the pipeline model based on the geometric estimation results. Compared with the prior art, the method has advantages including fine, smooth reconstructed texture and good restoration of the three-dimensional scene.

Description

Pipeline three-dimensional reconstruction method and device based on multi-view optimization and storage medium
Technical Field
The invention relates to the field of scene three-dimensional reconstruction, in particular to a pipeline three-dimensional reconstruction method and device based on multi-view optimization and a storage medium.
Background
Pipelines are widely used in the water conservancy, military, chemical, and urban drainage industries and are important conveying equipment. Quality problems in a pipeline can seriously impair the function of the related system, so professionals must inspect pipelines for defects regularly. In recent years, using machine vision technology to assist workers in pipeline defect detection has become a research hotspot at universities and scientific research institutions.
At present, much research on pipeline defect detection uses deep learning to replace manual inspection directly. However, such methods lose the three-dimensional information of the pipeline. Moreover, images of a pipeline interior are highly repetitive, structurally simple, and weakly featured; a human can tell them apart by combining experience, the surrounding context of the image, and subtle visual differences, but a computer cannot.
The traditional three-dimensional reconstruction method comprises five steps: image acquisition, feature extraction and matching, sparse reconstruction, dense reconstruction, and texture mapping. Because of the characteristics above, directly applying the traditional method to pipeline interior images mismatches large numbers of features to similar-looking ones, so the models obtained by sparse reconstruction and texture reconstruction contain substantial noise, texture mapping is difficult to carry out successfully, and even a well-mapped model has too many surface wrinkles to be usable for inspection.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide a pipeline three-dimensional reconstruction method based on multi-view optimization, which has fine texture and good three-dimensional scene restoration effect.
The purpose of the invention can be realized by the following technical scheme:
a pipeline three-dimensional reconstruction method based on multi-view optimization comprises the following steps:
step 1) acquiring multi-angle images, captured by a camera, of a checkerboard template attached to a plane, detecting the feature points in the template images, and solving for the internal reference matrix of the camera;
step 2) acquiring images of the interior of the pipeline;
step 3) carrying out image feature extraction and primary matching on the internal image of the pipeline;
step 4) performing multi-view optimization on the image feature extraction and primary matching result based on the twin network and the optical flow;
step 5) performing sparse reconstruction and dense reconstruction on the pipeline model based on the multi-view optimization result;
step 6) carrying out point cloud segmentation and geometric estimation on the pipeline model based on the reconstruction result;
and 7) reconstructing the texture of the pipeline model based on the geometric estimation result.
The image of the pipeline interior in step 2) is captured inside the pipeline by a high-resolution single-lens reflex camera. The camera uses manual exposure, and the lens focal length is kept fixed throughout shooting. The pipeline's interior surface is fully illuminated with supplemental light and photographed uniformly through 360 degrees, while a large overlapping area is maintained between adjacent images so that every part of the pipeline surface is captured from multiple viewing angles. The resolution of the high-resolution single-lens reflex camera is not less than 5 million pixels.
The step 3) comprises the following steps:
step 3-1) establishing a Gaussian difference pyramid for the image inside the pipeline, detecting and positioning scale-invariant key points of the image, determining the direction of the key points of the image, and generating a feature descriptor;
and 3-2) determining the similarity between the feature points of different images according to the Euclidean distance, and performing primary matching on the images according to the similarity.
The step 4) comprises the following steps:
step 4-1), constructing a twin network, inputting images in pairs, and calculating an optical flow between feature points of the two images;
and 4-2) taking the optical flow as an edge connecting the feature points on different images, taking the feature points as points, establishing a graph model, minimizing the optical flow and the similarity, and adjusting the positions of the feature points on each image.
The step 5) comprises the following steps:
step 5-1) based on the multi-view optimization result in the step 4), combining the camera internal reference matrix obtained in the step 1), performing triangulation, attitude estimation and BA optimization, extracting sparse point clouds of pipeline scenes, and estimating a camera pose corresponding to each image;
step 5-2) estimating a depth map, and fusing the depth map by using a depth map registration principle to obtain a dense point cloud model of the pipeline scene;
and 5-3) performing Delaunay triangulation on the dense point cloud model to obtain a grid map of the pipeline model.
The step 6) comprises the following steps:
step 6-1) using RANSAC to segment the point cloud model to obtain the parameters of all cylindrical surfaces in the pipeline model, the parameters comprising the radius r, the axis direction vector v, and a point p_c on the axis;
step 6-2) fitting, using the parameters of all cylindrical surfaces, the equation of the three-dimensional plane containing the central axis of the pipeline:

a_p x + b_p y + c_p z + d_p = 0

wherein a_p, b_p, c_p, d_p are fitting parameters;
cutting the grid map of the pipeline model obtained in step 5) with the plane to obtain a dense point cloud map whose points all lie on the plane;
taking the plane as the xOy plane, and converting the three-dimensional point coordinates of the dense point cloud map into the coordinate system of the xOy plane, so that the z value of each three-dimensional coordinate (x, y, z) is 0;
and 6-3) mapping the torus onto the xOy plane to obtain a ring composed of 2 circular arcs, each arc being uniquely determined by its center coordinates (x_c, y_c), its start and stop radians θ_s, θ_e, and its radius r; for the points (x, y) on an arc, the loss function L_2 is taken as the optimization target and minimized to obtain the arc parameters, namely the center coordinates and the radius, wherein the loss function L_2 is:

L_2 = (1/m) Σ_{i=1}^{m} (d_i - r)^2

wherein d_i^2 = (x_i - x_c)^2 + (y_i - y_c)^2 and m is the number of points on the arc;
obtaining the circle center and the inner and outer radii of the circular ring according to the circular arc parameters, obtaining the starting and stopping radian of the circular ring by combining the tangent relation between the central arc line of the circular ring and the central axis of the cylindrical surface, and obtaining the central axis of the pipeline by combining the parameters of the cylindrical surface obtained in the step 6-1).
Cutting the grid map of the pipeline model with the plane in step 6-2) is specifically performed as follows:
For each triangle in the grid map of the pipeline model, calculate whether the plane intersects any of the triangle's three edges. If there is no intersection or only one intersection, delete the triangular face; if there are two intersections, calculate the three-dimensional coordinates of the two intersection points, delete the triangular face, and keep the two intersection points in the ply file.
The step 7) comprises the following steps:
step 7-1) obtaining a three-dimensional grid model of the pipeline according to the central axis and the arc radius of the pipeline, wherein the three-dimensional grid model does not contain color information;
and 7-2) performing texture mapping on the model by combining the internal reference matrix of the camera and the camera pose corresponding to each image obtained in the step 5) on the basis of the three-dimensional grid model to obtain a pipeline three-dimensional model image.
A pipeline three-dimensional reconstruction device based on multi-view optimization comprises a memory, a processor and a program stored in the memory, wherein the processor executes the program to realize the method.
A storage medium having a program stored thereon, wherein the program, when executed, implements the method as described above.
Compared with the prior art, the invention has the following beneficial effects:
(1) by adopting multi-view optimization, the method reduces the drift of the same feature point across different images, avoids mismatching large numbers of features to similar-looking ones, and reduces the noise points in the models obtained by sparse reconstruction and texture reconstruction; the texture mapping result is good and readily inspectable;
(2) on top of traditional three-dimensional reconstruction, the method computes the three-dimensional grid map of the pipeline by incorporating the pipeline's actual geometric information, and the result is more realistic than that of the traditional method.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of a camera shooting sample inside a pipeline according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a pipeline three-dimensional reconstruction sparse point cloud in the embodiment of the present invention;
FIG. 4 is a schematic diagram of a pipeline three-dimensional reconstruction dense point cloud in the embodiment of the present invention;
FIG. 5 is a schematic diagram of a Delaunay triangulation mesh for three-dimensional reconstruction of a pipeline in an embodiment of the present invention;
FIG. 6 is a schematic illustration of a central axis of a conduit according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a three-dimensional mesh model of a pipeline in an embodiment of the invention;
fig. 8 is a schematic diagram of a pipeline three-dimensional model after texture reconstruction in the embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
This embodiment provides a pipeline three-dimensional reconstruction method based on multi-view optimization for pipelines that are wide enough for a person to pass through and no less than 10 meters long. The method for reconstructing the pipeline's interior information comprises the following steps:
step 1) acquiring multi-angle images, captured by a camera, of a checkerboard template attached to a plane, detecting the feature points in the template images, and solving for the internal reference matrix of the camera.
In this embodiment, Zhang Zhengyou's calibration method is used to calibrate the camera intrinsics, obtaining the internal reference matrix K of the camera.
And 2) acquiring an image of the interior of the pipeline.
The image of the pipeline interior is captured inside the pipeline by a 5-million-pixel high-resolution single-lens reflex camera. The camera uses manual exposure, and the lens focal length is kept fixed throughout shooting. The pipeline's interior surface is fully illuminated with supplemental light and photographed uniformly through 360 degrees, while a large overlapping area is maintained between adjacent images so that every part of the pipeline surface is captured from multiple viewing angles. The image of the pipeline interior obtained in this embodiment is shown in fig. 2.
And 3) carrying out image feature extraction and primary matching on the image inside the pipeline.
And 3-1) introducing a Gaussian kernel function and convolving it with the input image.
The Gaussian kernel function is:

G(x, y, σ) = (1 / (2πσ^2)) exp(-(x^2 + y^2) / (2σ^2))

where x, y are image coordinates and σ^2 is the preset variance.
The convolution function is:

L(x, y, σ) = I(x, y) * G(x, y, σ)

where I(x, y) is the input image.
Different Gaussian kernel functions are selected to downsample the same picture, achieving scale invariance. Key points are then selected from the DoG images; a circle is drawn centered on each key point's coordinates, all gradient directions within the circle are accumulated into 8 bins (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°), and the maximum is chosen as the key point's principal direction, achieving rotation invariance. A box is then selected and divided into 4 × 4 cells; the gradient direction of each cell is computed and accumulated into the 8 bins, determining a 4 × 4 × 8 = 128 dimensional vector, i.e., the feature descriptor.
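As an illustrative sketch only (not the patent's implementation), the Gaussian kernel above and a difference-of-Gaussians (DoG) response can be written in a few lines of numpy; the kernel size and σ values here are arbitrary demonstration choices:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Discrete sample of G(x, y, sigma) = exp(-(x^2+y^2)/(2 sigma^2)) / (2 pi sigma^2)
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()  # normalize the truncated kernel so it sums to 1

def convolve_valid(image, kernel):
    # Naive 'valid' 2-D convolution, sufficient for a demonstration
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def dog(image, sigma1, sigma2, size=7):
    # Difference of Gaussians: L(x, y, sigma2) - L(x, y, sigma1)
    return (convolve_valid(image, gaussian_kernel(size, sigma2))
            - convolve_valid(image, gaussian_kernel(size, sigma1)))
```

A real SIFT-style pipeline stacks several σ levels per octave and downsamples between octaves; libraries such as OpenCV encapsulate this.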
Step 3-2) Using the Euclidean distance of the feature vectors,

d(a, b) = sqrt( Σ_{i=1}^{128} (a_i - b_i)^2 )

as the metric, the similarity of key points in the two images is calculated. Taking two images A and B as an example, for a feature point in the target image A, the Euclidean distances between its feature vector and the feature vectors of all feature points in image B are computed, and the ratio of the smallest Euclidean distance to the second-smallest is taken; if the ratio is below a threshold, the closest point is selected as the matching point.
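The ratio test described above can be sketched in plain numpy; the 0.8 threshold and the toy descriptors below are assumed values for illustration, not fixed by the patent:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a (n x d), find its nearest neighbor in
    desc_b (m x d) by Euclidean distance; keep the pair only if
    nearest / second-nearest < ratio (Lowe-style ratio test)."""
    matches = []
    for i, a in enumerate(desc_a):
        d = np.linalg.norm(desc_b - a, axis=1)  # distances to every descriptor in B
        j1, j2 = np.argsort(d)[:2]              # nearest and second-nearest indices
        if d[j1] < ratio * d[j2]:
            matches.append((i, j1))
    return matches
```

With real SIFT descriptors, `desc_a` and `desc_b` would be the n × 128 arrays produced by step 3-1).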
And 4) performing multi-view optimization on the image feature extraction and primary matching result based on the twin network and the optical flow.
The image features extracted in step 3) are based on a single view. When feature points obtained from a single view are matched across multiple views, the same feature point may drift between viewing angles, which inevitably degrades the quality of feature matching and, in turn, the subsequent three-dimensional reconstruction: the final result contains excessive noise and cannot be inspected.
The results of feature extraction are therefore optimized on the basis of multiple views:
step 4-1) two-view optimization based on a twin (Siamese) network: build a twin network, pair up pictures that have matched feature points, feed the image patches around corresponding key points of each pair into the twin network, take the inner product of the two branch outputs, and obtain the optical flow between the two pictures through a regression layer;
step 4-2) multi-view optimization based on graph models: and (3) taking key points on different images as points of the graph model, taking the initial matching relation and the optical flow as edges of the graph model, and minimizing the optical flow and the similarity so as to adjust the positions of the key points.
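The patent's twin network is a learned model whose weights are not given here. As a purely illustrative stand-in, each branch can be mimicked by flattening and L2-normalizing a patch, with the two branch outputs compared by inner product as in step 4-1); a real implementation would replace `embed` with a trained CNN branch and feed the scores to a regression layer:

```python
import numpy as np

def embed(patch):
    """Stand-in for one branch of a twin (Siamese) network: flatten the
    patch and L2-normalize it. A real branch would be a learned CNN."""
    v = patch.astype(float).ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def twin_score(patch_a, patch_b):
    """Inner product of the two branch outputs. Returns a similarity in
    [-1, 1]; in the patent, such scores drive a regression layer that
    outputs the optical-flow offset between the key points."""
    return float(np.dot(embed(patch_a), embed(patch_b)))
```

Identical patches score 1.0, and the score drops as the patches diverge, which is the signal the graph model of step 4-2) balances against the initial matches.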
And 5) carrying out sparse reconstruction and dense reconstruction on the pipeline model based on the multi-view optimization result.
Step 5-1) Based on the multi-view-optimized feature matching result, for matched points x_1 and x_2 in the coordinate systems of the two cameras, use the epipolar constraint

x_2^T E x_1 = 0

to solve for the essential matrix E, and decompose E into the camera pose T = [R | t]; then, combining the internal reference matrix K of the camera from step 1), triangulate with the following formula to estimate the depths s_1 and s_2 of each point:

s_2 K^(-1) x_2 = s_1 R K^(-1) x_1 + t

wherein x_1 and x_2 are the pixel coordinates of the two sets of matched feature points in the image coordinate system, K is the internal reference matrix of the camera, and T = [R | t] is the camera pose;
the reprojection error is then minimized with the Levenberg-Marquardt method, thereby optimizing the three-dimensional point coordinates and the camera pose, i.e., the optimized camera pose is:

T* = arg min_T Σ_i || u_i - (1/s_i) K (R P_i + t) ||^2

wherein P_i is the i-th three-dimensional point, u_i is its observed pixel position, and s_i is its depth.
after the optimized three-dimensional point coordinates are obtained, a ply file can be constructed and the three-dimensional points and colors can be input into the ply file, as shown in fig. 3.
Step 5-2) estimating a depth map on the basis of the step 5-1), fusing the depth map by using a depth map registration principle, and recovering dense point clouds, wherein the dense point clouds are shown in a graph 4;
and 5-3) performing Delaunay triangulation on the dense point cloud obtained in the step 5-2), specifically, removing color information in the dense point cloud, only retaining three-dimensional point information, constructing the Delaunay triangulation on the three-dimensional points by using a Bowyer-Watson algorithm, and obtaining a mesh graph after the triangulation as shown in FIG. 5.
And 6) carrying out point cloud segmentation and geometric estimation on the pipeline model based on the reconstruction result.
Step 6-1) The pipeline is composed of two basic primitives, the torus and the cylindrical surface, and the cylindrical surfaces can be segmented from the point cloud with the RANSAC algorithm: randomly sample points and fit a cylindrical-surface model, count the number of inliers of the model among the remaining points, repeat many times, and solve for the parameters of all cylindrical surfaces in the pipeline model: the radius r, the axis direction vector v, and a point p_c on the axis.
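A full RANSAC cylinder fit is lengthy; as a reduced illustration of the same sample-fit-count loop, here is RANSAC for a 2-D circle (three sampled points determine a candidate circle, and inliers are counted within a tolerance; the cylinder case additionally estimates the axis direction v and axis point p_c). The iteration count and tolerance are arbitrary demonstration values:

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    # Intersect perpendicular bisectors via the linear system
    # 2 (p2 - p1) . c = |p2|^2 - |p1|^2 ;  2 (p3 - p1) . c = |p3|^2 - |p1|^2
    A = 2.0 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p1 - c)

def ransac_circle(pts, iters=200, tol=0.05, rng=None):
    rng = np.random.default_rng(rng)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j, k = rng.choice(len(pts), 3, replace=False)
        try:
            c, r = circle_from_3pts(pts[i], pts[j], pts[k])
        except np.linalg.LinAlgError:   # collinear sample, no circle
            continue
        inliers = np.sum(np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol)
        if inliers > best_inliers:      # keep the model with most inliers
            best, best_inliers = (c, r), inliers
    return best
```

The loop structure (minimal sample, model fit, inlier count, keep best) is identical for the cylindrical surfaces of step 6-1).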
Step 6-2) Using all the cylinder parameters (v, p_c), fit the equation of the three-dimensional plane P containing the central axis of the pipeline, a_p x + b_p y + c_p z + d_p = 0, where a_p, b_p, c_p, d_p are fitting parameters; specifically, the plane parallel to the cylinders' central axes and at equal distance from them is taken as plane P.
Cut the grid map obtained in step 5-3) with the plane P. Specifically, for each triangle in the grid map, calculate whether P intersects any of the triangle's three edges. If there is no intersection or only one, delete the triangular face from the ply file; if there are two intersections, calculate their three-dimensional coordinates, delete the triangular face, and keep only the two intersection points in the ply file. After this processing, a dense point cloud map is obtained whose points all lie on P.
Taking P as the xOy plane, convert the three-dimensional point coordinates of the point cloud map into that coordinate system so that the z value of each three-dimensional coordinate (x, y, z) is 0; the parameters of the ring portion of the pipeline can then be optimally estimated on a two-dimensional plane.
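The edge-versus-plane intersection test used to cut the mesh can be sketched as follows, with the plane given by its coefficients (a_p, b_p, c_p, d_p) and each mesh edge by its two 3-D endpoints:

```python
import numpy as np

def edge_plane_intersection(p, q, plane):
    """Intersection of segment pq with the plane a x + b y + c z + d = 0,
    plane = (a, b, c, d); returns the 3-D point or None."""
    n, d = np.asarray(plane[:3], float), float(plane[3])
    sp, sq = n @ p + d, n @ q + d      # signed distances (scaled by |n|)
    if sp * sq > 0:                    # both endpoints on the same side
        return None
    if sp == sq:                       # degenerate: segment lies in the plane
        return None
    t = sp / (sp - sq)                 # interpolation parameter in [0, 1]
    return p + t * (q - p)

def cut_triangle(tri, plane):
    """Return the points where the plane crosses the triangle's edges; per
    step 6-2), faces with two crossings contribute both points to the
    cross-section point cloud and the face itself is discarded."""
    pts = []
    for a, b in [(0, 1), (1, 2), (2, 0)]:
        x = edge_plane_intersection(tri[a], tri[b], plane)
        if x is not None:
            pts.append(x)
    return pts
```

A triangle that genuinely straddles the plane yields exactly two crossing points, matching the two-intersection case of the patent.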
And 6-3) mapping the torus onto the xOy plane to obtain a ring composed of 2 circular arcs, each arc being uniquely determined by its center coordinates (x_c, y_c), its start and stop radians θ_s, θ_e, and its radius r; for the points (x, y) on an arc, the loss function L_2 is taken as the optimization target and minimized to obtain the arc parameters, namely the center coordinates and the radius, wherein the loss function L_2 is:

L_2 = (1/m) Σ_{i=1}^{m} (d_i - r)^2

wherein d_i^2 = (x_i - x_c)^2 + (y_i - y_c)^2 and m is the number of points on the arc;
optimizing the formula by using an Adam algorithm, obtaining the circle center and the inner and outer radiuses of the ring according to the two-dimensional arc parameters, obtaining the starting and stopping radian of the ring by combining the tangent relation between the central arc line of the ring and the central axis of the cylindrical surface, and obtaining the central axis of the pipeline by combining the parameters of the cylindrical surface obtained in the step 6-1), as shown in FIG. 6.
And 7) carrying out texture reconstruction on the pipeline model based on the geometric estimation result.
Step 7-1) After obtaining the central axis shown in fig. 6 (in practice, a set of uniform, dense points), a direction vector n along the central axis can be computed at each point on the line. On the plane through that point with n as normal vector, uniform points on a circle are generated with that point as center and the common radius r of the torus and cylindrical surfaces as radius, yielding a dense, uniform, noise-free point cloud map of the pipe model. The points of this point cloud map are then connected according to a fixed rule to obtain a uniform grid map, the inverse of the coordinate transformation of step 6-2) is applied to the grid map, and finally a uniform grid map of the pipe in the three-dimensional reconstruction coordinate system, containing no color information, is obtained, as shown in fig. 7;
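Generating the uniform circle of points around an axis point, in the plane whose normal vector is n, can be sketched as follows; `orthonormal_basis` is a helper introduced here for illustration (not named in the patent) that builds two unit vectors spanning that plane:

```python
import numpy as np

def orthonormal_basis(n):
    """Two unit vectors u, w orthogonal to n (and to each other)."""
    n = n / np.linalg.norm(n)
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    w = np.cross(n, u)
    return u, w

def circle_points(center, n, r, k=36):
    """k uniform points on the circle of radius r around `center`, lying in
    the plane with normal vector n, as in step 7-1)."""
    u, w = orthonormal_basis(np.asarray(n, float))
    th = np.linspace(0.0, 2.0 * np.pi, k, endpoint=False)
    return center + r * (np.outer(np.cos(th), u) + np.outer(np.sin(th), w))
```

Sweeping this circle along the dense axis points and stitching consecutive rings into quads (split into triangles) yields the uniform grid map described above.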
and 7-2) selecting a view angle for a triangular surface of the pipeline grid map by combining the internal reference matrix of the camera and the camera pose corresponding to each image obtained in the step 5) on the basis of the step 7-1), performing texture mapping on the model, generating a fusion texture mapping, and obtaining a pipeline three-dimensional model map as shown in FIG. 8.
The above functions, if implemented in the form of software functional units and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof which substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (10)

1. A pipeline three-dimensional reconstruction method based on multi-view optimization is characterized by comprising the following steps:
step 1) acquiring multi-angle images, captured by a camera, of a checkerboard template attached to a plane, detecting the feature points in the template images, and solving for the internal reference matrix of the camera;
step 2) acquiring images of the interior of the pipeline;
step 3) carrying out image feature extraction and primary matching on the internal image of the pipeline;
step 4) performing multi-view optimization on the image feature extraction and primary matching result based on the twin network and the optical flow;
step 5) performing sparse reconstruction and dense reconstruction on the pipeline model based on the multi-view optimization result;
step 6) carrying out point cloud segmentation and geometric estimation on the pipeline model based on the reconstruction result;
and 7) carrying out texture reconstruction on the pipeline model based on the geometric estimation result.
2. The pipeline three-dimensional reconstruction method based on multi-view optimization according to claim 1, wherein the image inside the pipeline in step 2) is obtained by shooting the image inside the pipeline with a high-resolution single-lens reflex camera, the camera adopts manual exposure, the focal length of the lens is kept unchanged in the shooting process, the light is fully supplemented to the surface inside the pipeline, 360-degree uniform shooting is performed, meanwhile, a large overlapping area between adjacent images is ensured, and each part of the surface of the pipeline is ensured to be shot from multiple visual angles.
3. The pipeline three-dimensional reconstruction method based on multi-view optimization according to claim 1, wherein the step 3) comprises:
step 3-1) establishing a Gaussian difference pyramid for the image inside the pipeline, detecting and positioning scale-invariant key points of the image, determining the direction of the key points of the image, and generating a feature descriptor;
and 3-2) determining the similarity between the feature points of different images according to the Euclidean distance, and performing primary matching on the images according to the similarity.
4. The pipeline three-dimensional reconstruction method based on multi-view optimization according to claim 1, wherein the step 4) comprises:
step 4-1), constructing a twin network, inputting images in pairs, and calculating an optical flow between feature points of the two images;
and 4-2) taking the optical flow as an edge connecting the feature points on different images and taking the feature points as points, establishing a graph model, minimizing the optical flow and the similarity, and adjusting the positions of the feature points on each image.
5. The pipeline three-dimensional reconstruction method based on multi-view optimization according to claim 1, wherein the step 5) comprises:
step 5-1) based on the multi-view optimization result in the step 4), combining the camera internal reference matrix obtained in the step 1), performing triangulation, attitude estimation and BA optimization, extracting sparse point clouds of pipeline scenes, and estimating a camera pose corresponding to each image;
step 5-2) estimating a depth map, and fusing the depth map by using a depth map registration principle to obtain a dense point cloud model of the pipeline scene;
and 5-3) performing Delaunay triangulation on the dense point cloud model to obtain a grid map of the pipeline model.
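The triangulation in step 5-1) can be sketched with the standard linear (DLT) two-view method; the `triangulate` helper below is an illustrative NumPy implementation under that assumption, not the patent's own code:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2 : 3x4 projection matrices, i.e. K @ [R | t].
    u1, u2 : (x, y) pixel observations of the same point.
    Each observation contributes two rows of the homogeneous system
    A X = 0; the solution is the right singular vector of A with the
    smallest singular value."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize
```

In a full pipeline this runs per matched track and the result seeds the sparse point cloud refined by BA.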
6. The pipeline three-dimensional reconstruction method based on multi-view optimization according to claim 5, wherein the step 6) comprises:
step 6-1) segmenting the point cloud model by using RANSAC to obtain the parameters of all cylindrical surfaces in the pipeline model, the parameters comprising a radius r, an axis direction vector v and a point p_c on the axis;
step 6-2) fitting the parameters of all cylindrical surfaces to obtain the equation of the three-dimensional plane in which the central axes of the pipeline lie:

a_p x + b_p y + c_p z + d_p = 0

wherein a_p, b_p, c_p, d_p are fitting parameters;
cutting the grid map of the pipeline model obtained in step 5) with this plane to obtain a dense point cloud map, wherein the points in the dense point cloud map lie on the plane;
taking the plane as the xOy plane and transforming the three-dimensional point coordinates in the dense point cloud map into the coordinate system of the xOy plane, so that the z value of each three-dimensional point (x, y, z) is 0;
and 6-3) mapping the torus onto the xOy plane to obtain a ring, wherein the ring consists of 2 circular arcs, each arc being uniquely determined by its centre coordinates (x_c, y_c), its start and end radians θ_s, θ_e and its radius r; for the points (x, y) on an arc, taking the loss function as the optimization target and minimizing it to obtain the centre coordinates and radius of the arc, wherein the loss function L_2 is:

L_2 = Σ_{i=1}^{m} (d_i - r)^2

wherein d_i^2 = (x_i - x_c)^2 + (y_i - y_c)^2 and m is the number of points on the arc;
obtaining the centre and the inner and outer radii of the ring from the arc parameters, obtaining the start and end radians of the ring by combining the tangency between the central arc of the ring and the central axes of the cylindrical surfaces, and obtaining the central axis of the pipeline by combining the cylindrical-surface parameters obtained in step 6-1).
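The arc-parameter estimation in step 6-3) is at heart a circle fit. As a hypothetical illustration, the closed-form Kåsa algebraic fit below recovers the centre and radius, and is commonly used to initialise (or stand in for) the geometric minimisation of L_2; the `fit_circle` name and the NumPy formulation are assumptions:

```python
import numpy as np

def fit_circle(points):
    """Kåsa algebraic circle fit. Writing the circle as
    x^2 + y^2 = 2*x_c*x + 2*y_c*y + (r^2 - x_c^2 - y_c^2),
    the unknowns appear linearly, so a least-squares solve yields
    the centre (x_c, y_c) and radius r in closed form."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + xc**2 + yc**2)
    return xc, yc, r
```

On noiseless arc points the fit is exact; on noisy data it gives a good starting point for iteratively minimising the geometric loss L_2.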
7. The pipeline three-dimensional reconstruction method based on multi-view optimization according to claim 6, wherein the cutting of the grid map of the pipeline model with the plane in step 6-2) is specifically:
for each triangle in the grid diagram of the pipeline model, calculating whether intersection points exist between the plane and three line segments of the triangle, and if no intersection point exists or only one intersection point exists, deleting the triangular surface; and if two intersection points exist, calculating the three-dimensional coordinates of the two intersection points, deleting the triangular surface, and keeping the two intersection points in the ply file.
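The per-triangle test of claim 7 can be sketched as follows; this is an illustrative NumPy helper (the `slice_triangle` name is an assumption, and the claim's ply-file bookkeeping is omitted):

```python
import numpy as np

def slice_triangle(plane, tri, eps=1e-12):
    """Intersect the plane a*x + b*y + c*z + d = 0 with one triangle.
    plane : (a, b, c, d) coefficients.
    tri   : (3, 3) array of vertex coordinates.
    Returns the intersection points of the plane with the triangle's
    edges (0 or 2 for a triangle in general position); per the claim,
    the triangle face itself is always discarded and only the two
    intersection points are kept."""
    n, d = np.asarray(plane[:3], dtype=float), float(plane[3])
    pts = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        sp, sq = n @ p + d, n @ q + d
        if sp * sq < -eps:               # edge endpoints on opposite sides
            t = sp / (sp - sq)           # linear interpolation parameter
            pts.append(p + t * (q - p))
    return pts
```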
8. The pipeline three-dimensional reconstruction method based on multi-view optimization according to claim 1, 5 or 6, wherein the step 7) comprises:
step 7-1) obtaining a three-dimensional grid model of the pipeline according to the central axis and the arc radius of the pipeline, wherein the three-dimensional grid model does not contain color information;
and 7-2) on the basis of the three-dimensional grid model, performing texture mapping on the model by combining the camera intrinsic matrix and the camera pose corresponding to each image obtained in step 5), to obtain a pipeline three-dimensional model image.
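The texture mapping of step 7-2) rests on projecting each model point into the source images with the intrinsic matrix and camera pose; a minimal pinhole-projection sketch (illustrative only, assuming the pose is given as a rotation R and translation t):

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3-D point X into pixel coordinates.
    K : 3x3 camera intrinsic matrix, (R, t) : world-to-camera pose.
    The resulting pixel is where the texture colour for X is sampled
    when mapping image content onto the mesh."""
    x = K @ (R @ X + t)       # homogeneous image coordinates
    return x[:2] / x[2]       # perspective division
```

A point on the camera's optical axis projects to the principal point, so sampling there fetches the centre pixel of that view.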
9. A multi-view optimization-based three-dimensional pipeline reconstruction apparatus comprising a memory, a processor, and a program stored in the memory, wherein the processor implements the method according to any one of claims 1-8 when executing the program.
10. A storage medium having a program stored thereon, wherein the program, when executed, implements the method of any of claims 1-8.
CN202210578640.5A 2022-05-25 2022-05-25 Pipeline three-dimensional reconstruction method and device based on multi-view optimization and storage medium Pending CN115082617A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210578640.5A CN115082617A (en) 2022-05-25 2022-05-25 Pipeline three-dimensional reconstruction method and device based on multi-view optimization and storage medium


Publications (1)

Publication Number Publication Date
CN115082617A true CN115082617A (en) 2022-09-20

Family

ID=83248933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210578640.5A Pending CN115082617A (en) 2022-05-25 2022-05-25 Pipeline three-dimensional reconstruction method and device based on multi-view optimization and storage medium

Country Status (1)

Country Link
CN (1) CN115082617A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363302A (en) * 2023-03-06 2023-06-30 郑州大学 Pipeline three-dimensional reconstruction and pit quantification method based on multi-view geometry
CN116363302B (en) * 2023-03-06 2024-05-28 郑州大学 Pipeline three-dimensional reconstruction and pit quantification method based on multi-view geometry
CN116486008A (en) * 2023-04-12 2023-07-25 荣耀终端有限公司 Three-dimensional reconstruction method, display method and electronic equipment
CN116486008B (en) * 2023-04-12 2023-12-12 荣耀终端有限公司 Three-dimensional reconstruction method, display method and electronic equipment
CN117437237A (en) * 2023-12-22 2024-01-23 珠海格力电器股份有限公司 Defect detection method and device for U-shaped tube, electronic equipment and storage medium
CN117437237B (en) * 2023-12-22 2024-05-24 珠海格力电器股份有限公司 Defect detection method and device for U-shaped tube, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Lin et al. Dynamic spatial propagation network for depth completion
CN109461180B (en) Three-dimensional scene reconstruction method based on deep learning
Wang et al. 360sd-net: 360 stereo depth estimation with learnable cost volume
CN108648240B (en) Non-overlapping view field camera attitude calibration method based on point cloud feature map registration
CN110135455B (en) Image matching method, device and computer readable storage medium
CN115082617A (en) Pipeline three-dimensional reconstruction method and device based on multi-view optimization and storage medium
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
CN111981982B (en) Multi-directional cooperative target optical measurement method based on weighted SFM algorithm
CN110580720B (en) Panorama-based camera pose estimation method
CN110969667A (en) Multi-spectrum camera external parameter self-correction algorithm based on edge features
KR102638632B1 (en) Methods, devices, electronic devices, storage media and programs for building point cloud models
Lin et al. Cylindrical panoramic image stitching method based on multi-cameras
CN108519102A (en) A kind of binocular vision speedometer calculation method based on reprojection
CN114119739A (en) Binocular vision-based hand key point space coordinate acquisition method
CN110807815B (en) Quick underwater calibration method based on corresponding vanishing points of two groups of mutually orthogonal parallel lines
CN112085790A (en) Point-line combined multi-camera visual SLAM method, equipment and storage medium
CN111524168A (en) Point cloud data registration method, system and device and computer storage medium
WO2023116430A1 (en) Video and city information model three-dimensional scene fusion method and system, and storage medium
CN114125269A (en) Mobile phone real-time panoramic shooting method based on deep learning
CN116958419A (en) Binocular stereoscopic vision three-dimensional reconstruction system and method based on wavefront coding
Morelli et al. Photogrammetry now and then–from hand-crafted to deep-learning tie points–
Tamas et al. Relative pose estimation and fusion of omnidirectional and lidar cameras
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
CN116579962A (en) Panoramic sensing method, device, equipment and medium based on fisheye camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination