CN116310099A - Three-dimensional reconstruction method of steel bridge component based on multi-view images - Google Patents

Three-dimensional reconstruction method of steel bridge component based on multi-view images

Info

Publication number
CN116310099A
CN116310099A (application CN202310191513.4A)
Authority
CN
China
Prior art keywords
point cloud
eye
component
dimensional
steel bridge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310191513.4A
Other languages
Chinese (zh)
Inventor
严钢
李枝军
严锴
徐秀丽
李雪红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Gongda Traffic Technology Co ltd
Suzhou Port And Shipping Development Center
Nanjing Tech University
Original Assignee
Nanjing Gongda Traffic Technology Co ltd
Suzhou Port And Shipping Development Center
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Gongda Traffic Technology Co ltd, Suzhou Port And Shipping Development Center, and Nanjing Tech University
Priority to CN202310191513.4A
Publication of CN116310099A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/003: Navigation within 3D models or images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85: Stereo camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a three-dimensional reconstruction method for steel bridge members based on multi-view images, which comprises the following steps: arranging targets at key geometric feature points of the steel bridge member and measuring the three-dimensional coordinates of the targets in the world coordinate system with a binocular vision system; acquiring multi-view images of the member, including the targets, with an unmanned aerial vehicle camera; recovering the poses of the unmanned aerial vehicle camera from the multi-view images; generating a dense point cloud model of the member from the multi-view images of the targets and the camera poses; calculating a scale factor and recovering a true-size point cloud model of the member; and reverse-engineering a solid three-dimensional model of the member from the true-size point cloud model. The method reconstructs steel bridge member models on the construction site well; it is used to check manufacturing and assembly errors of the members and feed them back to the constructor so that countermeasures can be taken in time, greatly improving construction efficiency and bridge quality.

Description

Three-dimensional reconstruction method of steel bridge component based on multi-view images
Technical Field
The invention relates to a three-dimensional reconstruction method of a steel bridge component based on multi-view images.
Background
Steel bridge members are fabricated in a factory and then transported to the construction site to be assembled into a bridge. Because of machining errors and other factors, the members must undergo machining-quality inspection and on-site pre-assembly before erection, to check whether their deviations meet the requirements and whether they can be assembled, thereby keeping site operations on schedule. Inspection and physical pre-assembly of steel members are cumbersome, difficult, and costly; with the development of computer technology, virtual assembly based on reconstructed three-dimensional models of the members can assist or even replace physical pre-assembly. However, the existing laser scanning and photogrammetry technologies for three-dimensional reconstruction of steel bridge members under construction both have shortcomings.
First, laser scanners are expensive, scan slowly, and suffer severely from occlusion, which makes them ill-suited to reconstructing large steel bridge members; moreover, steel bridge construction is mostly carried out over water, and for some members being hoisted a laser scanner cannot be set up at all.
Second, existing photogrammetry suffers from low data-processing efficiency, high hardware requirements during reconstruction, and very long reconstruction times for large-scene models.
Disclosure of Invention
The invention aims to provide a three-dimensional reconstruction method of a steel bridge component based on multi-view images.
The three-dimensional reconstruction method of the steel bridge component based on the multi-view image is characterized by comprising the following steps of:
s1, respectively arranging checkerboard targets at a plurality of positions on the surface of a component, and acquiring left-eye and right-eye pixel coordinates corresponding to corner points in the checkerboard targets through a binocular camera system; solving to obtain three-dimensional coordinates of the corner points in the checkerboard target under a world coordinate system based on the left-eye and right-eye pixel coordinates corresponding to the corner points;
S2, planning the flight route of the unmanned aerial vehicle, performing multi-distance orbital acquisition with the unmanned aerial vehicle along that route to obtain multi-view images of the member containing the checkerboard targets, and preprocessing the multi-view images to obtain preprocessed multi-view images;
S3, taking the preprocessed multi-view images as input, calculating the camera poses of the unmanned aerial vehicle through structure-from-motion (SfM), and storing the camera poses and camera parameters of the unmanned aerial vehicle as a transform.json file;
S4, obtaining a dense point cloud model of the member based on the preprocessed multi-view images and the camera pose information and camera parameters in the transform.json file;
S5, performing noise reduction, hole repair, and subsampling on the dense point cloud model to obtain a refined point cloud model of the member; acquiring the relative three-dimensional coordinates of the checkerboard target corners in the refined point cloud model coordinate system, calculating the model scaling factor δ in combination with the corner three-dimensional coordinates in the world coordinate system acquired by the binocular system, and multiplying the relative dimensions of the point cloud by δ to obtain the true-size point cloud model of the member;
and S6, performing point cloud triangularization surface fitting on the true-size point cloud model of the member to obtain a solid three-dimensional model of the steel bridge member.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, S2, planning the flight route of the unmanned aerial vehicle includes:
setting the flight route of the unmanned aerial vehicle as three circles around the member, with a pixel overlap of more than 80% between adjacent frames acquired in each circle; in the first and second circles the vertical stand-off distance between the unmanned aerial vehicle and the member is d1 and the flight heights are H1 = 0.425Hmax and H2 = 0.575Hmax respectively; in the third circle the stand-off distance is d2 and the flight height is H3 = 0.5Hmax;
at stand-off distance d1, the vertical image covers 85% of the member's maximum vertical dimension Hmax, i.e.

$$d_1 = \frac{0.85\,H_{max}\,f}{h}$$

where f is the focal length of the camera;
at stand-off distance d2, the vertical image covers the full maximum vertical dimension Hmax, i.e.

$$d_2 = \frac{H_{max}\,f}{h}$$

where h is the size of the imaging plane.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, preprocessing the multi-view images to obtain preprocessed multi-view images includes:
marking the outline of the member, such as an arch springing, in each multi-view image with the VIA annotation software and exporting a json file that contains the outline coordinate information of the member;
and writing parsing code in Python, loading the json file, and obtaining a mask image of the foreground region of each multi-view image from it, where background pixels in the mask image are 0 and foreground pixels are 1; multiplying every pixel of each multi-view image by the corresponding pixel of the mask image to extract the member image, and performing data enhancement on the extracted member images.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, S1, arranging checkerboard targets at a plurality of positions on the member surface, acquiring the left-eye and right-eye pixel coordinates corresponding to the corners of the checkerboard targets through a binocular camera system, and solving for the three-dimensional world coordinates of the checkerboard target corners based on the corresponding left-eye and right-eye pixel coordinates, includes:
capturing the checkerboard targets on the member through the calibrated binocular camera system, writing a corner recognition algorithm in Python, calling the Excel library Openpyxl to number and write the left-eye and right-eye pixel coordinates of the corners in batches, and saving the result as an Excel file;
establishing the relation between the left-eye and right-eye pixel coordinates and the world coordinates by using projection matrices;
and writing Python code that calls the pixel-to-world coordinate relation together with the left-eye and right-eye corner pixel coordinates in the Excel file and solves for the spatial three-dimensional coordinates of the corners by least squares, so that the checkerboard corner pixel coordinates are converted to three-dimensional world coordinates in batches.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, capturing the checkerboard targets on the member through the calibrated binocular camera system, writing a corner recognition algorithm in Python, calling the Excel library Openpyxl to number and write the left-eye and right-eye corner pixel coordinates in batches, and saving the result as an Excel file, includes:
collecting images of a calibration plate at different angles with the binocular camera system and entering the plate's dimensional information, which includes the numbers of transverse and longitudinal corners of the calibration plate and its checkerboard square size; removing images whose errors exceed a preset threshold to obtain the remaining images; obtaining the intrinsic and extrinsic parameters of the binocular camera from the remaining images, and rectifying the left and right images with those parameters;
and writing a corner recognition algorithm in Python, calling the Excel library Openpyxl to number and write the rectified left-eye and right-eye pixel coordinates in batches, and saving the pixel coordinate results as an Excel file.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, establishing the relation between the left-eye and right-eye pixel coordinates and the world coordinates by using projection matrices includes:
through the projection matrices of the binocular camera system, establishing four linear equations between the pixel coordinates and the world coordinates of the nth corner of checkerboard target i:

$$z^{L}\begin{bmatrix}u_{i,n}^{L}\\ v_{i,n}^{L}\\ 1\end{bmatrix} = M^{L}\begin{bmatrix}X_{i,n}^{w}\\ Y_{i,n}^{w}\\ Z_{i,n}^{w}\\ 1\end{bmatrix}, \qquad z^{R}\begin{bmatrix}u_{i,n}^{R}\\ v_{i,n}^{R}\\ 1\end{bmatrix} = M^{R}\begin{bmatrix}X_{i,n}^{w}\\ Y_{i,n}^{w}\\ Z_{i,n}^{w}\\ 1\end{bmatrix}$$

wherein (u_{i,n}^L, v_{i,n}^L) and (u_{i,n}^R, v_{i,n}^R) are the pixel coordinates of the nth corner of checkerboard target i in the left and right images; M^L and M^R are the 3×4 left-eye and right-eye projection matrices, each obtained by multiplying the intrinsic parameter matrix by the extrinsic parameter matrix from the binocular calibration; (X_{i,n}^w, Y_{i,n}^w, Z_{i,n}^w) are the three-dimensional coordinates of the nth corner of target i in the world coordinate system; eliminating the unknown depths z^L and z^R from the two projections yields the four linear equations in the three world coordinates.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, writing Python code that calls the pixel-to-world coordinate relation together with the left-eye and right-eye corner pixel coordinates in the Excel file, and solving for the spatial three-dimensional coordinates of the corners by least squares to convert the checkerboard corner pixel coordinates to world coordinates in batches, includes:
writing Python code that substitutes the left-eye and right-eye corner pixel coordinates from the Excel file into the four linear equations and solves them by least squares, calculating the three-dimensional world coordinates of the checkerboard corners in batches.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, S4, obtaining a dense point cloud model of the member based on the preprocessed multi-view images and the camera pose information and camera parameters in the transform.json file, and saving the dense point cloud model as a ply point cloud format file, includes:
taking the preprocessed multi-view images and the camera pose information and camera parameters in the transform.json file as input, accelerating dense NeRF (neural radiance field) reconstruction of a preset deep learning network model through a hash-encoding method, reconstructing the dense point cloud model of the member, and outputting it as a ply point cloud format file.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, the model scaling factor δ is obtained by the following formula:

$$\delta = \frac{1}{k}\sum_{n=1}^{k}\frac{\left\lVert P_{i,n}^{w}-P_{j,n}^{w}\right\rVert}{\left\lVert P_{i,n}^{m}-P_{j,n}^{m}\right\rVert}$$

where δ is the scaling factor that converts the dense point cloud model to true size; P_{i,n}^w and P_{j,n}^w are the three-dimensional coordinates of the nth corners of targets i and j measured by the binocular camera system; P_{i,n}^m and P_{j,n}^m are the three-dimensional coordinates of the nth corners of targets i and j in the dense point cloud model; and k is the total number of corner points in a checkerboard target, with k ≥ 2.
Further, in the above three-dimensional reconstruction method of a steel bridge member based on multi-view images, performing noise reduction, hole repair, and subsampling on the dense point cloud model to obtain a refined point cloud model of the member includes:
denoising the dense point cloud model of the member with statistical filtering: setting the number of neighbors to b, calculating for every point in the dense point cloud the average distance d̄ to its b nearest points and the standard deviation σ of these distances, setting the threshold to d̄ + aσ, where a is the standard-deviation multiplier, and removing points whose average distance exceeds the threshold, to obtain the denoised dense point cloud of the member; the parameter b is confirmed by repeatedly adjusting it against the denoising result, and when main-body points are mistaken for noise and removed, the value of b is reduced; the operation is repeated until the noise is removed while the member's main-body point cloud is not;
extracting the three-dimensional coordinates of the c points adjacent to each hole in the denoised dense point cloud, fitting a surface to those c coordinates, and generating points inside the hole by discrete sampling to obtain the hole-repaired dense point cloud of the member, where the parameter c is the number of points around the hole whose curvature variation is consistent with the hole's;
and subsampling the hole-repaired dense point cloud with geometric sampling: setting the target number of sample points to T and the sampling rate to f′, calculating the point cloud curvature and setting the curvature threshold to k; where the point curvature exceeds k the number of sampled points is T(1 − f′), otherwise it is Tf′, yielding the refined point cloud model of the member.
According to the invention, targets are arranged at key geometric feature points of the steel bridge member, and their three-dimensional coordinates in the world coordinate system are measured with a binocular vision system; multi-view images of the member containing the targets are captured with an unmanned aerial vehicle camera and processed with image processing techniques; the camera poses of the unmanned aerial vehicle are recovered from the multi-view images by structure-from-motion (SfM); a dense point cloud model of the member is generated with the multi-view images of the targets and the camera poses as input; the dense point cloud model is refined with point cloud processing techniques to obtain a refined point cloud model of the steel bridge member; the scale factor from the refined point cloud model to the true-size model is calculated from the relative three-dimensional target coordinates in the refined point cloud model and the world-coordinate target coordinates measured by the binocular camera system, recovering the true-size point cloud model of the member; and the solid three-dimensional model of the member is reverse-engineered from the true-size point cloud model. The invention solves the problems of low reconstruction efficiency and high cost for steel bridge members, enables batch three-dimensional reconstruction of members on the construction site, and reconstructs site member models well enough to check manufacturing and assembly errors, feed them back to the constructor, and allow countermeasures to be taken in time, greatly improving construction efficiency and bridge quality.
Drawings
FIG. 1 is a schematic overall flow chart of an embodiment of the present invention;
FIG. 2 is the dense point cloud reconstruction of the arch springing member.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and this embodiment.
As shown in fig. 1 and 2, the present invention provides a three-dimensional reconstruction method of a steel bridge member based on multi-view images, comprising steps S1 to S6:
step S1, respectively arranging checkerboard targets at a plurality of positions on the surface of a component, and acquiring left-eye and right-eye pixel coordinates corresponding to corner points in the checkerboard targets through a binocular camera system; solving to obtain three-dimensional coordinates of the corner points in the checkerboard target under a world coordinate system based on the left-eye and right-eye pixel coordinates corresponding to the corner points;
the steel bridge element may be, for example, the abutment element shown in fig. 2;
preferably, in step S1, the method may include:
Step S11, first calibrating the binocular camera, capturing the checkerboard targets on the arch springing member with the calibrated binocular camera, rectifying the images with the calibrated binocular camera parameters, writing a corner recognition algorithm in Python, calling the Excel library Openpyxl to number and write the left-eye and right-eye pixel coordinates of the corners in batches, and saving the result as an Excel file;
step S12, establishing a relation between left-eye and right-eye pixel coordinates and world coordinates by using a projection matrix;
and S13, compiling and calling the relation between the left-eye and right-eye pixel coordinates and the world coordinates through python language, and calculating the three-dimensional coordinates of the corner point space by utilizing a least square method to realize batch calculation of the left-eye and right-eye pixel coordinates of the corner points of the checkerboard to the three-dimensional coordinates under the world coordinate system.
Specifically, step S11 includes:
Step S111, collecting images of the calibration plate at different angles with the binocular camera system and entering the plate's dimensional information, which includes the numbers of transverse and longitudinal corners and the checkerboard square size (in this example the calibration plate is 12×9 with 25 mm squares); removing images with large reprojection errors so that the reprojection error stays below 0.2, obtaining the remaining images; computing the intrinsic and extrinsic parameters of the binocular camera from the remaining images; arranging two checkerboard targets (5×4, 50 mm squares) on the surface of the arch springing member, collecting the target images with the calibrated binocular camera, and rectifying the collected target images with the intrinsic and extrinsic parameters of the binocular camera, so that the same feature points in the left and right images lie on the same horizontal line.
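By way of illustration only, the calibration and rectification of this step could be sketched in Python with OpenCV as follows; the file paths, and the reading of the 12×9 specification as squares (hence an 11×8 inner-corner grid), are assumptions rather than details fixed by this embodiment:

import glob

import cv2
import numpy as np

pattern = (11, 8)   # inner-corner grid, assuming the 12x9 spec counts squares
square = 25.0       # checkerboard square size, mm
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("calib/left/*.png")),
                  sorted(glob.glob("calib/right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = gl.shape[::-1]
# per-camera intrinsics first, then the stereo extrinsics between the cameras
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
err, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("RMS reprojection error:", err)   # images pushing this above 0.2 get culled

# rectification: the same feature points then lie on the same horizontal line
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)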
And step S112, writing a corner recognition algorithm in Python and calling the Excel library Openpyxl: the corners of the rectified checkerboard target images are identified and numbered, the pixel coordinates of each corner under the left and right cameras are calculated, and the results are saved as an Excel file.
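A minimal sketch of this corner-numbering and Excel-export step, assuming the 5×4 targets expose a 4×3 inner-corner grid and using placeholder image names:

import cv2
from openpyxl import Workbook

pattern = (4, 3)      # inner corners of a 5x4-square target (assumption)
wb = Workbook()
ws = wb.active
ws.append(["target", "corner", "uL", "vL", "uR", "vR"])

pairs = [("target1_L.png", "target1_R.png"),
         ("target2_L.png", "target2_R.png")]
for i, (lf, rf) in enumerate(pairs, start=1):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if not (okl and okr):
        continue
    # sub-pixel refinement before the coordinates are written out
    crit = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    cl = cv2.cornerSubPix(gl, cl, (5, 5), (-1, -1), crit)
    cr = cv2.cornerSubPix(gr, cr, (5, 5), (-1, -1), crit)
    for n, (pl, pr) in enumerate(zip(cl.reshape(-1, 2), cr.reshape(-1, 2)), 1):
        ws.append([i, n, float(pl[0]), float(pl[1]),
                   float(pr[0]), float(pr[1])])

wb.save("corner_pixels.xlsx")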
Step S12 includes:
through the projection matrices of the binocular camera system, establishing four linear equations between the pixel coordinates and the world coordinates of the nth corner of checkerboard target i:

$$z^{L}\begin{bmatrix}u_{i,n}^{L}\\ v_{i,n}^{L}\\ 1\end{bmatrix} = M^{L}\begin{bmatrix}X_{i,n}^{w}\\ Y_{i,n}^{w}\\ Z_{i,n}^{w}\\ 1\end{bmatrix}, \qquad z^{R}\begin{bmatrix}u_{i,n}^{R}\\ v_{i,n}^{R}\\ 1\end{bmatrix} = M^{R}\begin{bmatrix}X_{i,n}^{w}\\ Y_{i,n}^{w}\\ Z_{i,n}^{w}\\ 1\end{bmatrix}$$

wherein (u_{i,n}^L, v_{i,n}^L) and (u_{i,n}^R, v_{i,n}^R) are the pixel coordinates of the nth corner of checkerboard target i in the left and right images; M^L and M^R are the 3×4 left-eye and right-eye projection matrices, each obtained by multiplying the intrinsic parameter matrix by the extrinsic parameter matrix from the binocular calibration; (X_{i,n}^w, Y_{i,n}^w, Z_{i,n}^w) are the three-dimensional coordinates of the nth corner of target i in the world coordinate system; eliminating the unknown depths z^L and z^R yields the four linear equations in the three world coordinates.
Step S13 includes:
writing Python code that reads the left-eye and right-eye corner pixel coordinates from the Excel file, substitutes them into the four linear equations above, and solves for the spatial three-dimensional coordinates of each corner by least squares, converting the checkerboard corner pixel coordinates to three-dimensional world coordinates in batches; the three-dimensional coordinates of the nth corner of checkerboard target i are denoted P_{i,n}^w = (X_{i,n}^w, Y_{i,n}^w, Z_{i,n}^w).
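A minimal numpy sketch of this least-squares solve, under the notation above; ML and MR stand for the 3×4 left-eye and right-eye projection matrices, and the function name is illustrative:

import numpy as np

def triangulate_corner(ML, MR, uvL, uvR):
    # two rows per camera: u*(row 3 of M) - row 1, and v*(row 3 of M) - row 2
    rows, rhs = [], []
    for M, (u, v) in ((ML, uvL), (MR, uvR)):
        rows.append(u * M[2, :3] - M[0, :3])
        rhs.append(M[0, 3] - u * M[2, 3])
        rows.append(v * M[2, :3] - M[1, :3])
        rhs.append(M[1, 3] - v * M[2, 3])
    # least-squares solution of the overdetermined 4x3 system
    X, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return X   # (X, Y, Z) of the corner in the world coordinate system

# batch usage with the calibration of step S11 (left camera as world frame):
#   ML = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
#   MR = K2 @ np.hstack([R, T])
#   ws = openpyxl.load_workbook("corner_pixels.xlsx").active
#   world = [triangulate_corner(ML, MR, (r[2], r[3]), (r[4], r[5]))
#            for r in ws.iter_rows(min_row=2, values_only=True)]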
Step S2, planning the flight route of the unmanned aerial vehicle, performing multi-distance orbital acquisition with the unmanned aerial vehicle along that route to obtain multi-view images of the member containing the checkerboard targets, and preprocessing the multi-view images to obtain preprocessed multi-view images;
Preferably, planning the flight route of the unmanned aerial vehicle includes:
setting the flight route as three circles around the member, with a pixel overlap of more than 80% between adjacent frames acquired in each circle; in the first and second circles the vertical stand-off distance between the unmanned aerial vehicle and the member is d1 and the flight heights are H1 = 0.425Hmax and H2 = 0.575Hmax respectively; in the third circle the stand-off distance is d2 and the flight height is H3 = 0.5Hmax;
at stand-off distance d1, the vertical image covers 85% of the member's maximum vertical dimension Hmax, i.e.

$$d_1 = \frac{0.85\,H_{max}\,f}{h}$$

at stand-off distance d2, the vertical image covers the full maximum vertical dimension Hmax, i.e.

$$d_2 = \frac{H_{max}\,f}{h}$$

where f is the focal length of the camera and h is the size of the imaging plane.
In this example the maximum vertical dimension of the arch springing is Hmax = 3200 mm, the camera focal length f is 9 mm, and the imaging plane size h is 8.8 mm; therefore the stand-off distance of the UAV camera in the first and second circles is 2782 mm, with flight heights of 1360 mm and 1840 mm respectively, and the stand-off distance in the third circle is 3273 mm, with a flight height of 1600 mm.
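The example values can be checked with a few lines of Python implementing the route-planning formulas above:

# reproducing the example values (f = 9 mm, h = 8.8 mm, Hmax = 3200 mm)
H_max, f, h = 3200.0, 9.0, 8.8                  # all in mm

d1 = 0.85 * H_max * f / h      # circles 1 and 2: image covers 85% of Hmax
d2 = H_max * f / h             # circle 3: image covers the full Hmax
H1, H2, H3 = 0.425 * H_max, 0.575 * H_max, 0.5 * H_max

print(round(d1), round(d2))    # 2782 3273 (mm)
print(H1, H2, H3)              # 1360.0 1840.0 1600.0 (mm)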
The preprocessing step comprises: analyzing the local gray value of the target in each view against the gray value of the whole view and adjusting the gray threshold in the algorithm to enhance the target information; and extracting the foreground information of the member and separating out the background beyond the member, so that background factors in the multi-view images do not degrade reconstruction efficiency or results.
Specifically, preprocessing the multi-view images to obtain preprocessed multi-view images includes:
first, marking the outline of the member, such as the arch springing, in each multi-view image with the VIA annotation software and exporting a json file that contains the outline coordinate information of the member;
then writing parsing code in Python, loading the json file, and obtaining a mask image of the foreground region of each multi-view image from it, where background pixels in the mask image are 0 and foreground pixels are 1; multiplying every pixel of each multi-view image by the corresponding pixel of the mask image to extract the member image, and performing data enhancement on the extracted member images.
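A minimal sketch of this mask-extraction step, assuming the default VIA polygon export format (all_points_x/all_points_y region attributes) and placeholder file names:

import json

import cv2
import numpy as np

with open("component_regions.json") as fp:
    via = json.load(fp)

for entry in via.values():               # one entry per annotated image
    img = cv2.imread(entry["filename"])
    mask = np.zeros(img.shape[:2], np.uint8)
    for region in entry["regions"]:
        shape = region["shape_attributes"]
        poly = np.stack([shape["all_points_x"],
                         shape["all_points_y"]], axis=1).astype(np.int32)
        cv2.fillPoly(mask, [poly], 1)    # foreground pixels become 1
    # background -> 0, member pixels kept unchanged
    component = img * mask[:, :, None]
    cv2.imwrite("masked_" + entry["filename"], component)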
S3, taking the preprocessed multi-view images as input, calculating the camera poses of the unmanned aerial vehicle through structure-from-motion (SfM), and storing the camera poses and camera parameters of the unmanned aerial vehicle as a transform.json file;
the SFM technology implementation principle is as follows:
feature point detection is carried out on the component images extracted by the SIFT operator, feature point matching and error matching removal of the extracted component images are carried out by combining an exhaustion matching algorithm with a RANSAC algorithm, and optimization is carried out by an incremental three-dimensional reconstruction mode and a beam adjustment method, so that the pose of the camera is obtained.
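A minimal sketch of the feature stage of this pipeline for one image pair, using OpenCV's SIFT, brute-force (exhaustive) matching, and RANSAC on the epipolar constraint; incremental reconstruction and bundle adjustment are left to the SfM tool (COLMAP in this embodiment), and the file names are placeholders:

import cv2
import numpy as np

img1 = cv2.imread("view_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_001.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# exhaustive (brute-force) matching with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
# RANSAC on the epipolar constraint rejects the remaining mismatches
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print(int(inliers.sum()), "inlier matches out of", len(good))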
The specific implementation steps can be as follows: configure the runtime environment required for the three-dimensional reconstruction in the Anaconda terminal (Anaconda Prompt) and run the following script in the created virtual environment:

python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images data/component

This obtains the camera pose information for the data set and stores the pose information together with the unmanned aerial vehicle camera parameters as a transform.json file in the data set folder. The exhaustive feature matching scheme is used, the scene bounding-box scale aabb_scale is set to 16, and the member multi-view images are stored under the relative path data/component.
Step S4, obtaining a dense point cloud model of the member based on the preprocessed multi-view images and the camera pose information and camera parameters in the transform.json file;
Preferably, step S4 includes:
taking the preprocessed multi-view images and the camera pose information and camera parameters in the transform.json file as input, performing dense reconstruction with a hash-search-accelerated neural radiance field (NeRF) technique to obtain the dense point cloud model of the member, and outputting it as a ply point cloud format file. The hash-search acceleration speeds up inference of the neural radiance reconstruction model by introducing a hash lookup: a hash table stores the position and normal vector of each triangle in the scene together with the corresponding neural radiance field model parameters; for each pixel, intersection tests are first performed between its ray and the triangles in the hash table to determine which triangles the ray intersects; if the ray intersects several triangles, they are sorted by relative distance through the hash table indices and only the nearest ones are selected for sampling, reducing the amount of computation; and for each intersected triangle, inference is performed with the neural radiance field parameters stored in the hash table, calculating the color and transparency values of the corresponding pixel.
The specific implementation can be carried out by running the following script:

<instant-ngp>\build\testbed --scene data/component

which runs the hash-search-accelerated neural radiance field reconstruction program and achieves high-speed reconstruction from the member multi-view images, where <instant-ngp> is the absolute path of the instant-ngp program and data/component is the path of the component data set within it.
Step S5, performing noise reduction, hole repair, and subsampling on the dense point cloud model to obtain a refined point cloud model of the member; acquiring the relative three-dimensional coordinates of the checkerboard target corners in the refined point cloud model coordinate system, calculating the model scaling factor δ in combination with the corner world coordinates acquired by the binocular system, and multiplying the relative dimensions of the point cloud by δ to obtain the true-size point cloud model of the member;
Preferably, the model scaling factor δ is obtained by the following formula:

$$\delta = \frac{1}{k}\sum_{n=1}^{k}\frac{\left\lVert P_{i,n}^{w}-P_{j,n}^{w}\right\rVert}{\left\lVert P_{i,n}^{m}-P_{j,n}^{m}\right\rVert}$$

where δ is the scaling factor that converts the dense point cloud model to true size; P_{i,n}^w and P_{j,n}^w are the three-dimensional coordinates of the nth corners of targets i and j measured by the binocular camera system; P_{i,n}^m and P_{j,n}^m are the three-dimensional coordinates of the nth corners of targets i and j in the dense point cloud model; and k is the total number of corner points in a checkerboard target, with k ≥ 2.
For example, the checkerboard targets used in this example are 5×4, so the total number of target corners is k = 12; substituting the corner three-dimensional coordinates measured by the binocular camera and the corresponding corner coordinates in the dense point cloud into the formula above gives the scale factor of the arch springing point cloud model, δ = 20.1; scaling the dimensions of the point cloud model by 20.1 yields the true-size point cloud model of the member; the reconstruction result is shown in FIG. 2 and is saved as a point cloud file in txt format.
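A minimal numpy sketch of this scale computation; Pw_i, Pw_j, Pm_i and Pm_j stand for (k, 3) arrays of corner coordinates stacked in corresponding order, and the function name is illustrative:

import numpy as np

def scale_factor(Pw_i, Pw_j, Pm_i, Pm_j):
    # ratio of world distance to point-cloud distance for each corner pair n
    dw = np.linalg.norm(Pw_i - Pw_j, axis=1)
    dm = np.linalg.norm(Pm_i - Pm_j, axis=1)
    return float(np.mean(dw / dm))   # the scaling factor delta

# true_size_xyz = scale_factor(...) * cloud_xyz recovers the true-size cloud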
Preferably, step S5, performing noise reduction, hole repair, and subsampling on the dense point cloud model to obtain a refined point cloud model of the member, includes:
Step S51, denoising the dense point cloud model of the member with statistical filtering: setting the number of neighbors to b, calculating for every point in the dense point cloud the average distance d̄ to its b nearest points and the standard deviation σ of these distances, and setting the threshold to d̄ + aσ, where a is the standard-deviation multiplier (1 in this example); points whose average distance exceeds the threshold are removed, giving the denoised dense point cloud of the member; the parameter b is confirmed by repeatedly adjusting it against the denoising result, and when main-body points are mistaken for noise and removed, the value of b is reduced; the operation is repeated until the denoising effect is optimal and no main-body points of the member are deleted. After several trial runs, setting the number of neighbors to 50 with a standard-deviation multiplier of 1 removes the noise points of the arch springing member well without removing member points.
Step S52, hole repair, specifically includes: extracting the three-dimensional coordinates of the c points adjacent to each hole in the denoised dense point cloud, fitting a surface to those c coordinates, and generating points inside the hole by discrete sampling to obtain the hole-repaired dense point cloud of the member, where the parameter c is the number of points around the hole whose curvature variation is consistent with the hole's;
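A minimal sketch of the hole-repair idea, assuming the hole neighborhood can be treated as a height field z = f(x, y); the quadric form and the function name are assumptions, not details fixed by this embodiment:

import numpy as np

def fill_hole(border, num_samples=200):
    # border: (c, 3) array of points adjacent to the hole
    x, y, z = border.T
    # least-squares quadric: z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
    # discrete sampling of the fitted surface over the border's footprint
    xs = np.random.uniform(x.min(), x.max(), num_samples)
    ys = np.random.uniform(y.min(), y.max(), num_samples)
    zs = np.column_stack([xs * xs, ys * ys, xs * ys, xs, ys,
                          np.ones_like(xs)]) @ coeff
    return np.column_stack([xs, ys, zs])   # new points filling the hole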
and S53, performing secondary sampling on the dense point cloud of the component after hole repair by using a geometric sampling method, setting the sampling target point number as T, the sampling rate as f ', calculating the curvature of the point cloud, setting the curvature threshold as k, and when the curvature threshold is larger than k, setting the sampling number as T (1-f '), otherwise, setting the sampling number as Tf ', so as to obtain the component refined point cloud model.
And S6, performing point cloud triangularization surface fitting on the true-size point cloud model of the member to obtain a solid three-dimensional model of the steel bridge member.
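A minimal sketch of this final step with Open3D; Poisson reconstruction is used here as one possible realization of point cloud triangularization surface fitting (ball pivoting would be another), and the file names are placeholders:

import open3d as o3d

pcd = o3d.io.read_point_cloud("component_true_size.ply")
# normals are needed by the surface reconstruction; radius in model units (mm)
pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=20.0, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("component_solid.obj", mesh)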

Claims (10)

1. A three-dimensional reconstruction method of a steel bridge member based on multi-view images is characterized by comprising the following steps:
s1, respectively arranging checkerboard targets at a plurality of positions on the surface of a component, and acquiring left-eye and right-eye pixel coordinates corresponding to corner points in the checkerboard targets through a binocular camera system; solving to obtain three-dimensional coordinates of the corner points in the checkerboard target under a world coordinate system based on the left-eye and right-eye pixel coordinates corresponding to the corner points;
S2, planning the flight route of the unmanned aerial vehicle, performing multi-distance orbital acquisition with the unmanned aerial vehicle along that route to obtain multi-view images of the member containing the checkerboard targets, and preprocessing the multi-view images to obtain preprocessed multi-view images;
S3, taking the preprocessed multi-view images as input, calculating the camera poses of the unmanned aerial vehicle through structure-from-motion (SfM), and storing the camera poses and camera parameters of the unmanned aerial vehicle as a transform.json file;
S4, obtaining a dense point cloud model of the member based on the preprocessed multi-view images and the camera pose information and camera parameters in the transform.json file;
S5, performing noise reduction, hole repair, and subsampling on the dense point cloud model to obtain a refined point cloud model of the member; acquiring the relative three-dimensional coordinates of the checkerboard target corners in the refined point cloud model coordinate system, calculating the model scaling factor δ in combination with the corner world coordinates acquired by the binocular system, and multiplying the relative dimensions of the point cloud by δ to obtain the true-size point cloud model of the member;
and S6, performing point cloud triangularization surface fitting on the true-size point cloud model of the member to obtain a solid three-dimensional model of the steel bridge member.
2. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 1, wherein S2, planning the flight route of the unmanned aerial vehicle, comprises:
setting the flight route of the unmanned aerial vehicle as three circles around the member, with a pixel overlap of more than 80% between adjacent frames acquired in each circle; in the first and second circles the vertical stand-off distance between the unmanned aerial vehicle and the member is d1 and the flight heights are H1 = 0.425Hmax and H2 = 0.575Hmax respectively; in the third circle the stand-off distance is d2 and the flight height is H3 = 0.5Hmax;
at stand-off distance d1, the vertical image covers 85% of the member's maximum vertical dimension Hmax, i.e.

$$d_1 = \frac{0.85\,H_{max}\,f}{h}$$

at stand-off distance d2, the vertical image covers the full maximum vertical dimension Hmax, i.e.

$$d_2 = \frac{H_{max}\,f}{h}$$

where f is the focal length of the unmanned aerial vehicle camera and h is the size of its imaging plane.
3. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 1, wherein preprocessing the multi-view images to obtain preprocessed multi-view images comprises:
marking the outline of the member, such as an arch springing, in each multi-view image with the VIA annotation software and exporting a json file that contains the outline coordinate information of the member;
and writing parsing code in Python, loading the json file, and obtaining a mask image of the foreground region of each multi-view image from it, wherein background pixels in the mask image are 0 and foreground pixels are 1; multiplying every pixel of each multi-view image by the corresponding pixel of the mask image to extract the member image, and performing data enhancement on the extracted member images.
4. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 1, wherein S1, arranging checkerboard targets at a plurality of positions on the member surface, acquiring the left-eye and right-eye pixel coordinates corresponding to the corners of the checkerboard targets through a binocular camera system, and solving for the three-dimensional world coordinates of the checkerboard target corners based on the corresponding left-eye and right-eye pixel coordinates, comprises:
capturing the checkerboard targets on the member through the calibrated binocular camera system, writing a corner recognition algorithm in Python, calling the Excel library Openpyxl to number and write the left-eye and right-eye pixel coordinates of the corners in batches, and saving the result as an Excel file;
establishing the relation between the left-eye and right-eye pixel coordinates and the world coordinates by using projection matrices;
and writing Python code that calls the pixel-to-world coordinate relation together with the left-eye and right-eye corner pixel coordinates in the Excel file and solves for the spatial three-dimensional coordinates of the corners by least squares, so that the checkerboard corner pixel coordinates are converted to three-dimensional world coordinates in batches.
5. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 4, wherein capturing the checkerboard targets on the member through the calibrated binocular camera system, writing a corner recognition algorithm in Python, calling the Excel library Openpyxl to number and write the left-eye and right-eye corner pixel coordinates in batches, and saving the result as an Excel file, comprises:
collecting images of a calibration plate at different angles with the binocular camera system and entering the plate's dimensional information, which comprises the numbers of transverse and longitudinal corners of the calibration plate and its checkerboard square size; removing images whose errors exceed a preset threshold to obtain the remaining images; obtaining the intrinsic and extrinsic parameters of the binocular camera from the remaining images, and rectifying the left and right images with those parameters;
and writing a corner recognition algorithm in Python, calling the Excel library Openpyxl to number and write the rectified left-eye and right-eye pixel coordinates in batches, and saving the pixel coordinate results as an Excel file.
6. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 5, wherein establishing the relation between the left-eye and right-eye pixel coordinates and the world coordinates by using projection matrices comprises:
through the projection matrices of the binocular camera system, establishing four linear equations between the pixel coordinates and the world coordinates of the nth corner of checkerboard target i:

$$z^{L}\begin{bmatrix}u_{i,n}^{L}\\ v_{i,n}^{L}\\ 1\end{bmatrix} = M^{L}\begin{bmatrix}X_{i,n}^{w}\\ Y_{i,n}^{w}\\ Z_{i,n}^{w}\\ 1\end{bmatrix}, \qquad z^{R}\begin{bmatrix}u_{i,n}^{R}\\ v_{i,n}^{R}\\ 1\end{bmatrix} = M^{R}\begin{bmatrix}X_{i,n}^{w}\\ Y_{i,n}^{w}\\ Z_{i,n}^{w}\\ 1\end{bmatrix}$$

wherein (u_{i,n}^L, v_{i,n}^L) and (u_{i,n}^R, v_{i,n}^R) are the pixel coordinates of the nth corner of checkerboard target i in the left and right images; M^L and M^R are the 3×4 left-eye and right-eye projection matrices, each obtained by multiplying the intrinsic parameter matrix by the extrinsic parameter matrix from the binocular calibration; (X_{i,n}^w, Y_{i,n}^w, Z_{i,n}^w) are the three-dimensional coordinates of the nth corner of target i in the world coordinate system; eliminating the unknown depths z^L and z^R yields the four linear equations in the three world coordinates.
7. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 6, wherein writing Python code that calls the pixel-to-world coordinate relation together with the left-eye and right-eye corner pixel coordinates in the Excel file, and solving for the spatial three-dimensional coordinates of the corners by least squares to convert the checkerboard corner pixel coordinates to world coordinates in batches, comprises:
writing Python code that substitutes the left-eye and right-eye corner pixel coordinates from the Excel file into the four linear equations and solves them by least squares, calculating the three-dimensional world coordinates of the checkerboard corners in batches.
8. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 1, wherein S4, obtaining a dense point cloud model of the member based on the preprocessed multi-view images and the unmanned aerial vehicle camera pose information and camera parameters in the transform.json file, and saving the dense point cloud model as a ply point cloud format file, comprises:
taking the preprocessed multi-view images and the unmanned aerial vehicle pose information and camera parameters in the transform.json file as input, performing dense reconstruction with a hash-search-accelerated neural radiance field (NeRF) technique to obtain the dense point cloud model of the member, and outputting it as a ply point cloud format file.
9. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 1, wherein the model scaling factor δ is obtained by the following formula:

$$\delta = \frac{1}{k}\sum_{n=1}^{k}\frac{\left\lVert P_{i,n}^{w}-P_{j,n}^{w}\right\rVert}{\left\lVert P_{i,n}^{m}-P_{j,n}^{m}\right\rVert}$$

where δ is the scaling factor that converts the dense point cloud model to true size; P_{i,n}^w and P_{j,n}^w are the three-dimensional coordinates of the nth corners of targets i and j measured by the binocular camera system; P_{i,n}^m and P_{j,n}^m are the three-dimensional coordinates of the nth corners of targets i and j in the dense point cloud model; and k is the total number of corner points in a checkerboard target, with k ≥ 2.
10. The three-dimensional reconstruction method of a steel bridge member based on multi-view images according to claim 1, wherein performing noise reduction, hole repair, and subsampling on the dense point cloud model to obtain a refined point cloud model of the member comprises:
denoising the dense point cloud model of the member with statistical filtering: setting the number of neighbors to b, calculating for every point in the dense point cloud the average distance d̄ to its b nearest points and the standard deviation σ of these distances, setting the threshold to d̄ + aσ, where a is the standard-deviation multiplier, and removing points whose average distance exceeds the threshold, to obtain the denoised dense point cloud of the member; the parameter b is confirmed by repeatedly adjusting it against the denoising result, and when main-body points are mistaken for noise and removed, the value of b is reduced; the operation is repeated until the noise is removed while the member's main-body point cloud is not;
extracting the three-dimensional coordinates of the c points adjacent to each hole in the denoised dense point cloud, fitting a surface to those c coordinates, and generating points inside the hole by discrete sampling to obtain the hole-repaired dense point cloud of the member, where the parameter c is the number of points around the hole whose curvature variation is consistent with the hole's;
and subsampling the hole-repaired dense point cloud with geometric sampling: setting the target number of sample points to T and the sampling rate to f′, calculating the point cloud curvature and setting the curvature threshold to k; where the point curvature exceeds k the number of sampled points is T(1 − f′), otherwise it is Tf′, yielding the refined point cloud model of the member.
CN202310191513.4A 2023-03-01 2023-03-01 Three-dimensional reconstruction method of steel bridge component based on multi-view images Pending CN116310099A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310191513.4A CN116310099A (en) 2023-03-01 2023-03-01 Three-dimensional reconstruction method of steel bridge component based on multi-view images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310191513.4A CN116310099A (en) 2023-03-01 2023-03-01 Three-dimensional reconstruction method of steel bridge component based on multi-view images

Publications (1)

Publication Number Publication Date
CN116310099A true CN116310099A (en) 2023-06-23

Family

ID=86825134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310191513.4A Pending CN116310099A (en) 2023-03-01 2023-03-01 Three-dimensional reconstruction method of steel bridge component based on multi-view images

Country Status (1)

Country Link
CN (1) CN116310099A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116734759A (en) * 2023-08-14 2023-09-12 四川省公路规划勘察设计研究院有限公司 Bridge body detection method and system based on three-dimensional laser and multi-beam underwater scanning
CN117128861A (en) * 2023-10-23 2023-11-28 常州市建筑材料研究所有限公司 Monitoring system and monitoring method for station-removing three-dimensional laser scanning bridge



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination