CN110889901B - Large-scene sparse point cloud BA optimization method based on distributed system - Google Patents

Large-scene sparse point cloud BA optimization method based on distributed system

Info

Publication number
CN110889901B
Authority
CN
China
Prior art keywords
blocks
scene
sub
sparse
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911132373.3A
Other languages
Chinese (zh)
Other versions
CN110889901A (en)
Inventor
杜文祥 (Du Wenxiang)
齐越 (Qi Yue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Research Institute Of Beihang University
Original Assignee
Qingdao Research Institute Of Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Research Institute Of Beihang University filed Critical Qingdao Research Institute Of Beihang University
Priority to CN201911132373.3A priority Critical patent/CN110889901B/en
Publication of CN110889901A publication Critical patent/CN110889901A/en
Application granted granted Critical
Publication of CN110889901B publication Critical patent/CN110889901B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The large-scene sparse point cloud BA optimization method based on a distributed system adopts the divide-and-conquer principle to overcome the limitations of existing hardware resources, achieves BA optimization in the three-dimensional reconstruction of large-scale scenes, and serves as a basis for reconstructing the sparse three-dimensional point clouds of still larger scenes. The method comprises the following steps: step A, taking the large-scene sparse three-dimensional point cloud and camera poses as input, preprocessing the data and converting its format; step B, dividing the whole scene into different sub-blocks by a k-way multi-level partitioning method based on the graph-cut principle; step C, performing BA optimization on all sub-blocks in a distributed manner and returning the results to the host; step D, solving the rigid body transformation matrix and scale factor between each pair of adjacent sub-blocks from the results of the different sub-blocks; step E, performing global BA optimization to obtain the transformation matrices between the different sub-blocks of the whole scene; and step F, converting the sparse three-dimensional point clouds of the different sub-blocks into the same world coordinate system to obtain the three-dimensional sparse point cloud of the scene.

Description

Large-scene sparse point cloud BA optimization method based on distributed system
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a large-scene sparse point cloud BA (Bundle Adjustment) optimization method based on a distributed system.
Background
With the development of virtual reality technology, three-dimensional reconstruction has become a hotspot in the fields of computer graphics and animation. Three-dimensional reconstruction of real scenes is an important research direction in computer vision and computer graphics: data of a real object or scene are acquired with specific equipment and methods and then reproduced in a computer using various theories, technologies and methods. Through three-dimensional reconstruction, the surface contour of an object and the geometric and color attributes of each mesh vertex can be obtained. Compared with manually building models in traditional three-dimensional modeling software (such as AutoCAD, Maya and the like), three-dimensional reconstruction yields more realistic models more conveniently and quickly, at low cost and with simple operation; it therefore finds wide application in heritage protection, medical research, augmented reality, human-computer interaction, urban planning, the game and animation industry, reverse engineering and other areas, and is an irreplaceable key technology. Three-dimensional reconstruction of outdoor large scenes, and even city-level scenes, remains a difficulty in the fields of computer graphics and animation.
In recent years, many research projects have adopted image-based three-dimensional reconstruction to build models of outdoor large-scale scenes. The core problem of image-based modeling is how to recover the three-dimensional geometric information of an object or target scene from an image set and construct a mesh model of the target. The most typical method is SfM (Structure from Motion), which mainly comprises image feature extraction and matching, camera parameter estimation, three-dimensional point cloud generation, BA (Bundle Adjustment) optimization and the like. Through feature extraction and matching, camera parameter estimation, point cloud computation and optimization, SfM obtains the sparse point cloud of the target scene and the camera pose corresponding to each photo. According to the processing mode, SfM methods fall into two classes: incremental SfM and global SfM. Incremental SfM usually starts from two or three initial images to reconstruct a small scene, then adds images one by one, repeating the process until the whole scene is reconstructed; global SfM first computes the camera poses of all images, then computes the point cloud and optimizes the whole scene.
When the SfM method is used for three-dimensional reconstruction of an outdoor large-scale scene, the coordinates of each three-dimensional point must be computed from multiple viewpoints, so enough images need to be acquired to guarantee the completeness of the reconstructed scene. The amount of data to be processed is therefore huge, and the number of images keeps growing as the scene grows. In addition, image-based modeling must establish many correspondences during reconstruction, including the correspondences of feature points between images, between the three-dimensional points of the point cloud model and the feature points, and between the three-dimensional points and all viewpoints from which they are visible. This excessive data volume poses new challenges for image-based three-dimensional reconstruction; in particular, when the input image set is large, the size of the scene that can be reconstructed is limited by the memory bottleneck.
Moreover, for large-scale scene reconstruction, the time required determines the practicality of an algorithm, and the problem is most pronounced in the BA optimization stage: in image-based three-dimensional reconstruction, BA optimization is one of the most time- and memory-consuming modules, since the whole scene must be optimized by nonlinear optimization over the three-dimensional point cloud, the corresponding pixel coordinates and the camera information. Many researchers have studied the properties and rules of BA-related algorithms and tried to split the overall optimization problem into smaller, more controllable modules, controlling algorithm complexity by limiting the growth of the number of images and feature points; this in turn means the number of input images cannot grow indefinitely, limiting the scale of the reconstructed scene. Because the SfM algorithm reconstructs from the feature-point correspondences across the whole image set, all images must be loaded into memory during reconstruction, and merely improving the performance of BA optimization cannot fundamentally solve the problem of insufficient memory.
In view of this, the present patent application is specifically filed.
Disclosure of Invention
The invention discloses a large-scene sparse point cloud BA optimization method based on a distributed system, which aims to solve the above problems in the prior art. By adopting the divide-and-conquer principle it overcomes the limitations of existing hardware resources (memory, CPU and the like), realizes BA optimization in the three-dimensional reconstruction of large-scale scenes, obtains a more realistic reconstruction result, and serves as a basis for reconstructing the sparse three-dimensional point clouds of still larger scenes.
In order to achieve the above design objective, the large-scene sparse point cloud BA optimization method based on the distributed system comprises the following steps:
step A, preprocessing and converting a data format by taking a large scene sparse three-dimensional point cloud and a camera pose as input;
step B, based on the graph-cut principle, dividing the whole scene into different sub-blocks B = {B1, B2, ..., BM} by a k-way multi-level partitioning method; at the same time, recording the camera information shared by each pair of adjacent sub-blocks;
step C, performing BA optimization on all sub-blocks in a distributed manner, and returning the results to the Master host;
step D, solving a rigid body transformation matrix and a scale factor between two adjacent sub-blocks according to the results of different sub-blocks;
step E, performing global BA optimization to obtain the transformation matrices λ[R|T] between the different sub-blocks of the whole scene;
and step F, according to the result of step E, converting the sparse three-dimensional point clouds of the different sub-blocks in step C into the same world coordinate system to obtain the three-dimensional sparse point cloud of the scene.
The method improves the BA optimization stage of existing image-based large-scale scene three-dimensional reconstruction: by k-way multi-level partitioning, the whole scene is divided into different sub-blocks, so that the reconstruction of one large scene becomes the reconstruction of several small-scale scenes.
Further, distributed scene reconstruction and BA optimization are adopted, and the camera poses and three-dimensional point cloud information are fully utilized to stitch the different sub-blocks together, further optimizing the parameters required by global BA, so that BA optimization of the large-scale scene's sparse point cloud can be completed and the sparse three-dimensional point cloud of the large-scale scene output.
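The overall flow described above can be sketched as a toy end-to-end driver. Everything below is an illustrative stand-in under stated assumptions, not the patent's implementation: all function names and data layouts are invented for the sketch, and the per-block BA, pairwise transforms and stitching are reduced to trivial placeholders so only the control flow of steps A–F is shown.

```python
# Toy end-to-end sketch of the divide-and-conquer BA pipeline (steps A-F).
# All helpers are illustrative placeholders, not the patent's implementation.

def preprocess(points, poses):
    # Step A: data format conversion (identity here).
    return list(points), list(poses)

def partition(points, k):
    # Step B stand-in: contiguous chunks instead of k-way multi-level graph cut.
    size = max(1, len(points) // k)
    return [points[i:i + size] for i in range(0, len(points), size)]

def local_ba(block):
    # Step C stand-in: a real system would run Bundle Adjustment per block.
    return block

def pairwise_transform(block_a, block_b):
    # Step D stand-in: scale, rotation, translation between adjacent blocks.
    return 1.0, [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.0, 0.0, 0.0]

def run_pipeline(points, poses, k=4):
    pts, poses = preprocess(points, poses)
    blocks = partition(pts, k)
    refined = [local_ba(b) for b in blocks]            # distributed in practice
    transforms = [pairwise_transform(refined[i], refined[i + 1])
                  for i in range(len(refined) - 1)]    # steps D/E
    merged = [p for block in refined for p in block]   # step F (identity transforms)
    return merged, transforms

cloud, tf = run_pipeline([(float(i), 0.0, 0.0) for i in range(8)], [None] * 8, k=4)
```

With 8 input points and k = 4 the sketch produces 4 sub-blocks and 3 pairwise transforms, mirroring the chain of adjacent sub-blocks the method aligns.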
Further, in the step B, a k-way multi-level partitioning mode is adopted to divide the sparse three-dimensional point cloud of the whole scene into different sub-blocks, with the geometric error between two camera viewpoints used as the edge weight.
In the step D, based on the overlapping camera pose information, the rigid body transformation matrix λ[R|T] between adjacent sub-blocks is solved; the formulas used are as follows,

R = argmin_R Σ_{i=1}^{n} ‖R_i^1 − R·R_i^2‖²

λ, T = argmin_{λ,T} Σ_{i=1}^{n} ‖C_i^1 − (λ·R·C_i^2 + T)‖²

wherein R_i^1 and R_i^2 are the orientations of the same camera i in the two adjacent sub-blocks, C_i^1 and C_i^2 are the positions of the same camera i in the two adjacent sub-blocks, and n is the number of overlapping cameras in the two adjacent sub-blocks.
In the step E, the correspondence set Π of the three-dimensional point cloud between each pair of sub-blocks is found by back-tracking, and the rigid body transformation matrices and scale factors among the different sub-blocks of the whole scene are obtained through nonlinear BA optimization; the formula used is as follows,

min_{λ,R,T} Σ_{i∈Π} ‖X_i^{b1} − (λ·R·X_i^{b2} + T)‖²

wherein X_i^{b1} is the i-th three-dimensional point of sub-block b1 in Π.
In summary, the large-scene sparse point cloud BA optimization method based on the distributed system has the advantages that the limitation of the existing hardware resources can be effectively eliminated, BA optimization in the large-scale scene three-dimensional reconstruction process is achieved, and an optimization result can be used as a basis for reconstructing a larger-scale three-dimensional scene sparse three-dimensional point cloud.
Drawings
FIG. 1 is a schematic flow chart of the optimization method described in the present application;
fig. 2 and fig. 3 are schematic diagrams of the three-dimensional sparse point clouds obtained by applying the optimization method described in the present application.
Detailed Description
The present invention will be described in further detail with reference to the drawings and implementation examples, in order to make the objects, technical solutions and advantages of the present invention more apparent; the scope of the present application is not limited to the following.
As shown in fig. 1, the large-scene sparse point cloud BA optimization method based on the distributed system described in the present application includes the following steps:
Step A, taking the large-scene sparse three-dimensional point cloud and camera poses as input, preprocessing the data and converting its format; a kernel point graph of the whole scene is constructed, with the geometric error between camera viewpoints computed as the edge weight, and the correspondences among the feature points, the camera poses and the three-dimensional point cloud are tracked.
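The kernel point graph of step A can be sketched from the point tracks. The patent weights each edge by a geometric error between two camera viewpoints (whose exact formula is given in the original figures); as an assumed simplification, the sketch below uses the covisibility count — the number of three-dimensional points two cameras observe in common — as the edge weight. The function name and data layout are illustrative.

```python
from collections import defaultdict
from itertools import combinations

# Build a camera-viewpoint graph from point tracks. The patent weights edges by a
# geometric error between viewpoints; here the shared-track count serves as a
# simplified stand-in for that weight.
def build_kernel_graph(tracks):
    """tracks: {point_id: set of camera ids that observe the point}."""
    weight = defaultdict(int)
    for cams in tracks.values():
        for a, b in combinations(sorted(cams), 2):
            weight[(a, b)] += 1          # edge weight grows with covisibility
    return dict(weight)

# Four points seen by four cameras: adjacent cameras share more observations.
tracks = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}, 3: {2, 3}}
graph = build_kernel_graph(tracks)
```

Here cameras 0–1 and 1–2 each co-observe two points, so their edges carry weight 2, while the weakly connected pair 2–3 carries weight 1 — exactly the kind of structure the partitioning step can then cut along.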
Step B, based on the graph-cut principle and the kernel graph determined in step A, dividing the whole scene into different sub-blocks B = {B1, B2, ..., BM} by a k-way multi-level partitioning method; at the same time, recording the camera information shared by each pair of adjacent sub-blocks;
a k-way multi-level partitioning mode is adopted to divide the sparse three-dimensional point cloud of the whole scene into different sub-blocks, with the geometric error between two camera viewpoints used as the edge weight;
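A genuine k-way multi-level partitioner (the METIS family of methods) coarsens the graph, cuts it, and refines the cut; the sketch below is a greatly simplified stand-in that splits the cameras into k contiguous groups and then identifies the cut edges, i.e. the camera pairs whose overlap must be recorded for later stitching. All names here are illustrative.

```python
# Greatly simplified stand-in for k-way multi-level graph partitioning: cameras
# are split into k contiguous groups, and the edges crossing group boundaries
# identify the "common camera" overlap that the method records between blocks.
def partition_cameras(edges, num_cameras, k):
    size = -(-num_cameras // k)                  # ceil division: cameras per block
    block_of = {c: c // size for c in range(num_cameras)}
    blocks = [[] for _ in range(max(block_of.values()) + 1)]
    for c, b in block_of.items():
        blocks[b].append(c)
    # Cut edges: weighted links between cameras that landed in different blocks.
    cut = [(a, b, w) for (a, b), w in edges.items() if block_of[a] != block_of[b]]
    return blocks, cut

blocks, cut = partition_cameras({(0, 1): 2, (1, 2): 2, (2, 3): 1},
                                num_cameras=4, k=2)
```

For the four-camera chain this yields blocks [0, 1] and [2, 3] with the single crossing edge (1, 2); a real multilevel partitioner would instead choose the cut that minimizes total edge weight while balancing block sizes.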
Step C, performing BA optimization on all sub-blocks in a distributed manner, and returning the results to the Master host;
according to the overlapping camera pose sets, the sets of corresponding three-dimensional points between the different sub-blocks are tracked; BA optimization is then performed on each sub-block in a distributed manner using the nonlinear optimization library Ceres-Solver, and the results are returned to the Master host.
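The patent runs Ceres-Solver (a C++ library) on each sub-block across a Spark cluster; the sketch below only illustrates the scatter/gather pattern of that step, using a thread pool in place of the cluster and a toy stand-in in place of the real per-block solver. Nothing here reflects the Ceres or Spark APIs.

```python
from concurrent.futures import ThreadPoolExecutor

# Scatter/gather sketch of step C: each sub-block is optimized independently and
# the results are collected on the Master host. toy_local_ba is a placeholder;
# the real system invokes Ceres-Solver to minimize reprojection error per block.
def toy_local_ba(block):
    return {"block": block, "converged": True}

def distributed_ba(blocks, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(toy_local_ba, blocks))   # gather, order preserved
    return results

results = distributed_ba([[0, 1], [2, 3], [4]])
```

Because the sub-blocks share no state during this step, they parallelize trivially — which is precisely what lets the method sidestep the single-machine memory bottleneck described in the background section.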
Step D, solving a rigid body transformation matrix and a scale factor between two adjacent sub-blocks according to the results of different sub-blocks;
Based on the overlapping camera pose information, the rigid body transformation matrix λ[R|T] between adjacent sub-blocks is solved; the formulas used are as follows,

R = argmin_R Σ_{i=1}^{n} ‖R_i^1 − R·R_i^2‖²

λ, T = argmin_{λ,T} Σ_{i=1}^{n} ‖C_i^1 − (λ·R·C_i^2 + T)‖²

wherein R_i^1 and R_i^2 are the orientations of the same camera i in the two adjacent sub-blocks, C_i^1 and C_i^2 are the positions of the same camera i in the two adjacent sub-blocks, and n is the number of overlapping cameras in the two adjacent sub-blocks.
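Once the rotation between two sub-blocks has been resolved from the shared camera orientations, the scale λ and translation T that align the overlapping camera centers have a closed-form least-squares solution. The sketch below assumes the rotation has already been applied to the second block's centers (a simplifying assumption for this illustration); the function name is invented here.

```python
# Closed-form least-squares for the scale lambda and translation T aligning the
# overlapping camera centers of two sub-blocks, assuming the rotation has already
# been applied to c2 (an assumption made for this sketch).
def solve_scale_translation(c1, c2):
    n = len(c1)
    mean1 = [sum(p[d] for p in c1) / n for d in range(3)]
    mean2 = [sum(p[d] for p in c2) / n for d in range(3)]
    num = den = 0.0
    for p, q in zip(c1, c2):
        d1 = [p[d] - mean1[d] for d in range(3)]   # centered centers, block 1
        d2 = [q[d] - mean2[d] for d in range(3)]   # centered centers, block 2
        num += sum(a * b for a, b in zip(d1, d2))
        den += sum(b * b for b in d2)
    lam = num / den                                # optimal scale
    t = [mean1[d] - lam * mean2[d] for d in range(3)]  # optimal translation
    return lam, t

# Synthetic overlap: block 1 centers are block 2 centers scaled by 2, shifted.
c2 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)]
c1 = [(2 * x + 1, 2 * y - 1, 2 * z) for x, y, z in c2]
lam, t = solve_scale_translation(c1, c2)
```

On this synthetic overlap the solver recovers λ = 2 and T = (1, −1, 0) exactly; with noisy real poses it returns the least-squares best fit, which the subsequent global BA then refines.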
E, global BA optimization is carried out, and a transformation matrix lambda [ R|T ] among different sub-blocks of the whole scene is obtained;
from the above steps, the rigid transformation parameters (R, T) and the scale factor λ between each pair of sub-blocks are obtained; the correspondence set Π of the three-dimensional point cloud between each pair of sub-blocks is found by back-tracking, and the rigid body transformation matrices and scale factors among the different sub-blocks of the whole scene are obtained through nonlinear BA optimization; the formula used is as follows,

min_{λ,R,T} Σ_{i∈Π} ‖X_i^{b1} − (λ·R·X_i^{b2} + T)‖²

wherein X_i^{b1} is the i-th three-dimensional point of sub-block b1 in Π.
Step F, according to the result of step E, the sparse three-dimensional point clouds of the different sub-blocks in step C are converted into the same world coordinate system to obtain the three-dimensional sparse point cloud of the scene.
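Step F amounts to chaining the pairwise similarity transforms λ[R|T] so that every sub-block's points land in the coordinate frame of the first block. The sketch below keeps rotations as 3×3 row-major lists and composes the transforms along the chain; it is an illustrative composition under that chain assumption, not the patent's exact bookkeeping.

```python
# Step F sketch: compose the pairwise lambda*[R|T] transforms along the chain of
# sub-blocks so all points end up in the first block's (world) frame.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def merge_blocks(blocks, pair_tf):
    """pair_tf[i] = (lam, R, T) mapping block i+1 into block i's frame."""
    lam, R, T = 1.0, [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]], [0.0, 0.0, 0.0]
    world = [list(p) for p in blocks[0]]
    for i, block in enumerate(blocks[1:]):
        l_i, R_i, T_i = pair_tf[i]
        # Compose outer after inner: x -> lam*R*(l_i*R_i*x + T_i) + T.
        T = [lam * v + t for v, t in zip(matvec(R, T_i), T)]
        R, lam = matmul(R, R_i), lam * l_i
        for p in block:
            world.append([lam * v + t for v, t in zip(matvec(R, list(p)), T)])
    return world

# Two one-point blocks; block 2 maps into block 1's frame by scale 2, shift +x.
world = merge_blocks(
    [[(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)]],
    [(2.0, [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1.0, 0.0, 0.0])],
)
```

The second block's point (1, 0, 0) is mapped to (3, 0, 0) in the world frame, i.e. scaled by 2 and then shifted by the translation, matching the composed transform.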
In the optimization process, the equipment used was a Spark cluster built from 7 PCs, each configured with an NVIDIA GeForce GTX 1080, an Intel(R) Core(TM) i7-6700 CPU (3.40 GHz, 4 cores) and 32 GB RAM; the system environment was Ubuntu 14.04, and the programming languages were C++ and Python.
It will be understood that modifications and variations will be apparent to those skilled in the art from the foregoing description, and it is intended that all such modifications and variations be included within the scope of the following claims.

Claims (3)

1. A large-scene sparse point cloud BA optimization method based on a distributed system, characterized by comprising the following steps:
step A, preprocessing and converting a data format by taking a large scene sparse three-dimensional point cloud and a camera pose as input;
step B, based on the graph-cut principle, dividing the whole scene into different sub-blocks B = {B1, B2, ..., BM} by a k-way multi-level partitioning method; at the same time, recording the camera information shared by each pair of adjacent sub-blocks;
step C, performing BA optimization on all sub-blocks in a distributed manner, and returning the results to the Master host;
step D, solving a rigid body transformation matrix and a scale factor between two adjacent sub-blocks according to the results of different sub-blocks;
based on the overlapping camera pose information, solving the rigid body transformation matrix λ[R|T] between adjacent sub-blocks; the formulas used are as follows,

R = argmin_R Σ_{i=1}^{n} ‖R_i^1 − R·R_i^2‖²

λ, T = argmin_{λ,T} Σ_{i=1}^{n} ‖C_i^1 − (λ·R·C_i^2 + T)‖²

wherein R_i^1 and R_i^2 are the orientations of the same camera i in the two adjacent sub-blocks, C_i^1 and C_i^2 are the positions of the same camera i in the two adjacent sub-blocks, and n is the number of overlapping cameras in the two adjacent sub-blocks;
step E, performing global BA optimization to obtain the transformation matrices λ[R|T] between the different sub-blocks of the whole scene;
and step F, according to the result of step E, converting the sparse three-dimensional point clouds of the different sub-blocks in step C into the same world coordinate system to obtain the three-dimensional sparse point cloud of the scene.
2. The distributed system-based large-scene sparse point cloud BA optimization method of claim 1, wherein: in the step B, a k-way multi-level partitioning mode is adopted to divide the sparse three-dimensional point cloud of the whole scene into different sub-blocks, with the geometric error between two camera viewpoints used as the edge weight.
3. The distributed system-based large-scene sparse point cloud BA optimization method of claim 1, wherein: in the step E, the correspondence set Π of the three-dimensional point cloud between each pair of sub-blocks is found by back-tracking, and the rigid body transformation matrices and scale factors among the different sub-blocks of the whole scene are obtained through nonlinear BA optimization; the formula used is as follows,

min_{λ,R,T} Σ_{i∈Π} ‖X_i^{b1} − (λ·R·X_i^{b2} + T)‖²

wherein X_i^{b1} is the i-th three-dimensional point of sub-block b1 in Π.
CN201911132373.3A 2019-11-19 2019-11-19 Large-scene sparse point cloud BA optimization method based on distributed system Active CN110889901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911132373.3A CN110889901B (en) 2019-11-19 2019-11-19 Large-scene sparse point cloud BA optimization method based on distributed system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911132373.3A CN110889901B (en) 2019-11-19 2019-11-19 Large-scene sparse point cloud BA optimization method based on distributed system

Publications (2)

Publication Number Publication Date
CN110889901A CN110889901A (en) 2020-03-17
CN110889901B true CN110889901B (en) 2023-08-08

Family

ID=69747912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911132373.3A Active CN110889901B (en) 2019-11-19 2019-11-19 Large-scene sparse point cloud BA optimization method based on distributed system

Country Status (1)

Country Link
CN (1) CN110889901B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553985B * 2020-04-30 2023-06-13 Sichuan University Euclidean three-dimensional reconstruction method and device based on O-graph pairing
CN112365541B (en) * 2020-11-24 2022-09-02 北京航空航天大学青岛研究院 Large-scene camera posture registration method based on similarity transformation
CN113177999B (en) * 2021-03-25 2022-12-16 杭州易现先进科技有限公司 Visual three-dimensional reconstruction method, system, electronic device and storage medium
CN113284227B (en) * 2021-05-14 2022-11-22 安徽大学 Distributed motion inference structure method for large-scale aerial images

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034238A (en) * 2010-12-13 2011-04-27 西安交通大学 Multi-camera system calibrating method based on optical imaging test head and visual graph structure
CN102044089A (en) * 2010-09-20 2011-05-04 董福田 Method for carrying out self-adaption simplification, gradual transmission and rapid charting on three-dimensional model
CN102158524A (en) * 2010-12-30 2011-08-17 北京像素软件科技股份有限公司 Rendering-based distributed behavior control system
CN109100730A (en) * 2018-05-18 2018-12-28 北京师范大学-香港浸会大学联合国际学院 A kind of fast run-up drawing method of more vehicle collaborations
CN110009675A (en) * 2019-04-03 2019-07-12 北京市商汤科技开发有限公司 Generate method, apparatus, medium and the equipment of disparity map
CN110009732A (en) * 2019-04-11 2019-07-12 司岚光电科技(苏州)有限公司 Based on GMS characteristic matching towards complicated large scale scene three-dimensional reconstruction method
CN110264517A (en) * 2019-06-13 2019-09-20 上海理工大学 A kind of method and system determining current vehicle position information based on three-dimensional scene images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11288412B2 (en) * 2018-04-18 2022-03-29 The Board Of Trustees Of The University Of Illinois Computation of point clouds and joint display of point clouds and building information models with project schedules for monitoring construction progress, productivity, and risk for delays

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102044089A (en) * 2010-09-20 2011-05-04 董福田 Method for carrying out self-adaption simplification, gradual transmission and rapid charting on three-dimensional model
CN102034238A (en) * 2010-12-13 2011-04-27 西安交通大学 Multi-camera system calibrating method based on optical imaging test head and visual graph structure
CN102158524A (en) * 2010-12-30 2011-08-17 北京像素软件科技股份有限公司 Rendering-based distributed behavior control system
CN109100730A (en) * 2018-05-18 2018-12-28 北京师范大学-香港浸会大学联合国际学院 A kind of fast run-up drawing method of more vehicle collaborations
CN110009675A (en) * 2019-04-03 2019-07-12 北京市商汤科技开发有限公司 Generate method, apparatus, medium and the equipment of disparity map
CN110009732A (en) * 2019-04-11 2019-07-12 司岚光电科技(苏州)有限公司 Based on GMS characteristic matching towards complicated large scale scene three-dimensional reconstruction method
CN110264517A (en) * 2019-06-13 2019-09-20 上海理工大学 A kind of method and system determining current vehicle position information based on three-dimensional scene images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SLAM algorithm based on RGB-D camera data; Hong Liang; Feng Chang; Electronic Design Engineering (Issue 09) *

Also Published As

Publication number Publication date
CN110889901A (en) 2020-03-17

Similar Documents

Publication Publication Date Title
CN110889901B (en) Large-scene sparse point cloud BA optimization method based on distributed system
WO2019157924A1 (en) Real-time detection method and system for three-dimensional object
CN104376594B (en) Three-dimensional face modeling method and device
CN103279980B (en) Based on the Leaf-modeling method of cloud data
CN110288695A (en) Single-frame images threedimensional model method of surface reconstruction based on deep learning
Lu et al. Attention-based dense point cloud reconstruction from a single image
CN1818977A (en) Fast human-face model re-construction by one front picture
CN110176079B (en) Three-dimensional model deformation algorithm based on quasi-conformal mapping
CN105261062B (en) A kind of personage's segmentation modeling method
CN106803094A (en) Threedimensional model shape similarity analysis method based on multi-feature fusion
CN103530907A (en) Complicated three-dimensional model drawing method based on images
CN110633628A (en) RGB image scene three-dimensional model reconstruction method based on artificial neural network
CN104318552B (en) The Model registration method matched based on convex closure perspective view
CN101794459A (en) Seamless integration method of stereoscopic vision image and three-dimensional virtual object
CN104392484B (en) A kind of Three-dimension Tree modeling method and device
CN113724394A (en) Method for realizing lightweight three-dimensional model
Zhang et al. An improved ℓ 1 median model for extracting 3D human body curve-skeleton
Shi et al. Geometric granularity aware pixel-to-mesh
CN109785443B (en) Three-dimensional model simplification method for large ocean engineering equipment
CN111047684A (en) Model simplification method based on three-dimensional model characteristics
CN113487713B (en) Point cloud feature extraction method and device and electronic equipment
CN113808006B (en) Method and device for reconstructing three-dimensional grid model based on two-dimensional image
Athanasiadis et al. Parallel computation of spherical parameterizations for mesh analysis
CN111583098B (en) Line segment clustering and fitting method and system based on sequence image
Ma et al. Research and application of personalized human body simplification and fusion method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant