CN110889901A - Large-scene sparse point cloud BA optimization method based on distributed system - Google Patents

Large-scene sparse point cloud BA optimization method based on distributed system

Info

Publication number
CN110889901A
CN110889901A (application CN201911132373.3A)
Authority
CN
China
Prior art keywords
scene
blocks
point cloud
sparse
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911132373.3A
Other languages
Chinese (zh)
Other versions
CN110889901B (en)
Inventor
杜文祥
齐越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Research Institute Of Beihang University
Original Assignee
Qingdao Research Institute Of Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Research Institute Of Beihang University filed Critical Qingdao Research Institute Of Beihang University
Priority to CN201911132373.3A priority Critical patent/CN110889901B/en
Publication of CN110889901A publication Critical patent/CN110889901A/en
Application granted granted Critical
Publication of CN110889901B publication Critical patent/CN110889901B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 — Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The distributed-system-based BA optimization method for large-scale sparse three-dimensional scene point clouds applies the divide-and-conquer principle to escape the limits of available hardware resources, achieves BA optimization during large-scale scene three-dimensional reconstruction, and serves as a basis for reconstructing sparse three-dimensional point clouds of even larger scenes. The method comprises the following steps: step A, taking the large-scene sparse three-dimensional point cloud and camera poses as input, preprocessing the data and converting their format; step B, dividing the whole scene into different sub-blocks by a k-way multilevel partitioning method based on the graph-cut principle; step C, performing BA optimization on all sub-blocks in a distributed manner, and returning the results to the master host; step D, solving the rigid-body transformation matrix and scale factor between every two adjacent sub-blocks from the results of the different sub-blocks; step E, performing global BA optimization to obtain the transformation matrices between the different sub-blocks of the whole scene; and step F, transforming the sparse three-dimensional point clouds of the different sub-blocks into the same world coordinate system to obtain the three-dimensional sparse point cloud of the scene.

Description

Large-scene sparse point cloud BA optimization method based on distributed system
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a large-scene sparse point cloud BA (Bundle Adjustment) optimization method based on a distributed system.
Background
With the development of virtual reality technology, three-dimensional reconstruction has become a hot topic in computer graphics and animation. Reconstructing real scenes in three dimensions is an important research direction in computer vision and computer graphics, producing three-dimensional models of real scenes. Three-dimensional reconstruction is the process of acquiring data of real objects or scenes with specific devices and methods, and then reproducing them in a computer by applying various theories, techniques, and methods. Compared with manually building a three-dimensional model in existing modeling software (such as AutoCAD or Maya), three-dimensional reconstruction obtains a more realistic model conveniently and quickly, at low cost and with simple operation. It is widely applied in heritage protection, medical research, augmented reality, human-computer interaction, city planning, the game and animation industry, reverse engineering, and other areas, and is an irreplaceable key technology. Three-dimensional reconstruction of large outdoor scenes, even city-scale scenes, remains a difficult problem in computer graphics and animation.
In recent years, many research projects have used image-based three-dimensional reconstruction theory to build three-dimensional models of large outdoor scenes. The core problem of image-based modeling research is how to recover the three-dimensional geometric information of an object or target scene from an image set and construct a mesh model of the target with related techniques. The most typical method is SfM (structure from motion), which mainly comprises image feature extraction and matching, camera parameter estimation, three-dimensional point cloud generation, BA (Bundle Adjustment) optimization, and so on. Structure from motion obtains the sparse point cloud of the target scene and the camera pose corresponding to each photo through feature extraction and matching, camera parameter estimation, and point cloud computation and optimization. According to the processing mode, SfM methods fall into two categories: incremental SfM and global SfM. Incremental SfM usually reconstructs a small scene from two or three initial images, then adds images progressively, repeating the process until the whole scene is finally reconstructed. Global SfM first computes the camera poses corresponding to all images, then computes the point cloud and optimizes the whole scene.
When the SfM method is applied to the three-dimensional reconstruction of a large outdoor scene from images, the coordinates of each three-dimensional point must be computed from multiple viewpoints simultaneously, so enough images must be acquired to guarantee the completeness of the finally reconstructed scene. The amount of data to be processed is therefore huge, which challenges image-based scene reconstruction, and the number of images grows as the scene grows. In addition, image-based modeling must establish a large number of correspondences during reconstruction: correspondences between feature points across images, between the three-dimensional points of the point cloud model and the feature points, between three-dimensional points and all viewpoints from which they are visible, and so on. This excessive data volume makes implementation difficult and poses new challenges to image-based three-dimensional reconstruction; in particular, when the input image set is large, the size of the reconstructable scene is limited by the memory bottleneck.
In addition, for large-scale scene reconstruction, the time required determines the practicality of an algorithm: the entire reconstruction must finish within a certain time. The problem is especially evident in the BA optimization stage. In image-based three-dimensional reconstruction, BA optimization is one of the most time- and memory-consuming modules, since the whole scene must be refined by nonlinear optimization over the three-dimensional point cloud, the corresponding pixel coordinates, and the camera information. Many researchers have studied the properties and rules of BA-related algorithms, trying to split the whole optimization problem into smaller and more controllable modules, and to control algorithmic complexity by limiting the growth of the number of images and feature points; but this means the number of input images cannot keep increasing, which limits the scale of the reconstructed scene. Because the SfM algorithm reconstructs from the feature-point correspondences across the whole image set, all images usually have to be loaded into memory during reconstruction; merely improving BA optimization performance cannot fundamentally solve the out-of-memory problem, and improved performance alone still fails to run normally once the number of images grows. Dividing the whole image set into sub-image sets of a manageable size, and then handling the BA optimization of the large-scale sparse point cloud on a distributed system, can fundamentally break through the memory bottleneck and thus reconstruct larger scene models.
In view of this, the present patent application is specifically proposed.
Disclosure of Invention
The invention discloses a distributed-system-based BA optimization method for large-scene sparse point clouds, which aims to overcome the difficulties of the prior art. It applies the divide-and-conquer principle to escape the limits of existing hardware resources (memory, CPU, and so on), achieves BA optimization during large-scale scene three-dimensional reconstruction for a more realistic reconstruction result, and serves as a basis for reconstructing sparse three-dimensional point clouds of even larger scenes.
In order to achieve the design purpose, the distributed system-based large-scene sparse point cloud BA optimization method comprises the following steps:
step A, taking the large-scene sparse three-dimensional point cloud and camera poses as input, preprocessing the data and converting their format;
step B, based on the graph-cut principle, dividing the whole scene into different sub-blocks B = {B1, B2, …, BM} by a k-way multilevel partitioning method; at the same time, recording the common camera information of every two adjacent sub-blocks (the shared-camera sets, given as an image formula in the original), and so on;
step C, performing BA optimization on all the sub-blocks in a distributed manner, and returning the results to the master host;
step D, solving a rigid-body transformation matrix and a scale factor between every two adjacent sub-blocks according to the results of the different sub-blocks;
step E, carrying out global BA optimization to obtain the transformation matrices λ[R|T] between the different sub-blocks of the whole scene;
and step F, converting the sparse three-dimensional point clouds of the different sub-blocks in step C into the same world coordinate system according to the result of step E, so as to obtain the three-dimensional sparse point cloud of the scene.
According to this design concept, the BA optimization mode of existing image-based large-scale scene three-dimensional reconstruction is improved: the whole scene is divided into different sub-blocks by k-way multilevel partitioning, so that the reconstruction of one large scene becomes the reconstruction of several small-scale scenes.
And then distributed scene reconstruction and BA optimization are adopted, camera pose and three-dimensional point cloud information are fully utilized to carry out splicing among different sub-blocks, so that parameters required by global BA are further optimized, the BA optimization process of the large-scale scene sparse point cloud can be completed, and the large-scale scene sparse three-dimensional point cloud is output.
Further, in step B, the sparse three-dimensional point cloud of the whole scene is divided into different sub-blocks by k-way multilevel partitioning, with the geometric error between two camera viewpoints used as the edge weight.
In step D, the rigid-body transformation matrix λ[R|T] between adjacent sub-blocks is solved based on the overlapping camera pose information. The three formulas used (given as image formulas in the original) compute the rotation R, the scale factor λ, and the translation T from the overlapping cameras, where R_i^1 and R_i^2 are the orientations of the same camera i in the two adjacent sub-blocks, T_i^1 and T_i^2 are its positions in the two sub-blocks, and n is the number of overlapping cameras in the two adjacent sub-blocks.
In step E, the correspondence set Π of the three-dimensional point cloud between two sub-blocks (given as an image formula in the original) is found by backtracking, and the rigid-body transformation matrices and scale factors between the different sub-blocks of the whole scene are obtained through nonlinear BA optimization; the objective used is likewise given as an image formula in the original, in which the point symbol denotes the i-th three-dimensional point of sub-block B1 in Π.
In summary, the distributed-system-based large-scene sparse point cloud BA optimization method effectively escapes the limits of existing hardware resources and achieves BA optimization in large-scale scene three-dimensional reconstruction; the optimization result can serve as a basis for reconstructing sparse three-dimensional point clouds of even larger scenes.
Drawings
FIG. 1 is a schematic flow chart of an optimization method described herein;
FIGS. 2 and 3 are schematic diagrams of different three-dimensional sparse point clouds obtained by applying the optimization method of the present application.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and embodiments, in order to make its objects, solutions, and advantages clear; the scope of protection claimed by the present application is not limited to the following.
As shown in fig. 1, the method for optimizing a large-scene sparse point cloud BA based on a distributed system according to the present application includes the following steps:
Step A, preprocessing and converting the data format, taking the large-scene sparse three-dimensional point cloud and camera poses as input. An epipolar graph of the whole scene is constructed, with the geometric error computed by a formula given as an image in the original, so as to track the correspondences among feature points, camera poses, and the three-dimensional point cloud.
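The patent's geometric-error formula is only available as an image. As an illustrative stand-in (an assumption, not the original disclosure), the edge weight between two camera viewpoints can be taken as the symmetric mean reprojection error of the points both cameras observe; all function names below are hypothetical:

```python
import math

def project(K, R, t, X):
    """Pinhole projection of a 3D point X into a camera with intrinsics K,
    rotation R, and translation t."""
    Xc = [sum(R[r][c] * X[c] for c in range(3)) + t[r] for r in range(3)]
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)

def mean_reprojection_error(cam, points3d, observations):
    """Mean distance between projected 3D points and their observed pixels."""
    K, R, t = cam
    errs = []
    for X, (u_obs, v_obs) in zip(points3d, observations):
        u, v = project(K, R, t, X)
        errs.append(math.hypot(u - u_obs, v - v_obs))
    return sum(errs) / len(errs)

def edge_weight(cam_i, cam_j, shared_points, obs_i, obs_j):
    """Hypothetical epipolar-graph edge weight: the symmetric mean
    reprojection error over the points seen by both cameras."""
    return 0.5 * (mean_reprojection_error(cam_i, shared_points, obs_i)
                  + mean_reprojection_error(cam_j, shared_points, obs_j))
```

With noise-free observations the weight is zero, so this form rewards keeping well-agreeing camera pairs inside one sub-block.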
Step B, based on the graph-cut principle and the epipolar graph determined in step A, dividing the whole scene into different sub-blocks B = {B1, B2, …, BM} by a k-way multilevel partitioning method; at the same time, recording the common camera information of every two adjacent sub-blocks (the shared-camera sets, given as an image formula in the original), and so on.
The sparse three-dimensional point cloud of the whole scene is divided into different sub-blocks by k-way multilevel partitioning, with the geometric error between two camera viewpoints used as the edge weight.
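A minimal sketch of the partitioning step, under a strong simplification: a greedy BFS split stands in for the k-way multilevel method of the patent (which a production system would perform with a METIS-style multilevel cut). The function name and the greedy strategy are assumptions:

```python
from collections import deque

def kway_partition(num_nodes, edges, k):
    """Greedy BFS stand-in for k-way multilevel graph partitioning.
    edges maps node -> list of (neighbor, weight); here a node is a camera
    and the weight is its geometric-error coupling to the neighbor."""
    target = -(-num_nodes // k)            # ceil(num_nodes / k) nodes per block
    unassigned = set(range(num_nodes))
    blocks = []
    while unassigned:
        block = []
        queue = deque([min(unassigned)])   # seed a new sub-block
        while len(block) < target and unassigned:
            node = queue.popleft() if queue else min(unassigned)
            if node not in unassigned:
                continue
            unassigned.discard(node)
            block.append(node)
            # grow along the heaviest edges first, so strongly coupled
            # cameras tend to land in the same sub-block
            for nbr, _w in sorted(edges.get(node, ()), key=lambda e: -e[1]):
                if nbr in unassigned:
                    queue.append(nbr)
        blocks.append(block)
    return blocks
```

On a chain of six cameras with k = 3 this yields three contiguous blocks of two cameras each; a real multilevel partitioner would additionally coarsen, cut, and refine the graph to minimize the total cut weight.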
Step C, performing BA optimization on all the sub-blocks in a distributed manner, and returning the results to the master host. The sets of corresponding three-dimensional points between different sub-blocks (given as an image formula in the original) are tracked from the overlapping camera pose sets; BA optimization is then performed on each sub-block in a distributed manner with the nonlinear optimization library Ceres Solver, and the results are returned to the master host.
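The scatter/gather flow of step C can be sketched as follows. The per-block solver here is only a stub: the patent runs Ceres Solver (a C++ library) inside each worker, and a thread pool stands in for the Spark cluster of the embodiment; all names are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def optimize_block(block):
    """Stub for the per-sub-block bundle adjustment; the patent invokes
    Ceres Solver here. We only nudge the points to show the data flow."""
    block_id, points = block
    refined = [[c * 0.999 for c in p] for p in points]  # placeholder "refinement"
    return block_id, refined

def distributed_ba(blocks):
    """Run every sub-block's BA in parallel and gather the results on the
    master host (thread pool standing in for the Spark cluster)."""
    with ThreadPoolExecutor() as pool:
        results = dict(pool.map(optimize_block, blocks))
    return results
```

After the gather, the master holds each block's optimized point cloud and camera poses, which steps D and E then stitch together.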
Step D, solving a rigid-body transformation matrix and a scale factor between every two adjacent sub-blocks according to the results of the different sub-blocks. Based on the overlapping camera pose information, the rigid-body transformation matrix λ[R|T] between adjacent sub-blocks is solved; the three formulas used (given as image formulas in the original) compute the rotation R, the scale factor λ, and the translation T, where R_i^1 and R_i^2 are the orientations of the same camera i in the two adjacent sub-blocks, T_i^1 and T_i^2 are its positions in the two sub-blocks, and n is the number of overlapping cameras in the two adjacent sub-blocks.
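The three formulas above survive only as images, so the following is a standard construction consistent with the stated symbols, not the patent's exact equations (an assumption): R follows from a paired orientation since R_i^2 = R · R_i^1, λ from centroid-relative distances of the paired camera positions, and T as the mean of T_i^2 − λ·R·T_i^1. Function names are hypothetical:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def apply(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def solve_block_transform(R1s, T1s, R2s, T2s):
    """Estimate (lambda, R, T) mapping sub-block 1 into sub-block 2 from the
    n overlapping cameras. R is taken from one orientation pair (exact only
    for noise-free poses; with noise one would average and re-project onto
    the rotation group)."""
    n = len(T1s)
    R = matmul(R2s[0], transpose(R1s[0]))          # R_i^2 = R * R_i^1
    c1 = [sum(p[d] for p in T1s) / n for d in range(3)]
    c2 = [sum(p[d] for p in T2s) / n for d in range(3)]
    # scale: ratio of summed centroid-relative distances of camera centers
    num = sum(sum((p[d] - c2[d]) ** 2 for d in range(3)) ** 0.5 for p in T2s)
    den = sum(sum((p[d] - c1[d]) ** 2 for d in range(3)) ** 0.5 for p in T1s)
    lam = num / den
    # translation: mean residual after rotating and scaling block-1 centers
    T = [sum(T2s[i][d] - lam * apply(R, T1s[i])[d] for i in range(n)) / n
         for d in range(3)]
    return lam, R, T
```

For noise-free overlapping poses this recovers the similarity transform exactly; with noisy poses a least-squares absolute-orientation solver (Horn/Umeyama) would be the robust choice.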
Step E, carrying out global BA optimization to obtain the transformation matrices λ[R|T] between the different sub-blocks of the whole scene. From the above steps, the rigid-body transformation parameters (R, T) and the scale factor λ between every two sub-blocks are obtained. The correspondence set Π of the three-dimensional point cloud between two sub-blocks (given as an image formula in the original) is found by backtracking, and the rigid-body transformation matrices and scale factors between the different sub-blocks of the whole scene are obtained through nonlinear BA optimization; the objective used is likewise given as an image formula in the original, in which the point symbol denotes the i-th three-dimensional point of sub-block B1 in Π.
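Since the global objective survives only as an image, the following shows one plausible form of its residual (an assumption): the sum of squared distances between λ·R·X_i^1 + T and X_i^2 over the point correspondences Π between two sub-blocks. In the patent this quantity is minimized jointly over all block pairs by nonlinear BA (e.g. with Ceres Solver); here it is only evaluated:

```python
def alignment_residual(lam, R, T, pairs):
    """Sum of squared distances between lam*R*X1 + T and X2 over the
    correspondence pairs (X1, X2) drawn from Pi."""
    total = 0.0
    for X1, X2 in pairs:
        for d in range(3):
            pred = lam * sum(R[d][k] * X1[k] for k in range(3)) + T[d]
            total += (pred - X2[d]) ** 2
    return total
```

A consistent transform drives the residual to zero on exact correspondences, while any perturbation of (λ, R, T) raises it, which is what the global optimizer exploits.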
And F, converting the sparse three-dimensional point clouds of different sub-blocks in the step C into the same world coordinate system according to the result of the step E so as to obtain the three-dimensional sparse point cloud of the scene.
In the above optimization process, the equipment used is a Spark cluster built from 7 PCs, each configured with an NVIDIA GeForce GTX 1080, an Intel(R) Core(TM) i7-6700 CPU (3.40 GHz, 4 cores), and 32 GB RAM; the system environment is Ubuntu 14.04, and the programming languages are C++ and Python.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings, and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (4)

1. A large-scene sparse point cloud BA optimization method based on a distributed system, characterized by comprising the following steps:
step A, taking the large-scene sparse three-dimensional point cloud and camera poses as input, preprocessing the data and converting their format;
step B, based on the graph-cut principle, dividing the whole scene into different sub-blocks B = {B1, B2, …, BM} by a k-way multilevel partitioning method; at the same time, recording the common camera information of every two adjacent sub-blocks (the shared-camera sets, given as an image formula in the original), and so on;
step C, performing BA optimization on all the sub-blocks in a distributed manner, and returning the results to the master host;
step D, solving a rigid-body transformation matrix and a scale factor between every two adjacent sub-blocks according to the results of the different sub-blocks;
step E, carrying out global BA optimization to obtain the transformation matrices λ[R|T] between the different sub-blocks of the whole scene;
and step F, converting the sparse three-dimensional point clouds of the different sub-blocks in step C into the same world coordinate system according to the result of step E, so as to obtain the three-dimensional sparse point cloud of the scene.
2. The distributed-system-based large-scene sparse point cloud BA optimization method of claim 1, wherein: in step B, the sparse three-dimensional point cloud of the whole scene is divided into different sub-blocks by k-way multilevel partitioning, with the geometric error between two camera viewpoints used as the edge weight.
3. The distributed-system-based large-scene sparse point cloud BA optimization method of claim 2, wherein: in step D, the rigid-body transformation matrix λ[R|T] between adjacent sub-blocks is solved based on the overlapping camera pose information; the three formulas used (given as image formulas in the original) compute the rotation R, the scale factor λ, and the translation T, where R_i^1 and R_i^2 are the orientations of the same camera i in the two adjacent sub-blocks, T_i^1 and T_i^2 are its positions in the two sub-blocks, and n is the number of overlapping cameras in the two adjacent sub-blocks.
4. The distributed-system-based large-scene sparse point cloud BA optimization method of claim 3, wherein: in step E, the correspondence set Π of the three-dimensional point cloud between two sub-blocks (given as an image formula in the original) is found by backtracking, and the rigid-body transformation matrices and scale factors between the different sub-blocks of the whole scene are obtained through nonlinear BA optimization; the formula used is likewise given as an image in the original, in which the point symbol denotes the i-th three-dimensional point of sub-block B1 in Π.
CN201911132373.3A 2019-11-19 2019-11-19 Large-scene sparse point cloud BA optimization method based on distributed system Active CN110889901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911132373.3A CN110889901B (en) 2019-11-19 2019-11-19 Large-scene sparse point cloud BA optimization method based on distributed system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911132373.3A CN110889901B (en) 2019-11-19 2019-11-19 Large-scene sparse point cloud BA optimization method based on distributed system

Publications (2)

Publication Number Publication Date
CN110889901A 2020-03-17
CN110889901B 2023-08-08

Family

ID=69747912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911132373.3A Active CN110889901B (en) 2019-11-19 2019-11-19 Large-scene sparse point cloud BA optimization method based on distributed system

Country Status (1)

Country Link
CN (1) CN110889901B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553985A (en) * 2020-04-30 2020-08-18 四川大学 Adjacent graph pairing type European three-dimensional reconstruction method and device
CN112365541A (en) * 2020-11-24 2021-02-12 北京航空航天大学青岛研究院 Large-scene camera posture registration method based on similarity transformation
CN113177999A (en) * 2021-03-25 2021-07-27 杭州易现先进科技有限公司 Visual three-dimensional reconstruction method, system, electronic device and storage medium
CN113284227A (en) * 2021-05-14 2021-08-20 安徽大学 Distributed motion inference structure method for large-scale aerial images

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034238A (en) * 2010-12-13 2011-04-27 西安交通大学 Multi-camera system calibrating method based on optical imaging test head and visual graph structure
CN102044089A (en) * 2010-09-20 2011-05-04 董福田 Method for carrying out self-adaption simplification, gradual transmission and rapid charting on three-dimensional model
CN102158524A (en) * 2010-12-30 2011-08-17 北京像素软件科技股份有限公司 Rendering-based distributed behavior control system
CN109100730A (en) * 2018-05-18 2018-12-28 北京师范大学-香港浸会大学联合国际学院 A kind of fast run-up drawing method of more vehicle collaborations
CN110009675A (en) * 2019-04-03 2019-07-12 北京市商汤科技开发有限公司 Generate method, apparatus, medium and the equipment of disparity map
CN110009732A (en) * 2019-04-11 2019-07-12 司岚光电科技(苏州)有限公司 Based on GMS characteristic matching towards complicated large scale scene three-dimensional reconstruction method
CN110264517A (en) * 2019-06-13 2019-09-20 上海理工大学 A kind of method and system determining current vehicle position information based on three-dimensional scene images
US20190325089A1 (en) * 2018-04-18 2019-10-24 Reconstruct Inc. Computation of point clouds and joint display of point clouds and building information models with project schedules for monitoring construction progress, productivity, and risk for delays

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102044089A (en) * 2010-09-20 2011-05-04 董福田 Method for carrying out self-adaption simplification, gradual transmission and rapid charting on three-dimensional model
CN102034238A (en) * 2010-12-13 2011-04-27 西安交通大学 Multi-camera system calibrating method based on optical imaging test head and visual graph structure
CN102158524A (en) * 2010-12-30 2011-08-17 北京像素软件科技股份有限公司 Rendering-based distributed behavior control system
US20190325089A1 (en) * 2018-04-18 2019-10-24 Reconstruct Inc. Computation of point clouds and joint display of point clouds and building information models with project schedules for monitoring construction progress, productivity, and risk for delays
CN109100730A (en) * 2018-05-18 2018-12-28 北京师范大学-香港浸会大学联合国际学院 A kind of fast run-up drawing method of more vehicle collaborations
CN110009675A (en) * 2019-04-03 2019-07-12 北京市商汤科技开发有限公司 Generate method, apparatus, medium and the equipment of disparity map
CN110009732A (en) * 2019-04-11 2019-07-12 司岚光电科技(苏州)有限公司 Based on GMS characteristic matching towards complicated large scale scene three-dimensional reconstruction method
CN110264517A (en) * 2019-06-13 2019-09-20 上海理工大学 A kind of method and system determining current vehicle position information based on three-dimensional scene images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HONG Liang; FENG Chang: "SLAM algorithm based on RGB-D camera data", Electronic Design Engineering, no. 09 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553985A (en) * 2020-04-30 2020-08-18 四川大学 Adjacent graph pairing type European three-dimensional reconstruction method and device
CN112365541A (en) * 2020-11-24 2021-02-12 北京航空航天大学青岛研究院 Large-scene camera posture registration method based on similarity transformation
CN112365541B (en) * 2020-11-24 2022-09-02 北京航空航天大学青岛研究院 Large-scene camera posture registration method based on similarity transformation
CN113177999A (en) * 2021-03-25 2021-07-27 杭州易现先进科技有限公司 Visual three-dimensional reconstruction method, system, electronic device and storage medium
CN113284227A (en) * 2021-05-14 2021-08-20 安徽大学 Distributed motion inference structure method for large-scale aerial images
CN113284227B (en) * 2021-05-14 2022-11-22 安徽大学 Distributed motion inference structure method for large-scale aerial images

Also Published As

Publication number Publication date
CN110889901B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN110889901B (en) Large-scene sparse point cloud BA optimization method based on distributed system
WO2019157924A1 (en) Real-time detection method and system for three-dimensional object
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
Hu et al. Robust hair capture using simulated examples
CN110288695B (en) Single-frame image three-dimensional model surface reconstruction method based on deep learning
CN108921926A (en) A kind of end-to-end three-dimensional facial reconstruction method based on single image
CN104376594A (en) Three-dimensional face modeling method and device
CN110633628B (en) RGB image scene three-dimensional model reconstruction method based on artificial neural network
WO2014117447A1 (en) Virtual hairstyle modeling method of images and videos
Lu et al. Attention-based dense point cloud reconstruction from a single image
CN110570522A (en) Multi-view three-dimensional reconstruction method
CN110176079B (en) Three-dimensional model deformation algorithm based on quasi-conformal mapping
CN103530907A (en) Complicated three-dimensional model drawing method based on images
CN112489083A (en) Image feature point tracking matching method based on ORB-SLAM algorithm
CN105261062A (en) Character segmented modeling method
Sun et al. Ssl-net: Point-cloud generation network with self-supervised learning
Chen et al. Autosweep: Recovering 3d editable objects from a single photograph
Sun et al. Quadratic terms based point-to-surface 3D representation for deep learning of point cloud
Cao et al. Accurate 3-D reconstruction under IoT environments and its applications to augmented reality
Shi et al. Geometric granularity aware pixel-to-mesh
CN116385667B (en) Reconstruction method of three-dimensional model, training method and device of texture reconstruction model
CN116822100A (en) Digital twin modeling method and simulation test system thereof
CN113487713B (en) Point cloud feature extraction method and device and electronic equipment
Lin et al. High-resolution multi-view stereo with dynamic depth edge flow
Bhardwaj et al. SingleSketch2Mesh: generating 3D mesh model from sketch

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant